---
abstract: 'Experimental studies of hypernuclear dynamics, besides being essential for the understanding of strong interactions in the strange sector, have important astrophysical implications. The observation of neutron stars with masses exceeding two solar masses poses a serious challenge to the models of hyperon dynamics in dense nuclear matter, many of which predict a maximum mass incompatible with the data. In this article, it is argued that valuable new insight may be gained by extending the experimental studies of kaon electroproduction from nuclei to include the $\isotope[208][]{\rm Pb}(e,e^\prime K^+) \isotope[208][\Lambda]{\rm Tl}$ process. The connection with proton knockout reactions and the availability of accurate $\isotope[208][]{\rm Pb}(e,e^\prime p) \isotope[207][]{\rm Tl}$ data can be exploited to achieve a largely model-independent analysis of the measured cross section. A framework for the description of kaon electroproduction based on the formalism of nuclear many-body theory is outlined.'
author:
- Omar Benhar
title: 'Extracting Hypernuclear Properties from the $(e, e^\prime K^+)$ Cross Section'
---
Introduction
============
Experimental studies of the $(e,e^\prime K^+)$ reaction on nuclei have long been recognised as a valuable source of information on hypernuclear spectroscopy. The extensive program of measurements performed or approved at Jefferson Lab [@E94-107; @E12-15-008], encompassing a variety of nuclear targets ranging from $\isotope[6][]{\rm Li}$ to $\isotope[40][]{\rm Ca}$ and $\isotope[48][]{\rm Ca}$, has the potential to shed new light on the dynamics of strong interactions in the strange sector, addressing outstanding issues such as the isospin-dependence of hyperon-nucleon interactions and the role of three-body forces involving nucleons and hyperons. In addition, because the appearance of hyperons is expected to become energetically favoured in dense nuclear matter, these measurements have important implications for neutron star physics.
The recent observation of two-solar-mass neutron stars [@demorest; @antonio], the existence of which is ruled out by many models predicting the presence of hyperons in the neutron star core [@isaac_etal], suggests that the present understanding of nuclear interactions involving hyperons is far from complete. In the literature, the issue of reconciling the calculated properties of hyperon matter with the existence of massive stars is referred to as the [*hyperon puzzle*]{} [@puzzle].
Owing to the severe difficulties involved in the determination of the potential describing hyperon-nucleon (YN) interactions from scattering data, the study of hypernuclear spectroscopy has long been regarded as a very effective alternative approach to obtain much needed complementary information.
In this context, the $(e,e^\prime K^+)$ process offers clear advantages. The high resolution achievable by $\gamma$-ray spectroscopy can only be exploited to study energy levels below the nucleon emission threshold, while $(K^-,\pi^-)$ and $(\pi^+, K^+)$ reactions mainly provide information on non-spin-flip interactions. Moreover, compared to hadron-induced reactions, kaon electroproduction allows for a better energy resolution, which may in turn result in a more accurate identification of the hyperon binding energies [@E94-107]. However, the results of several decades of study of the $(e,e^\prime p)$ reaction [@Benhar:NPN] show that to achieve this goal the analysis of the measured cross sections must be based on a theoretical model taking into account the full complexity of electron-nucleus interactions. Addressing this issue will be critical for the extension of the Jefferson Lab program to the case of a heavy target with large neutron excess, such as $\isotope[208][]{\rm Pb}$, best suited to study hyperon dynamics in an environment providing the best available proxy of the neutron star interior.
This article is meant to be a first step towards the development of a comprehensive framework for the description of the $(e,e^\prime K^+)$ cross section within the formalism of nuclear many-body theory, which has been extensively and successfully employed to study the proton knockout reaction [@Benhar:NPN]. In fact, the clear connection between $(e,e^\prime p)$ and $(e,e^\prime K^+)$ processes, which naturally emerges in the context of the proposed analysis, shows that the missing energy spectra measured in $(e,e^\prime p)$ experiments provide the baseline needed for a model-independent determination of the hyperon binding energies.
The text is structured as follows. In Sect. \[Axsec\] the description of kaon electroproduction from nuclei in the kinematical regime in which factorisation of the nuclear cross section is expected to be applicable is reviewed, and the relation to the proton knockout process is highlighted. The main issues associated with the treatment of the elementary electron-proton vertex and the calculation of the nuclear amplitudes comprising the structure of the $\isotope[208][]{\rm Pb}(e,e^\prime K^+) \isotope[208][\Lambda]{\rm Tl}$ cross section are discussed in Sect. \[Pbxsec\]. Finally, the summary and an outlook to future work can be found in Sect. \[summary\].
The ${\rm A}(e, e^\prime K^+){_\Lambda}{\rm A}$ cross section {#Axsec}
=============================================================
Let us consider the kaon electro-production process $$\begin{aligned}
\label{eek:A}
e(k) + {\rm A}(p_{\rm A}) \to e^\prime(k^\prime) + K^+(p_K) + {_\Lambda}{\rm A}(p_R) \ , \end{aligned}$$ in which an electron scatters off a nucleus of mass number ${\rm A}$, and the hadronic final state $$\begin{aligned}
\label{def:F}
| F \rangle = | K^+ {_\Lambda}{{\rm A}} \rangle \ , \end{aligned}$$ comprises a $K^+$ meson and the recoiling hypernucleus, resulting from the replacement of a proton with a $\Lambda$ in the target nucleus. The incoming and scattered electrons have four-momenta $k \equiv (E,{\bf k})$ and $k^\prime \equiv(E^\prime,{\bf k}^\prime)$, respectively, while the corresponding quantities associated with the kaon and the recoiling hypernucleus are denoted $p_K \equiv (E_K,{\bf p}_K)$ and $p_R \equiv(E_R,{\bf p}_R)$. Finally, in the lab reference frame, in which the lepton kinematical variables are measured, $p_A \equiv(M_A,0)$.
The differential cross section of the reaction of Eq. \[eek:A\] can be written in the form $$\begin{aligned}
\label{A:xsec}
d \sigma_A \propto L_{\mu\nu} W^{\mu\nu} \ \delta^{(4)}( p_0 + q - p_F) \ , \end{aligned}$$ with $\mu, \nu = 1,2,3$, where $q = k - k^\prime$ and $p_F~=~p_K + p_R$ are the four-momentum transfer and the total four-momentum carried by the hadronic final state, respectively. The tensor $L_{\mu\nu}$, fully specified by the electron kinematical variables, can be written in the form [@AFF]
$$\begin{aligned}
L = \left(
\begin{array}{ccc}
\eta_+ & 0 & -\sqrt{\epsilon_L \eta_+} \\
0 & \eta_- & 0 \\
-\sqrt{\epsilon_L \eta_+} & 0 & \epsilon_L \\
\end{array}
\right) \ ,\end{aligned}$$
with $\eta_\pm = \left( 1 \pm \epsilon \right)/2$ and $$\begin{aligned}
\epsilon = \left( 1 + 2 \frac{|{\bf q}|^2}{Q^2}\ \tan^2 \frac{\theta_e}{2} \right)^{-1} \ \ ,\end{aligned}$$ where $\theta_e$ is the electron scattering angle, $q \equiv ( \omega, {\bf q} )$, $Q^2 = - q^2$, and $\epsilon_L = \epsilon Q^2 / \omega^2$.
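For orientation, $\epsilon$ and $\epsilon_L$ are fixed entirely by the measured electron kinematics. The short Python sketch below evaluates them from the beam energy, the scattered-electron energy and the scattering angle, neglecting the electron mass; the numerical values used in the example are purely illustrative and are not taken from any specific measurement.

```python
import numpy as np

def photon_polarization(E, E_prime, theta_e):
    """Return (epsilon, epsilon_L) for given electron kinematics.

    E, E_prime : incoming and scattered electron energies (GeV)
    theta_e    : electron scattering angle (rad)
    The electron mass is neglected, so Q^2 = 4 E E' sin^2(theta_e/2).
    """
    omega = E - E_prime                                   # energy transfer
    Q2 = 4.0 * E * E_prime * np.sin(theta_e / 2.0) ** 2   # Q^2 = -q^2
    q3sq = Q2 + omega ** 2                                # |q|^2
    eps = 1.0 / (1.0 + 2.0 * (q3sq / Q2) * np.tan(theta_e / 2.0) ** 2)
    return eps, eps * Q2 / omega ** 2

# illustrative kinematics only, not tied to any experiment
print(photon_polarization(E=4.0, E_prime=1.8, theta_e=np.radians(12.0)))
```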
All the information on hadronic, nuclear and hypernuclear dynamics is contained in the nuclear response tensor, defined as $$\begin{aligned}
\label{A:tensor}
W^{\mu\nu} = \langle 0 | {J_{\rm A}^\mu}^\dagger(q) | F \rangle \langle F | J_{\rm A}^\nu(q) | 0 \rangle \ ,\end{aligned}$$ where $|0 \rangle$ denotes the target ground state and the final state $|F\rangle$ is given by Eq. \[def:F\].
Equation \[A:tensor\] shows that the theoretical calculation of the cross section requires a consistent description of the nuclear and hypernuclear wave functions, as well as of the nuclear current operator appearing in the transition matrix element, $ J_{\rm A}^\mu$. This problem, which in general involves non-trivial difficulties, greatly simplifies in the kinematical region in which the impulse approximation can be exploited.
Impulse Approximation and Factorisation {#IA}
---------------------------------------
Figure \[graph\] provides a diagrammatic representation of the $(e,e^\prime K^+)$ process based on the factorisation [*ansatz*]{}. This scheme is expected to be applicable in the impulse approximation regime, corresponding to momentum transfers such that the wavelength of the virtual photon, $\lambda~\sim~1/|{\bf q}|$, is short compared to the average distance between nucleons in the target nucleus, $d_{NN}~\sim~1.5 \ {\rm fm}$.
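A rough estimate of the corresponding momentum-transfer scale is obtained by requiring $\lambda \lesssim d_{NN}$, that is $$\begin{aligned}
|{\bf q}| \gtrsim \frac{\hbar c}{d_{NN}} \approx \frac{197 \ {\rm MeV \ fm}}{1.5 \ {\rm fm}} \approx 130 \ {\rm MeV}/c \ , \end{aligned}$$ a condition comfortably fulfilled in the kinematics considered below.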
![Schematic representation of the scattering amplitude associated with the process of Eq. \[eek:A\] in the impulse approximation regime.[]{data-label="graph"}](Fig1.pdf)
Under these conditions, which can be easily met at Jefferson Lab, hereafter JLab, the beam particles primarily interact with individual protons, the remaining ${\rm A}-1$ nucleons acting as spectators. As a consequence, the nuclear current operator reduces to the sum of one-body operators describing the electron-proton interaction $$\begin{aligned}
J_{\rm A}^\mu(q) = \sum_{i=1}^A j^\mu_i(q)\ , \end{aligned}$$ and the hadronic final state takes the product form $$\begin{aligned}
| F \rangle = | K^+ \rangle \otimes \vert {_\Lambda}{{\rm A}} \rangle \ , \end{aligned}$$ with the outgoing $K^+$ being described by a plane wave, or by a distorted wave obtained from a kaon-nucleus optical potential [@E94-107].
From the above equations, it follows that the nuclear transition amplitude $$\begin{aligned}
{\mathcal M}^\mu = \langle K^+ {_\Lambda}{{\rm A}} | J_{\rm A}^\mu(q) | 0 \rangle \ , \end{aligned}$$ can be written in factorised form through insertion of the completeness relations $$\begin{aligned}
\int \frac{d^3p}{(2\pi)^3} | {\bf p} \rangle \langle {\bf p} | = \int \frac{d^3p_\Lambda}{(2\pi)^3} | {\bf p}_\Lambda \rangle \langle {\bf p}_\Lambda |
= \openone \ ,\end{aligned}$$ where the integrations over the momenta carried by the proton and the $\Lambda$ also include spin summations, and $$\begin{aligned}
\sum_n | ({\rm A}-1)_n \rangle \langle ({\rm A}-1)_n | = \openone , \end{aligned}$$ the sum being extended to all eigenstates of the $({\rm A}-1)$-nucleon spectator system.
The resulting expression turns out to be
$$\begin{aligned}
\label{factorisation}
\mathcal{M}^\mu & = \langle K^+ {_\Lambda}{\rm A} | J_{\rm A}^\mu | 0 \rangle
= \sum_{i=1}^{\rm A} \sum_n \int \frac{d^3p}{(2\pi)^3} \frac{d^3p_\Lambda}{(2 \pi)^3} \
{\mathcal M}^\star_{ _\Lambda {\rm A} \to ({\rm A}-1)_n + \Lambda} \ \langle {\bf p}_K {\bf p}_\Lambda | j_i^\mu | {\bf p} \rangle
\ {\mathcal M}_{{\rm A} \to ({\rm A}-1)_n + p} \ \ , \end{aligned}$$
where the current matrix element describes the elementary electromagnetic process $\gamma^* + p \to K^+ + \Lambda$.
The nuclear and hypernuclear amplitudes on the right-hand side of Eq. \[factorisation\], labelled ${\mathcal M}_N$ and ${\mathcal M}_\Lambda$ in Fig. \[graph\], are given by $$\begin{aligned}
\label{ampl:N}
{\mathcal M}_{{\rm A} \to ({\rm A}-1)_n + p} = \{ \langle {\bf p} | \otimes \langle ({\rm A}-1)_n | \} | 0 \rangle \ , \end{aligned}$$ and $$\begin{aligned}
\label{ampl:Y}
{\mathcal M}_{_\Lambda {\rm A} \to ({\rm A}-1)_n + \Lambda} = \{ \langle {\bf p}_\Lambda | \otimes \langle ({\rm A}-1)_n | \} | _\Lambda{{\rm A}} \rangle \ .\end{aligned}$$ In the above equations, the states $ \vert ({\rm A}-1)_n \rangle$ and $ \vert _\Lambda{\rm A} \rangle$ describe the $({\rm A}-1)$-nucleon spectator system, appearing as an intermediate state, and the final-state $\Lambda$-hypernucleus, respectively.
The amplitudes of Eq. \[ampl:N\] determine the Green’s function describing the propagation of a proton in the target nucleus, $G({\bf k},E)$, and the associated spectral function, defined as $$\begin{aligned}
\label{SF:N}
P({\bf k},E) & = - \frac{1}{\pi} {\rm Im} \ G({\bf k},E) \\
\nonumber
& = \sum_n \vert {\mathcal M}_{{\rm A }\to ({\rm A}-1)_n + p} \vert^2
\ \delta(E + M_A-m-E_n) \ , \end{aligned}$$ where $m$ is the nucleon mass and $E_n$ denotes the energy of the $({\rm A}-1)$-nucleon system in the state $n$. The spectral function describes the [*joint*]{} probability to remove a nucleon of momentum ${\bf k}$ from the nuclear ground state leaving the residual system with excitation energy $E>0$.
Within the mean-field approximation underlying the nuclear shell model, Eq. \[SF:N\] reduces to the simple form $$\begin{aligned}
\label{SF:N:MF}
P({\bf k},E) = \sum_{\alpha \in \{F\}} |\varphi_\alpha({\bf k})|^2 \delta(E - |\epsilon_\alpha|) \ ,\end{aligned}$$ where $\alpha \equiv \{ nj\ell \}$ is the set of quantum numbers specifying the single-nucleon orbits. The sum is extended to all states belonging to the Fermi sea, the momentum-space wave functions and energies of which are denoted $\varphi_\alpha({\bf k})$ and $\epsilon_\alpha$, respectively, with $\epsilon_\alpha<0$.
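To make the structure of the mean-field spectral function explicit, the following Python sketch assembles $P({\bf k},E)$ from a few single-particle orbits, replacing the $\delta$-functions by narrow Lorentzians for plotting purposes; the wave functions and energies used here are placeholders rather than results for any specific nucleus.

```python
import numpy as np

def mean_field_spectral_function(E_grid, k, orbits, width=0.5):
    """Sum over occupied orbits of |phi_alpha(k)|^2 delta(E - |e_alpha|),
    with the delta-functions smeared into Lorentzians of the given width (MeV)."""
    P = np.zeros_like(E_grid)
    for phi, e_alpha in orbits:
        lorentzian = (width / np.pi) / ((E_grid - abs(e_alpha)) ** 2 + width ** 2)
        P += abs(phi(k)) ** 2 * lorentzian
    return P

# placeholder orbits: Gaussian momentum-space wave functions, invented energies (MeV)
orbits = [(lambda k, b=b: np.exp(-(b * k) ** 2 / 2.0), e)
          for b, e in [(0.8, -35.0), (1.0, -20.0), (1.2, -8.0)]]
E_grid = np.linspace(0.0, 50.0, 500)
P = mean_field_spectral_function(E_grid, k=1.0, orbits=orbits)
```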
Equation \[SF:N:MF\] shows that within the independent particle model the spectral function reduces to a set of $\delta$-function peaks, representing the energy spectrum of single-nucleon states. Dynamical effects beyond the mean field shift the position of the peaks, which also acquire a finite width. In addition, the occurrence of virtual scattering processes leading to the excitation of nucleon pairs to states above the Fermi surface leads to the appearance of a sizeable continuum contribution to the Green’s function, accounting for $\sim 20 \%$ of the total strength. As a consequence, the normalisation of a shell model state $\varphi_\alpha$, referred to as the spectroscopic factor, is reduced from unity to a value $Z_\alpha <1$.
The nuclear spectral functions have been extensively studied by measuring the cross section of the $(e,e^\prime p)$ reaction, in which the scattered electron and the knocked-out nucleon are detected in coincidence. The results of these experiments, carried out using a variety of nuclear targets, have unambiguously identified the states predicted by the shell model, highlighting at the same time the limitations of the mean-field approximation and the effects of nucleon-nucleon correlations [@FruMug; @Benhar:NPN].
In analogy with Eqs. \[ampl:N\] and \[SF:N\], the amplitudes of Eq. \[ampl:Y\] determine the spectral function $$\begin{aligned}
\label{SF:L}
P_\Lambda({\bf k}_\Lambda,E_\Lambda) & = \sum_n \vert {\mathcal M}_{_\Lambda A \to (A-1)_n + \Lambda} \vert^2 \\
\nonumber
& \times \delta(E_\Lambda+ M_{_\Lambda {\rm A}} - M_\Lambda - E_n) \ ,\end{aligned}$$ describing the joint probability to remove a $\Lambda$ from the hypernucleus $_\Lambda{\rm A}$ leaving the residual system with energy $E_\Lambda$. Here $M_\Lambda$ and $M_{_\Lambda {\rm A}}$ denote the mass of the $\Lambda$ and the hypernucleus, respectively.
The observed $(e,e^\prime K^+)$ cross section, plotted as a function of the missing energy $$\begin{aligned}
\label{def:emiss}
E^\Lambda_{\rm miss} = \omega - E_{K^+} \ , \end{aligned}$$ exhibits a collection of peaks, providing the sought-after information on the energy spectrum of the $\Lambda$ in the final state hypernucleus[^1].
Note that both the electron energy loss, $\omega$, and the energy of the outgoing kaon, $E_{K^+}$, are [*measured*]{} kinematical quantities.
Kinematics {#Kin}
----------
The expression of $E^\Lambda_{\rm miss}$, Eq. \[def:emiss\], can be conveniently rewritten considering that the $\delta$-function of Eq. \[A:xsec\] implies the condition $$\begin{aligned}
\label{full:encons}
\omega + M_A = E_{K^+} + E_{_\Lambda{\rm A}} \ .\end{aligned}$$ Combining the above relation with the requirement of conservation of energy at the nuclear and hypernuclear vertices, dictating that $$\begin{aligned}
\label{cons:ampl}
M_A = E_p + E_n \ \ \ , \ \ \ E_\Lambda + E_n = E_{_\Lambda{\rm A}} \ , \end{aligned}$$ we find $$\begin{aligned}
\label{cons:vert}
\omega + E_p = E_{K^+} + E_\Lambda \ .\end{aligned}$$ Finally, substitution into Eq. \[def:emiss\] yields $$\begin{aligned}
\label{lambda:emiss}
E^\Lambda_{\rm miss} = E_\Lambda - E_p \ .\end{aligned}$$
The above equation, while providing a relation between the [*measured*]{} missing energy and the binding energy of the $\Lambda$ in the final state hypernucleus, defined as $B_\Lambda~=~-E_\Lambda$, [*does not*]{} allow for a model-independent identification of $E_\Lambda$. The position of a peak observed in the missing energy spectrum turns out to be determined by the difference between the energies involved in the removal of a $\Lambda$ from the final state hypernucleus, $E_\Lambda$, and of a proton from the target nucleus, $E_p$, leaving the residual $(A-1)$-nucleon system in the same bound state, specified by the quantum numbers collectively denoted $n$.
The proton removal energies, however, can be independently obtained from the missing energy [*measured*]{} in proton knockout experiments, in which the scattered electron and the ejected proton are detected in coincidence, defined as $$\begin{aligned}
\label{def:emissp}
E^p_{\rm miss} = \omega - E_{p^\prime} = - E_p \ , \end{aligned}$$ where $E_{p^\prime}$ is the energy of the outgoing proton. Note that, consistently with Eq. \[def:emiss\], on the right-hand side of the above equation the kinetic energy of the recoiling nucleus has been omitted.
From Eqs. \[lambda:emiss\] and \[def:emissp\] it follows that the $\Lambda$ binding energy can be determined in a fully model-independent fashion from $$\begin{aligned}
B_\Lambda = - E_\Lambda = - ( E^\Lambda_{\rm miss} - E^p_{\rm miss} ) \ ,\end{aligned}$$ combining the information provided by the missing energy spectra measured in $(e,e^\prime K^+)$ and $(e,e^\prime p)$ experiments.
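Operationally, the combination of the two missing energy spectra amounts to a peak-by-peak subtraction. A minimal Python sketch, with invented numbers standing in for actual spectra:

```python
def lambda_binding_energy(E_miss_lambda, E_miss_p):
    """B_Lambda = -(E_miss^Lambda - E_miss^p), the two peaks referring to the
    same state n of the (A-1)-nucleon spectator system (energies in MeV)."""
    return -(E_miss_lambda - E_miss_p)

# illustrative values only: E_miss^Lambda = -18 MeV, E_miss^p = 8 MeV
print(lambda_binding_energy(-18.0, 8.0))   # -> 26.0 MeV
```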
The $\isotope[208][]{\rm Pb}(e, e^\prime K^+)\isotope[208][\Lambda]{\rm Tl}$ Cross Section {#Pbxsec}
==========================================================================================
In view of astrophysical applications, it will be of utmost importance to extend the ongoing experimental studies of kaon electroproduction to include heavy nuclear targets with large neutron excess, such as $\isotope[208][]{Pb}$, that provide the best available proxy for neutron star matter. In this section, I will briefly discuss the main elements needed to carry out the calculation of the $\isotope[208][]{\rm Pb}(e, e^\prime K^+)\isotope[208][\Lambda]{\rm Tl}$ cross section within the factorisation scheme illustrated in Section \[IA\].
The $e+p \to e^\prime + K^+ + \Lambda$ process
----------------------------------------------
The description of the elementary $e+p \to e^\prime + K^+ + \Lambda$ process involving an isolated proton at rest has been obtained from the isobar model [@Adam; @isobar_model], in which the hadron current is derived from an effective Lagrangian comprising baryon and meson fields. Different implementations of this model are characterised by the intermediate states appearing in processes featuring the excitation of resonances [@Sotona; @petr1; @petr2]. The resulting expressions, involving a set of free parameters determined by fitting the available experimental data, have been employed to obtain nuclear cross sections within the approach based on the nuclear shell model and the frozen-nucleon approximation [@Sotona; @E94-107].
In principle, the calculation of the nuclear cross section within the scheme outlined in Sect. \[IA\] should be performed taking into account that the elementary process involves a bound, moving nucleon, with four-momentum $p \equiv (E_p,{\bf p})$ and energy $$\begin{aligned}
\label{offshell:momentum}
E_p = m - E \ , \end{aligned}$$ as prescribed by Eq. \[SF:N\]. However, the generalisation to off-shell kinematics of phenomenological approaches constrained by free proton data, such as the isobar model of Refs. [@Sotona; @petr1; @petr2], entails non-trivial difficulties.
A simple procedure to overcome this problem is based on the observation that in the scattering process on a bound nucleon, a fraction $\delta \omega$ of the energy transfer goes to the spectator system. The amount of energy given to the struck proton, the expression of which naturally emerges from the impulse approximation formalism, turns out to be [@benhar_RMP] $$\begin{aligned}
\label{omegatilde}
{\widetilde \omega} & = \omega - \delta \omega \\
\nonumber
& = \omega + m - E - \sqrt{ m^2 + {\bf p}^2 } \ .\end{aligned}$$ Note that from the above equations it follows that $$\begin{aligned}
\label{omegatilde2}
E_p + \omega = \sqrt{ m^2 + {\bf p}^2 } + {\widetilde \omega} \ , \end{aligned}$$ implying in turn $$\begin{aligned}
\label{omegatilde2}
(p + q )^2 = ( {\widetilde p} + {\widetilde q} )^2 = W^2\ , \end{aligned}$$ with ${\widetilde q} \equiv ( {\widetilde \omega} , {\bf q})$ and ${\widetilde p} \equiv ( \sqrt{ m^2 + {\bf p}^2 }, {\bf p})$.
The above equations show that the replacement $q \to {\widetilde q}$ makes it possible to establish a correspondence between scattering on an off-shell moving proton, leading to the appearance of a final state of invariant mass $W$, and the corresponding process involving a proton in free space.
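The kinematical replacement described above is easily implemented numerically. The Python sketch below evaluates ${\widetilde \omega}$ and the invariant mass squared $W^2$ for a bound nucleon of given removal energy and momentum; the kinematics chosen in the example, as well as the nucleon mass value, are meant to be illustrative only.

```python
import numpy as np

M_N = 0.9383  # nucleon mass in GeV (value assumed here for illustration)

def omega_tilde(omega, E_rem, p_vec):
    """omega_tilde = omega + m - E - sqrt(m^2 + |p|^2), all energies in GeV."""
    return omega + M_N - E_rem - np.sqrt(M_N ** 2 + np.dot(p_vec, p_vec))

def W2(omega, q_vec, E_rem, p_vec):
    """Invariant mass squared W^2 = (p + q)^2 with p = (m - E, p_vec)."""
    E_tot = (M_N - E_rem) + omega
    p_tot = np.asarray(p_vec) + np.asarray(q_vec)
    return E_tot ** 2 - np.dot(p_tot, p_tot)

# illustrative kinematics (GeV, GeV/c), not tied to any measurement
omega, q_vec = 1.2, np.array([0.0, 0.0, 1.5])
E_rem, p_vec = 0.02, np.array([0.1, 0.0, -0.05])
print(omega_tilde(omega, E_rem, p_vec), W2(omega, q_vec, E_rem, p_vec))
```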
It has to be mentioned that, although quite reasonable on physics grounds, the use of ${\widetilde q}$ in the hadron current leads to a violation of current conservation. This problem is inherent in the impulse approximation scheme, which does not allow energy and current to be conserved simultaneously in correlated systems. A very popular and effective workaround for this issue, widely employed in the analysis of $(e,e^\prime p)$ data, was first proposed by de Forest in the 1980s [@forest].
In view of the fact that the extension of the work of Refs. [@petr1; @petr2] to the case of a moving proton does not involve severe conceptual difficulties, the consistent application of the formalism developed for proton knockout processes to the case of kaon electroproduction appears to be feasible. In this context, it should also be pointed out that the factorisation scheme discussed in Sect. \[Axsec\] allows for a fully relativistic treatment of the electron-proton vertex, which is definitely required in the kinematical region accessible at JLab [@benhar_RMP].
Nuclear and Hypernuclear Dynamics
---------------------------------
Valuable information needed to obtain $\Lambda$ removal energies from the $\isotope[208][]{Pb}(e, e^\prime K^+)\isotope[208][\Lambda]{Tl}$ cross section, using the procedure described in Sect. \[Kin\], has been gained by the high-resolution studies of the $\isotope[208][]{Pb}(e,e^\prime p)\isotope[207][]{Tl}$ reaction performed at NIKHEF-K in the late 1980s and 1990s [@Quint1; @Quint2; @Irene1; @Irene2]. The available missing energy spectra, measured with a resolution better than 100 keV and extending up to $\sim 30$ MeV, provide both the position and the width of the peaks corresponding to the bound states of the recoiling $\isotope[207][]{Tl}$ nucleus.
It is very important to realise that a meaningful interpretation of NIKHEF-K data requires the use of a theoretical framework taking into account effects of nuclear dynamics beyond the mean-field approximation. This issue is clearly illustrated in Figs. \[deviations\] and \[spectrum\].
Figure \[deviations\] displays the difference between the energies corresponding to the peaks in the measured missing energy spectrum, $\langle E^p_\alpha \rangle$, and the predictions of the mean-field model reported in Ref. [@meanfield], $E_\alpha^{HF}$. It is apparent that the discrepancy, measured by the quantity $$\begin{aligned}
\label{def:delta}
\Delta_\alpha = | E_\alpha^{HF} - \langle E^p_\alpha \rangle | \ , \end{aligned}$$ where the index $\alpha \equiv \{ nj\ell \}$ specifies the state of the recoiling system, is sizeable, and as large as $\sim 3$ MeV for deeply bound states.
![Difference between the energies corresponding to the peaks of the missing energy spectrum of the $\isotope[208][]{Pb}(e,e^\prime p)\isotope[207][]{Tl}$ reaction reported in Ref. [@Quint1] and the results of the mean-field calculations of Ref. [@meanfield], displayed as a function of the proton binding energy $E_p = -E_{\rm miss}$. The states are labeled according to the standard spectroscopic notation[]{data-label="deviations"}](Fig2.pdf)
In Fig. \[spectrum\], the spectroscopic factors extracted from NIKHEF-K data are compared to the results of the theoretical analysis of Ref. [@BFF0]. The solid line, exhibiting a remarkable agreement with the experiment, has been obtained combining theoretical nuclear matter results, displayed by the dashed line, and a phenomenological correction to the nucleon self-energy, accounting for finite size and shell effects. The energy dependence of the spectroscopic factors of nuclear matter at equilibrium density has been derived from a calculation of the pole contribution to the spectral function of Eq. \[SF:N\], carried out using Correlated Basis Function (CBF) perturbation theory and a microscopic nuclear Hamiltonian including two- and three-nucleon potentials [@GF1].
The results of Fig. \[spectrum\] show that the spectroscopic factors of the deeply bound proton states of $\isotope[208][]{\rm Pb}$ are largely unaffected by surface and shell effects, and can be accurately estimated using the results of nuclear matter calculations. Finite size effects, mainly driven by long-range nuclear dynamics, are more significant in the vicinity of the Fermi surface, where they account for up to $\sim$ 35% of the deviation from the mean-field prediction, represented by the solid horizontal line.
![Spectroscopic factors of the shell model states of $\isotope[208][]{\rm Pb}$, obtained from the analysis of the $\isotope[208][]{{\rm Pb}}(e,e^\prime p) \isotope[207][]{Tl}$ cross section measured at NIKHEF-K [@Quint1]. The dashed line represents the results of theoretical calculations of the spectroscopic factors of nuclear matter, while the solid line has been obtained including corrections taking into account finite size and shell effects in $\isotope[208][]{\rm Pb}$ [@BFF0]. For comparison, the horizontal line shows the prediction of the independent particle model. The deviations arising from short- and long-range correlations are highlighted and labelled SRC and LRC, respectively.[]{data-label="spectrum"}](Fig3.pdf)
In addition to the nucleon spectral function, the analysis of the $\isotope[208][]{\rm Pb}(e, e^\prime K^+)\isotope[208][\Lambda]{\rm Tl}$ cross section requires a consistent description of the $\Lambda$ spectral function, defined by Eq. \[SF:L\]. Following the pioneering nuclear matter study of Ref. [@wim], microscopic calculations of $P_\Lambda({\bf k}_\Lambda,E_\Lambda)$ in a variety of hypernuclei, ranging from $\isotope[5][\Lambda]{\rm He}$ to $\isotope[208][\Lambda]{\rm Pb}$, have been recently carried out by the author of Ref. [@Isaac]. In this work, the self-energy of the $\Lambda$ was obtained from $G$-matrix perturbation theory in the Brueckner-Hartree-Fock approximation, using the Jülich [@julich1; @julich2] and Nijmegen [@nijmegen1; @nijmegen2; @nijmegen3] models of the YN potential.
The generalisation of the approach of Ref. [@Isaac], needed to treat $\isotope[207][]{\rm Tl}$ using Hamiltonians including both YN and YNN potentials, does not appear to involve severe difficulties, of either conceptual or technical nature. Therefore, a consistent description of the $\isotope[208][]{\rm Pb}(e, e^\prime K^+)\isotope[208][\Lambda]{\rm Tl}$ process within the factorisation scheme described in the previous section is expected to be achievable within the time frame relevant to the JLab experimental program.
Summary and outlook {#summary}
===================
The results discussed in this article suggest that precious new information on hypernuclear dynamics can be obtained from a largely model-independent analysis of the measured $\isotope[208][]{{\rm Pb}}(e,e^\prime K^+) \isotope[208][\Lambda]{Tl}$ cross section, and that a consistent theoretical framework, making it possible to exploit the data to constrain YN and YNN potential models, can be developed within the well-established approach based on nuclear many-body theory and the Green’s function formalism.
More recent computational approaches, mostly based on the Monte Carlo method [@Carlson:2014vla], have been very successful in obtaining ground-state expectation values of Hamiltonians involving nucleons and hyperons, needed to model the equation of state of strange baryon matter, see, e.g. Ref. [@puzzle]. However, the present development of these techniques does not allow the calculation of either $(e,e^\prime p)$ or $(e,e^\prime K^+)$ cross sections, most notably in the kinematical regime in which the underlying non-relativistic approximation is no longer applicable. On the other hand, the approach based on factorisation, allowing for a fully relativistic treatment of the electron-proton interaction, has proved very effective for the interpretation of the available $(e,e^\prime p)$ data.
Owing to the extended region of constant density, $\isotope[208][]{{\rm Pb}}$ is the best available proxy for uniform nuclear matter. This feature, which also emerges from the results displayed in Fig. \[spectrum\], will be critical to acquire new information on three-body forces, complementary to that obtainable using a Calcium target.
The results of accurate many-body calculations of the ground-state energies of finite nuclei [@CVMC] and isospin-symmetric nuclear matter [@APR], performed with the [*same*]{} nuclear Hamiltonian, including the Argonne $v_{18}$ [@AV18] and Urbana IX [@UIX] NN and NNN interaction models, respectively, show that the potential energy per nucleon arising from three-nucleon interactions is a monotonically increasing function of A whose value changes sign, varying from -0.23 MeV in $\isotope[40][]{{\rm Ca}}$ to 2.78 MeV in nuclear matter at equilibrium density. In view of astrophysical applications, constraining three-body forces in the mass region in which they change from attractive to repulsive in the non-strange sector appears to be needed.
The solution of the “hyperon puzzle” is likely to require a great deal of theoretical and experimental work for many years to come. The results discussed in this article strongly suggest that the extension of the JLab kaon electroproduction program to $\isotope[208][]{{\rm Pb}}$ will make it possible to collect data useful to broaden the present understanding of hypernuclear dynamics in nuclear matter.
This work was supported by the Italian National Institute for Nuclear Research (INFN) under grant TEONGRAV. The author is deeply indebted to Petr Byd[ž]{}ovsk[ý]{}, Franco Garibaldi and Isaac Vidaña for many illuminating discussions on issues related to the subject of this article.
[^1]: In principle, the right-hand side of Eq. \[def:emiss\] should also include a term accounting for the kinetic energy of the recoiling hypernucleus. However, for heavy targets this contribution turns out to be negligibly small, and will be omitted.
---
abstract: |
Spin dipole (SD) strengths for double beta-decay (DBD) nuclei were studied experimentally for the first time by using measured cross sections of ($^3$He,$t$) charge exchange reactions (CERs). Then SD nuclear matrix elements (NMEs) $M_{\alpha}(SD)$ for low-lying 2$^-$ states were derived from the experimental SD strengths by referring to the experimental $\alpha$=GT (Gamow-Teller) and $\alpha$=F (Fermi) strengths. They are consistent with the empirical NMEs $M(SD)$ based on the quasi-particle model with the empirical effective SD coupling constant. The CERs are used to evaluate the SD NME, which is associated with one of the major components of the neutrino-less DBD NME.\
Key words: Charge exchange reaction, spin dipole strength, double beta decay,\
nuclear matrix element, quenching of axial vector transitions.
address: |
1. Research Center for Nuclear Physics, Osaka University, Osaka 567-0047, Japan\
2. Institute for Nuclear Physics, University of Muenster, Muenster, D-48149, Germany
author:
- 'H. Ejiri$^1$ and D. Frekers$^2$'
title: Spin dipole nuclear matrix elements for double beta decay nuclei by charge exchange reactions
---
Neutrino-less double beta decay (0$\nu \beta \beta $) is a unique probe for studying the Majorana nature of neutrinos ($\nu$), the absolute $\nu $-mass scales, the lepton sector CP phases and the fundamental weak interactions, which are beyond the standard weak model (SM). Nuclear matrix elements (NMEs) $M^{0\nu}$ for 0$\nu \beta \beta $ are crucial to extract the neutrino properties from double beta decay (DBD) experiments and even to design DBD detectors. DBDs within the SM are 2-neutrino double beta decays (2$\nu \beta \beta $) and the NMEs $M^{2\nu}$ have been derived from experimentally measured 2$\nu \beta \beta $ rates. DBD theories and experiments have been discussed in reviews [@eji05; @avi08; @ver12; @suh98; @eji00] and references therein.
The objective of the present letter is to show that ($^3$He,$t$) charge exchange reactions (CERs) at non-zero angles with momentum transfer $q \approx$ 30 - 100 MeV/c can be used to study spin dipole (SD) NMEs for low-lying $J^{\pi}=2^-$ intermediate states associated with the major component of 0$\nu \beta \beta $ NMEs. Actually, accurate theoretical calculations of $M^{0\nu}$ and $M^{2\nu }$ are hard since these NMEs are very small and are sensitive to nucleonic and non-nucleonic correlations, nuclear models and nuclear structures [@eji05; @ver12; @suh98]. Accordingly, experimental studies of $M^{0\nu}$ and $M^{2\nu }$ are of great interest to help evaluate and/or confirm theoretical calculations of the NMEs. CERs are used to provide single $\beta $ NMEs associated with DBD NMEs, as discussed in reviews [@eji05; @ver12; @eji00; @zeg07].
One of the 0$\nu \beta \beta $ processes of current interest is the light Majorana-$\nu$ mass process, where a light Majorana $\nu$ is exchanged between two nucleons 1 and 2 in the DBD nucleus. The axial-vector NME $M_A$ is the main component of the DBD NME. We consider the 0$\nu \beta ^-\beta ^-$ DBD from the initial nucleus A to the final nucleus C. $M_A$ is written as the sum of the NMEs via the intermediate nuclear states B as [@eji05; @ver12]
$$M_A=\sum_B <|\tau_1\sigma_1 h^+(r_BE_B)\tau_2\sigma_2|>,$$
where $\tau_i$ and $\sigma_i$ are the isospin and spin operators for nucleons $i$=1, 2, and $h^+(r_BE_B)$ is the neutrino potential with $r_B=r_{1,2}$ being the two-nucleon distance and $E_B$ being the intermediate energy. Then the momentum involved is of the order of 1/$r_B \approx $ 40-100 MeV/c, and the corresponding orbital angular momentum is $l \hbar \approx 1\hbar-3\hbar$. Hence the intermediate states are mainly $J^{\pi}=2^{\pm}$, $3^{\pm}$ and 4$^{\pm}$. Among them, spin dipole (SD) states with $J^{\pi}$=2$^-$ play a major role [@suh98; @suh12; @hyv15]. On the other hand, the $2\nu \beta \beta $ process within the SM involves low-energy s-wave neutrinos with $q \approx $ a few MeV/c, and the intermediate states are mainly Gamow-Teller (GT) states with $J^{\pi}=1^+$.
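For orientation, the quoted range of angular momenta follows from the simple estimate $l \sim |{\bf q}| R/\hbar$, where $R \approx 5$ fm is taken as an indicative radius for the medium-heavy DBD nuclei considered below (this value of $R$ is an assumption made only for the purpose of the estimate): $$l \sim \frac{(40\mbox{-}100 \ {\rm MeV}/c)\times 5 \ {\rm fm}}{197 \ {\rm MeV \ fm}/c} \approx 1\mbox{-}3 \ .$$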
In fact, the 0$\nu\beta \beta $ NME is a two-body $\beta ^\pm$ NME, as given in eq. (1), while the CER NME is a one-body single $\beta ^-$ NME. Then the CER of A $\rightarrow $ B provides experimentally the single $\beta ^- $ A $\rightarrow $ B SD NME with the effective axial-vector SD coupling $g_A^{eff}$ [@eji78; @eji14; @eji15]. Thus the CER SD strength is indirectly associated with the single-$\beta ^-$ component of the 0$\nu \beta \beta $ A$\rightarrow$C NME via the SD intermediate state B, while the CER GT NME is directly linked to the single $\beta ^-$ component of the 2$\nu \beta \beta $ NME [@eji05; @ver12].
Experimental studies of DBD NMEs by using pion CERs have been discussed [@faz86; @mod88; @aue89]. Neutrino and muon CERs [@eji03; @eji06; @ego06], and photonuclear reactions [@eji13a], give useful information on DBD NMEs. Light ion CERs have been extensively used for studying 2$\nu \beta \beta$ DBD NMEs [@ver12; @eji00]. Heavy ion double-CERs are of potential interest for DBD NME studies [@cap15; @ver16]. Transfer reactions provide nuclear structures of DBD nuclei [@sch08].
So far we have studied high energy-resolution ($^3$He,$t$) CERs on DBD nuclei to obtain GT strengths $B(GT)$ from the cross sections at forward ($\theta \approx$ 0 deg.) angles with $q\approx $ 0 MeV/c, as given in the previous works [@aki97; @gue11; @pup11; @pup12; @thi12; @thi12a; @thi12b; @fre13; @fre16]. There, low-lying SD states are also populated in all DBD nuclei, but in those works we concentrated on the GT strengths for low-lying states. GT NMEs for low-lying states are used to evaluate the NMEs $M^{2\nu}$ [@eji09; @eji12].
For the present SD studies, we use CER cross sections at finite angles around $\theta $ =2 deg. (i.e. $q\approx $ 55 MeV/c) to extract the SD strengths $B(SD)$ for low-lying $J^{\pi}$=2$^-$ intermediate states and the SD NMEs associated with the 0$\nu \beta \beta$ NMEs. The 2$^-$ state is preferentially excited by the SD interaction operator of $T(SD)=\tau ^-[\sigma \times rY_1]_2$ with $\tau ^-$ and $\sigma$ being the isospin lowering and spin operators in the medium energy CER [@eji00; @eji13].
The differential cross section of the CER induced by a medium energy projectile is expressed, on the basis of the simple direct CER with the $\sigma \tau$ central interaction, as [@eji00] $$\frac{d\sigma_{\alpha}(q,\omega)}{d\Omega}=K(E_i,\omega) f_{\alpha}(q)N^D_{\alpha}(q,\omega) |J_{\alpha}|^2 B(\alpha),$$ where $\alpha$ denotes the Fermi (F), GT and SD mode excitations, $q$ and $\omega $ are the momentum and energy transfers, $K(E_i, \omega)$ is the kinematic factor, $N^D_{\alpha}$ is the distortion factor, $J_{\alpha}$ is the volume integral of the $\alpha$ mode interaction, and $f_{\alpha}(q)$ stands for the momentum distribution. The $q$ dependences for the GT and SD excitations caused by GT ($l$=0) and SD ($l$=1) interactions are given, respectively, by the spherical Bessel functions $f_{GT}(q)$=$|j_0(qR)|^2$ and $f_{SD}(q)$=$|j_1(qR)|^2$ with $R$ being the effective interaction radius. Then the angular distributions for the GT and SD excitations show maxima at $q_0R\approx$ 0 and $q_1R\approx$ 2, respectively.
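The momentum profiles $f_{GT}(q)$ and $f_{SD}(q)$ and the position of the SD maximum are easily evaluated numerically. The following Python sketch (which assumes SciPy is available for the spherical Bessel functions) uses the effective interaction radius $R$=1.45 $A^{1/3}$ fm adopted later in the text:

```python
import numpy as np
from scipy.special import spherical_jn

HBARC = 197.327  # MeV fm

def momentum_profiles(A, q):
    """f_GT(q) = |j_0(qR)|^2 and f_SD(q) = |j_1(qR)|^2 with R = 1.45 A^(1/3) fm;
    q is in MeV/c."""
    R = 1.45 * A ** (1.0 / 3.0)
    x = q * R / HBARC
    return spherical_jn(0, x) ** 2, spherical_jn(1, x) ** 2

# locate the momentum transfer at which the l=1 (SD) profile peaks for A = 100
q = np.linspace(1.0, 200.0, 2000)
_, f_sd = momentum_profiles(100, q)
print(q[np.argmax(f_sd)])   # roughly 60 MeV/c, i.e. q R ~ 2
```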
The expression given in eq. (2) is appropriate for strongly excited GT states with $B(GT)\geq0.03$, where the central $\tau \sigma $ interaction is dominant. This equation is known as the proportionality relation between the GT cross section, corrected for the distortion effect, and the GT strength, and has been applied for extracting the GT strength $B(GT)$ from the cross section at $q\approx $0 (i.e. $\theta \approx$0 deg.), as given in the review article [@eji00] and references therein and in [@zeg07]. The proportionality coefficient of the interaction integral is obtained by comparing the measured cross section with the $B(GT)$ known from the $\beta $ decay rate.
In medium-heavy DBD nuclei, SD states are located near GT states in the same nucleus. Then the SD strength is written in terms of the $\alpha$=GT strength as [@eji13] $$B_{\alpha}(SD) = R_{\alpha} B_{R\alpha}(SD),$$ $$B_{R\alpha}(SD)=[\frac{d\sigma_{SD}(\theta_1)}{d\Omega}][\frac{d\sigma_{\alpha}(\theta_0)}{d\Omega}] ^{-1}B(\alpha),$$ where $d\sigma_{SD}(\theta_1)/d\Omega$ and $d\sigma_{\alpha}(\theta_0)/d\Omega$ are the maximum differential cross sections for the SD and $\alpha$=GT states in their angular distributions, respectively. $B_{R\alpha}(SD)$ is the SD strength relative to the $\alpha$=GT strength. The coefficient $R_{\alpha}$ with $\alpha$=GT is expressed as $$R_{\alpha}= \frac{f(q_0)N^D_{\alpha}(q_0,\omega_0) |J_{\alpha}|^2}{ f(q_1)N^D_{SD}(q_1,\omega_1) |J_{SD}|^2}.$$ Here the kinematic factors are nearly the same for the low-lying GT and SD states since the energy difference between the low-lying GT and SD states is much smaller than the incident projectile energy of $E\approx$0.42 GeV. The distortion factor and the volume integral of the interaction depend a little on the mass number, but the ratio may be considered to be nearly the same in the present mass region of A=70-140.
The SD NME $M_{\alpha}(SD)$ is expressed in the present case of 0$^+ \rightarrow 2^-$ transition as $$M_{\alpha}(SD) = B_{\alpha}(SD)^{1/2}=R_{\alpha}^{1/2}M_{R\alpha}(SD),
\label{eq:sdm}$$ where $M_{R\alpha}(SD)=B_{R\alpha}(SD)^{1/2}$ with $\alpha$=GT is the SD NME relative to the GT NME, and $M_{\alpha}(SD)$ with $\alpha$=GT is the SD NME to be derived from the SD CER cross section by referring to the GT CER cross section and the GT NME. We note that the relative SD strengths and relative SD NMEs are free from uncertainties of the absolute cross section, which are common to both SD and $\alpha$=(GT/F) states in the same target nucleus.
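As a numerical illustration of eqs. (3), (4) and (6), the $^{76}$Ge entries of Table 1 below give $$B_{RGT}(SD) = \frac{0.40}{1.07}\times 0.136 \approx 0.05, ~~~ M_{RGT}(SD)=B_{RGT}(SD)^{1/2} \approx 0.23,$$ consistent with the tabulated value of $B_{RGT}(SD)$=0.052; the absolute NME then follows once the coefficient $R_{GT}^{1/2}$ is fixed, as done below.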
Differential cross sections for the low-lying SD states show the angular distribution characteristic of the $l$=1 transfer, with the maximum at around $\theta_1\approx $2 deg. (i.e. $q_1\approx$ 50 - 60 MeV/c), while those for the GT states show the maximum at around $\theta _0=0$ deg. (i.e. $q \approx 0$), as given in the previous works. The momentum transfer $q_1$ at the maximum is consistent with the value for the $j_1(qr)$ distribution with the effective interaction radius $R$=1.45 $A^{1/3}$fm. The ratio of the $q$-dependent factors $f_0(q)$ and $f_1(q)$ for GT and SD states is the same for all nuclei.
DBD nuclei of current interest for realistic $\nu$-mass studies include $^{76}$Ge, $^{82}$Se, $^{96}$Zr, $^{100}$Mo, $^{128}$Te, $^{130}$Te, and $^{136}$Xe. Here we discuss the lowest quasi-particle (QP) SD state in each nucleus, which is strongly excited by the CER. The relative SD strengths $B_{RGT}(SD)$ as given in eq. (4) are obtained from CER cross sections for the SD and GT states and the observed GT strengths, as given in Table 1. Here the GT cross sections are the values extrapolated to $q$=0 from the values at $\theta$=0. The CER cross sections and the $B(GT)$ are those given in the previous works in Refs. 22-30. They are $0^+\rightarrow2^-$ QP transitions of $[(1g9/2)_n(1g9/2)_n]_0 \rightarrow [(1g9/2)_n(1f5/2)_p]_2$ for $A$ = 76 and 82, $[(2d5/2)_n(2d5/2)_n]_0 \rightarrow [(2d5/2)_n(2p1/2)_p]_2$ for $A$ = 96 and 100, and $[(1h11/2)_n(1h11/2)_n]_0 \rightarrow [(1h11/2)_n(1g7/2)_p]_2$ for $A$ = 128, 130, and 136.
Nucleus $E(1^+)$ $d\sigma(GT)/d\Omega$ $B(GT)$ $E(2^-)$ $d\sigma(SD)/d\Omega$ $B_{RGT}(SD)$ $M_{GT}(SD)$10$^{-2}$
------------ ---------- ----------------------- --------- ---------- ----------------------- --------------- ----------------------- --
$^{76}$Ge 1065 1.07 0.136 0 0.40 0.052 0.20
$^{82}$Se 75 2.5 0.338 543 0.30 0.041 0.17
$^{96}$Zr 694 0.95 0.162 511 0.105 0.018 0.12
$^{100}$Mo 0 2.25 0.345 223 0.135 0.021 0.13
$^{128}$Te 0 0.31 0.079 134 0.70 0.178 0.37
$^{130}$Te 43 0.28 0.072 354 0.95 0.250 0.43
$^{136}$Xe 590 0.71 0.149 1000 1.43 0.302 0.47
: CER cross sections and strengths for GT and SD states in DBD nuclei.\
$E$: excitation energy in keV. $d\sigma(GT)/d\Omega$ and $d\sigma(SD)/d\Omega$: differential cross sections in mb/sr. $B(GT)$: GT strength. $B_{RGT}(SD)$: SD strength relative to the GT strength. $M_{GT}(SD)$: SD NME in n.u. derived from $B_{RGT}(SD)$. \[tab:1\]
The relative SD strengths $B_{R\alpha}(SD)$ with $\alpha$=F are also obtained from the CER SD cross sections relative to the CER F cross sections extrapolated to $q$=0 for IAS (Isobaric Analogue State) and the F strength of $B(F)=N-Z$. The obtained relative SD strengths for DBD nuclei are given in Table 2.
The relative SD NMEs $M_{\alpha}(SD)$ with $\alpha$=GT and F, as derived from the relative SD CER strengths, are assumed to have a possible uncertainty of 15$\%$, which is due to the possible coherent tensor-interaction contribution with $\Delta l$=3 to the SD cross section at $\theta _1\approx $2 deg., and to the possible state dependence of the ratio of the SD to GT/F interaction integrals. The tensor contribution is minor in the present QP SD transition since the SD NME due to the major central SD interaction with $\Delta l$=1 is large. It is noted that the present SD strengths relative to the GT and F strengths depend on the relative distortion effects and the relative interaction strengths, but are free from the uncertainties of their absolute values.
Nucleus $E(0^+)$ $d\sigma(F)/d\Omega$ $B(F)$ $E(2^-)$ $d\sigma(SD)/d\Omega$ $B_{RF}(SD)$ $M_F(SD)$10$^{-2}$
------------ ---------- ---------------------- -------- ---------- ----------------------- -------------- -------------------- --
$^{76}$Ge 8308 15 12 0 0.40 0.32 0.16
$^{82}$Se 9576 14 14 543 0.30 0.30 0.16
$^{96}$Zr 11309 12 16 511 0.105 0.14 0.11
$^{100}$Mo 11085 13 16 223 0.135 0.17 0.11
$^{128}$Te 11948 11 24 134 0.70 1.5 0.34
$^{130}$Te 12718 11.5 26 354 0.95 2.2 0.41
$^{136}$Xe 13380 12.5 28 1000 1.43 3.2 0.49
: CER cross sections and strengths for the F (IAS) and SD states in DBD nuclei. $E$: excitation energy in keV. $d\sigma(F)/d\Omega$, $d\sigma(SD)/d\Omega$: differential cross sections in mb/sr. $B(F)$: F strength. $B_{RF}(SD)$: SD strength relative to the F strength. $M_{F}(SD)$: SD NME in n.u. derived from $B_{RF}(SD)$.\[tab:2\]
Now let us compare the SD strengths derived from the CER cross sections with empirical SD strengths based on the $\beta $ decay $f_1t$ values for the SD states
with the same QP configurations in the same mass regions. In fact, none of the SD strengths for the lowest 2$^-$ states in DBD nuclei are known from $\beta $ decays. This is because the EC($\beta ^+)$ branch from the 2$^-$ ground state in $^{76}$As to $^{76}$Ge is too small to be observed, and the lowest 2$^-$ states in all other nuclei are excited states which decay mainly by electro-magnetic transitions. Therefore, we evaluate the SD NMEs empirically by referring to the experimental NMEs in neighboring nuclei with known $f_1t$ values [@eji78; @eji14; @eji15].
First, we derive experimental SD strengths $B(SD)$ from the known $f_1t$ values as $$B(SD)=\frac{9D}{4\pi}(\frac{g_V}{g_A})^2 (f_1t)^{-1},$$
$$M(SD) = (2J_i+1)^{1/2}B(SD)^{1/2}, ~~~M(SD)=\langle[\sigma \times r Y_1]_2\rangle,$$
where $D$=6250 is the weak coupling constant and $g_V/g_A$=1/1.267 is the ratio of the vector to axial-vector coupling constants. The SD NMEs $M(SD)$ in the three DBD mass regions of $A$=72-88, $A$=94-106, and $A$=122-140 are obtained from the observed $f_1t$ values [@fir99]. The NMEs in natural units (n.u. = $\hbar /mc$=386 fm) for nuclei in the same DBD nuclear mass regions are given in the 2nd column of Table \[tab:3\].
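A direct transcription of eqs. (7) and (8) into Python is given below; the $f_1t$ value used in the example is a placeholder, not an entry of Ref. [@fir99].

```python
import math

D = 6250.0                 # weak coupling constant, as in eq. (7)
GV_OVER_GA = 1.0 / 1.267   # ratio of vector to axial-vector couplings

def B_SD(f1t):
    """B(SD) = (9 D / 4 pi) (g_V/g_A)^2 / (f_1 t)."""
    return (9.0 * D / (4.0 * math.pi)) * GV_OVER_GA ** 2 / f1t

def M_SD(f1t, J_i=0):
    """M(SD) = sqrt(2 J_i + 1) sqrt(B(SD)); units follow those of f_1 t.
    J_i is the spin of the initial state (0 for a 0+ parent)."""
    return math.sqrt((2 * J_i + 1) * B_SD(f1t))

print(M_SD(f1t=1.0e8))     # placeholder f_1 t value, for illustration only
```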
Transition $M(SD)$ 10$^{-2}$ $M_{QP}(SD)$ 10$^{-2}$ $k^{eff}$
-------------------------------------- ------------------- ------------------------ -----------
$^{72}$Ge$\leftrightarrow^{72}$As 0.14 0.68 0.21
$^{74}$Ge$\leftrightarrow ^{74}$As 0.17 0.73 0.23
$^{76}$Ge$\leftrightarrow^{76}$As 0.21\* 0.91 0.23\*\*
$^{82}$Se$\leftrightarrow^{82}$Br 0.20\* 0.85 0.23\*\*
$^{84}$Kr$\leftrightarrow ^{84}$Rb 0.21 0.76 0.28
$^{86}$Kr$\leftrightarrow ^{86}$Rb 0.15 0.84 0.18
$^{95}$Mo$\leftrightarrow ^{95}$Nb 0.19 0.59 0.32
$^{95}$Mo$\leftrightarrow ^{95}$Tc 0.18 0.63 0.29
$^{96}$Zr$\leftrightarrow ^{96}$Nb 0.15\* 0.52 0.30\*\*
$^{100}$Mo$\leftrightarrow^{100}$Tc 0.13\* 0.43 0.30\*\*
$^{122}$Sn $\leftrightarrow^{122}$Sb 0.38 1.47 0.26
$^{124}$Te $\leftrightarrow^{124}$I 0.28 1.28 0.22
$^{126}$Te$\leftrightarrow^{126}$I 0.33 1.38 0.24
$^{128}$Te$\leftrightarrow^{128}$I 0.34\* 1.56 0.22\*\*
$^{130}$Te $\leftrightarrow^{130}$I 0.37\* 1.65 0.22\*\*
$^{130}$Ba$\leftrightarrow^{132}$La 0.22 1.20 0.18
$^{136}$Xe$\leftrightarrow^{136}$Cs 0.47\* 2.13 0.22\*\*
: SD NMEs in n.u. for medium-heavy nuclei in $A$=70-90, $A$=92-104 and $A$=120-140. $M(SD)$: empirical NMEs derived from $f_1t$ values. $M_{QP}$: QP SD NMEs. $k^{eff}$: effective reduction factor. Entries marked \* are the empirical SD NMEs derived from the QP NME and the experimental reduction factor $k^{eff}$ marked by \*\*. See text. \[tab:3\]
Then, we describe the experimental SD NMEs as $M(SD)=k^{eff}M_{QP}(SD)$, where $M_{QP}(SD)$ is the QP NME and $k^{eff}$ stands for all kinds of nuclear correlation effects. The QP NME is expressed in terms of the single particle NME $M_{SP}(SD)$ and the pairing factor $P_{np}$ as [@eji78; @eji14; @eji15] $$M_{QP}(SD) = P_{np}M_{SP}(SD),$$ where the pairing factor is expressed in terms of the proton and neutron occupation ($V$) and vacancy ($U$) amplitudes. Thus it reflects the neutron and proton configurations in the relevant orbits near the Fermi surface. The obtained $M_{QP}(SD)$ are given in the 3rd column of Table 3.
The actual SD NMEs are uniformly reduced with respect to $M_{QP}(SD)$ due to nucleonic and non-nucleonic $\sigma \tau $ correlations and nuclear-medium effects that are not explicitly included in the QP model. The uniform effect expressed by $k^{eff}$ is a kind of nuclear core effect [@eji00; @eji78; @eji14; @eji15]. The coefficient $k^{eff}$ partially includes the nuclear-medium and non-nucleonic effects, which are alternatively expressed as the effective (renormalized) axial coupling constant $g_A^{eff}$ in units of the free $g_A$ [@ver12; @eji14; @eji15]. The values of $k^{eff}$ are obtained as the ratios of the experimental NMEs to the QP NMEs, as given in the 4th column of Table 3. They are $k^{eff} \approx$ 0.23, 0.3, and 0.22 for the mass regions of $A$=72-88, $A$=94-106, and $A$=122-140, respectively.
Finally, the SD NMEs $M(SD)$ for DBD nuclei are obtained, as given in the 2nd column with \* in Table 3, by using the QP NME $M_{QP}(SD)$ evaluated for the DBD nuclei and the empirical values of the $k^{eff}$ coefficients in the same mass region, i.e. $k^{eff}$=0.23 for $^{76}$Ge and $^{82}$Se, $k^{eff}$=0.30 for $^{96}$Zr and $^{100}$Mo, and $k^{eff}$=0.22 for $^{128}$Te, $^{130}$Te and $^{136}$Xe. The present SD NMEs are empirical NMEs based on the experimental $k^{eff}$ for the nuclear core effects and the pairing correlation $P_{np}$ for the Fermi surface $V/U$ effects given by the BCS QP model. Here we assume a possible uncertainty of 15$\%$ for the reduction factor $k^{eff}$ and thus the same for $M(SD)$. The uncertainty is mainly due to the experimental evaluation of $k^{eff}$. Actually, the observed NMEs in these mass regions are well located within 15$\%$ of the central value of $k^{eff}M_{QP}$, as discussed in previous works [@eji00; @eji78; @eji14; @eji15]. The present empirical SD NMEs are quite realistic since purely theoretical calculations of SD NMEs and $g_A^{eff}$ are very hard. In fact, QRPA SD NMEs differ from the experimental NMEs by a factor of around 2 or so [@eji14].
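As a consistency check of the entries marked with \* in Table 3, the $^{76}$Ge and $^{136}$Xe rows give $$M(SD) = k^{eff}M_{QP}(SD) = 0.23\times 0.91 \ 10^{-2} \approx 0.21 \ 10^{-2}, ~~~ 0.22\times 2.13 \ 10^{-2} \approx 0.47 \ 10^{-2}$$ in n.u., reproducing the tabulated values.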
The proportionality coefficients $R_{\alpha}^{1/2}$ with $\alpha$=GT and F in eq. (6) are obtained by comparing the relative CER NMEs $B_{R\alpha}(SD)^{1/2}$ =$M_{R\alpha}(SD)$ with the empirical NMEs $M(SD)$. They are $R_{GT}^{1/2}$=8.6 10$^{-3}$ and $R_F^{1/2}$=2.8 10$^{-3}$ in n.u. The SD NMEs $M_{\alpha}(SD)$ with $\alpha$=GT and F are obtained from the relative CER NMEs $M_{R\alpha}(SD)$ by using these proportionality factors, as given in the 8th column of Tables 1 and 2.
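The full chain from the measured cross sections to the absolute SD NMEs can be summarised by the short Python sketch below, which reproduces the 8th columns of Tables 1 and 2 for $^{76}$Ge from the tabulated cross sections, strengths and the proportionality coefficients quoted above:

```python
import math

R_GT_SQRT = 8.6e-3   # proportionality coefficients quoted above, in n.u.
R_F_SQRT = 2.8e-3

def cer_sd_nme(dsigma_sd, dsigma_ref, B_ref, R_sqrt):
    """M_alpha(SD) = R_alpha^(1/2) [ (dsigma_SD/dsigma_ref) B(ref) ]^(1/2),
    with the reference channel ref = GT or F, following eqs. (4) and (6)."""
    return R_sqrt * math.sqrt(dsigma_sd / dsigma_ref * B_ref)

# 76Ge entries of Tables 1 and 2 (cross sections in mb/sr)
print(cer_sd_nme(0.40, 1.07, 0.136, R_GT_SQRT))   # ~0.20e-2 n.u. (Table 1)
print(cer_sd_nme(0.40, 15.0, 12.0, R_F_SQRT))     # ~0.16e-2 n.u. (Table 2)
```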
The $M_{GT}(SD)$ and $M_{F}(SD)$ agree well with each other, and also with the empirical SD NMEs $M(SD)$ given in the 2nd column (\*) of Table 3, as shown in Fig. 1, over a wide range of $M(SD)$ = 0.12 - 0.5 10$^{-2}$ in n.u. In other words, the SD NMEs are derived by using CER SD strengths for the simple QP SD states with the large SD NME, just as in the case of the GT NME.
![The CER SD NMEs $M_{GT}(SD)$ (left hand side) and $M_{F}(SD)$ (right hand side) are plotted against the SD NMEs $M(SD)$. A: $(1g9/2)_n \leftrightarrow(1f5/2)_p$ for $A$ = 76 and 82, B: $(2d5/2)_n \leftrightarrow(2p1/2)_p$ for $A$ = 96 and 100, and C: $(1h11/2)_n \leftrightarrow(1g7/2)_p$ for $A$ = 128, 130 and 136. The vertical error bars of around 15 $\%$ reflect the errors on the CER NMEs, while the horizontal ones reflect those on the empirical NMEs based on $\beta $-decay data. \[fig:SDfig1\]](SDfig1){width="90.00000%"}
The present analyses show for the first time that SD NMEs are derived from the medium energy ($^3$He,$t$) CER cross sections for the SD states by referring to the cross sections and NMEs for GT and F (IAS) states. Here the CER SD NME is proportional to the SD NME $M(SD)$. The proportionality coefficient is derived by comparing the CER SD NME with the SD NME derived empirically from known $\beta $ decay SD NMEs in neighbouring nuclei. Note that GT NMEs so far have been obtained from CER GT NMEs by using the proportionality relation, where the proportionality coefficient is derived from CER GT NMEs and $\beta $-decay GT NMEs in neighboring nuclei.
The central $\tau \sigma$ interaction dominates the CER interaction at the present medium energy of $E(^3He)$=420 MeV, and the tensor-type NME of $|<[\sigma \times r Y_3]_2>|$ is much smaller than the SD NME of $|<[\sigma \times r Y_1]_2>|$ for the present simple QP SD transition of $j=l+1/2 \rightarrow j'=l'-1/2$ with $j-j'$=2 and $l-l'$=1. Actually, there may be 2$^-$ and 1$^-$ SD states with complex configurations which are not well excited by the central $\tau \sigma$ interaction in CERs. These weak SD states, which may include more tensor contribution, however, do not play major roles for the 0$\nu \beta \beta $ DBD NMEs. The GT, SD, and other multipole axial vector NMEs are much reduced with respect to the QP and QRPA NMEs [@eji00; @eji78; @eji14; @eji15; @eji13; @jok16]. The reduction may be expressed by the quenched coupling constant $g_A^{eff}$ [@eji05; @ver12; @eji14; @eji15]. The quenching of $g_A$ in DBD NMEs is discussed in [@fae08; @suh13; @suh14; @bar13]. The DBD NMEs are also discussed within various models [@ver12; @pov08; @hor07]. Thus CERs can be used to obtain experimentally absolute SD NMEs, with the $g_A^{eff}$, for the ground and excited states relevant to DBD NMEs.
The present analysis uses the proportionality relation based on the SD NMEs in neighbouring SD $\beta $ decays. The proportionality relation itself is directly checked by comparing SD CER NMEs with SD NMEs for non-DBD nuclei with known SD $\beta $ ft values [@eji16a]. CER NMEs themselves could in principle be derived from CER cross sections by using calculated values for the distortion factor, the interaction volume integral and other contributions on the basis of the CE reaction theory. In this case, one may not rely on an empirical proportionality relation derived from empirical $\beta $-decay SD NMEs and the experimental GT/F strengths. This direction has been discussed for GT NMEs [@fre13], and is certainly encouraged for SD NMEs, as discussed elsewhere. It is remarked that CERs are used to study SD responses for supernova neutrinos [@eji02; @vol02; @alm15; @laz07]. The CER SD strengths for low-lying states below 5 MeV in DBD nuclei have been studied. The sum of the strengths is compared with model evaluations, as reported elsewhere [@fre16A].\
The authors thank Profs. H. Akimune, M. Harakeh and J. Suhonen for discussions.\
[**References**]{}\
[9]{}
H. Ejiri 2005 [*J. Phys. Soc. Jpn.*]{} [**74**]{} 2101
F. Avignone, S. Elliott, and J. Engel 2008 [*Rev. Mod. Phys.*]{} [**80**]{} 481
J. Vergados, H. Ejiri, F. [Š]{}imkovic 2012 [*Rep. Prog. Phys.*]{} [**75**]{} 106301
J. Suhonen, O. Civitarese 1998 [*Phys. Rep.*]{} [**300**]{} 123
H. Ejiri 2000 [*Phys. Rep.*]{} [**338**]{} 265
G.G. Zegers [*et al.*]{} 2007 [*Phys. Rev. Lett.*]{} [**99**]{} 202501
J. Suhonen, O. Civitarese 2012 [*J. Phys. G*]{} [**39**]{} 124005
J. Hyvärinen and J. Suhonen 2015 [*Phys. Rev.*]{} C [**91**]{} 024613
H. Ejiri and J.I. Fujita 1978 [*Phys. Rep.*]{} C [**38**]{} 85
H. Ejiri, N. Soukouti, J. Suhonen 2014 [*Phys. Lett.*]{} B [**729**]{} 27
H. Ejiri and J. Suhonen 2015 [*J. Phys. G*]{} [**42**]{} 055201
N. Fazely and L.C. Liu 1986 [*Phys. Rev. Lett.*]{} [**57**]{} 968
S. Modechai et al. 1988 [*Phys. Rev. Lett.*]{} [**61**]{} 531
N. Auerbach et al. 1989 [*Ann. Phys.*]{} [**192**]{} 77
H. Ejiri 2003 [*Nucl. Instr. Meth. Phys. Research*]{} A [**503**]{} 276
H. Ejiri 2006 [*Czechoslovak J. Phys.*]{} [**56**]{} 459
V. Egorov et al. 2006 [*Czechoslovak J. Phys.*]{} [**56**]{} 453
H. Ejiri, A. I. Titov, M. Boswell and A. Young 2013 [*Phys. Rev.*]{} C [**88**]{} 054610
F. Cappuzzello et al. 2015 [*Eur. Phys. J.*]{} A [**51**]{} 145
J. Vergados, H. Ejiri and F. Simkovic 2016 [*Int. J. Modern Physics*]{} to be published.
J.P. Schiffer et al. 2008 [*Phys. Rev. Lett.*]{} [**100**]{} 12501
H. Akimune [*et al.*]{} 1997 [*Phys. Lett.*]{} B [**394**]{} 23
C. Guess [*et al.*]{} 2011 [*Phys. Rev. C*]{} [**83**]{} 064318
P. Puppe [*et al.*]{} 2011 [*Phys. Rev. C*]{} [**84**]{} 051305
P. Puppe [*et al.*]{} 2012 [*Phys. Rev. C*]{} [**86**]{} 044603
J. H. Thies [*et al.*]{} 2012 [*Phys. Rev. C*]{} [**86**]{} 014304
J. H. Thies [*et al.*]{} 2012 [*Phys. Rev. C*]{} [**86**]{} 044309
J. H. Thies [*et al.*]{} 2012 [*Phys. Rev. C*]{} [**86**]{} 054323
D. Frekers, P. Puppe, J.H. Thies and H. Ejiri 2013 [*Nucl. Phys. A*]{} [**916**]{} 219
D. Frekers [*et al.*]{} 2016 [*Phys. Rev.*]{} C [**94**]{} 014614
H. Ejiri 2009 [*J. Phys. Soc. Jpn.*]{} [**78**]{} 074201
H. Ejiri 2012 [*J. Phys. Soc. Jpn. Letters*]{} [**81**]{} 033201
H. Ejiri 2013 [*AIP conference Proceedings*]{} [**1572**]{} 40
R.B. Firestone, et al. 1999 [*Table of Isotopes, 8$th$ ed., LBL*]{} L. Jokiniemi, J. Suhonen, and H. Ejiri 2016 [*arXiv*]{}: 1604.04399v1 \[nucl. th\]; [*Advances in High Energy Physics*]{} [**2016**]{} ID 8417598
A. Faessler et al. 2008 [*J. Phys. G*]{} [**35**]{} 075104 J. Suhonen, O. Civitarese 2013 [*Phys. Lett.*]{} B [**725**]{} 153 J. Suhonen, O. Civitarese 2014 [*Nucl. Phys.*]{} A [**924**]{} 1 J. Barea, J. Kotila, F. Iachello 2013 [*Phys. Rev.*]{} C [**87**]{} 014315 A. Poves, E. Caurier and F. Nowacki 2008 [*Eur. Phys. J.*]{} A [**36**]{} 195 M. Horoi, S. Stoica and B. A. Brown 2007 [*Phys. Rev.*]{} C [**75**]{} 034303
H. Akimune, H. Ejiri, D. Frekers, M. Harakeh 2016 NNR workshop, Osaka Sept. 2016
H. Ejiri, J. Engel, and N. Kudomi 2002 [*Phys. Lett.*]{} [**530**]{} 27 C. Volpe, N. Auerbach, G. Colò and N. Van Giai 2002 [*Phys. Rev.*]{} C [**65**]{} 044603 W. Almosly, E. Ydrefors, J. Suhonen 2015 [*J. Phys. G Nucl. Part. Phys.*]{} [**42**]{} 095106 R. Lazauskas and C. Volpe 2007 [*Nucl. Phys.*]{} A [**792**]{} 219
D. Frekers et al., 2016 to be submitted.
---
abstract: 'Some aspects of the relationship between conservativeness of a dynamical system (namely the preservation of a finite measure) and the existence of a Poisson structure for that system are analyzed. From the local point of view, due to the Flow-Box Theorem we restrict ourselves to neighborhoods of singularities. In this sense, we characterize Poisson structures around the typical zero-Hopf singularity in dimension 3 under the assumption of having a local analytic first integral with non-vanishing first jet by connecting with the classical Poincaré center problem. From the global point of view, we connect the property of being strictly conservative (the invariant measure must be positive) with the existence of a Poisson structure depending on the phase space dimension. Finally, weak conservativeness in dimension two is introduced by extending inverse Jacobi multipliers to weak solutions of their defining partial differential equation, and some of its applications are developed. Examples including Lotka-Volterra systems, quadratic isochronous centers, and non-smooth oscillators are provided.'
author:
- 'Isaac A. García$^{\ 1,*}$ and Benito Hernández-Bermejo$^{\ 2}$'
date: |
$^{\ (1)}$ [Departament de Matemàtica. Universitat de Lleida.\
Avda. Jaume II, 69. 25001 Lleida, Spain.\
E–mail: [[email protected]]{}\
$ $\
$^{\ (2)}$ Departamento de Biología y Geología, Física y Química Inorgánica.\
Universidad Rey Juan Carlos.\
Calle Tulipán S/N. 28933–Móstoles–Madrid, Spain.\
E-mail: [[email protected]]{}]{}
title: |
Inverse Jacobi multiplier as a link between\
conservative systems and Poisson structures
---
[**Keywords:**]{} Inverse Jacobi multipliers; Conservative systems; Poisson systems.
[**PACS codes:**]{} 02.30.Hq, 05.45.-a, 45.20.-d, 45.20.Jj.
$^*$ Corresponding author. Telephone: (+34) 973702728. Fax: (+34) 973702702.
Introduction
============
Finite-dimensional [*Poisson systems*]{} (see [@olv1; @wei1] and references therein for an overview of the classical theory) are ubiquitous in many branches of physics and applied mathematics. The specific format of Poisson systems has allowed the development of many tools for their analysis (for instance, see [@dlrjc1]-[@dlrjc3], [@bs6]-[@bs8] and references therein for a sample). In addition, Poisson dynamical systems are significant for several reasons. One is that they constitute a generalization of classical Hamiltonian systems comprising nonconstant and degenerate structure matrices, as well as odd-dimensional vector fields (in contrast to classical Hamiltonian systems, which are always even-dimensional). Additionally, the Poisson system format is not limited by the use of canonical transformations, since every diffeomorphic change of variables maps a Poisson system into another Poisson system.
Let us consider a smooth vector field having a finite-dimensional Poisson structure $$\label{poisson-V-1}
\frac{\mbox{\rm d}x}{\mbox{\rm d}t} = {\cal J}(x) \cdot \nabla H (x)$$ of dimension $n$ and rank $r \leq n$ constant in a domain (open and simply connected set) $\Omega \subseteq \mathbb{R}^n$. Here ${\cal J}(x)$ and $H(x)$ are the associated structure matrix and Hamiltonian function, respectively. Then under these hypothesis for each point $x_0 \in \Omega$ there is (at least locally in a neighborhood $\Omega_0 \subset \Omega$ of $x_0$) a complete set of functionally independent Casimir invariants $\{ D_{r+1}(x), \ldots , D_n(x) \}$ in $\Omega_0$, as well as a transformation $x \mapsto \Phi(x) = y$ where $\Phi$ is a smooth diffeomorphism in $\Omega_0$ bringing the system (\[poisson-V-1\]) into its Darboux canonical form. Thus, beyond the fact that Poisson systems are a formal generalization of classical Hamiltonian flows, Darboux Theorem provides the dynamical basis for such a generalization.
[*Conservative dynamical systems*]{} are those that preserve a finite measure equivalent to a generalized volume. Classical Hamiltonian systems are important examples of conservative systems. Since classical Hamiltonian systems are also a particular case of Poisson systems, it is natural that many Poisson systems are also conservative, and conversely that many conservative systems are Poisson systems (but not necessarily Hamiltonian). Although a connection between Poisson systems and conservative flows does exist, neither property implies the other, and such a link seems to remain relatively unexplored in the literature, at least to the authors’ knowledge. The investigation of some aspects of this relationship is the [*leitmotiv*]{} of this work.
More precisely, we say that a $C^1$ vector field $\mathcal{Y} = \sum_{i=1}^n f_i(x) \partial_{x_i}$ defined on $\Omega \subset \mathbb{R}^n$ is conservative if there is a non-negative integrable scalar function $V$ non-identically vanishing on any open subset of $\Omega$ such that the volume integral is preserved under the flow, that is, $$\label{int-inv}
\int_{\Gamma} \frac{d x}{V(x)} = \int_{\varphi_t(\Gamma)} \frac{d x}{V(x)}$$ where $\Gamma$ is any measurable subset of $\Omega$ and $\varphi_t(x)$ denotes the associated flow to $\mathcal{Y}$. Various versions of the following result can be found in books such as [@Nemitskii] and [@Whittaker].
The $C^1$ function $V : \Omega \to \mathbb{R}$ which is non-identically vanishing on any open subset of $\Omega$ satisfies (\[int-inv\]) on any measurable subset $\Gamma \subset \Omega$ if and only if $V$ is a solution of the following linear partial differential equation $$\label{ijm}
\mathcal{Y}(V) = V \, {\rm div} \mathcal{Y},$$ where ${\rm div} \mathcal{Y} = \sum_{i=1}^n \partial_{x_i}(f_i(x))$ is the divergence of the $C^1$ vector field $\mathcal{Y}$.
Any real $C^1$ function $V$ in $\Omega$, not identically null on any open subset of $\Omega$, satisfying (\[ijm\]) is called an [*inverse Jacobi multiplier*]{}. In 1844 C.G.J. Jacobi introduced what is nowadays called the Jacobi (last) multiplier $1/V$. Initially it was mainly used to find the last additional first integral needed to achieve complete integrability of $\mathcal{Y}$. Later, S. Lie found some relationships between $V$ and Lie point symmetries of $\mathcal{Y}$. More recently, it has been proved that the existence of $V$ has strong consequences for the dynamics of $\mathcal{Y}$ on $\Omega$. In particular, the invariant zero-set $V^{-1}(0)$ contains, under some assumptions, orbits which are relevant in the phase portrait of $\mathcal{Y}$, such as periodic orbits, limit cycles, stable, unstable and center manifolds, etc. (see [@BerroneGiacomini2; @BGM] for details).
We will use the following general lemma.
\[lemadivzero\] Any $C^1$ vector field $\mathcal{Y}$ is divergence free if and only if it has the constant inverse Jacobi multiplier $V(x) = 1$.
[*Proof*]{}. Any inverse Jacobi multiplier of $\mathcal{Y}$ satisfies $\mathcal{Y}(V) = V \, {\rm div}(\mathcal{Y})$. Hence it is obvious that if $V(x) = 1$ then ${\rm div}(\mathcal{Y}) \equiv 0$.
Conversely, assume now ${\rm div}(\mathcal{Y}) \equiv 0$. Then any inverse Jacobi multiplier $V$ of $\mathcal{Y}$ satisfies $\mathcal{Y}(V) = 0$, and clearly $V(x) = 1$ is a solution of the equation. $\Box$
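Relation (\[ijm\]), and in particular the divergence-free case of Lemma \[lemadivzero\], is straightforward to verify mechanically with a computer algebra system. The following minimal SymPy sketch is an illustration added here (it is not part of the original development, and the function and variable names are arbitrary); analogous one-line checks are used in the example sketches that follow.

```python
# Sketch: symbolic check of the defining relation Y(V) = V * div(Y).
import sympy as sp

def is_inverse_jacobi_multiplier(components, V, variables):
    """True if Y(V) - V*div(Y) simplifies to zero."""
    lie_V = sum(f * sp.diff(V, x) for f, x in zip(components, variables))
    div_Y = sum(sp.diff(f, x) for f, x in zip(components, variables))
    return sp.simplify(lie_V - V * div_Y) == 0

# Demo: a divergence-free field admits the constant multiplier V = 1.
x1, x2 = sp.symbols('x1 x2')
print(is_inverse_jacobi_multiplier([-x2, x1], sp.Integer(1), (x1, x2)))  # True
```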
From the point of view of the relationship with Poisson systems and their diffeomorphic transformation properties, it will be useful to know how inverse Jacobi multipliers change under orbital equivalence of vector fields; see [@BerroneGiacomini2; @BGM] for further details.
\[V-change\] Let $\Phi$ be a diffeomorphism in $\Omega \subset \mathbb{R}^n$ with non-vanishing Jacobian determinant $J_{\Phi}$ on $\Omega$ and let $\eta : \Omega \to \mathbb{R}$ be such that $\eta \in C^1(\Omega)$ and $\eta(x) \neq 0$ everywhere in $\Omega$. If $V$ is an inverse Jacobi multiplier of the $C^1$-vector field $\mathcal{Y}$ in $\Omega$ then $\eta (V \circ \Phi) / J_{\Phi}$ is an inverse Jacobi multiplier of the orbitally equivalent vector field $\eta \, \Phi_*(\mathcal{Y})$.
The structure of the article is the following. Section 2 is devoted to the relationship between three-dimensional conservative and Poisson systems around a zero-Hopf singularity. In Section 3 the concept of strict conservativeness is introduced and its consequences for the existence of a Poisson structure are developed for general $n$-dimensional flows. To conclude, in Section 4 a theory of weak conservativeness for planar flows is outlined.
Characterizing Poisson structures around a zero-Hopf singularity
================================================================
In a neighborhood of a regular point, due to the Flow-Box Theorem, any analytic vector field is both Poisson and conservative. Hence, from the local point of view, we shall restrict ourselves to neighborhoods of singular points. In agreement with the result in [@agz1], in this section we shall focus on 3-d Poisson systems that we shall name [*generic*]{}: given a $3 \times 3$ structure matrix of constant rank 2 in the domain $\Omega \subset \mathbb{R}^3$, such Poisson structure is called generic if there exists one Casimir invariant globally defined in $\Omega$. (Note that in some cases, often related to non-holonomic dynamics, it is possible that a 3-d Poisson structure of constant rank 2 in $\Omega$ is not generic in such domain, see [@rus1]-[@rus4], [@rus5] and references therein for further details).
The following preliminary result is required:
\[lema-P-R3\] An analytic vector field $\mathcal{Y}$ in an open set $\Omega \subseteq \mathbb{R}^3$ is a generic Poisson system if and only if it is analytically completely integrable in $\Omega$. In such case it can be written as $\mathcal{Y}(x) = \eta(x) \, (\nabla H_2(x) \times \nabla H_1(x))$ where $H_1$ and $H_2$ are independent first integrals and $\eta$ is an inverse Jacobi multiplier of $\mathcal{Y}$. If $\eta$ is a constant then ${\rm div}(\mathcal{Y}) \equiv 0$.
[*Proof*]{}. Clearly in $\Omega \subseteq \mathbb{R}^3$ any generic Poisson system is analytically completely integrable since it possesses two functionally independent analytic first integrals in $\Omega$, namely the Hamiltonian and one Casimir.
Conversely, assume that $\mathcal{Y}$ has two analytic independent first integrals $H_1$ and $H_2$ in $\Omega$. Then it is obvious that it can be written as $\mathcal{Y}(x) = \eta(x) \, (\nabla H_2(x) \times \nabla H_1(x))$ where $x \in \Omega$, $\nabla H_i$ is the gradient of $H_i$, $\eta$ is an analytic scalar function in $\Omega$ and the symbol $\times$ denotes the cross product in $\mathbb{R}^3$. It is straightforward to check that actually such a $\mathcal{Y}$ is a Poisson vector field with Hamiltonian $H_1$ and structure matrix $$\label{structure-J}
{\cal J}(x) = \eta(x) \, \left( \begin{array}{ccc} 0 & \partial_{x_3} H_2(x) & -\partial_{x_2} H_2(x) \\ - \partial_{x_3} H_2(x) & 0 & \partial_{x_1} H_2(x) \\ \partial_{x_2} H_2(x) & - \partial_{x_1} H_2(x) & 0 \end{array} \right).$$ Actually $H_2$ becomes the Casimir of ${\cal J}$.
The fact that $\eta(x)$ is an inverse Jacobi multiplier of $\mathcal{Y}$ can be easily checked by direct evaluation.
The last sentence of the lemma follows from ${\rm div}(\nabla H_2(x) \times \nabla H_1(x)) \equiv 0$. $\Box$
\[rem1\] [It is worth emphasizing that the singular points $x_0 \in \Omega \subset \mathbb{R}^3$ where the Poisson vector field vanishes have a special nature. More specifically, since the rank of the structure matrix is assumed to be constant and equal to 2, we have $\mathcal{J}(x_0) \neq 0$. Therefore, we focus on the points where $\nabla H(x_0)=0$, namely on the critical points of the Hamiltonian. In the particular case of $\Omega \subset \mathbb{R}^3$ and after diffeomorphically reducing the system to the Darboux canonical form, it can be seen that the eigenvalues associated to the singularity only can be of the form either $\{ 0 , \pm \lambda \}$ or $\{ 0 , \pm i \omega \}$, with both $\lambda$ and $\omega$ real numbers. In the particular case $\omega \neq 0$ the singularity is called a zero-Hopf singular point. Consequently, this is the generic singularity that can be found in a neighborhood of phase-space completely foliated by periodic orbits. ]{}
The previous lemma allows developing the next result.
\[Teo-poisson-V-2\] Let $\mathcal{Y}$ be an analytic vector field in a sufficiently small neighborhood $\Omega \subseteq \mathbb{R}^3$ of a zero-Hopf singularity at $(x_1 ,x_2 ,x_3)=(0,0,0)$ and assume it has an analytic first integral $D$ with $\partial_{x_3} D(0,0,0) \neq 0$ in $\Omega$. Define the 1-parameter family of planar vector fields $\mathcal{Z}_h = \mathcal{Y}|_{\{ D = h \}}$ as the restrictions of $\mathcal{Y}$ to the level sets $\{ D = h \}$ of $D$ with $|h|$ sufficiently small. If $\mathcal{Y}$ is a generic Poisson system in $\Omega$ then $\mathcal{Z}_h$ has a branch of nondegenerate center singularities emerging from the origin at $h=0$. The converse is also true if $\mathcal{Z}_h$ has a family of first integrals depending analytically on $h$.
[*Proof*]{}. Since the origin is a zero-Hopf singularity of $\mathcal{Y}$, its linear part has associated eigenvalues $\{ 0, \pm i \omega \}$ with $i^2 = -1$ and $\omega \in \mathbb{R} \backslash \{0\}$. Performing a linear change of variables and rescaling the time to set $\omega = 1$ we write the linear part of $\mathcal{Y}$ into real Jordan canonical form, that is, $$\mathcal{Y} = (-x_2 + F_1(x)) \partial_{x_1} + (x_1 + F_2(x)) \partial_{x_2} + F_3(x) \partial_{x_3}$$ where the $F_j$ are real analytic functions in $\Omega$ only possessing nonlinear terms. Since the linear part of $\mathcal{Y}$ has two independent first integrals $x_3$ and $x_1^2+x_2^2$ it is clear that $D$ can be chosen in the form $D(x)=x_3 + \cdots$, where the dots denote higher order terms. Then the analytic diffeomorphism $\Phi = ({\rm Id}_2, D)$ in $\Omega$ (where ${\rm Id}_2$ is the identity in $\mathbb{R}^2$) is tangent to the identity and $$\Phi_* \mathcal{Y} = (-y_2 + \hat{F}_1(y)) \partial_{y_1} + (y_1 + \hat{F}_2(y)) \partial_{y_2}$$ where $\hat{F}_i$ are analytic nonlinear terms. By construction it is clear that $$\mathcal{Z}_h = (-y_2 + \hat{F}_1(y_1,y_2,h)) \partial_{y_1} + (y_1 + \hat{F}_2(y_1,y_2,h)) \partial_{y_2}$$ is an analytic family of vector fields defined in a neighborhood of the origin in $\mathbb{R}^2$ and with parameter values of $h$ close to zero. We emphasize that $\mathcal{Z}_h$ has a branch of singularities $(y_1^*(h),y_2^*(h))$ emerging from $(y_1^*(0),y_2^*(0))=(0,0)$ with associated eigenvalues $(\lambda_1(h), \lambda_2(h))$ and $(\lambda_1(0), \lambda_2(0)) = (i, -i)$. Clearly the above singularities are monodromic for $|h|$ sufficiently small.
We now make use of the assumption that $\mathcal{Y}$ is a generic Poisson vector field in $\Omega$. Then $\mathcal{Y}$ has an analytic first integral $H$ in $\Omega$ functionally independent of $D$. Clearly this additional first integral can be selected as $H(x) = x_1^2+x_2^2 + \cdots$, and the orbits of $\mathcal{Y}$ near the origin are closed since they lie in the intersections of the level sets of $H$ and $D(x)=x_3 + \cdots$. Thus such an $H$ exists if and only if $(y_1^*(h),y_2^*(h))$ is a branch of center singularities of $\mathcal{Z}_h$, which proves the first part of the theorem.
Conversely, we assume that $\mathcal{Z}_h$ has the branch $(y_1^*(h),y_2^*(h))$ of nondegenerate center singularities. Therefore $\mathcal{Z}_h$ possesses a family of first integrals $\hat H(x_1, x_2; h) = x_1^2 + x_2^2 + \cdots$ analytic at $(x_1,x_2)=(0,0)$ for any admissible $h$. Furthermore, if additionally $\hat H$ is analytic at $h=0$ then the function $H(x) = \hat H(x_1, x_2; D(x)) = x_1^2 + x_2^2 + \cdots$ is an analytic first integral of $\mathcal{Y}$. Therefore $\mathcal{Y}$ is analytically completely integrable in a sufficiently small neighborhood $\Omega$ and from Lemma \[lema-P-R3\] it is a Poisson system in $\Omega$. $\Box$
[It is interesting to note that from the Poincaré-Dulac normal form theory there is an analytic diffeomorphism $\Psi$ near the origin such that, when $\mathcal{Z}_h$ has a center at the origin then $\Psi_* \mathcal{Z}_h$ becomes the vector field: $$\Psi_* \mathcal{Z}_h = -z_2 (1 + f(z_1^2+z_2^2,h)) \partial_{z_1} + z_1(1 + f(z_1^2+z_2^2,h)) \partial_{z_2}$$ with $f(0,h)=0$. Moreover, this is a classical Hamiltonian vector field with Hamiltonian function: $$\hat{H}(z_1,z_2;h) = \frac{1}{2} \left( z_1^2 + z_2^2 + \hat{G}( z_1^2 + z_2^2 ; h) \right) \; , \;\: \mbox{with} \:\;\: \hat{G}(w;h) = \int f(w,h) dw.$$ In short, $\Psi_* \mathcal{Z}_h = (- \partial_{z_2} \hat{H}) \partial_{z_1} + (\partial_{z_1} \hat{H}) \partial_{z_2}$ which is the planar reduction of the Darboux canonical form of the Poisson system $\mathcal{Y}$ of Theorem \[Teo-poisson-V-2\]. ]{}
[Observe that, from the Implicit Function Theorem, under the conditions of Theorem \[Teo-poisson-V-2\] there is an analytic function $\phi(x_1, x_2, h)$ defined in a neighborhood $U$ of the point $(x_1, x_2, h)=(0,0, 0)$ such that $\phi(0, 0, 0)=0$ and satisfies $D(x_1, x_2, \phi(x_1, x_2, h)) \equiv h$ in $U$. Then, since $\mathcal{Y} = (-x_2 + F_1(x)) \partial_{x_1} + (x_1 + F_2(x)) \partial_{x_2} + F_3(x) \partial_{x_3}$, it follows that the reduced vector field $\mathcal{Z}_h$ of Theorem \[Teo-poisson-V-2\] is $\mathcal{Z}_h = (-x_2 + F_1(x_1, x_2, \phi(x_1, x_2, h))) \partial_{x_1} + (x_1 + F_2(x_1, x_2, \phi(x_1, x_2, h))) \partial_{x_2}$. Clearly in practice we do not have the explicit expression of $\phi$ but we can compute enough terms of the Taylor expansion of $\phi$ at $(x_1, x_2)=(0,0)$. This expansion will permit us to calculate a sufficiently large string of Poincaré-Liapunov constants associated to the branch of monodromic nondegenerate singularities $(x_1^*(h), x_2^*(h))$ of the vector field $\mathcal{Z}_h$ in an algorithmic way and try to solve the associated center-focus problem. ]{}
Example: 3D Lotka-Volterra system
---------------------------------
Let us now consider the quadratic Lotka-Volterra family: $$\label{lv3de125}
\left\{ \begin{array}{ccl}
\dot{x}_1 & = & x_1( \lambda_1 + c x_2 + x_3) \\
\dot{x}_2 & = & x_2( \lambda_2 + x_1 + a x_3) \\
\dot{x}_3 & = & x_3( \lambda_3 + b x_1 + x_2)
\end{array} \right.$$ These are models commonly used in mathematical biology for the description of population interactions. In addition, equations (\[lv3de125\]) are of the Poisson type [@Nutku]. Using Darboux’s integrability theory it is easy to check that the full family possesses the inverse Jacobi multiplier $V(x) = x_1 x_2 x_3$. Hence (\[lv3de125\]) is a conservative family with strictly positive measure in $\Omega = \{ x \in \mathbb{R}^3 : x_i > 0 \}$. On the other hand, ${\rm div} \mathcal{Y} \equiv 0$ if and only if $\lambda_1 +\lambda_2 +\lambda_3 =0$ and $a=b=c=-1$. Furthermore, (\[lv3de125\]) has a first integral of the form $I_1(x) = x_1^{1/c} x_2^{b} x_3^{-1}$ when $$\label{param}
abc=-1 \:\; , \;\:\;\:\;\: \lambda_3 = \lambda_2 b - \lambda_1 ab.$$ Now we will use the classical procedure to obtain the additional first integral $I_2(x)$ of (\[lv3de125\]) using $V(x)$ and $I_1(x)$. More specifically we compute a planar system after restricting (\[lv3de125\]) to the level sets $\{I_1(x)=h\}$. This can be done by substituting $x_3= x_1^{1/c} x_2^{b} / h$ into the first two components of (\[lv3de125\]), yielding $$\dot{x}_1 = x_1(\lambda_1 + c x_2 + x_1^{1/c} x_2^{b} / h ), \ \ \dot{x}_2 = x_2( \lambda_2 + x_1 + a x_1^{1/c} x_2^{b} / h).$$ This planar system has the inverse Jacobi multiplier $$v(x_1, x_2) = V(x) \partial_{x_3} I_1(x_1, x_2, x_1^{1/c} x_2^{b} / h) = -h x_1 x_2$$ from which we can obtain the first integral $I(x_1, x_2; h)$ of the planar system, having the form $$I(x_1, x_2; h) = -x_1 + c x_2 - \frac{a c}{h} x_1^{1/c} x_2^b - \lambda_2 \log x_1 + \lambda_1 \log x_2.$$ Therefore $I_2(x) = I(x_1, x_2; I_1(x)) = -x_1 + c x_2 - a c x_3 - \lambda_2 \log x_1 + \lambda_1 \log x_2$. We have proved that (\[lv3de125\]) under the parameter restrictions (\[param\]) is analytically completely integrable in the domain $\Omega$, hence it is a Poisson system.
In fact we have that the Casimir invariant is $D(x) = I_1(x)$, the Hamiltonian function is $$\label{lv3de12h}
H(x) = abx_1+x_2-ax_3+\lambda_3 \ln x_2 - \lambda_2 \ln x_3$$ which can be easily deduced from $I_1$ and $I_2$ and the structure matrix is $${\cal J}(x) = \left( \begin{array}{ccc}
0 & cx_1x_2 & bcx_1x_3 \\
-cx_1x_2 & 0 & -x_2x_3 \\
-bcx_1x_3 & x_2x_3 & 0
\end{array} \right).$$
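The above claims can also be double-checked symbolically. The following SymPy sketch (added for illustration; it is not part of the original text) verifies that $V = x_1 x_2 x_3$ satisfies (\[ijm\]) and that $I_1$ and $I_2$ become first integrals on $\Omega$ once the restrictions (\[param\]) are imposed; the second restriction is used in the equivalent form $\lambda_3 = \lambda_2 b + \lambda_1/c$, which follows from (\[param\]) since $ab = -1/c$, and $I_1$ is checked through its logarithm, which is a first integral on $\Omega$ exactly when $I_1$ is.

```python
# Sketch: symbolic verification of the statements above for family (lv3de125).
import sympy as sp

x1, x2, x3, a, b, c, l1, l2, l3 = sp.symbols('x1 x2 x3 a b c lambda1 lambda2 lambda3')
F = [x1*(l1 + c*x2 + x3), x2*(l2 + x1 + a*x3), x3*(l3 + b*x1 + x2)]
X = (x1, x2, x3)
lie = lambda f: sum(Fi*sp.diff(f, xi) for Fi, xi in zip(F, X))
div = sum(sp.diff(Fi, xi) for Fi, xi in zip(F, X))

V      = x1*x2*x3
I1_log = sp.log(x1)/c + b*sp.log(x2) - sp.log(x3)   # log of I1 on Omega
I2     = -x1 + c*x2 - a*c*x3 - l2*sp.log(x1) + l1*sp.log(x2)
restr  = {a: -1/(b*c), l3: l2*b + l1/c}             # the restrictions (param)

print(sp.simplify(lie(V) - V*div))             # 0 for all parameter values
print(sp.simplify(lie(I1_log).subs(restr)))    # 0 under (param): I1 is a first integral
print(sp.simplify(lie(I2).subs(restr)))        # 0 under (param): I2 is a first integral
```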
Example 1 of Theorem \[Teo-poisson-V-2\]
----------------------------------------
Consider a vector field having a zero-Hopf singularity at the origin. System $$\label{0H-Ej-Lienard-a.1.1}
\begin{array}{lll}
\dot{x}_1 &=& -x_2, \\ \dot{x}_2 &=& x_1 + a x_1^2 + b x_1 x_3, \\ \dot{x}_3 &=& c x_1 x_2 + d x_2 x_3.
\end{array}$$ corresponds to case (i) in Theorem 1.4 of [@GV]. Note that system (\[0H-Ej-Lienard-a.1.1\]) has the analytic first integral $$\label{eq:novoH2}
D(x)= x_3 + \cdots = \left\{ \begin{array}{ll}
\frac{c}{d^2} + \left(- \frac{c}{d^2} + \frac{c}{d} x_1 + x_3 \right) {\rm e}^{d x_1}, & \mbox{if $d \neq 0$}, \\
x_3 + \frac{c}2 x_1^2, & \mbox{if $d = 0$}.
\end{array}
\right.$$ Therefore $\partial_{x_3} D(0,0,0) = 1 \neq 0$ and the level sets $\{ x \in \Omega : D(x) = h \}$ of $D$ are given by the graph of a function $x_3 = \phi(x_1; h)$. Hence the restriction of the vector field (\[0H-Ej-Lienard-a.1.1\]) to the level sets $\{ D = h \}$ is $\mathcal{Z}_h$ whose expression is $$\label{0H-Ej-Lienard-a.1.2}
\begin{array}{lll}
\dot{x}_1 &=& -x_2, \\ \dot{x}_2 &=& x_1 + a x_1^2 + b x_1 \phi(x_1; h).
\end{array}$$ We note that since $\phi(0; h) = h$, the eigenvalues at the origin of this planar family are $\pm i \sqrt{1+b h}$ and therefore are pure imaginary for small values of $|h|$. Additionally, for such values of $h$, family (\[0H-Ej-Lienard-a.1.2\]) has a center at the origin because it is a Hamiltonian family. Since the Hamiltonian depends analytically on $h$ near $h=0$ then, using Theorem \[Teo-poisson-V-2\] we deduce that family (\[0H-Ej-Lienard-a.1.1\]) has a Poisson structure around the origin.
We will complete this example by explicitly showing the Poisson structure of (\[0H-Ej-Lienard-a.1.1\]). First we compute the function $$\phi(x_1; h)= \left\{ \begin{array}{ll}
\frac{1}{d^2}(c + {\rm e}^{-d x_1} (-c + d^2 h) - c d x_1), & \mbox{if $d \neq 0$}, \\
h - \frac{c x_1^2}{2}, & \mbox{if $d = 0$}.
\end{array}
\right.$$ Then the Hamiltonian $\hat{H}(x_1, x_2; h)$ of (\[0H-Ej-Lienard-a.1.2\]) is $$\hat{H}= \left\{ \begin{array}{ll}
\frac{1}{6 d^4}( {\rm e}^{-d x_1} (-6 b (c - d^2 h) (1 + d x_1) - d^2 {\rm e}^{d x_1} (b c x_1^2 (3 - 2 d x_1) + & \\
d^2 (x_1^2 (3 + 2 a x_1) + 3 x_2^2))) ) , & \mbox{if $d \neq 0$}, \\
-\frac{1}{24} x_1^2 (12 + 12 b h + 8 a x_1 - 3 b c x_1^2) - \frac{1}{2} x_2^2, & \mbox{if $d = 0$}.
\end{array}
\right.$$ Now we have the first integral $H(x) = \hat H(x_1, x_2, D(x))$ of (\[0H-Ej-Lienard-a.1.1\]) given, up to a multiplicative constant, by $$H(x)= \left\{ \begin{array}{ll}
b c (-6 + d^2 x_1^2 (3 + 2 d x_1)) - d^4 (x_1^2 (3 + 2 a x_1) + 3 x_2^2) + & \\
6 b d^2 (1 + d x_1) x_3, & \mbox{if $d \neq 0$}, \\
-12 x_2^2 - x_1^2 (8 a x_1 + 3 (4 + b c x_1^2 + 4 b x_3)), & \mbox{if $d = 0$}.
\end{array}
\right.$$ Let $\mathcal{X}$ be the associated vector field to (\[0H-Ej-Lienard-a.1.1\]). It follows that there exists a scalar function $\eta : \Omega \to \mathbb{R}$ such that $\mathcal{X}(x) = \eta(x) \, (\nabla H(x) \times \nabla D(x))$ where $x \in \Omega$. From the explicit expressions of $D(x)$ and $H(x)$, direct calculations yield $$\label{eq:eta}
\eta(x)= \left\{ \begin{array}{ll}
-\frac{{\rm e}^{-d x_1}}{6 d^4}, & \mbox{if $d \neq 0$}, \\
- \frac{1}{24}, & \mbox{if $d = 0$}.
\end{array}
\right.$$ Finally we obtain that (\[0H-Ej-Lienard-a.1.1\]) is a Poisson system with Hamiltonian $H$ and structure matrix ${\cal J}(x)$ which comes from (\[structure-J\]) with $H_2 = D$, that is, $${\cal J}(x) = - \frac{1}{24} \left( \begin{array}{ccc} 0 & 1& 0 \\-1& 0& c x_1 \\ 0 & -c x_1 & 0 \end{array} \right)$$ when $d = 0$ and $${\cal J}(x) = \frac{1}{6 d^4} \left( \begin{array}{ccc} 0 & 1& 0 \\ -1& 0 & c x_1 + d x_3 \\ 0 & -c x_1 - d x_3 & 0 \end{array} \right)$$ if $d \neq 0$.
Example 2 of Theorem \[Teo-poisson-V-2\]
----------------------------------------
The quintic family $$\begin{aligned}
\dot{x}_1 &=& P(x_1, x_2, x_3), \nonumber \\
\dot{x}_2 &=& x_1 +B_2 x_1 x_2 (-x_1^2 + x_3), \label{Ej2} \\
\dot{x}_3 &=& 2 x_1 P(x_1, x_2, x_3) \nonumber\end{aligned}$$ with $P(x_1, x_2, x_3) = -x_2 -C x_1 x_2 + B_1 (x_1^2 + x_2^2) (-x_1^2 + x_3)$ has a zero-Hopf point at the origin and the first integral $D(x) = x_3- x_1^2$. Then the reduced vector field $\mathcal{Z}_h$ to the level sets $\{ D = h \}$ is given by the planar quadratic family $$\begin{array}{lll}
\dot{x}_1 &=& -x_2 - C x_1 x_2 + B_1 h (x_1^2 + x_2^2), \\ \dot{x}_2 &=& x_1 + B_2 h x_1 x_2
\end{array}$$ having a singularity at the origin with eigenvalues $\pm i$. Therefore, the conditions under which the origin becomes a center for family $\mathcal{Z}_h$ are well known, see the seminal papers [@Ka1; @Ka2] and [@B]. In short, $\mathcal{Z}_h$ has a center for all $h$ if and only if either $B_1=0$ or $C=0$, in which case there is an analytic first integral $\hat H(x_1, x_2; h)$ at $(x_1, x_2)=(0,0)$ for each $h$. We observe that the former center cases are not always Hamiltonian (this situation only appears when $C = 2 B_1 + B_2 =0$). Consequently, if $B_1 C \neq 0$ then family (\[Ej2\]) does not have a generic Poisson structure in any domain $\Omega \subset \mathbb{R}^3$ containing the origin.
In the analysis of the first center stratum we let $B_1=0$. If $B_2=0$ then $\mathcal{Z}_h = \mathcal{Z}_0$ is independent of $h$, so the first integral $\hat H(x_1, x_2)$ is also independent of $h$. When $B_2 \neq 0$ then (see [@S]) $$\hat H(x_1, x_2; h) = (1 + C x_1)^{2 B_2^2 h^2} (1 + B_2 h x_2)^{2 C^2} \, \exp\left[-2 B_2 C h (B_2 h x_1 + C x_2)\right].$$ The second center stratum will be analyzed under the parameter restrictions $B_1 \neq 0$ and $C=0$. If moreover $B_2 \neq 0$ then (see again [@S]) $$\begin{aligned}
\hat H(x_1, x_2; h) &=& [-B_2 - B_1 (2 B_1^2 - 3 B_1 B_2 + B_2^2) h^2 x_1^2 - 2 B_1 B_2 h x_2 + \\
& & B_1^2 (-2 B_1 + B_2) h^2 x_2^2] \, (1 + B_2 h x_2)^{-\frac{2 B_1}{B2}},\end{aligned}$$ whereas when $B_2 = 0$ then $$\begin{aligned}
\hat H(x_1, x_2; h) &=& (x_1^2 + x_2^2) \, \exp(-2 B_1 h x_2).\end{aligned}$$ We note that in any center case the first integral $\hat H(x_1, x_2; h)$ is analytic with respect to $h$ at $h=0$. Therefore, from Theorem \[Teo-poisson-V-2\] we conclude that family (\[Ej2\]) has a Poisson structure around the origin if and only if $B_1 C = 0$. In fact, the explicit construction of the Hamiltonian $H(x)$ and structure matrix ${\cal J}(x)$ can be done in a way analogous to that of family (\[0H-Ej-Lienard-a.1.1\]).
Example 3 of Theorem \[Teo-poisson-V-2\]
----------------------------------------
The polynomial family of sixth degree $$\begin{aligned}
\dot{x}_1 &=& P(x_1, x_2, x_3), \nonumber \\
\dot{x}_2 &=& Q(x_1, x_2, x_3), \label{Ej3} \\
\dot{x}_3 &=& 2 (x_1 P(x_1, x_2, x_3) + x_2 Q(x_1, x_2, x_3)) \nonumber\end{aligned}$$ with $P(x_1, x_2, x_3) = -x_2 + A x_1^2 + B x_1 x_2 + C x_2^3 + x_1^3 x_3 - x_1^3 x_2^2 - x_1^5$ and $Q(x_1, x_2, x_3) = x_1 + F x_1^2 + E x_2^2 - x_1^3 x_2 - x_1 x_2^3 + x_1 x_2 x_3$ has a zero-Hopf point at the origin and the first integral $D(x) = x_3 - x_1^2-x_2^2$. Then the reduced vector field $\mathcal{Z}_h$ to the level sets $\{ D = h \}$ is given by the planar cubic family $$\begin{array}{lll}
\dot{x}_1 &=& R(x_1, x_2; \mu) = -x_2 + A x_1^2 + B x_1 x_2 + h x_1^3 + C x_2^3, \\ \dot{x}_2 &=& S(x_1, x_2; \mu) = x_1 + h x_1^2 + h x_1 x_2 + E x_2^2
\end{array}$$ having a singularity at the origin with eigenvalues $\pm i$. We have defined the parameter vector $\mu =(h, A, B, C, E) \in I \times \mathbb{R}^4$ where $I \subset \mathbb{R}$ is a small neighborhood of the origin.
We claim that family $\mathcal{Z}_h$ does not have a center at the origin for any parameter value $h \in I$. To prove such claim we will see that the first focal value associated to the origin of $\mathcal{Z}_h$ is not identically zero for any $h \in I$. We briefly recall here the theory of focal values, see for example [@RS]. Using the complex coordinate $z = x_1 + i x_2 \in \mathbb{C}$ with $i^2 = -1$, any planar family $\mathcal{Z}_h$ can be written in the form $\dot z = i z + F(z, \bar z; \mu)$ where $\bar{z} = x_1 - i x_2$ and $F(z, \bar z; \mu) = R\left(\frac{1}{2}(z + \bar{z}), \frac{i}{2} (\bar{z} - z); \mu \right) + i S\left(\frac{1}{2}(z + \bar{z}), \frac{i}{2} (\bar{z} - z); \mu \right)$. Finally we complement this complex differential equation with its complex conjugate. Denoting by $w = \bar{z}$ we arrive at the complex polynomial system $$\label{sist-C^2}
\dot z = i z + F(z, w; \mu), \ \ \dot{w} = - i w + \bar{F}(z, w; \mu).$$ Now we define the [*focus quantities*]{} $g_j(\mu) \in \mathbb{R}[\mu]$ as those polynomials such that $\mathfrak{X}_\mu(\mathcal{H}) = \sum_{j \geq 1} g_j(\mu) (z w)^{j+1}$ where $\mathfrak{X}_\mu = (i z + F(z, w; \mu)) \partial_z + ( - i w + \bar{F}(z, w; \mu)) \partial_{w}$ is the complex vector field in $\mathbb{C}^2$ and $\mathcal{H}(z, w; \mu) = z w + \cdots \in \mathbb{C}[[z,w]]$ is a formal power series. It is known that $\mathcal{Z}_{h^*}$ has a center at the origin for a specific parameter value $\mu = \mu^*$ if and only if $g_j(\mu^*) = 0$ for all $j \in \mathbb{N}$. Performing the computations we find that $g_1(\mu) = \frac{1}{4} [A B + (3 - 2 A - E) h - h^2]$. Since, for every choice of the remaining parameters, $g_1(\mu)$ does not vanish identically as a function of $h \in I$, the family $\mathcal{Z}_h$ cannot have a center at the origin for all $h \in I$ and therefore, from Theorem \[Teo-poisson-V-2\], we conclude that family (\[Ej3\]) has no generic Poisson structure around the origin.
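The value of $g_1(\mu)$ can be cross-checked independently with the classical formula for the first Lyapunov coefficient of a system written as $\dot{x}_1 = -x_2 + f(x_1,x_2)$, $\dot{x}_2 = x_1 + g(x_1,x_2)$, in the normalization of Guckenheimer and Holmes; in that normalization the result is proportional to $g_1(\mu)$ by a positive constant factor, so it detects the same center obstruction. The following SymPy sketch (an added illustration, not part of the original argument) evaluates it.

```python
# Sketch: first Lyapunov coefficient of Z_h (Guckenheimer-Holmes normalization),
# expected to be a positive multiple of the focus quantity g_1 given above.
import sympy as sp

x, y, A, B, C, E, h = sp.symbols('x y A B C E h')
f = A*x**2 + B*x*y + h*x**3 + C*y**3          # nonlinear part of the x1-equation
g = h*x**2 + h*x*y + E*y**2                   # nonlinear part of the x2-equation

d = lambda F, *v: sp.diff(F, *v).subs({x: 0, y: 0})
a1 = sp.Rational(1, 16)*(d(f, x, x, x) + d(f, x, y, y) + d(g, x, x, y) + d(g, y, y, y)) \
   + sp.Rational(1, 16)*(d(f, x, y)*(d(f, x, x) + d(f, y, y))
                         - d(g, x, y)*(d(g, x, x) + d(g, y, y))
                         - d(f, x, x)*d(g, x, x) + d(f, y, y)*d(g, y, y))

print(sp.expand(8*a1))   # A*B + (3 - 2*A - E)*h - h**2, i.e. 4*g_1(mu): a1 = g_1/2
```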
Conservativeness and Poisson structure
======================================
First of all, notice that there exist conservative planar vector fields that do not have a Poisson structure and do not describe physically conservative dynamics. For example, the linear system $$\label{consnotPoisson}
\dot{x}_1 = - x_2+ \mu x_1 , \ \dot{x}_2 = x_1 + \mu x_2$$ with $\mu \neq 0$, has a focus at the origin and the non-negative inverse Jacobi multiplier $V(x_1,x_2)=x_1^2+x_2^2$. Therefore system (\[consnotPoisson\]) is conservative in $\Omega = \mathbb{R}^2$, even though it clearly does not correspond to a conservative flow in direct physical terms. Moreover, this system is not of Poisson type since it does not have an analytic first integral in any neighborhood of the origin.
This situation is not exceptional. For instance, in [@BerroneGiacomini2] the following example appears. The system $$\dot{x}_1 = \frac{1}{2} [-x_2 + x_1 (1-x_1^2-x_2^2)], \ \dot{x}_2 = \frac{1}{2} [x_1 + x_2 (1-x_1^2-x_2^2)], \ \dot{x}_3 = x_3$$ has the inverse Jacobi multiplier $V_1(x) = (x_1^2+x_2^2)^2$ in $\Omega = \mathbb{R}^3$. Since $V_1$ is non-negative it is clear that the system is conservative in $\mathbb{R}^3$. The eigenvalues of the linearization at the origin are $\{ \frac{1}{2} (1 \pm i), 1 \}$ and therefore, see Remark \[rem1\], the system does not have a Poisson structure in a neighborhood of the origin. We note that the system also possesses another inverse Jacobi multiplier, $V_2(x) = x_3$, which implies the existence of the rational first integral $I_1(x) = V_2(x)/V_1(x) = x_3 / (x_1^2+x_2^2)^2$, not well defined at the origin.
These examples suggest that the formal definition of conservative flow is not sufficiently restrictive from the most standard physical perspective. However, this difficulty can be overcome in very simple terms just by imposing that the inverse Jacobi multiplier be strictly positive (or strictly negative, since $-V$ is an inverse Jacobi multiplier if and only if $V$ is). It is clear that system (\[consnotPoisson\]) has a positive inverse Jacobi multiplier if and only if $\mu = 0$, in which case the origin is a center and the flow becomes Hamiltonian. This simple example already reflects the general situation in the two-dimensional case.
[A conservative vector field is [*strictly conservative*]{} if it has a strictly positive invariant measure, that is, satisfying (\[int-inv\]) with $V > 0$ in $\Omega$.]{}
The trivial examples of strictly conservative systems are the divergence free systems, see Lemma \[lemadivzero\]. The next theorem connects the property of being strictly conservative with the existence of a Poisson structure depending on the phase space dimension.
\[Teo-Poisson-Conservative\] Let $\mathcal{Y}$ be a smooth vector field in $\Omega \subseteq \mathbb{R}^n$. Then:
- In the case $n=2$, if $\mathcal{Y}$ is strictly conservative with smooth invariant measure in a simply-connected domain $\Omega$ then it is a Poisson vector field in $\Omega$.
- Let $V$ be an inverse Jacobi multiplier of $\mathcal{Y}$ in $\Omega \subseteq \mathbb{R}^2$. Then the zero-set $V^{-1}(0) \subset \Omega$ is an invariant curve that induces a natural partition of $\Omega$ into $m$ disjoint invariant domains $\Omega_i$ with boundaries $\partial \Omega_i \subset V^{-1}(0)$ such that $\cup_{i=1}^m \partial \Omega_i = V^{-1}(0)$. Provided $\Omega_i$ is simply connected, the restricted field $\mathcal{Y}|_{\Omega_i}$ is a Poisson system orbitally equivalent to the Darboux canonical form which can be constructed globally in $\Omega_i$.
- In the case $n = 3$, if $\mathcal{Y}$ is strictly conservative in $\Omega$ it is not necessarily a generic Poisson vector field in $\Omega$.
[*Proof*]{}. In the planar case (i), if the smooth vector field $\mathcal{Y} = P(x_1,x_2) \partial_{x_1} + Q(x_1,x_2) \partial_{x_2}$ is strictly conservative in $\Omega \subset \mathbb{R}^2$ with a smooth invariant measure given by $d x_1 d x_2/ V(x_1, x_2)$ then we can construct a first integral $H$ of $\mathcal{Y}$ in $\Omega$ as the line integral $$\label{H}
H(x_1,\,x_2) = \int_{(x_1^0,\,x_2^0)}^{(x_1,\,x_2)} \frac{P(x_1,\,x_2)\,dx_2-Q(x_1,\,x_2)\,dx_1}{V(x_1,\,x_2)}$$ along any curve connecting an arbitrarily chosen point $(x_1^0,\,x_2^0)$ and the point $(x_1,\,x_2)$ in $\Omega$. We remark that this line integral might not be well-defined if $\Omega$ is not simply-connected, which is not our case. Also, clearly, since $V > 0$ and smooth in $\Omega$ we find that $H$ is smooth in $\Omega$, and consequently $\mathcal{Y}$ is a Poisson vector field in $\Omega$ (see [@CF] for the last sentence). This proves statement (i).
The proof of (ii) is constructive. First we recall that since $V$ satisfies $\mathcal{Y}(V) = V \, {\rm div} \mathcal{X}$ it is obvious that the zero-set $V^{-1}(0) \subset \Omega$ is an invariant curve of $\mathcal{Y}$. Therefore the induced partition $\{ \Omega_i \}_{i=1}^m$ of $\Omega$ is formed by disjoint invariant domains $\Omega_i$ with boundaries $\partial \Omega_i \subset V^{-1}(0)$ and $\cup_{i=1}^m \partial \Omega_i = V^{-1}(0)$.
In what follows we restrict the analysis to one single simply connected $\Omega_i$. Since $\Omega_i \not\subset V^{-1}(0)$ the planar vector field $\mathcal{Y} = P(x_1,x_2) \partial_{x_1} + Q(x_1,x_2) \partial_{x_2}$ is strictly conservative in $\Omega_i$. Then it follows that ${\rm div}(\mathcal{Y} / V) \equiv 0$ in the set $\Omega_i$. Since $\Omega_i$ is simply connected, there exists a smooth function $H : \Omega_i \to \mathbb{R}$ given by (\[H\]) such that $\mathcal{Y} / V$ is a Hamiltonian vector field in $\Omega_i$ with Hamiltonian $H$. This induces the noncanonical Poisson structure of $\mathcal{Y}$ in $\Omega_i$ in terms of Hamiltonian $H$ and structure matrix $$\label{J-V}
{\cal J}(x_1,x_2) = V(x_1, x_2) \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right).$$ This completes the proof of (ii).
To prove (iii) a counterexample can be used in dimension $n = 3$. We will show a vector field $\mathcal{Y}$ in a sufficiently small neighborhood $\Omega \subset \mathbb{R}^3$ of a zero-Hopf singularity at the origin such that $\mathcal{Y}$ is strictly conservative in $\Omega$ but it is not a generic Poisson vector field in $\Omega$. Let us consider the quadratic family of vector fields in $\mathbb{R}^3$ $$\label{0H-Ej-1}
\begin{array}{lll}
\dot{x}_1 &=& -x_2, \\ \dot{x}_2 &=& f(x_1,x_3) + x_2 g(x_1,x_3), \\ \dot{x}_3 &=& F(x_1,x_2,x_3),
\end{array}$$ where $f(x_1, x_3) = x_1 + a_0 x_1^2 + a_1 x_1 x_3 + a_2 x_3^2$, $g(x_1, x_3) = b_0 x_1 + b_1 x_3$ and $F(x_1, x_2, x_3) = c_0 x_1^2 + c_1 x_2^2 + c_2 x_3^2 + c_3 x_1 x_2 + c_4 x_1 x_3 + c_5 x_2 x_3$ being $a_i, b_i, c_i \in \mathbb{R}$ the parameters of the family. It can be seen that ${\rm div}(\mathcal{Y}) \equiv 0$ and consequently $\mathcal{Y}$ is strictly conservative in $\Omega$ (see Lemma \[lemadivzero\]) if and only if $$\label{conddiv0}
b_0+c_4 = b_1 + 2 c_2 = c_5 = 0.$$
We consider $\hat{\mathcal{Y}} = -x_2 \partial_{x_1} +(f(x_1,x_3) + x_2 g(x_1,x_3)) \partial_{x_2} + F(x_1,x_2,x_3) \partial_{x_3}$, the three-dimensional vector field associated to (\[0H-Ej-1\]). Using Theorem 1.5 of [@GV], we know that there is a neighborhood $\hat\mathcal{U}$ of the origin in $\mathbb{R}^3$ completely foliated by periodic orbits of $\hat{\mathcal{Y}}$, including continua of equilibria as trivial periodic orbits, if and only if one of the following parameter conditions holds:
- $f(x_1,x_3) = x_1 + a_0 x_1^2 + a_1 x_1 x_3$, $g(x_1, x_3) \equiv 0$ and $F(x_1,x_2,x_3) = c_3 x_1 x_2 + c_5 x_2 x_3$;
- $f(x_1,x_3) =x_1 + a_0 x_1^2 + a_2 x_3^2$, $g(x, z) \equiv 0$ and $F(x_1,x_2,x_3) = c_3 x_1 x_2 + c_5 x_2 x_3$;
- $f(x_1,x_3) =x_1$, $g(x_1, x_3) = b_0 x_1$ and $F(x_1,x_2,x_3) = c_0 x_1^2 + c_3 x_1 x_2 - c_0 x_2^2 + c_4 x_1 x_3 + c_5 x_2 x_3$ with the parameter restriction $b_0 c_0 c_4 - c_0 c_4^2 - c_3 c_4 c_5 + c_0 c_5^2 = 0$;
- $f(x_1,x_3) =x_1 + a_1 x_1 x_3$, $g(x_1, x_3) = b_0 x_1$ and $F(x_1,x_2,x_3) = c_3 x_1 x_2 + c_4 x_1 x_3$;
- $f(x_1,x_3) = x_1 + a_1 x_1 x_3 + a_2 x_3^2$, $g(x_1, x_3) \equiv 0$ and $F(x_1,x_2,x_3) = c_3 x_1 x_2 + c_5 x_2 x_3$.
On the other hand, in [@Isaac] it is proved that $\hat\mathcal{U}$ exists if and only if $\hat{\mathcal{Y}}$ is completely analytically integrable, that is, there are two independent analytic first integrals in $\hat\mathcal{U}$. Now it is easy to check that there are vector fields $\hat{\mathcal{Y}}$ satisfying (\[conddiv0\]) that do not satisfy any of the conditions (A-E). Any such vector field provides the required counterexample: by (\[conddiv0\]) it is strictly conservative in any neighborhood $\Omega$ of the origin, but it admits no neighborhood of the origin foliated by periodic orbits, hence it is not completely analytically integrable there, whereas a generic Poisson vector field is in particular completely analytically integrable. This completes the proof of statement (iii). $\Box$
\[Rem-\*\] [Let $x^* \in \partial \Omega_i \subset V^{-1}(0)$ be a point of the boundary of $\Omega_i$. Observe that the rank of the Poisson structure matrix (\[J-V\]) vanishes on $x^*$. Accordingly, the Darboux canonical form is not defined on such boundary, since there is no constant-rank neighborhood of $x^*$. At the same time, it is worth recalling that the line integral defining the Hamiltonian $H$ in (\[H\]) is not defined on any neighborhood of $x^*$ due to the vanishing of $V$ on such point.]{}
[Under the same hypotheses of statement (ii) of Theorem \[Teo-Poisson-Conservative\], if some $\Omega_i$ is not simply connected then the same construction is valid for every simply connected subdomain of it. This implies that $\Omega_i$ can be fully decomposed as the union of simply connected subdomains on which the Darboux canonical form can be globally constructed. ]{}
Example: Poisson structure of the quadratic isochronous centers
---------------------------------------------------------------
An isolated singular point of $\mathcal{Y}$ is said to be a [*center*]{} if every orbit in a punctured neighborhood of it is a periodic orbit. Additionally, it is said to be an [*isochronous center*]{} if every periodic orbit in such a neighborhood has the same period. For the class of planar quadratic vector fields having an isochronous center at the origin, Loud proved in [@L] that after a linear change of coordinates and a constant time rescaling the system can be brought into four canonical forms. Use of statements (i) and (ii) of Theorem \[Teo-Poisson-Conservative\] can be made in a similar way on the four isochronous cases in order to construct their Poisson structure and invariant measures. For the sake of illustration only one of them will be analyzed here. For this purpose the following isochronous system $\mathcal{Y}$ is chosen: $$\label{Isoc-1}
\dot{x}_1 = - x_2 - \frac{4}{3} x_1^2, \ \dot{x}_2 = x_1 \left(1- \frac{16}{3} x_2 \right).$$ In [@CS] the inverse Jacobi multiplier $V(x_1, x_2) = (3 - 16 x_2) (9 - 24 x_2 + 32 x_1^2)$ was found. The set $V^{-1}(0)$ is composed of a straight line and a parabola which do not intersect. Thus $\Omega = \mathbb{R}^2$ has the natural partition given by $\Omega_1 = \{ (x_1, x_2) \in \Omega : 9 - 24 x_2 + 32 x_1^2 < 0 \}$, $\Omega_2 = \{ (x_1, x_2) \in \Omega : 3 - 16 x_2 < 0 < 9 - 24 x_2 + 32 x_1^2 \}$ and $\Omega_3 = \{ (x_1, x_2) \in \Omega : 3 - 16 x_2 > 0 \}$. From the previous discussion it follows that (\[Isoc-1\]) is a Poisson vector field in each $\Omega_i$ with the same structure matrix $${\cal J}(x_1,x_2) = (3 - 16 x_2) (9 - 24 x_2 + 32 x_1^2) \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right)$$ and Hamiltonian $$H(x_1, x_2) = \frac{1}{384} \log \frac{|-3 + 16 x_2|}{(18 + 64 x_1^2 - 48 x_2)^2}$$ obtained after evaluation of the line integral (\[H\]). As anticipated in Remark \[Rem-\*\], the structure matrix ${\cal J}$ becomes singular on $V^{-1}(0) = \cup_{i=1}^3 \partial \Omega_i$ and, in addition, $H$ is smooth on each $\Omega_i$ but it is not defined on $V^{-1}(0)$.
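Both facts used above can be verified symbolically. The following SymPy sketch (an added illustration, not part of the original text) checks that $V$ satisfies (\[ijm\]) and that $H$ is a first integral on the subdomains where $-3+16x_2>0$, so that the absolute value in $H$ can be dropped.

```python
# Sketch: checks for the quadratic isochronous system (Isoc-1).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
P = -x2 - sp.Rational(4, 3)*x1**2
Q = x1*(1 - sp.Rational(16, 3)*x2)
lie = lambda f: P*sp.diff(f, x1) + Q*sp.diff(f, x2)
div = sp.diff(P, x1) + sp.diff(Q, x2)

V = (3 - 16*x2)*(9 - 24*x2 + 32*x1**2)
H = sp.Rational(1, 384)*sp.log((-3 + 16*x2)/(18 + 64*x1**2 - 48*x2)**2)

print(sp.simplify(lie(V) - V*div))   # 0: V is an inverse Jacobi multiplier
print(sp.simplify(lie(H)))           # 0: H is a first integral (where x2 > 3/16)
```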
\[remark13\] [Let $\mathcal{Y}$ be a smooth Poisson vector field in $\Omega \subseteq \mathbb{R}^n$ with constant rank $r$. Then for every $x_0 \in \Omega$ there is a neighborhood $\Omega_0 \subset \Omega$ of $x_0$ and a smooth diffeomorphism $\Phi$ in $\Omega_0$ such that $\Phi_* \mathcal{Y}$ is written in the Darboux canonical form, hence as a Poisson system with symplectic structure matrix $\mathcal{S}_{(n,r)}$ of dimension $n$ and rank $r$. Since Darboux theorem is not constructive, sometimes in practice [@bs6]-[@bs8] only a more general structure matrix $\eta \, \mathcal{S}_{(n,r)}$ can be reached for $\Psi_* \mathcal{Y}$ under a different diffeomorphism $\Psi$. In this case an additional time rescaling is required to complete the Darboux reduction. Now it is easy to check that function $\eta : \Omega_0 \to \mathbb{R}$ is just an inverse Jacobi multiplier of the Darboux canonical form $(1/\eta) \Psi_* \mathcal{Y}$. This implies that $V = \eta /J_{\Psi}$ is an inverse Jacobi multiplier of the original Poisson system $\mathcal{Y}$ according to Proposition \[V-change\]. ]{}
A theory of weak conservativeness
=================================
Non-smooth differential systems are natural models in many branches of science such as mechanics, electromagnetic theory, automatic control, etc (see for instance [@BBCK; @F; @Kunze]). Therefore we conclude by generalizing the previous planar theory to vector fields with less regularity. The first step is to extend the definition of inverse Jacobi multiplier for a planar vector field $\mathcal{Y}$ as weak solutions of the partial differential equation (\[ijm\]). In this section we will restrict ourselves to $C^1$ vector fields ${\cal Y} = P(x,y) \partial_x + Q(x,y) \partial_y$ defined on simply connected domains $\Omega \subset \mathbb{R}^2$ having smooth boundary $\partial \Omega$.
The well known test functions will be used to introduce the forthcoming Definition \[dwifi3\] which is our definition of weak solution of the partial differential equation (\[ijm\]) defining the classical (hence $C^1$) inverse Jacobi multipliers. Recall that a function $\varphi: \Omega \to \mathbb{R}$ is called [*test function*]{} if $\varphi \in C^1(\Omega)$ and there is a compact set $K \subset \Omega$ such that the support of $\varphi$ is included in $K$. The linear space of all the test functions in $\Omega$ is denoted by ${\cal D}(\Omega)$.
\[dwifi3\] [A function $W: \Omega\subset\mathbb{R}^2 \to \mathbb{R}$ is a [*weak inverse Jacobi multiplier*]{} in $\Omega$ of the vector field ${\cal Y} = P(x,y) \partial_x + Q(x,y) \partial_y$ with integrable divergence in $\Omega$ provided $W$ is integrable in $\Omega$ and verifies $$\int_\Omega W \, [ \mathcal{Y}(\varphi) + 2 \varphi \; {\rm div} \mathcal{Y} ] \ dx dy = 0,$$ for all $\varphi \in {\cal D}(\Omega)$.]{}
The next result gives a relationship between inverse Jacobi multipliers in the plane and weak inverse Jacobi multipliers.
\[twifi1\] Let $V: \Omega\subset\mathbb{R}^2 \to \mathbb{R}$ and a vector field $\mathcal{Y}$ in $\Omega$ be both of class $C^1(\Omega)$. Then $V$ is a weak inverse Jacobi multiplier of $\mathcal{Y}$ in $\Omega$ if and only if it is an inverse Jacobi multiplier.
[*Proof.*]{} Let $V$ be an inverse Jacobi multiplier of ${\cal Y}$ in $\Omega$. Consequently we have $$\int_\Omega [\mathcal{Y}(V) - V \, {\rm div} \mathcal{Y}] \ \varphi \ dx dy = 0 \ ,
\label{wifi2}$$ for every $\varphi \in {\cal D}(\Omega)$. Now taking into account the identities $$P V_x \varphi = (P V \varphi)_x - (P \varphi)_x V \ , \ \ Q V_y \varphi = (Q V \varphi)_y - (Q \varphi)_y V \ ,$$ the integrand of (\[wifi2\]) can be rewritten in the form $$\begin{aligned}
[\mathcal{Y}(V) - V {\rm div} \mathcal{Y} ] \ \varphi & = & (P V \varphi)_x + (Q V \varphi)_y - V \{ \varphi \; {\rm div} \mathcal{Y} + {\rm div} [\varphi \mathcal{Y} ] \} \\
& = & {\rm div} [ V \varphi \mathcal{Y} ] - V \{ \varphi \; {\rm div} \mathcal{Y} + {\rm div} [\varphi \mathcal{Y}] \} \ .\end{aligned}$$ Due to the additivity of the integral, equation (\[wifi2\]) becomes $$\int_\Omega {\rm div} [ V \varphi \mathcal{Y} ] \ dx dy - \int_\Omega V \{ \varphi \; {\rm div} \mathcal{Y} + {\rm div} [\varphi \mathcal{Y}] \} \ dx dy = 0 \ ,
\label{iiisaac}$$ for every $\varphi \in {\cal D}(\Omega)$. The first of these two integrals vanishes. To see this, Green’s theorem on the plane can be applied as follows $$\int_\Omega {\rm div} [ V \varphi \mathcal{Y} ] \ dx dy = \int_{\partial \Omega} - V \varphi Q \ dx + V \varphi P \ dy = \int_{\partial \Omega} V \varphi [ P \ dy - Q \ dx ] = 0 \ ,$$ where in the last step we have used the fact that $\varphi \in {\cal D}(\Omega)$ and therefore $\varphi(x,y)=0$ for all $(x,y) \in \partial\Omega$. To conclude, after rearrangement of the second integral in (\[iiisaac\]), equation (\[iiisaac\]) becomes $$\int_\Omega V [ \mathcal{Y}(\varphi) +2 \varphi \; {\rm div} \mathcal{Y} ] \ dx dy = 0 \ .$$ Then $V$ is a weak inverse Jacobi multiplier according to Definition \[dwifi3\], thus concluding the proof in one direction.
Now if $V$ is $C^1(\Omega)$ then the converse holds by reversing the previous steps. The proof is thus complete. $\Box$
Piecewise $C^k$ weak inverse Jacobi multipliers
-----------------------------------------------
The following definition is now introduced.
\[defwiifisaac\] [Let $\Omega\subset\mathbb{R}^2$ be a domain with smooth boundary $\partial\Omega$. A function $f: \Omega \to \mathbb{R}$ is termed [piecewise $C^k$]{} with $k \geq 1$ in $\Omega$ if there exists a smooth curve $\gamma \subset \Omega$ such that $\Omega_1 \cup \Omega_2 \cup \gamma = \Omega$, where $\gamma =\partial \Omega_1 \cap \partial \Omega_2$ and $f$ verifies $f \in C^k(\Omega_1 \cup \Omega_2)$, but $f \not\in C^1(\gamma)$. Additionally, a planar vector field $\mathcal{Y}$ is piecewise $C^k$ in $\Omega$ if both components are piecewise $C^k$ in $\Omega$ with respect to the same curve $\gamma$. ]{}
The forthcoming theorem will be useful in the context of weak inverse Jacobi multipliers.
\[Teo-inv\] Let $\mathcal{Y}$ be a planar vector field admitting a weak inverse Jacobi multiplier $W$ piecewise $C^1$ in $\Omega$ for the curve $\gamma \subset \Omega$ and $W \not\in C(\gamma)$. Then $\gamma$ is an invariant curve of $\mathcal{Y}$.
[*Proof.*]{} Since $W$ is a weak inverse Jacobi multiplier for ${\cal Y}$ in $\Omega$ we have $$\label{int_suma}
\int_\Omega W [ \mathcal{Y}(\varphi) + 2 \varphi \; {\rm div} \mathcal{Y} ] \ dx dy =
\sum_{i=1}^2 \int_{\Omega_i} W_i [ \mathcal{Y}(\varphi) + 2 \varphi \; {\rm div} \mathcal{Y} ] \ dx dy = 0 \ ,$$ for all $\varphi \in {\cal D}(\Omega)$, where $W_i = W|_{\Omega_i}$. Since $W_i \in C^1(\Omega_i)$, we can make use of the identity $$W_i \varphi \; {\rm div} \mathcal{Y} = {\rm div} [ W_i \varphi \mathcal{Y} ] - \varphi \mathcal{Y}(W_i) - W_i \mathcal{Y}(\varphi) \ .$$ Since $\mathcal{Y}(W_i) = W_i {\rm div} \mathcal{Y}$ in $\Omega_i$, the previous identity can be written in $\Omega_i$ as $$2 W_i \varphi \; {\rm div} \mathcal{Y} = {\rm div} [ W_i \varphi \mathcal{Y} ] - W_i \mathcal{Y}(\varphi) \ .$$ This allows writing (\[int\_suma\]) as $$\sum_{i=1}^2 \int_{\Omega_i} {\rm div} [ W_i \varphi \mathcal{Y} ] \ dx dy = 0 \ .
\label{wiifinter1}$$ Applying Green’s theorem to the two previous integrals we have $$\int_{\Omega_i} {\rm div} [ W_i \varphi \mathcal{Y} ] \ dx dy = \int_{\partial\Omega_i} W_i \varphi (P \ dy - Q \ dx) = (-1)^{i+1} \int_{\gamma} W_i \varphi (P \ dy - Q \ dx) \ ,$$ where in the last step we use that by definition of test function in $\Omega$ it is $\varphi(x,y)=0$ for all $(x,y) \in \partial\Omega_i \backslash \gamma$ for $i=1,2$. The factor $(-1)^{i+1}$ takes into account the fact that the line integral has an opposite sense of integration for $\partial\Omega_1$ and $\partial\Omega_2$. Therefore condition (\[wiifinter1\]) yields $$\int_{\gamma} (W_1-W_2) \varphi (P \ dy - Q \ dx) = 0,$$ for all $\varphi \in {\cal D}(\Omega)$. Since $W \not\in C(\gamma)$ we conclude that $P \ dy - Q \ dx = 0$ on $\gamma$. Consequently $\gamma$ is an invariant curve of $\mathcal{Y}$. $\Box$
\[rem\]
Some simple examples of weak inverse Jacobi multiplier $W$ of ${\cal Y}$ in $\Omega$ are listed below:
- Bounded piecewise $C^1$ functions $W$ in $\Omega$ with respect to an invariant curve $\gamma \subset \Omega$ such that each restriction $W|_{\Omega_i}$ is an inverse Jacobi multiplier of ${\cal Y}|_{\Omega_i}$. If additionally $W|_{\Omega_i} > 0$ then $1/W$ is an invariant measure for ${\cal Y}$ in $\Omega$.
- Consider an inverse Jacobi multiplier $V$ of ${\cal Y}$ in $\Omega$ and assume there is a curve $\gamma = V^{-1}(0)$ that induces a partition $\Omega =\Omega_1 \cup \Omega_2 \cup \gamma$, where $\gamma =\partial \Omega_1 \cap \partial \Omega_2$, with $V|_{\Omega_i}$ having opposite signs. Then $W = |V|$ is a weak inverse Jacobi multiplier of ${\cal Y}$ in $\Omega$.
Example: Perturbing a non-smooth harmonic oscillator
----------------------------------------------------
As an instance of statement (i) of Remark \[rem\], consider the mechanical model of the non-smooth harmonic oscillator $\ddot y + 2 \, {\rm sign}(y) = 0$, see [@BCT]. Taking $\dot{y} = 2 x$, the associated piecewise smooth vector field $\mathcal{Y}_0$ in the $(x,y)$-phase plane is given by $\mathcal{Y}_0^+$ if $y >0$ and $\mathcal{Y}_0^-$ when $y <0$ where $\mathcal{Y}_0^\pm = (\mp 1) \partial_x + 2 x \partial_y$. Now we shall perturb $\mathcal{Y}_0$ as follows: $\mathcal{Y}^\pm_\varepsilon = (\mp 1) \partial_x + (2 x + \varepsilon y) \partial_y$. Define the semi-planes $\Omega^{+} = \{ (x,y) \in \mathbb{R}^2 : y \geq 0 \}$ and $\Omega^{-} = \{ (x,y) \in \mathbb{R}^2 : y < 0 \}$. It is direct to check that $V^\pm(x,y) = \exp(\mp \varepsilon x)$ is an inverse Jacobi multiplier of $\mathcal{Y}_\varepsilon^\pm|_{\Omega^{\pm}}$, respectively. Now we define the piecewise $C^1$ function $W$ with respect to the curve $\gamma \equiv \{y=0\}$ as $W|_{\Omega^{\pm}} = V^\pm$. Notice that $\gamma$ is an invariant curve of $\mathcal{Y}_\varepsilon$ in agreement with Theorem \[Teo-inv\]. According to Remark \[rem\], $W$ is a weak inverse Jacobi multiplier of $\mathcal{Y}_\varepsilon$ in $\Omega = \mathbb{R}^2$ and $1/W$ is an invariant measure of $\mathcal{Y}_\varepsilon$ in $\Omega$.
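The branch-wise statement can again be checked symbolically (an added illustration, not part of the original text): on each half-plane the corresponding $V^{\pm}$ satisfies the classical relation (\[ijm\]) for $\mathcal{Y}^{\pm}_\varepsilon$.

```python
# Sketch: branch-by-branch check of (ijm) for the perturbed non-smooth oscillator.
import sympy as sp

x, y, eps = sp.symbols('x y varepsilon')
for s in (+1, -1):                            # s = +1 on y > 0, s = -1 on y < 0
    P, Q = -s*sp.Integer(1), 2*x + eps*y      # components of Y_eps^{+/-}
    V = sp.exp(-s*eps*x)                      # V^{+/-}
    lie_V = P*sp.diff(V, x) + Q*sp.diff(V, y)
    div_Y = sp.diff(P, x) + sp.diff(Q, y)
    print(sp.simplify(lie_V - V*div_Y))       # 0 on each branch
```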
Example: general Poisson systems
--------------------------------
Let us now consider an example of statement (ii) of Remark \[rem\]. As already recalled throughout the article, a general planar Poisson system (here considered in the $C^\infty$ class) has the form $$\label{poisson-2d}
\frac{\mbox{\rm d}x}{\mbox{\rm d}t} = \eta(x_1,x_2) {\cal S}_{(2,2)} \cdot \nabla H (x_1,x_2)$$ where ${\cal S}_{(2,2)}$ is the $2 \times 2$ symplectic matrix. As usual, assume the system is defined in a domain $\Omega$. The structure matrix ${\cal J}(x_1,x_2) = \eta(x_1,x_2) {\cal S}_{(2,2)}$ has constant rank 2 in $\Omega$ if and only if $\eta$ does not vanish in $\Omega$. In this case, as indicated in Remark \[remark13\] (see also references therein) it is possible to construct the Darboux canonical form globally in $\Omega$ by means of a time reparametrization and the smooth function $\eta$ is an inverse Jacobi multiplier of the Darboux canonical form (which is now a classical Hamiltonian flow). The opposite case arises (as mentioned in item(ii) of Remark \[rem\]) provided there is a curve $\gamma \equiv \{ (x_1, x_2) \in \Omega : \eta (x_1,x_2)=0 \}$ leading to a partition $\Omega =\Omega_1 \cup \Omega_2 \cup \gamma$, with $\gamma =\partial \Omega_1 \cap \partial \Omega_2$, and $\eta|_{\Omega_i}$ having opposite signs. Now the global Darboux reduction is not possible since ${\cal J}$ is not regular in $\Omega$. Note that $\eta$ is still an inverse Jacobi multiplier (and now $| \eta|$ is a weak inverse Jacobi multiplier) of the Darboux canonical form in $\Omega$, and such reduction can be now carried out separately on each subdomain $\Omega_i$. From the point of view of time rescalings, in each subdomain $\Omega_i$ they will have opposite signs, namely in one subdomain the direction of time will be inverted in the Darboux reduction, while in the other it will not. Additionally, in this case $\gamma$ is an invariant curve.
[**Acknowledgements**]{}
Both authors would like to acknowledge Ministerio de Economía y Competitividad for Project Ref. MTM2014-53703-P. In addition, I.A.G. acknowledges AGAUR grant number 2014SGR 1204. B.H.-B. acknowledges Ministerio de Economía y Competitividad for Project Ref. MTM2016-80276-P as well as financial support from Universidad Rey Juan Carlos-Banco de Santander (Excellence Group QUINANOAP, grant number 30VCPIGI14). Finally, B.H.-B. is sincerely indebted to the members of Departament de Matemàtica, Universitat de Lleida, for their kind hospitality.
---
author:
- 'P. Miocchi'
- 'R. Capuzzo-Dolcetta'
date: 'Received ??? / Accepted ???'
title: |
An efficient parallel tree-code for the simulation\
of self-gravitating systems[^1]
---
Introduction
============
Tree-codes (Barnes & Hut [@BH], hereafter BH; Hernquist [@H]) are particle algorithms extensively employed in the simulations of large self-gravitating astrophysical systems. They greatly speed up the numerical evaluation of the gravitational interactions among the $N$ bodies of the system. Of course, the parallelization of the algorithm aims at attaining larger and larger $N$ in the numerical representation of the real system; this is important not only to improve the spatial resolution, but also to get much more meaningful results, because a number of ‘virtual’ particles that is too low in comparison with the number of real bodies gives rise to an unphysical shortening of the 2-body collision time.
In general, an efficient parallelization requires a suitable distribution of the data to the processors (hereafter PEs), the so-called *domain decomposition* (DD), so as to i) distribute the numerical work as uniformly as possible and ii) minimize the data exchange among the PEs. Of course, this latter point is relevant only on distributed memory platforms. Moreover, such a DD should be performed at a minimal computational cost.
In the numerical evaluation of the gravitational interactions it is difficult to deal with these tasks, because the long-range nature of gravity makes data transfer among PEs unavoidable. Furthermore, self-gravitating systems have often non-uniform mass distributions, that give rise to very inhomogeneous distributions of the work-load (the amount of calculations needed to evaluate the acceleration of a particle). This implies that the DD should be weighted, in some way, according to the work-load.
Finally, the hierarchical arrangement of the subsets of the mass distribution which the tree-code is based on, implies that most of the computations for evaluating the acceleration of a particle regard the evaluation of the force due to *close* bodies. This suggests a *spatial* DD: each domain should be enclosed in a volume as *contiguous* and compact as possible.
At present, one of the most widely used approaches for the DD is the orthogonal recursive bisection (Warren & Salmon [@orb]; Dubinski [@dubinsky]; Lia & Carraro [@carraro]; Springel et al. [@gadget]), which consists of a recursive subdivision of space into pairs of equally weighted sub-domains. On every sub-domain, the owning PE builds (independently of the others) its *local* tree data structure. Such a structure is then enlarged by including those data, belonging to *remote* trees, that are needed for the local force evaluation.
The main disadvantage of this approach is that the retrieval of remote data is complicated (and computationally expensive, too), mainly because of the lack of an addressing reference scheme common to all the PEs. The ‘hashed oct-tree’ method (Warren & Salmon [@hashed], hereafter WS) solves this problem, but a certain implementation complexity still remains and some radical changes are required in the way the tree arrangement is usually stored. For this reason we decided to develop a new and easy-to-implement method for efficiently sharing the computational domain among the PEs in a distributed memory architecture.
This paper is organized as follows. In Sect. \[treecode\] we describe how our tree-code differs from the original method illustrated in BH, both from the general point of view of the algorithm and in connection with the parallelization approach. The latter is described in Sect. \[parallelization\] for both stages the tree-code is made up of. Finally, the performances of a PGHPF (Portland Group High Performance Fortran) implementation running on a Cray T3E computer are discussed in Sect. \[results\].
Our version of the BH tree-code {#treecode}
===============================
In this Section we describe some modifications of the original BH tree-code, which are also important (as we will see later) from the point of view of the parallelization technique. They concern the construction of the tree arrangement. The reader who is not familiar with the basic features of tree-codes can find detailed descriptions in Warren & Salmon ([@orb]); Hernquist ([@H]); Hernquist & Katz ([@HK]); Springel et al. ([@gadget]).
Let us give some definitions that may differ from those used by other authors:

- the *boxes* are the cubes that make up the hierarchical structure (arranged as an octal tree graph) built during the *tree-setting* stage by recursively subdividing the *root* box enclosing the whole system;
- the root is at the $0$-th *subdivision level*;
- a *parent* box, at the $l$-th level, is a box which includes more than one particle and which is split into 8 cubic *sub-boxes* at the $(l+1)$-th subdivision level;
- the *terminal* boxes are those with just one particle inside;
- the *tree-traversal* is the phase in which all the particle accelerations are evaluated by “ascending” the tree from the root upward.
We adopted an internal memory representation of the tree structure that makes use of pointers, i.e. integers pointing to the locations in which the data of the sub-boxes have been stored. This allows box data to be accessed recursively with $O(\log N)$ operations and, moreover, as we will see, it permits the portion of the tree initially assigned to a given PE to be completed easily and with a minimal communication overhead, by appending those remote box data needed to calculate the accelerations of the particles in its domain.
The tree-setting is performed through a recursive method different from the original approach used in BH. The method can be outlined as: given a *parent* box and the set of all the particles it contains, the subset of the particles enclosed in a given sub-box is found. If it is *non*-empty then the multipolar coefficients of the sub-box, plus various parameters, are evaluated and stored into a free memory location. Then a pointer in the parent box is set to point to such location. This procedure starts from the root box and is repeated recursively for any non-terminal sub-box. Also for the evaluation of the multipolar coefficients the recursive ‘natural’ approach is used (as in Hernquist [@H]).
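To fix ideas, the following minimal C sketch mirrors the recursive procedure just outlined (this is our illustration, not the code actually used: the data layout and names are invented; the key-based membership test of Appendix \[mapping\] is replaced by a direct coordinate comparison; and only the monopole, i.e. the total mass and centre of mass, is accumulated in place of the full multipolar coefficients).

```c
#include <stdio.h>
#include <stdlib.h>

#define NMAX 1000

typedef struct Box {
    double cen[3], size;        /* geometric centre and edge length        */
    double mass, com[3];        /* monopole: total mass and centre of mass */
    int    np;                  /* number of particles enclosed            */
    struct Box *sub[8];         /* pointers to the non-empty sub-boxes     */
} Box;

static double x[NMAX][3], m[NMAX];     /* particle coordinates and masses */

/* index (0..7) of the sub-box of b that contains particle i */
static int subbox_of(const Box *b, int i)
{
    return (x[i][0] > b->cen[0]) | ((x[i][1] > b->cen[1]) << 1)
         | ((x[i][2] > b->cen[2]) << 2);
}

/* recursively build the tree for the np particles listed in idx[] */
static Box *build(double cen[3], double size, int *idx, int np)
{
    Box *b = calloc(1, sizeof(Box));
    int  d, i, j, *sub_idx[8], nsub[8] = {0};

    for (d = 0; d < 3; d++) b->cen[d] = cen[d];
    b->size = size;
    b->np   = np;

    /* monopole coefficients, accumulated directly over the particles */
    for (i = 0; i < np; i++) {
        b->mass += m[idx[i]];
        for (d = 0; d < 3; d++) b->com[d] += m[idx[i]] * x[idx[i]][d];
    }
    for (d = 0; d < 3; d++) b->com[d] /= b->mass;

    if (np == 1) return b;                 /* terminal box: stop here */

    for (j = 0; j < 8; j++) sub_idx[j] = malloc(np * sizeof(int));
    for (i = 0; i < np; i++) {             /* split particles among sub-boxes */
        j = subbox_of(b, idx[i]);
        sub_idx[j][nsub[j]++] = idx[i];
    }
    for (j = 0; j < 8; j++) {              /* recurse on non-empty sub-boxes */
        if (nsub[j] > 0) {
            double c[3];
            for (d = 0; d < 3; d++)
                c[d] = b->cen[d] + 0.25 * size * ((j >> d & 1) ? 1.0 : -1.0);
            b->sub[j] = build(c, 0.5 * size, sub_idx[j], nsub[j]);
        }
        free(sub_idx[j]);
    }
    return b;
}

int main(void)
{
    int i, idx[4];
    double cen[3] = {0.5, 0.5, 0.5};
    double xt[4][3] = {{0.1,0.2,0.3},{0.8,0.7,0.1},{0.6,0.6,0.9},{0.2,0.9,0.8}};
    for (i = 0; i < 4; i++) {
        x[i][0]=xt[i][0]; x[i][1]=xt[i][1]; x[i][2]=xt[i][2];
        m[i] = 1.0;  idx[i] = i;
    }
    Box *root = build(cen, 1.0, idx, 4);
    printf("root: np=%d mass=%g com=(%g,%g,%g)\n",
           root->np, root->mass, root->com[0], root->com[1], root->com[2]);
    return 0;
}
```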
Within the framework of the tree-setting just described, it is important to employ a fast method to check whether a particle belongs to a box or not. In this respect, we implemented a *spatial mapping* of the particles, which ‘translates’ the coordinates of each of them into a single binary number (a ‘key’) encoding all the information needed to establish which box contains it at *any* subdivision level. Moreover, this information can be retrieved quickly using binary operations within a *recursive* context. Details about such a method are given in Appendix \[mapping\].
The parallelization method {#parallelization}
==========================
Parallel tree-setting and domain decomposition {#par_tree_setting}
----------------------------------------------
As far as the parallel execution of the tree-setting is concerned, an important feature of the logical data structure is that the lower levels of the tree are made up of few but highly populated boxes while, on the contrary, at upper levels there are many boxes, but containing few particles.
This suggests two different schemes of work distribution to the PEs during this phase. Indeed, in order to have a good load-balancing, it is desirable to make the number of the ‘computational elements’ much larger than the number $p$ of the PEs which the work has to be distributed to. Hence, in our case it is convenient that, on one side the parallel setting up of the lower levels of the tree is done by assigning to each processor a sub-set of the particles belonging to the *same* box; on the other side, for the construction of the upper levels, whole sets of particles belonging to *distinct* boxes should be assigned to each PE. Thus, before going any further, let us give some useful definitions. Given $k > 1$ a fixed integer, we call
- *lower* box: a box containing a number of particles $n$ such that $n > kp$;
- *upper* box: a box with $n \leq kp$;
- *pseudo-terminal* (PTERM) box: an upper box having a parent lower box.
Finally, we call ‘lower (upper) tree’ the portion of the whole tree made up of lower (upper) boxes (see Fig. \[treeparts\]).
The parallel tree-setting is then executed in two steps, the first step for the construction of the lower tree and the second one for that of the upper tree, as described in the following Sections.
### Lower tree setting {#low-tree-setting}
In the first step, the particles are initially distributed at random to the PEs, i.e. without any correlation with their spatial location. Then, the PEs start building the tree, working on lower boxes, according to the recursive procedure described in Sect. \[treecode\]. They consider, at the same time, the *same* box but dealing, of course, only with their own particles. This phase of the tree setting stops at PTERM boxes (instead of at terminal ones, as is done in the serial version), where no more ‘branches’ are set up. In order to obtain an efficient parallel execution, the evaluation of the multipolar coefficients of (lower) boxes is done directly, i.e. by means of summations running over the set of particles contained, rather than by the recursive formulas involving the sub-boxes coefficients. To attain the maximum data-locality, each PE keeps a copy of the lower tree structure in its own local memory. This ensures that, in the tree-traversal stage, the reading access to lower boxes[^2] data will not be slowed down continuously by inter-processor data transfer, which is one of the performance bottlenecks of codes running on distributed memory parallel computers. Note also that the amount of local storage needed for this part of the tree scales like the number of lower boxes, that is like $\sim \tau\log \tau$, where $\tau \sim N(kp)^{-1}$ is the total number of PTERM boxes. Thus, the memory occupation of each local copy of the lower tree scales, conveniently, like the number of particles *per* processor. During the current step, the large number of particles in lower boxes (provided that $k$ is sufficiently large), together with the random particle distribution to the PEs, ensures a good work-load balancing.
![Example of PTERM boxes inspection order, for a uniform particle distribution in 2-D. The boxes are on the 3$^{\rm rd}$ level of the spatial subdivision and each pattern corresponds to a different processor’s domain (among 4 PEs), while the dashed arrows indicate the ‘jumps’ along the path.[]{data-label="order"}](figure2.ps){width="6cm"}
At this point a suitable re-distribution of the particles (for an efficient tree-traversal) is performed ‘on the fly’ by exploiting our recursive way to set up the logical tree structure. Actually, such recursive approach, together with the technique used in mapping the particles’ coordinates (see Fig. \[sub-box\]), leads to a particular order in which the PTERM boxes are met. Such order corresponds to a one-dimensional path connecting a PTERM box to another spatially *adjacent*[^3] in a self-similar fashion (see Fig. \[order\]). Though similar to that used by WS, the order is obtained in a substantially different way, as discussed in Appendix \[mapping\].
This order is such that, by cutting the path into $p$ contiguous pieces with the same ‘length’ and then assigning to the $i$-th PE the particles contained in all the PTERM boxes of the $i$-th piece, one obtains a particularly efficient DD. Indeed, it is characterised by having most of the sub-domains compact in space [^4].
The particle assignment is performed every time a PTERM box is met. Moreover, besides the particles contained, the data regarding the box are also stored into the *local* memory of the PE. For this reason, it is necessary that the pointer in the parent box pointing to the PTERM one also includes the information about which PE owns the latter. This way, any other PE can easily access the box data. The information is included within the pointer itself by constructing the *full-address* of the PTERM box, as described in Appendix \[fulladdressing\].
As in WS, we found that a good load-balancing can be attained if one ‘measures’ the path length in terms of the computational loads of the PTERM boxes, defining such a quantity as the sum of the ‘weights’ of all the particles inside them, where the weight of a particle is proportional to the number of bodies (both particles and boxes) whose force on it has been evaluated during the tree-traversal of the *previous* time step.
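A minimal sketch of this weighted cut of the path is given below (our illustration; the PTERM boxes are assumed to be already stored in the order in which they are met along the path, with `load[i]` the sum of the weights of the particles inside the $i$-th box):

```c
/* Assign contiguous pieces of the PTERM-box path to p PEs so that each
 * piece carries roughly the same total work-load (simple greedy cut).
 * owner[i] returns the PE receiving the i-th box and its particles.    */
void decompose_path(const double *load, int nboxes, int p, int *owner)
{
    double total = 0.0, target, acc = 0.0;
    int i, pe = 0;

    for (i = 0; i < nboxes; i++) total += load[i];
    target = total / p;                  /* ideal work-load per PE */

    for (i = 0; i < nboxes; i++) {
        owner[i] = pe;
        acc += load[i];
        /* move to the next PE once its share is filled; the last PE is
         * kept open so that every box gets an owner                     */
        if (acc >= target && pe < p - 1) {
            pe++;
            acc = 0.0;
        }
    }
}
```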
An important difference with respect to the WS method is that in our scheme the DD is performed via a distribution of (PTERM) *boxes* to the PEs, rather than by directly distributing particles, and this makes the parallel construction of the sub-trees easier in comparison with the HOT method. In the latter, in fact, there are several complications in setting up the local parts of the tree, such as: the ‘broadcasting’ of branch boxes, the inter-processor exchange of data regarding particles located at the border of sub-domains, etc. In Fig. \[domains\] an example of the particle distribution to 4 PEs is plotted for a non-uniform case.
![ Example of domain decomposition among four PEs, for a cluster represented with 16,384 particles. Top: ‘section’ of the 3-D particles distribution lying on the $yz$ plane. Bottom: the sections of the 4 sub-domains have been spaced for clarity; note how one of them (the grey one) is not completely compact.[]{data-label="domains"}](figure3.ps){width="7cm"}
### Upper tree-setting
The second step is the construction of the remaining upper part of the tree, which is performed according to the same recursive procedure used for lower boxes but, in this case, every PE works independently and without synchronism, starting every time from each PTERM box in its own domain. Every PE will store, in its local memory, the logical and data structure of all the sub-trees whose roots are given by such boxes. For example, in Fig. \[treeparts\], the PE owning the rightmost PTERM box, will set up the sub-tree enclosed within the dashed rectangle. Another difference, in comparison with the lower tree setting, is that all the pointers box $\rightarrow$ sub-box adopt the full-addressing, because, in principle, any box could be required by other processors during the tree-traversal.
Parallel tree-traversal {#par_tree_trav}
-----------------------
In this stage, each PE uses exactly the same recursive procedure usually adopted in serial tree-codes (see e.g. Hernquist [@H]) to evaluate the forces acting on the particles though, of course, only on those belonging to its domain.
At the beginning of this phase, such domain includes both the whole lower tree and those sub-trees whose roots are the PTERM boxes assigned to the PE. We can call all this set of boxes the initial *locally essential tree* (LET). This latter is not yet “complete”, in the sense that it does not include, yet, all the bodies that are necessary to evaluate the forces on the particles in the PE’s domain, lacking some of the upper boxes belonging to *remote* sub-trees (though they are the minority of all the required bodies, thanks to the spatially compact DD).
Anyway, the suitable addressing scheme adopted and the fact that the boxes belong to the *overall* tree topology (as well as the recursive approach for the tree-traversal) allow us to perform the *LET completion* ‘at run time’. Given a particle belonging to a certain PE and given a box $B\in$ LET whose sub-boxes have to be handled: if a sub-box does not belong to the LET, as can be immediately recognized by Eq. \[fulladdr2\], then (i) get all its data (and all its pointers) from the owning PE’s memory at the address given by Eq. \[fulladdr3\], (ii) copy them into a free location of the local memory, (iii) change the pointer in $B$ to point to this new *local* address. This way, the sub-box is included into the LET and, when any other particle requires it, it is already found in the local memory. This mechanism minimizes the amount of inter-processor communications.
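The following toy C sketch illustrates the mechanism (it is ours and deliberately self-contained: the ‘local memories’ of the PEs are mimicked by the rows of an ordinary array, the one-sided remote read available on the T3E is replaced by a plain array copy, and the helper names are invented; the full-address layout is the one of Appendix \[fulladdressing\]).

```c
#include <stdio.h>
#include <stdint.h>

#define NPE   4
#define NLOC  64
#define FLAG  ((uint64_t)1 << 62)             /* leading bit marking upper boxes */

typedef struct { double mass; } BoxData;      /* stand-in for the box data */

static BoxData  mem[NPE][NLOC];               /* "local memory" of each PE */
static int      my_pe    = 0;
static uint64_t heap_top = 10;                /* first free local location */

static uint64_t full_address(int pe, uint64_t baddr) { return FLAG | (baddr << 8) | (uint64_t)pe; }
static int      owner_of(uint64_t a)   { return (int)(a & 255); }
static uint64_t local_part(uint64_t a) { return (a & ~FLAG) >> 8; }
static int      is_remote(uint64_t a)  { return (a & FLAG) && owner_of(a) != my_pe; }

/* Return a local pointer to the box referenced by *slot; if the box still
 * lives on another PE, copy it into the local LET and re-point *slot.     */
static BoxData *resolve(uint64_t *slot)
{
    if (is_remote(*slot)) {
        uint64_t new_addr = heap_top++;
        mem[my_pe][new_addr] = mem[owner_of(*slot)][local_part(*slot)]; /* "remote read" */
        *slot = full_address(my_pe, new_addr);   /* next access will be local */
    }
    return &mem[my_pe][local_part(*slot)];
}

int main(void)
{
    mem[2][5].mass = 3.14;                  /* a box owned by PE 2           */
    uint64_t ptr = full_address(2, 5);      /* pointer as stored in the tree */
    printf("first access : mass = %g\n", resolve(&ptr)->mass);
    printf("second access: mass = %g (local address %llu)\n",
           resolve(&ptr)->mass, (unsigned long long)local_part(ptr));
    return 0;
}
```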
Note that the full-addressing mechanism makes remote data retrieval immediate, thanks to the fact that the local *sub*-trees are portions of a *single*, global tree arrangement, unlike in the orthogonal recursive bisection scheme (Warren & Salmon [@orb]). In our opinion our addressing method is as ‘global’ as that used in WS, though easier to implement and with a lower computational overhead, as we will see.
Results
=======
The efficiency of the parallelization method has been checked by analysing the performances for a *single* evaluation of the forces on a set of particles. Such evaluation includes one tree-setting (including DD) and one tree-traversal step, as well as all the necessary inter-processor communications and remote accesses (LET completion), *without* the time integration of trajectories. Indeed, it is known that most of the CPU-time used by a simulation of large self-gravitating systems is spent in the computation of the gravitational interactions. Usually this latter takes at least 80% of the total CPU-time, while the time advancing of particles’ dynamical quantities (position, velocity, etc.) takes just $\sim$ 10%, because its computational cost scales like $N$. This cost is even smaller for low-order time integration schemes, like those generally used in conjunction with tree-codes[^5]. Moreover, it is generally very simple to parallelize time integration methods, because the time advancing of a particle is independent of that of the others and the corresponding work-load is, normally, very homogeneous. For this reason our tests involve the forces computation only, which indeed represents the key problem to overcome for getting a good parallelization.
Nevertheless, one has to be careful when time integration algorithms adopt individual time steps (that is a desirable feature when dealing with self-gravitating systems with a wide range of time scales), because they imply a force evaluation which is mostly performed on a *subset* of the entire set of particles. We discuss this problem in Appendix \[time\_integration\].
All the tests were performed on a set of $N=128$K equal mass ($m$) particles distributed according to the Plummer profile (known to fit globular clusters acceptably, at least in regions not too far from the center, see Binney & Tremaine [@binney]) $\rho(r)=\rho_0(1+r^2/r_c^2)^{-5/2}$, within a sphere of radius $R$ chosen so as to enclose a total particle mass $M=Nm$ such that $M=0.995\times M_\infty$, where $M_\infty=\int_0^\infty 4\pi r^2\rho dr$. The core radius is chosen as $r_c=6\times 10^{-2}R$, while $\rho_0=3M_\infty(4\pi r_c^3)^{-1}$ is the central density. This is a highly non-uniform distribution, being $\rho_0/\rho(R)\sim 10^6$ (see Fig. \[plummer\]).
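For reference, the radial positions of such a truncated Plummer sphere can be drawn by inverting its cumulative mass fraction $m(r)=r^3/(r^2+r_c^2)^{3/2}$, as in the following sketch (ours, not the initial-condition generator actually used; masses, angles and velocities are omitted):

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Sample the radii of a Plummer sphere with r_c = 0.06 R, truncated at
 * r = R; lengths are in units of R.                                     */
int main(void)
{
    const int    N  = 131072;             /* 128K particles */
    const double rc = 6.0e-2;             /* core radius    */
    int i;

    srand(12345);
    for (i = 0; i < N; i++) {
        double u, r;
        do {   /* invert m(r) = r^3/(r^2+rc^2)^(3/2) with u uniform in (0,1) */
            u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
            r = rc / sqrt(pow(u, -2.0 / 3.0) - 1.0);
        } while (r > 1.0);                 /* keep only r <= R */
        if (i < 3) printf("r[%d] = %g\n", i, r);
    }
    return 0;
}
```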
For the box-particle force evaluation we used multipolar series truncated at the quadrupoles, and the original BH ‘opening’ criterion with different values for the open-angle parameter $\theta$. The maximum number of particles in PTERM boxes (see Sect. \[par\_tree\_setting\]) was set equal to $16\times p$, this value giving the best performances for any number of PEs ($p$), as we verified. The tests ran on a Cray T3E and concerned our parallelization method implemented by means of PGHPF/Craft directives.
![ The clustered set of 128K particles used in the tests.[]{data-label="plummer"}](figure4.ps){width="7cm"}
The performances
----------------
The good efficiency of the parallelization approach described in previous Sections is basically shown by three facts: i) a behavior of the relative speedup[^6] close to the ideal one (linear in $p$); ii) a low unbalancing of the work-load; iii) a low parallelization overhead[^7].
The good overall code scalability is shown in the upper panel of Fig. \[speedup\]. It appears to be rather good for $p\leq 16$ (i.e. $N/p\geq 8192$). For more than 16 PEs, the performances start to degrade because of the too small number of particles *per* PE: with $p\geq 32$ one has $\leq 4096$ particles per PE, which makes the tree-code itself not very efficient. To give an immediate indication of how fast the calculations are for a given accuracy, we show in the lower panel the absolute speed of the code versus the relative error on the force evaluation. The relative error on the evaluation of the force (per unit mass) on a particle – due to the use of the truncated multipolar expansion for the boxes satisfying the opening criterion – is defined as: $\delta a/a \equiv |a_{\rm tc}-a|/a$, where $a$ is the magnitude of the acceleration evaluated by means of the ‘exact’ particle-particle summation, and $a_{\rm tc}$ denotes the magnitude of the acceleration calculated via the tree-code.
The computational work-load is well balanced among the PEs. A natural way to quantify the load unbalancing, $u$, is via the formula $u=(t_{\rm max}- t_{\rm min})/<\!\! t\!\! >$, where $t_{\rm max}$ and $t_{\rm min}$ are, respectively, the maximum and the minimum CPU-time spent by the PEs to perform a given procedure and $<\!\! t\!\! >$ is the averaged CPU-time. From Fig. \[workload\], we can see that, for a sufficiently high number of particles per PE (say for $N/p \geq 8,000$) we have a quite low $u$, that is less than 10% for the tree-setting and always less than 6% during the tree-traversal, demonstrating the efficiency of the DD. Only for $N/p\sim 4,000$ the unbalancing becomes unacceptable ($>50$%).
![ Load unbalancing parameter ($u$) for a single force evaluation with $\theta=0.7$. Solid line: for the tree-setting stage; dotted line: for the tree-traversal.[]{data-label="workload"}](figure6.ps){width="8cm"}
Last, but not least, we can see in Table \[tabella\] that the parallelization overhead takes only 3.2% of the total CPU-time in a 8 PEs run. Moreover, as we verified, this percentage increases significantly only when $N/p< 8,000$. Thus, in optimal conditions the ‘surplus’ of CPU-time specifically needed to make the parallelization operative (in our case spent by the DD plus the LET completion) is almost negligible. This point is crucial in order to state that a parallel code is really efficient.
Indeed, even in a distributed memory context one can get a highly scalable tree-code with a good work-load balancing using just a very naive DD. In fact, the load balancing can be achieved by means of a ‘dynamical’ distribution[^8] of the particles to the PEs (see Singh et al. [@singh]), tolerating a great deal of communications and remote accesses. Following this approach, we implemented another parallel version of the tree-code obtaining a good speedup scaling and a low load unbalancing on the T3E, but the communications overhead *heavily* affected the absolute performances, making it not convenient for practical use (see Capuzzo-Dolcetta & Miocchi [@cdm-ssc97], [@cdm-cpc]).
  Code section               sec      %
  -------------------------- -------- -------
  tree-setting               $1.4$    $6$
  *domain decomposition*     $0.1$    $0.5$
  tree-traversal             $21$     $94$
  LOWER tree-traversal       $3.3$    $15$
  UPPER tree-traversal       $18$     $80$
  *LET completion*           $0.6$    $2.7$
  total                      $22.4$   $100$
: Code CPU-time consumption with 8 PEs, 128K particles and $\theta=0.7$. Italic: parallelization overhead. \[tabella\]
Finally, the code requires roughly 1 Kbyte of memory per particle. For instance, more than $10^7$ particles can be handled by 128 processors having 128 Mbyte each. Such an amount of particles can be further increased in a more optimized message passing implementation.
Comparisons with other codes
----------------------------
To make really significant comparisons of the performances of different tree-codes, one should ensure the forces computation be done with the same accuracy and on the same set of particles. Of course, such performances depend also on the opening criterion adopted, because at a given accuracy and with a given set of particles, different opening criteria can give different amounts of interactions to evaluate on a particle, thus giving different computation speeds. Therefore, if one wants to compare specifically the efficiency of the *parallelization* approach, then the tests should be done with the same opening criterion too[^9].
Unfortunately, it is often very difficult to make such conditions hold with the tree-codes available in the literature. For this reason we decided to compare codes speed at a given amount of *computational work* done to evaluate forces. This makes the comparison independent of: the particles distribution, the number of particles, and the accuracy (i.e. the opening criterion and its parameters). In tree-codes the amount of numerical work done on a given particle, $w_i$, is naturally quantified as the number of ‘interactions’ evaluated to estimate the force on it (as in the particle work-load definition of Sect. \[low-tree-setting\]), namely the total number of bodies (both boxes and single particles) of which the tree-code evaluates the force they exert on the particle itself (in a particle-particle method one would have $w_i=N-1$).
In Fig. \[work\] we plotted the error on the forces evaluation ($\delta a/a$), versus the averaged computational work, $<\!\! w\!\!>=(\sum_i w_i)/N$, needed by our code to evaluate the forces on the set of $N=128$K particles above-described. This allows us to compare ‘honestly’ our code performances with those of other codes.
![ Relative error on forces evaluation (at 90% percentile) versus the averaged computational work, using the BH opening criterion. The corresponding values for the open-angle parameter $\theta$ are labeled.[]{data-label="work"}](figure7.ps){width="8cm"}
In Springel et al. ([@gadget]) the authors tested their tree-code (GADGET) on a Cray T3E. They give speed measurements for a rather clustered cosmological distribution, using the BH opening criterion with $\theta=1$. In such conditions their code gives $\delta a /a\sim 3.5\times 10^{-2}$ at 90% percentile, with $<\!\! w\!\!>\simeq 200$ interactions per particle. From Fig. \[work\] we can see that, with the Plummer distribution we used, the closest value for $<\!\! w\!\!>$ is achieved for $\theta =1.2$, which gives $<\!\! w\!\!>\simeq 230$ interactions per particle and corresponds to an accuracy of $\delta a /a\sim 3\times 10^{-2}$. Such accuracy is obtained in a run that, using e.g. 8 PEs, is performed with a speed of $15,000$ particles/sec (as shown in the lower panel of Fig. \[speedup\]). Note that such run includes, apart from the tree-traversal, also the tree-setting and all the overhead needed to a parallel execution. At the same conditions, but performing *only* the tree-traversal stage, GADGET has a lower speed: about $13,000$ particles/sec (with 8 PEs). It is worth noting that it shows a load unbalancing of about 19% with $N/p\sim 3\times 10^4$, while our code exhibits $u\sim 4$% with the same ratio $N/p$. One has to say, finally, that GADGET has been implemented using message passing instructions, certainly a more suitable approach with respect to that we adopted (see next Section).
Another comparison can be done with the parallel code illustrated in Dubinski ([@dubinsky]). In this case the code ran on a Cray T3D and the author employed a more efficient modified BH opening criterion (Barnes [@barnes]). Anyway, the speed of the code is given both in terms of the total computational work ($N<\!\! w\!\! >$) performed in one second, and in terms of particles per second. With 16 PEs such a code evaluated about $3\times 10^6$ interactions/sec, corresponding to $6,000$ particles/sec, for a force computation performed on a cluster with $N=1.1$M particles. This means that the code spent $t\simeq 180$ sec in that run. Hence, being $N<\!\! w\!\! >/t\simeq 3\times 10^6$, it handled $<\!\! w\!\! >\simeq 500$ interactions per particle. With our tree-code such a $<\!\! w\!\! >$ corresponds to an accuracy of about $\delta a/a\sim 4\times 10^{-3}$ (Fig. \[work\]), and is thus performed at $\sim 13,000$ particles/sec with 16 PEs (Fig. \[speedup\]). Of course, our speedup factor of $2$ is partially due to the improved performance of the T3E compared with the T3D, even if the direct message passing approach used by the author is more efficient than the use of the PGHPF compiler.
Remarks about the implementation
--------------------------------
The use of PGHPF/Craft directives is certainly not the best way to implement a parallel tree-code on a distributed memory architecture, but we did so in order to obtain a ready-to-use version quickly. Indeed, the directives just permit loop iterations to be distributed to the PEs in a *simple* way, as well as the elements of shared arrays to their local memories, without using *explicit* message passing routines. The highest price to pay for such simplicity is that the way message passing operations are actually performed cannot be controlled and optimized. In our specific case, each PE has to *copy* all its local upper tree to logically shared arrays, in order to enable the other PEs to access it during the tree-traversal. This means a considerable waste of memory (and communications), which can be avoided in a direct message passing implementation, so as to reduce the parallelization overhead further.
Similar storage and clock time wastes are also due to the fact that the PGHPF compiler is generally not capable of recognizing local references within a shared array, which are then handled as if they were remote. Moreover, we also experienced slow local referencing, due to non-optimal cache-memory management. For these reasons, our next goal is the development of an MPI version, which would also be easily implementable on different distributed memory platforms.
Finally, we have also carried out an implementation suitable for running on a *shared* memory computer. In this case, of course, the DD has to take into account only the work-load balancing and a ‘dynamic’ particles distribution can be adopted during the tree-traversal. Anyway, it turns out to be worth subdividing the tree into upper and lower boxes for a balanced tree-setting. Such implementation[^10] was carried out using OpenMP directives on a SUN Enterprise 4500 HPC machine, with 14 PEs. The results are very good and, given the same parameters, we verified a $40$% speedup in comparison with the T3E PGHPF implementation.
Particle mapping {#mapping}
================
Particles’ locations are mapped converting each coordinate[^11], $x_i$, $i=1,2,3$, into an integer triple $q_i=\lfloor x_i\times 2^{l_{\max}}/L\rfloor$, where $\lfloor...\rfloor$ indicates the truncation to an integer, $L$ is the root box’ size and $l_{\max}$ is the maximum subdivision level *a priori* allowed. Then $q_{1,2,3}$ are combined into an integer number (the ‘key’) $Q\in
[0,8^{l_{\max}}-1]$, which is defined as: $$\begin{aligned}
Q&=&\sum_{l=1}^{l_{\max}}
\left[\mbox{mod}(\lfloor q_1/k_l\rfloor,2)+2\times\mbox{mod}
(\lfloor q_2/k_l\rfloor,2)+ \right.\nonumber \\
&{}&\left.+4\times\mbox{mod}(\lfloor q_3/k_l\rfloor,2)\right]\, k_l^3, \label{Q}\end{aligned}$$ where $k_l=2^{l_{\max}-l}$. Despite its complicated definition, $Q$ can be rapidly evaluated by means of direct bit manipulation routines[^12] available both in FORTRAN and in C.
Given a particle’s key, it is possible to determine quickly whether it belongs to a box or not, in a recursive fashion. Let us indicate the key binary representation as $Q\equiv\{b_{3l_{\max}-1}b_{3l_{\max}-2}\cdot\cdot\cdot b_2b_1b_0\}_2$; then, given that the particle belongs to a certain box at level $l$ of the spatial subdivision, at level $l+1$ such a particle will belong to the $i$-th sub-box ($0\leq i \leq 7$) identified by the *octal* digit $i\equiv\{ b_{k+2}b_{k+1}b_k\}_2$. The latter is made up of the three adjacent bits of $Q$ from position $k=3(l_{\max}-l-1)$ to the left. For $l=0$ any particle is enclosed in the root box. See the example in Fig. \[sub-box\].
Each key needs $3l_{\max}$ bits to be handled. We used two long-integers (i.e. $16$ bytes), thus allowing $l_{\max}\leq 42$, which is sufficient for most applications. Note that the evaluation of the keys of all the particles, with a computational cost of order $O(N)$, can be done just once, before the tree setting starts.
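For illustration, an equivalent construction of the key and of the sub-box digit extraction can be written in C as follows (our sketch: a single 64-bit word is used instead of the two long integers mentioned above, so that $l_{\max}\leq 21$ here, and the function names are invented):

```c
#include <stdio.h>
#include <stdint.h>

#define LMAX 21

/* key Q of Eq. (Q): interleave the bits of q1, q2, q3, starting from the
 * most significant subdivision level                                    */
static uint64_t key_of(double x1, double x2, double x3, double L)
{
    uint64_t q1 = (uint64_t)(x1 * (1ULL << LMAX) / L);
    uint64_t q2 = (uint64_t)(x2 * (1ULL << LMAX) / L);
    uint64_t q3 = (uint64_t)(x3 * (1ULL << LMAX) / L);
    uint64_t Q  = 0;
    for (int l = 1; l <= LMAX; l++) {
        int kbit = LMAX - l;                       /* k_l = 2^(lmax-l)   */
        uint64_t digit = ((q1 >> kbit) & 1)
                       | (((q2 >> kbit) & 1) << 1)
                       | (((q3 >> kbit) & 1) << 2);
        Q |= digit << (3 * kbit);                  /* digit times k_l^3  */
    }
    return Q;
}

/* octal digit identifying the sub-box containing the particle when going
 * from level l to level l+1: three bits of Q starting at 3(lmax-l-1)     */
static int subbox_digit(uint64_t Q, int l)
{
    return (int)((Q >> (3 * (LMAX - l - 1))) & 7);
}

int main(void)
{
    uint64_t Q = key_of(0.7, 0.2, 0.9, 1.0);
    printf("key = %llu, first three sub-box digits: %d %d %d\n",
           (unsigned long long)Q,
           subbox_digit(Q, 0), subbox_digit(Q, 1), subbox_digit(Q, 2));
    return 0;
}
```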
The mapping method described in WS is similar, to some extent, to the one illustrated above, but in that case the authors do not use pointers to reproduce the tree topology; they rather use the particles’ keys themselves (evaluated similarly as in Eq. \[Q\]) plus a hashing function to build an addressing space for all the boxes. Roughly speaking, in order to reduce the huge number of a priori possible keys (e.g. in 3-D there are $8^{l_{\max}}$ possible values for $Q$), they truncate the keys’ binary representation, replacing the information lost this way by means of linked lists. In the authors’ opinion this presents mainly the following advantages: i) it permits direct access to any box, without needing a tree-traversal; ii) in a distributed memory context, it gives a unique addressing scheme for the boxes, independently of which PE owns them. We think that the advantage in point i) is not important in our implementation because, as we have seen, any procedure involving the tree structure is performed *recursively*. A direct access to a box does not really speed up the execution; moreover, in WS’ method the accesses are not really direct (especially at upper levels) due to the presence of linked lists. As far as point ii) is concerned, also in our parallel version there is an addressing scheme which allows data in any sub-domain to be globally referenced (see Sect. \[par\_tree\_setting\] and Appendix \[fulladdressing\]).
Construction of a global addressing scheme {#fulladdressing}
==========================================
Given the binary representation of [baddr]{}$\equiv\{b_{m-1} b_{m-2}\cdot\cdot\cdot b_1b_0\}_2$ that is the address location of an upper box within the [i]{}-th PE’s local memory, we define the box full-address as the $s$-bit number (being $s$ the bit size of integer variables, excluding the bit used for the sign): $$\mbox{\tt faddr}\equiv\{100\cdot \cdot \cdot 0b_{m-1}b_{m-2}\cdot\cdot\cdot b_1b_0
c_7c_6\cdot\cdot\cdot c_1c_0\}_2, \label{fulladdr}$$ being [i]{}$\equiv\{c_7c_6\cdot\cdot\cdot c_1c_0\}_2$. Note that the leftmost bit, in position $s-1$, is set to 1 in order to indicate that the box pointed is an *upper* one. Note also that 8 bits are used for [i]{}, allowing $p\le 256$. To make the full-address be a single integer number it is necessary that $m+8<s$. Hence with 8-bytes integers ($s=63$) the local addressing is limited by $2^{55}$ that is sufficient for any purpose.
In FORTRAN such ‘bit concatenation’ can be easily carried out this way: $$\mbox{{\tt faddr = Ior(Ior(Ishft(baddr,8),i),mask)}},$$ where [mask]{}$=2^{s-1}$. Inversely, given the full-address of a box, the following bit manipulation instructions give the address in the local memory of the [i]{}-th processor which it belongs to: $$\begin{aligned}
\mbox{\tt i} &=& \mbox{{\tt Iand(faddr,mask2)}},\label{fulladdr2}\\
\mbox{\tt baddr}&=& \mbox{{\tt Ishft(Iand(faddr,mask1),-8)}}
\label{fulladdr3}\end{aligned}$$ being [mask1]{}$=$[mask]{}$-1$ and [mask2]{}$\equiv 255$.
Parallel time integration {#time_integration}
=========================
As far as the time integration is concerned, when the time advancing algorithm adopts individual time steps, at a given time the forces are evaluated only on a sub-set of all the particles. This leads to a work-load that becomes more and more unbalanced as the sub-set gets smaller. We experimented with various possible solutions to this problem, in particular in the case of the *leapfrog* algorithm with the block-time scheme (Porter [@porter]; Hernquist & Katz [@HK]). According to this scheme, the $i$-th particle occupies the ‘time bin’ $b_i\in\{0,1,2,...,b_{\rm max}\}$, in the sense that it gets a time step $\Delta t_i=\tau/2^{b_i}$, where $\tau$ is the maximum time step allowed (usually a fraction of a significant time scale for the system). The simulation time ($t$) is advanced by the minimum time step used and, at a given $t$, the accelerations are evaluated only on *synchronized* particles, i.e. on those whose acceleration was last calculated at $t-\Delta t_i$ (at $t=0$ all the particle accelerations are evaluated). According to some rules, $\Delta t_i$ can also change with time.
First, we found it convenient not to set the maximum a priori possible bin too high, say $b_{\rm max}<6$, so as to avoid bins with too few particles within. This choice is also advisable because of the intrinsically non-symplectic nature of the individual time step leapfrog. Indeed, every time a particle changes its bin, a loss of time symmetry occurs, leading to instability and to a long-term energy drift. Furthermore, good results are obtained if, at a given instant, one assigns to synchronized particles a weight ($w$) much greater than those assigned to the others. One could even be tempted to set $w=0$ for non-synchronized particles, because no work will be done, in the current step, to update their accelerations. Nevertheless, this would give rise to a very unbalanced work-load during the tree-setting, because wide sets of PTERM boxes (containing zero-weight non-synchronized particles) may be assigned to the same PE, forcing it to build large portions of the upper tree. We found that a good compromise is to multiply the weight of the currently synchronized particles – initially estimated as described in Sect. \[low-tree-setting\] – by the factor $N/s$, where $s$ is the number of the latter.
This maintains the unbalancing comparable with that of the single force evaluation on all the particles, even in those highly dynamic situations in which the accelerations of the particles moving through very dense regions (like pairs of stars during close encounters occurring within the core of a globular cluster) need to be updated frequently.
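A sketch of the resulting weight assignment is given below (ours; `w_prev` stands for the interaction counts of the previous step described in Sect. \[low-tree-setting\]):

```c
/* Work-load weights for the DD when block (individual) time steps are
 * used: the weight of currently synchronized particles is boosted by
 * the factor N/s, the others keep their previous weight.               */
void timestep_weights(const int *synchronized, const double *w_prev,
                      double *w, int N)
{
    int i, s = 0;
    for (i = 0; i < N; i++) s += synchronized[i];
    if (s == 0) s = N;                 /* degenerate case, should not occur */
    for (i = 0; i < N; i++)
        w[i] = synchronized[i] ? w_prev[i] * ((double)N / s)
                               : w_prev[i];
}
```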
Barnes, J.E. 1994, in Computational Astrophysics, eds. J. Barnes et al. (Berlin: Springer-Verlag)
Barnes, J.E., & Hut, P. 1986, Nature, 324, 446
Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton, USA: Princeton University Press)
Capuzzo-Dolcetta, R., & Miocchi, P. 1998, in Sciences & Supercomputing at CINECA, 1997 Report, ed. M. Voli (Bologna: CINECA), 29
Capuzzo-Dolcetta, R., & Miocchi, P. 1999, Computer Phys. Communications, 121-122, 423
Dubinski, J. 1996, New Astronomy, 1, 133
Hernquist, L. 1987, ApJS, 64, 715
Hernquist, L., & Katz, N. 1989, ApJS, 70, 419
Lia, C., & Carraro, G. 2000, MNRAS, 314, 145
Pal Singh, J., Holt, C., Totsuka, T., et al. 1995, Journal of Parallel and Distributed Computing, 27, 118
Porter, D. 1985, Ph.D. thesis, University of California (Berkeley, USA)
Springel, V., Yoshida, N., & White, S.D.M. 2000, submitted to New Astronomy (astro-ph/0003162)
Warren, M.S., & Salmon, J.K. 1992, Astrophysical N-Body Simulations Using Hierarchical Tree Data Structures, in Supercomputing ’92 (Los Alamitos: IEEE Comp. Soc.), 570
Warren, M.S., & Salmon, J.K. 1993, A Parallel Hashed Oct-Tree N-Body Algorithm, in Supercomputing ’93 (Los Alamitos: IEEE Comp. Soc.), 12
[^1]: Supported by CINECA (http://www.cineca.it) and CNAA (http://cnaa.cineca.it) under Grant *cnarm12a*.
[^2]: They are very frequently met in evaluating particles’ accelerations, because they generate the long-range field.
[^3]: Actually, along the path there are some discontinuities in form of ‘jumps’ from a PTERM box to another non-adjacent one, which could be avoided with a more complicated (non self-similar) order, as described in WS.
[^4]: The efficiency comes from that, in evaluating the acceleration on a particle, most of interactions are with the closest bodies. In fact, the number density of the bodies satisfying the opening criterion at a distance $d$ from the particle is roughly $\propto
(\theta d)^{-3}$ (with $\theta$ the open-angle parameter) which decreases very rapidly with the distance. Hence a compact sub-domain will contain most of such bodies.
[^5]: The commonly accepted degree of accuracy in the force calculation is about one part in $10^2$–$10^3$. This makes high-order time schemes superfluous (at least in not too dynamic situations).
[^6]: The rapidity gain one has when going from a run with one PE to the same run with $p$ PEs.
[^7]: The CPU-time needed by all those instructions which would not be necessary in a *serial* execution.
[^8]: All the particles to be processed are put in a ‘queue’. As soon as a PE has finished its previous work, it gets the first particle in the queue and evaluates its acceleration.
[^9]: In general the implementation of the opening criterion does not really affect the parallelization method, and it is a rather simple task as well.
[^10]: We are thankful to the CASPUR center (sited at the Universitá di Roma “La Sapienza”) for the resources provided.
[^11]: With the origin at a root box’ vertex and the axes parallel to its edges.
[^12]: For instance, a multiplication of an integer, $n$, by $2^j$, with $j>0$ ($j<0$), corresponds to shift its binary representation by $j$ positions left (right), whereas $\mbox{mod}(n,2)\equiv$ the least significant (rightmost) bit. Hence $\mbox{mod}(n/2^m,2)$, with $m\ge 0$, is the bit in position $m$, being the rightmost one in position 0. In FORTRAN such bit is given by [ibits($n$,$m$,1)]{}
---
abstract: 'A small Higgs mass parameter $m_{h_u}^2$ can be insensitive to various trial heavy stop masses, if a universal soft squared mass is assumed for the chiral superpartners and the Higgs boson at the grand unification (GUT) scale, and a focus point (FP) of $m_{h_u}^2$ appears around the stop mass scale. The challenges in the FP scenario are (1) a too heavy stop mass ($\approx 5 {\,\textrm{TeV}}$) needed for the 126 GeV Higgs mass and (2) the too high gluino mass bound ($\gtrsim 1.4 {\,\textrm{TeV}}$). For a successful FP scenario, we consider (1) a superheavy right-hand (RH) neutrino and (2) the first and second generations of hierarchically heavier chiral superpartners. The RH neutrino can move a FP in the higher energy direction in the space of $(Q, ~m_{h_u}^2(Q))$, where $Q$ denotes the renormalization scale. On the other hand, the hierarchically heavier chiral superpartners can lift up a FP in that space through two-loop gauge interactions. Precise focusing of $m_{h_u}^2(Q)$ is achieved with the RH neutrino mass of $\sim 10^{14} {\,\textrm{GeV}}$ together with an order one ($0.9-1.2$) Dirac Yukawa coupling to the Higgs boson, and the hierarchically heavy masses of $15-20 {\,\textrm{TeV}}$ for the heavier generations of superpartners, when the U(1)$_R$ breaking soft parameters, $m_{1/2}$ and $A_0$ are set to be $1 {\,\textrm{TeV}}$ at the GUT scale. Those values can naturally explain the small neutrino mass through the seesaw mechanism, and suppress the flavor violating processes in supersymmetric models.'
author:
- 'Bumseok Kyae$^{(a)}$[^1] and Chang Sub Shin$^{(b)}$[^2]'
title: |
**Precise focus point scenario\
for a natural Higgs boson in the MSSM**
---
Introduction
============
The naturalness problem of the electroweak (EW) scale and of the Higgs boson mass has been the most important issue in theoretical particle physics for the last four decades. It has provided a strong motivation to study various theories beyond the standard model (SM). In particular, the minimal supersymmetric SM (MSSM) has been regarded as the most promising candidate among new physics models beyond the SM. However, no evidence of new physics beyond the SM, including supersymmetry (SUSY), has been observed yet at the Large Hadron Collider (LHC), and the experimental bounds on SUSY particles are gradually increasing. Nonetheless, no better idea that could replace SUSY seems to have appeared yet. Accordingly, it would be worthwhile to explore a breakthrough within the SUSY framework.
Concerning the radiative Higgs mass and EW symmetry breaking, the top quark Yukawa coupling ($y_t$) of order unity plays the key role in the MSSM: through the sizable top quark Yukawa coupling, the top quark and stop make a dominant contribution to the renormalization of the soft mass parameter of the Higgs boson ($m_{h_u}^2$) as well as the radiative physical Higgs mass squared ($m_H^2$) [@book]: $$\begin{aligned}
\label{physHiggs}
&&m_H^2 ~\supset~ \frac{3\,y_t^4\,v_h^2}{4\pi^2}~{\rm sin}^4\beta~ {\rm log}\left(\frac{\widetilde{m}_t^2}{m_t^2}\right) + \cdots ,
\\ \label{renormHiggs}
&& m_{h_u}^2 ~\supset~ \Delta m_{h_u}^2 \approx -\frac{3\,y_t^2}{4\pi^2}~\widetilde{m}_t^2~ {\rm log}\left(\frac{\Lambda}{\widetilde{m}_t}\right) +\cdots ,\end{aligned}$$ where $m_t$ ($\widetilde{m}_t$) denotes the top quark (stop) mass, and $v_h$ is the vacuum expectation value (VEV) of the Higgs boson, $v_h\equiv\sqrt{{\langle h_u \rangle}^2+{\langle h_d \rangle}^2}\approx 174~{\rm GeV}$ with $\tan\beta\equiv{\langle h_u \rangle}/{\langle h_d \rangle}$. $\Lambda$ means a cutoff scale. A messenger scale of SUSY breaking is usually adopted for it. Here we set the left-hand (LH) and right-hand (RH) stop squared masses, $m_{q_3}^2$ and $m_{u^c_3}^2$ equal to $\widetilde{m}_t^2$ for simplicity. Note that $\Delta m_{h_u}^2$ can be a large negative value for a large stop mass and a high messenger scale.
As seen in [Eq. (\[physHiggs\])]{}, a large stop mass can raise the radiative Higgs mass. According to the recent analysis based on three-loop calculations [@3-loop], a $3$–$4$ or $5 {\,\textrm{TeV}}$ stop mass is necessary for explaining the recently observed 126 GeV Higgs mass [@LHCHiggs] without a stop mixing effect. From [Eq. (\[renormHiggs\])]{}, however, such a heavy stop mass is expected to significantly enhance the renormalization effect on $m_{h_u}^2$, and eventually it gives rise to a fine-tuning problem associated with naturalness of the EW scale. It is because a negative $m_{h_u}^2$ triggers the EW symmetry breaking, and eventually determines the $Z$ boson mass in the MSSM, as seen in the extremum condition of the MSSM Higgs potential [@book]: [$$\begin{split} \label{extremeCondi}
\frac12 m_Z^2=\frac{m_{h_d}^2-m_{h_u}^2{\rm tan}^2\beta}{{\rm tan}^2\beta-1}
-|\mu|^2 ,
\end{split}$$]{} where $m_Z^2$ denotes the $Z$ boson mass and $\mu$ is the “$\mu$-term” coefficient in the MSSM superpotential. If $-m_{h_u}^2$ is excessively large, it must be compensated by $|\mu|^2$. Thus, a fine-tuning of $10^{-3}$–$10^{-4}$ seems unavoidable in the MSSM, unless the messenger scale $\Lambda$ is low enough. For this reason, a relatively small stop mass ($\ll 1 {\,\textrm{TeV}}$) has usually been assumed for naturalness of the EW scale, and various extensions of the Higgs sector have been proposed for explaining the observed 126 GeV Higgs mass [@NMSSMreview; @singletEXT; @extensions]. Unfortunately, however, the stop mass bound has already reached $700 {\,\textrm{GeV}}$ [@stopmass], which starts threatening the traditional status of SUSY as a solution to the naturalness problem of the EW phase transition. Thus, in this paper, we intend to discuss the naturalness problem in the case in which the stop is quite heavy ($\sim 5 {\,\textrm{TeV}}$).
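To make the size of this tuning explicit, consider a rough numerical illustration (our estimate, obtained by inserting the representative values $y_t^2\approx 0.9$, $\widetilde{m}_t\approx 5 {\,\textrm{TeV}}$ and $\Lambda= M_G\approx 2\times 10^{16} {\,\textrm{GeV}}$ into [Eq. (\[renormHiggs\])]{}): $$|\Delta m_{h_u}^2| \approx \frac{3\times 0.9}{4\pi^2}~(5 {\,\textrm{TeV}})^2~ {\rm log}\left(\frac{2\times 10^{16}}{5\times 10^{3}}\right)\approx 5\times 10^{7} {\,\textrm{GeV}}^2\approx (7 {\,\textrm{TeV}})^2 ,$$ so that obtaining $\frac12 m_Z^2\approx (64 {\,\textrm{GeV}})^2$ from [Eq. (\[extremeCondi\])]{} requires a cancellation against $|\mu|^2$ at the level of $(64 {\,\textrm{GeV}})^2/(7 {\,\textrm{TeV}})^2\sim 10^{-4}$, which is the origin of the fine-tuning quoted above.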
In fact, the renormalization of $m_{h_u}^2$, [Eq. (\[renormHiggs\])]{} is necessarily affected by ultraviolet (UV) physics. Thus, for a more complete expression of it, the full renormalization group (RG) equations should be studied for a given UV model, even though [Eq. (\[renormHiggs\])]{} would not be very sensitive to an UV physics in SUSY models. Unlike the expectation based on low energy physics, however, it was claimed that the $Z$ boson and Higgs masses at low energy are quite insensitive to the stop mass in the “focus point (FP) scenario” [@FMM1; @FMM2; @Nath]: under the simple initial condition for the stops and Higgs squared masses, $m_{q_3}^2=m_{u^c_3}^2=m_{h_u}^2=\cdots\equiv m_0^2$ at the grand unification (GUT) scale, the RG solution of $m_{h_u}^2$ turns out to be almost independent of $m_0^2$ [*at the EW scale*]{} unlike those of $m_{q_3}^2$ and $m_{u^c_3}^2$. It is because the coefficient of $m_0^2$ in the RG solution of $m_{h_u}^2$ at the EW scale turns out to be quite small. Accordingly, $m_{h_u}^2$ can remain small enough even for relatively large trial $m_0^2$s ($\sim$ multi-TeV) unlike other superparticles in the chiral sector. Interestingly enough, moreover, the FP scenario favors the simplest version of SUSY model with the minimal field contents and the universal initial condition for the soft squared masses at the GUT scale: many careless extensions of the MSSM at low energy would destroy the FP mechanism.
The insensitivity of $m_{h_u}^2$ to $m_0^2$ or stop masses implies that [Eq. (\[renormHiggs\])]{} is effectively canceled by other ingredients. One might expect that a fine-tuning for smallness of $m_{h_u}^2$ would be hidden somewhere in this scenario. This guess is actually true. As will be seen later, the smallness of the coefficient of $m_0^2$ in $m_{h_u}^2$ originates from the [*fact*]{} that [$$\begin{split} \label{key}
e^{\frac{-3}{4\pi^2}\int^{t_0}_{t_W}dt ~y_t^2}\approx \frac13 ~.
\end{split}$$]{} Here $t$ parametrizes the renormalization scale $Q$, $t-t_0={\rm log}\frac{Q}{M_G}$. $t_W$ and $t_0$ correspond to the EW and GUT scale $M_G$ ($\approx 2\times 10^{16} {\,\textrm{GeV}}$), respectively. Actually, [Eq. (\[key\])]{} is an accidental relation in some sense. Just the quark and lepton masses, the low energy values of the SM gauge couplings, and the MSSM field contents completely determine $y_t(t)$, and the $Z$ boson mass scale and the gauge coupling unification scales provide exactly the needed energy interval. In the sense that [Eq. (\[key\])]{} is not artificially designed, but Nature might permit it, we will call it “Natural tuning.” Of course, there might exist a deep reason for it. In this paper, however, we will not attempt to explain the origin, but take a rather pragmatic attitude: we will just accept, utilize, and improve it.
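To see what [Eq. (\[key\])]{} demands numerically (a rough estimate of ours with representative inputs), note that $t_0-t_W={\rm log}\frac{M_G}{10^2~{\rm GeV}}\approx 33$, so that $$e^{\frac{-3}{4\pi^2}\int^{t_0}_{t_W}dt ~y_t^2}\approx \frac13 \quad\Longleftrightarrow\quad \int^{t_0}_{t_W}dt ~y_t^2\approx\frac{4\pi^2}{3}~{\rm log}\,3\approx 14.5 ,$$ i.e. an average $y_t^2\approx 0.44$ over the running interval. This is indeed the ballpark value taken by the MSSM top quark Yukawa coupling, which decreases from $y_t\approx 0.95$ at the weak scale to roughly $0.5$–$0.6$ at $M_G$ for moderate $\tb$.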
However, the recently observed 126 GeV Higgs mass is challenging also in the FP scenario. Since the FP scenario works well with the minimal field contents and a suppressed stop mixing effect, the Higgs mass can be raised only through the radiative correction by the quite heavy stop, $\widetilde{m}_t\sim 3$–$4$ or $5 {\,\textrm{TeV}}$ [@3-loop]. To get a heavier stop mass, we need a larger $m_0^2$. In order for $m_{h_u}^2$ to remain insensitive even to much larger $m_0^2$s \[$>(5 {\,\textrm{TeV}})^2$\], a more precise focusing is quite essential. That is to say, the coefficient of $m_0^2$ in the $m_{h_u}^2$’s RG solution should be much closer to zero. Moreover, $m_{h_u}^2$ does not follow the original FP scenario below the stop mass scale, because the stops are decoupled there. Thus, for a predictive EW scale, the FP should appear around the stop mass scale rather than the conventional EW or $Z$ boson mass scale. The present heavy gluino mass bound at the LHC, $M_3\gtrsim 1.4 {\,\textrm{TeV}}$ [@gluinomass], also spoils the success of the FP scenario [@M3/M2; @M3/M2model; @DQW]. The heavy gluino leads to a too large negative $m_{h_u}^2$ at the EW scale through RG evolution. Such an RG effect by a heavy gluino mass should be compensated properly for a small enough $Z$ boson mass.
In this paper, we will attempt just to trim the FP scenario such that the FP is relocated to around the stop mass scale and the heavy gluino effect becomes mild. In order to accomplish this goal, we will consider a superheavy RH neutrino [@RHnuFP0; @RHnuFP], and the two-loop [*gauge*]{} interactions induced by the hierarchically heavier first and second generations of chiral superpartners (sfermions) [@Natural/2-loop; @Natural/splitZprime]. Hierarchically heavy masses for the first two generations of sfermions ($\gtrsim 15 {\,\textrm{TeV}}$) could also sufficiently suppress unwanted SUSY flavor and SUSY $CP$ violating processes, as in the “effective SUSY model” [@effSUSY]. Once the location of the FP is successfully moved to a desirable position, even a quite heavy stop mass could still be naturally compatible with the $Z$ boson mass scale, and the 126 GeV Higgs mass can be supported dominantly by the radiative correction from such a heavy stop.
This paper is organized as follows: we will review the FP scenario and discuss the problems associated with the recent experimental results in Sec. \[sec:FP\]. In Sec. \[sec:preciseFP\], we will explore the ways to move the location of the FP into a desirable position in the space of $(Q, ~m_{h_u}^2(Q))$. In Sec. \[sec:model\], we will propose a simple model and discuss phenomenological constraints. Section \[sec:conclusion\] will be a conclusion. For convenience, in our discussion in the main text, we will leave the details of the full RG equations and derivation of some semianalytic solutions to them in the Appendix.
Focus point scenario {#sec:FP}
====================
Based on our semianalytic solutions to the RG equations, let us discuss first the RG behaviors of soft parameters associated with the Higgs boson and the third generation of sfermions. When $\tb$ is small enough, the top quark Yukawa coupling, $y_t$ dominantly drives the RG running of $\{m_{h_u}^2, m_{u^c_3}^2, m_{q_3}^2, A_t\}$, while the bottom quark and tau lepton’s Yukawa couplings, $y_b$ and $y_\tau$ are safely ignored. Here, $A_t$ denotes the “$A$-term” coefficient corresponding to the top quark Yukawa coupling. Thus, for small $\tb$, the one-loop RG equations for $\{m_{h_u}^2, m_{u^c_3}^2, m_{q_3}^2, A_t\}$ are written as $$\begin{aligned}
16\pi^2\frac{d m_{h_u}^2}{dt}&=&6y_t^2\left(X_t+A_t^2\right)-6g_2^2M_2^2-\frac65 g_1^2M_1^2 , \label{RG1}
\\
16\pi^2\frac{d m_{u^c_3}^2}{dt}&=&4y_t^2\left(X_t+A_t^2\right)-\frac{32}{3}g_3^2M_3^2-\frac{32}{15} g_1^2M_1^2 , \label{RG2}
\\
16\pi^2\frac{d m_{q_3}^2}{dt}&=&2y_t^2\left(X_t+A_t^2\right)-\frac{32}{3}g_3^2M_3^2-6g_2^2M_2^2-\frac{2}{15} g_1^2M_1^2 , \label{RG3}
\\
8\pi^2\frac{d A_t}{dt}&=&6y_t^2A_t-\frac{16}{3}g_3^2M_3-3g_2^2M_2-\frac{13}{15} g_1^2M_1 , \label{RG4}\end{aligned}$$ where $X_t\equiv m_{h_u}^2+m_{u^c_3}^2+m_{q_3}^2$. $t$ parametrizes the renormalization scale $Q$, $t-t_0={\rm log}\frac{Q}{M_{G}}$. $g_{3,2,1}$ and $M_{3,2,1}$ in the above equations stand for the three MSSM gauge couplings and gaugino masses. Our semianalytic solutions to them are approximately given by $$\begin{aligned}
&&m_{h_u}^2(t)\approx m_{h_u0}^2+\frac{X_0}{2}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right]+\frac{F(t)}{2} -\frac32\left(\frac{m_{1/2}}{g_0^2}\right)^2\left\{g_2^4(t)-g_0^4\right\} , \label{RGsol1}
\\
&&m_{u^c_3}^2(t)\approx m_{u^c_30}^2+\frac{X_0}{3}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right]+\frac{F(t)}{3} +\frac89\left(\frac{m_{1/2}}{g_0^2}\right)^2\left\{g_3^4(t)-g_0^4\right\} , \label{RGsol2}
\\
&&m_{q_3}^2(t)\approx m_{q_30}^2+\frac{X_0}{6}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right]+\frac{F(t)}{6} +\left(\frac{m_{1/2}}{g_0^2}\right)^2\left\{\frac89 g_3^4(t) -\frac32 g_2^4(t)+\frac{11}{18}g_0^4\right\} , \label{RGsol3}
\\
&&A_t(t)=e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}\left[A_0-\frac{1}{8\pi^2}\int^t_{t_0}dt^\prime~ G_A(t^\prime)~ e^{\frac{-3}{4\pi^2}\int^{t^\prime}_{t_0}dt^{\prime\prime}y_t^2}\right] , \label{RGsol4}\end{aligned}$$ where we ignored the bino mass $M_1$ and the relevant U(1)$_Y$ gauge contributions due to their smallness. For the complete expressions and derivation of the above solutions, refer to the Appendix (setting $\widetilde{m}^2=0$). Here, $\{m_{h_u0}^2, m_{u^c_30}^2, m_{q_30}^2, A_0\}$ denote the values of $\{m_{h_u}^2(t), m_{u^c_3}^2(t), m_{q_3}^2(t), A_t(t)\}$ at the GUT scale, and $X_0\equiv m_{h_u0}^2+m_{u^c_30}^2+m_{q_30}^2$. $g_0$ and $m_{1/2}$ are the unified gauge coupling and gaugino mass at the GUT scale, respectively. $F(t)$ in the above solutions is defined as [$$\begin{split} \label{F}
&\quad~~ F(t)\equiv \frac{3}{4\pi^2} ~e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2} \int^t_{t_0}dt^\prime ~y_t^2A_t^2 ~e^{\frac{-3}{4\pi^2}\int^{t'}_{t_0}dt^{\prime\prime}y_t^2}
\\
&-\frac{1}{4\pi^2} \left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2} \int^t_{t_0}dt^\prime ~G_X^2 ~e^{\frac{-3}{4\pi^2}\int^{t'}_{t_0}dt^{\prime\prime}y_t^2}
-\int^t_{t_0}dt^\prime~G_X^2 \right] .
\end{split}$$]{} $G_A$ in [Eq. (\[RGsol4\])]{} and $G_X^2$ in [Eq. (\[F\])]{} are given, respectively, by $$\begin{aligned}
\label{GA}
&&G_A(t)\equiv \frac{16}{3}g_3^2M_3+3g_2^2M_2+\frac{13}{15}g_1^2M_1
=\left(\frac{m_{1/2}}{g_0^2}\right)\left[\frac{16}{3}g_3^4+3g_2^4+\frac{13}{15}g_1^4\right] ,
\\ \label{GX2}
&&G_X^2(t)\equiv \frac{16}{3}g_3^2M_3^2+3g_2^2M_2^2+\frac{13}{15}g_1^2M_1^2
= \left(\frac{m_{1/2}}{g_0^2}\right)^2\left[\frac{16}{3}g_3^6+3g_2^6+\frac{13}{15}g_1^6\right] . \end{aligned}$$ Note that $F(t)$ is independent of $\{m_{h_u0}^2, m_{u^c_30}^2, m_{q_30}^2\}$, so $\{m_{h_u0}^2, m_{u^c_30}^2, m_{q_30}^2\}$ appear only in the first three terms in the above RG solutions, Eqs. (\[RGsol1\]), (\[RGsol2\]), and (\[RGsol3\]).
$F(t)$ depends on $\tb$ in principle. But it turns out to be almost insensitive to $\tb$. For instance, $F(t)$ at $Q=5 {\,\textrm{TeV}}$ \[$=F(t_T)$\] is estimated as [$$\begin{split} \label{F(t_T)}
F(t_T)\approx \left\{-1.03,-1.02\right\}\times \left(\frac{m_{1/2}}{g_0^2}\right)^2
\end{split}$$]{} for $\{\tb=5,\tb=50\}$ and $A_0=0$. Here the numerical estimation for $\tb=50$ was performed by including $y_b$ and $y_\tau$ effects with $m_{h_d}^2=m_{e^c_3}^2=m_{l_3}^2=m_0^2$. For the complete RG equation we used, see the Appendix. Thus, the last three terms in [Eq. (\[RGsol1\])]{} at $Q=5 {\,\textrm{TeV}}$ yield $\{-1.43,-1.41\}\times m_{1/2}^2$ for $\{\tb=5,\tb=50\}$ and $A_0=0$. Note that the $F(t)$ term dominates over the last two terms in [Eq. (\[RGsol1\])]{} at $Q=5 {\,\textrm{TeV}}$. Although the last two terms provide a positive coefficient of $m_{1/2}^2$, the large gluino mass effect contained in $F(t)$ flips the sign.
If the gauge sector’s contributions proportional to $m_{1/2}^2$ are relatively suppressed, $A_t(t)$ and $F(t)$ are simplified as follows: $$\begin{aligned}
A_t(t)\approx A_0\, e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2} , ~~~{\rm and}~~~
F(t)\approx A_0^2\, e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right] .\end{aligned}$$ In this case, $\{m_{h_u}^2(t), m_{u^c_3}^2(t), m_{q_3}^2(t)\}$ thus reduce to $$\begin{aligned}
\label{appxH} &&m_{h_u}^2(t)\approx m_{h_u0}^2+\frac{X_0}{2}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right]+\frac{A_0^2}{2}\, e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right] + \cdots ,\\
\label{appxU} &&m_{u^c_3}^2(t)\approx m_{u^c_30}^2+\frac{X_0}{3}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right]+\frac{A_0^2}{3}\, e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right] + \cdots ,\\
\label{appxQ} &&m_{q_3}^2(t)\approx m_{q_30}^2+\frac{X_0}{6}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right]+\frac{A_0^2}{6}\, e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right] + \cdots ,\end{aligned}$$ where “$\cdots$” does not contain $m_0^2$ and $A_0$. As emphasized in [Eq. (\[key\])]{}, the key point to notice here is that $e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}\approx \frac13$ for $t=t_0+{\rm log}\frac{10^{2}~{\rm GeV}}{M_G}$ ($\equiv t_W$) when $\tb$ is moderately small [@FMM1]. Thus, if a universal soft squared mass is assumed, $m_{h_u0}^2=m_{u^c_30}^2=m_{q_30}^2\equiv m_0^2$, and $A_0=0$ is set at the GUT scale, Eqs. (\[appxH\])–(\[appxQ\]) are recast into [@FMM1] $$\begin{aligned}
\label{FP0} &&m_{h_u}^2(t_W)\approx + 0.006\, m_0^2 + \cdots ,\\
&&m_{u^c_3}^2(t_W)\approx \frac13\, m_0^2 + \cdots ,\\
&&m_{q_3}^2(t_W)\approx \frac23\, m_0^2 + \cdots ,\end{aligned}$$ where “$\cdots$” does not contain $m_0^2$. Hence, $m_{h_u}^2(t)$ almost vanishes at the EW scale ($t\approx t_W$). It means that $m_{h_u}^2$ can be light enough at the EW scale, almost [*independent of $m_0^2$*]{}, only if the “$\cdots$” in [Eq. (\[FP0\])]{} is also suppressed. Since $m_{h_u}^2$ is very insensitive to $m_0^2$, even a large enough $m_0^2$ guarantees the smallness of $m_{h_u}^2$ at the EW scale, whereas it makes the stop masses quite heavy: $m_{u^c_3}^2(t_W)\approx m_0^2/3$ and $m_{q_3}^2(t_W)\approx 2m_0^2/3$. In the FP scenario, therefore, the naturalness of the EW scale and the Higgs mass is based on Natural tuning.
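To make the focusing concrete, the following minimal Python sketch (our illustration, not code from the original analysis) integrates the one-loop equations (RG1)–(RG4) numerically with a universal $m_0^2$ and $A_0=0$; the inputs $\alpha_G=1/25$, $y_t(M_G)\approx 0.5$ (appropriate for a moderately small $\tb$), and $m_{1/2}=1{\,\textrm{TeV}}$ are assumptions chosen only for illustration, and stop decoupling is ignored. The printed values of $m_{h_u}^2$ for the three $m_0$ choices should differ only by a small fraction of the spread in $m_0^2$, while the stop soft masses scale with $m_0^2$; the exact numbers depend on the assumed $y_t(M_G)$.

```python
# Minimal sketch of the focus-point behaviour of Eqs. (RG1)-(RG4):
# run from the GUT scale down to Q ~ 100 GeV with universal m_0^2 and A_0 = 0.
import numpy as np
from scipy.integrate import solve_ivp

MGUT, QLOW = 2.0e16, 1.0e2          # GeV
g0sq = 4.0*np.pi/25.0               # unified g^2 (alpha_G = 1/25, assumed)
m12 = 1.0e3                         # unified gaugino mass (GeV)
b3, b2, b1 = -3.0, 1.0, 33.0/5.0    # MSSM one-loop beta-function coefficients

def rges(t, y):
    # y = [g3^2, g2^2, g1^2, y_t^2, A_t, m_hu^2, m_u3^2, m_q3^2]; t = log(Q/GeV)
    g3s, g2s, g1s, yts, At, mhu, mu3, mq3 = y
    M3, M2, M1 = (m12/g0sq)*g3s, (m12/g0sq)*g2s, (m12/g0sq)*g1s
    Xt = mhu + mu3 + mq3
    return [
        b3*g3s**2/(8*np.pi**2), b2*g2s**2/(8*np.pi**2), b1*g1s**2/(8*np.pi**2),
        yts*(6*yts - 16/3*g3s - 3*g2s - 13/15*g1s)/(8*np.pi**2),
        (6*yts*At - 16/3*g3s*M3 - 3*g2s*M2 - 13/15*g1s*M1)/(8*np.pi**2),
        (6*yts*(Xt + At**2) - 6*g2s*M2**2 - 6/5*g1s*M1**2)/(16*np.pi**2),
        (4*yts*(Xt + At**2) - 32/3*g3s*M3**2 - 32/15*g1s*M1**2)/(16*np.pi**2),
        (2*yts*(Xt + At**2) - 32/3*g3s*M3**2 - 6*g2s*M2**2 - 2/15*g1s*M1**2)/(16*np.pi**2),
    ]

for m0 in (3.0e3, 5.0e3, 7.0e3):    # trial universal m_0 at the GUT scale (GeV)
    y0 = [g0sq, g0sq, g0sq, 0.5**2, 0.0, m0**2, m0**2, m0**2]
    sol = solve_ivp(rges, [np.log(MGUT), np.log(QLOW)], y0, rtol=1e-9)
    print(f"m_0 = {m0/1e3:.0f} TeV  ->  m_hu^2(t_W) = {sol.y[5, -1]/1e6:+.2e} TeV^2")
```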
Although $A_0$ is comparable to other soft parameters, $m_{h_u}^2$ can still remain small at the EW scale, provided $(m_{h_u0}^2, m_{u^c_30}^2, m_{q_30}^2, A_0^2)$ are very specially related, satisfying, e.g., $(m_{h_u0}^2, m_{u^c_30}^2, m_{q_30}^2, A_0^2)=m_0^2~ (1, ~1+x-3y, ~1-x, ~9y)$ at the GUT scale, where $x$, $y$ are arbitrary numbers [@FS]. However, such a relation looks hard to realize in a supergravity (SUGRA) model. For simplicity, we will assume in this paper that $|x|, |y|\ll 1$; namely, $A_0$ is quite suppressed compared to $m_0^2$ ($=m_{h_u0}^2=m_{u^c_30}^2=m_{q_30}^2$). Actually, this is possible, e.g., in the gauge mediated SUSY breaking scenario with a GUT scale messenger. To get a universal soft squared mass in the gauge mediation, the SM gauge group should be embedded in a simple group at the GUT scale. However, the effect by a nonvanishing $A_0$ on $m_{h_u}^2$ can be compensated by another ingredient introduced later. Hence, the gravity mediated SUSY breaking scenario with the universal soft squared mass and $A_0\neq 0$ can also be consistent with the FP scenario.
Unlike the naive expectation, the low energy value of $m_{h_u}^2$ is not sensitive to the stop masses in the FP scenario. Hence, apparently, the naturalness of the Higgs boson seems to be guaranteed in this framework. It is a result of
[**1.**]{} the employed initial conditions, $m_{h_u0}^2=m_{u^c_30}^2=m_{q_30}^2=m_0^2$ and $A_0=0$, and
[**2.**]{} the accidental result, $e^{\frac{-3}{4\pi^2}\int^{t_0}_{t_W}dt^\prime y_t^2}\approx \frac13$ (“Natural tuning”).
The first condition is associated with a model-building problem. Actually, it can easily be realized in a large class of simple SUGRA models. However, the second condition would be a kind of fine-tuning condition, because the top quark Yukawa coupling $y_t(t)$ and the interval of the energy scales between the EW and the GUT scales should specially be related. But it is not artificially designed. As mentioned in the introduction, we will simply accept such a Natural tuning phenomenon.
However, the recent experimental results at the LHC seem to spoil the nice picture of the original FP scenario. Above all, the gauge contributions in Eqs. (\[RGsol1\])–(\[RGsol4\]) cannot be ignored any longer, since the mass bound for the gluino has been increased, $M_3\gtrsim 1.4~{\rm TeV}$ [@gluinomass]. As a result, the unified gaugino mass $m_{1/2}$ should be heavier than at least $550~{\rm GeV}$. Since a large $m_{1/2}^2$ leads to a large negative $m_{h_u}^2$ and large positive $m_{u^c_3}^2$ and $m_{q_3}^2$ at low energy, as seen in Eqs. (\[RGsol1\])–(\[RGsol3\]) and (\[F(t\_T)\]), $-m_{h_u}^2$ cannot be small enough at the EW scale. A too large negative $m_{h_u}^2$ should be finely tuned with $|\mu|^2$ to be matched to $M_Z^2$ in [Eq. (\[extremeCondi\])]{}. Moreover, the observed Higgs mass, $126 {\,\textrm{GeV}}$, is somewhat heavy as a SUSY Higgs mass. Once we suppose $A_0\approx 0$, a quite heavy stop mass ($\sim 5 {\,\textrm{TeV}}$) is needed for explaining the observed Higgs mass [@3-loop].[^3] A very large $m_{1/2}^2$ for a $5 {\,\textrm{TeV}}$ stop mass would require a serious fine-tuning between $m_{h_u}^2$ and $|\mu|^2$ or $m_{1/2}^2$ and $m_0^2$. Alternatively, one can try to extend the MSSM for raising the Higgs mass. However, many extensions of the MSSM Higgs sector end up ruining the FP scenario, as will be commented on later. Since the stops are decoupled around $5~{\rm TeV}$ ($t\equiv t_T$), $m_{h_u}^2$ follows the RG running of the SM below $t\approx t_T$. Hence, the FP mechanism based on the SUSY RG equations would not work well anymore. Actually, [Eq. (\[FP0\])]{} is valid when the stop is not too much heavier than the $Z$ boson. The heavy fields’ correction to the RG solution can be estimated using the Coleman-Weinberg effective potential [@CW]. In fact, the RG solution is a result of one-loop effects by massless fields, while the Coleman-Weinberg one-loop effective potential is dominated by the heavy fields. The two loop effects have opposite signs. Thus, the low energy value of $m_{h_u}^2$ below the stop decoupling scale is roughly estimated as [@CQW; @book] [$$\begin{split} \label{RGsm}
m_{h_u}^2(t_W)&\approx m_{h_u}^2|_{\Lambda_T} + \frac{3|y_t|^2}{8\pi^2}\left[(\widetilde{m}_t^2+m_t^2)\left\{{\rm log}\frac{\widetilde{m}_t^2+m_t^2}{\Lambda_T^2}-1\right\}-m_t^2\left\{{\rm log}\frac{m_t^2}{\Lambda_T^2}-1\right\}\right]
\\
&\approx m_{h_u}^2|_{\Lambda_T} - \frac{3|y_t|^2}{8\pi^2}\widetilde{m}_t^2 ,
\end{split}$$]{} where $m_t$ ($\widetilde{m}_t$) denotes the top quark (stop) mass, and the cutoff $\Lambda_T$ \[$\approx (\widetilde{m}_t^2+m_t^2)^{1/2}$\] is the scale where the stops are decoupled, and so $m_{h_u}^2|_{\Lambda_T} = m_{h_u}^2(t_T)$. Here we set $m_{u^c_3}^2\approx m_{q_3}^2\equiv\widetilde{m}_t^2$ for simple estimation. Note that $\frac{3|y_t|^2}{8\pi^2}\widetilde{m}_t^2\approx (800 {\,\textrm{GeV}})^2$. Accordingly, $m_{h_u}^2$ at $t=t_T$ (or $m_{h_u}^2|_{\Lambda_T}$) should be smaller than $(1~{\rm TeV})^2$ in order for $-m_{h_u}^2$ at the EW scale to be smaller than $(1~{\rm TeV})^2$. Since $t=t_T$ is more or less far from $t_W$, however, the coefficient of $m_0^2$ in [Eq. (\[RGsol1\])]{} is not suppressed enough, $m_{h_u}^2(t_T)\approx 0.1 m_0^2-\cdots$, where $m_0^2 > (5~{\rm TeV})^2$ for obtaining $5~{\rm TeV}$ stop masses. Hence, $m_{h_u}^2(t_T)$ is quite sensitive to $m_0^2$, and it should be tuned with $m_{1/2}^2$ in [Eq. (\[RGsol1\])]{} and/or $|\mu|^2$. Thus, for a predictively small $m_{h_u}^2$, the FP should somehow appear around the stop decoupling scale [@M3/M2model; @DQW]. That is to say, the coefficient of $m_0^2$ should be much closer to zero around the stop mass scale, as mentioned in the introduction.
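As a quick back-of-the-envelope check of the $(800 {\,\textrm{GeV}})^2$ figure quoted above (assuming $y_t\approx 0.85$ at the stop scale, which is not specified in the text):

```python
# Check of the threshold shift in Eq. (RGsm): (3 |y_t|^2 / 8 pi^2) * m_stop^2
# for m_stop = 5 TeV and an assumed y_t ~ 0.85 at the stop scale.
import math
yt, mstop = 0.85, 5.0e3                           # GeV
shift = 3*yt**2/(8*math.pi**2) * mstop**2         # GeV^2
print(f"shift ~ ({math.sqrt(shift):.0f} GeV)^2")  # ~ (830 GeV)^2, i.e. ~(800 GeV)^2
```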
Figs. \[fig:MSSM\]-(a) and (b) display the RG behaviors of $m_{h_u}^2$ for $m_0^2=(7 {\,\textrm{TeV}})^2$, $(5 {\,\textrm{TeV}})^2$, $(3 {\,\textrm{TeV}})^2$, when $m_{1/2}=1 {\,\textrm{TeV}}$, $A_0=0$, $\tb=5$ \[Fig. \[fig:MSSM\]-(a)\] and $\tb=50$ \[Fig. \[fig:MSSM\]-(b)\] with $\alpha_{G}=1/25$. Note that $m_{1/2}=1~{\rm TeV}$ yields the gluino mass of $2.4~{\rm TeV}$ at TeV scale, which is well above the present experimental lower bound $1.4 {\,\textrm{TeV}}$ [@gluinomass]. Although we presented the simple RG equations valid for small $\tb$ in Eqs. (\[RG1\])–(\[RG3\]), the figures in Fig. \[fig:MSSM\] are based on the full one-loop RG equations including $y_b$ and $y_\tau$ with the universal boundary condition imposed also for $m_{h_d}^2$, $m_{e^c_3}^2$, and $m_{l_3}^2$. Figs. \[fig:MSSM\]-(a) and (b) show that the FP is located at a slightly higher (lower) energy scale for a small (large) $\tb$. Table \[tab:FP0\] lists the values of $\{m_{q_3}^2,m_{u^c_3}^2,m_{h_u}^2\}$ at $t=t_T$ (i.e. at $Q=5~{\rm TeV}$) in these cases. It shows that $m_{h_u}^2(t_T)$ is quite sensitive to $m_0^2$, as mentioned above. For $\tb=50$, particularly, the fine-tuning measure defined in Refs. [@FTmeasure] is estimated as [$$\begin{split} \label{FTmeasure0}
\Delta_{m_0^2}=\left|\frac{\partial ~{\rm log} ~m_Z^2}{\partial ~{\rm log} ~m_0^2}\right|
=\left|\frac{m_0^2}{m_Z^2} ~\frac{\partial m_Z^2}{\partial m_0^2}\right|
~\approx~ 875
\end{split}$$]{} around the $m_0^2=(7 {\,\textrm{TeV}})^2$. A similar analysis with $\alpha_{G}=1/24$ turns out to yield a worse result, $\Delta_{m_0^2}\approx 1474$. They are quite large. It is because the locations of their FPs are too far from the point $(t=t_T, m_{h_u}^2=0)$.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|ccc||c|ccc}
 & \multicolumn{3}{c||}{$\tb=5$} & & \multicolumn{3}{c}{$\tb=50$} \\
\hline
${\bf m_0^2}$ & $({\bf 7} {\,\textrm{TeV}})^2$ & $({\bf 5} {\,\textrm{TeV}})^2$ & $({\bf 3} {\,\textrm{TeV}})^2$ & ${\bf m_0^2}$ & $({\bf 7} {\,\textrm{TeV}})^2$ & $({\bf 5} {\,\textrm{TeV}})^2$ & $({\bf 3} {\,\textrm{TeV}})^2$ \\
\hline
$m_{q_3}^2(t_T)$ & $(6.1 {\,\textrm{TeV}})^2$ & $(4.5 {\,\textrm{TeV}})^2$ & $(3.1 {\,\textrm{TeV}})^2$ & $m_{q_3}^2(t_T)$ & $(5.2 {\,\textrm{TeV}})^2$ & $(3.9 {\,\textrm{TeV}})^2$ & $(2.8 {\,\textrm{TeV}})^2$ \\
$m_{u^c_3}^2(t_T)$ & $(4.6 {\,\textrm{TeV}})^2$ & $(3.4 {\,\textrm{TeV}})^2$ & $(2.4 {\,\textrm{TeV}})^2$ & $m_{u^c_3}^2(t_T)$ & $(4.7 {\,\textrm{TeV}})^2$ & $(3.5 {\,\textrm{TeV}})^2$ & $(2.5 {\,\textrm{TeV}})^2$ \\
${\bf m_{h_u}^2(t_T)}$ & $({\bf 1.3} {\,\textrm{TeV}})^2$ & $-({\bf 0.4} {\,\textrm{TeV}})^2$ & $-({\bf 0.9} {\,\textrm{TeV}})^2$ & ${\bf m_{h_u}^2(t_T)}$ & $({\bf 1.8} {\,\textrm{TeV}})^2$ & $({\bf 1.1} {\,\textrm{TeV}})^2$ & $-({\bf 0.6} {\,\textrm{TeV}})^2$
\end{tabular}
\end{center}
\caption{Values of $\{m_{q_3}^2,m_{u^c_3}^2,m_{h_u}^2\}$ at $t=t_T$ ($Q=5 {\,\textrm{TeV}}$) for various trial $m_0^2$s, with $m_{1/2}=1 {\,\textrm{TeV}}$ and $A_0=0$.}
\label{tab:FP0}
\end{table}
In order to get $m_{h_u}^2$ that is small enough and insensitive to $m_0^2$, the location of the FP needs to be moved somehow to a position [*around*]{} the stop mass scale. See Fig. \[fig:desirable\]. $\epsilon$ in Figs. \[fig:desirable\]-(a) and (b) should be as small as possible for a predictable $m_{h_u}^2$ at the EW scale. In addition, at a location of the FP near $t=t_T$, $m_{h_u}^2$ should be in the range of $0\lesssim m_{h_u}^2\lesssim (1 {\,\textrm{TeV}})^2$. Since the heavy gluino makes a large negative contribution to $m_{h_u}^2(t_T)$, we need some other ingredients to overcome the heavy gluino effect. Below $t=t_T$, $m_{h_u}^2$ further decreases by $\sim (800 {\,\textrm{GeV}})^2$ down to $t=t_W$, as discussed in [Eq. (\[RGsm\])]{}. In order to mitigate the $m_0^2$ dependence via $\widetilde{m}_t^2$ in [Eq. (\[RGsm\])]{}, reducing the fine-tuning, a FP of $m_{h_u}^2$ appearing at a slightly lower energy scale than (but still around) $t_T$ is more preferred: the coefficient of $m_0^2$ in $m_{h_u}^2|_{\Lambda_T}$ needs to be of order ${\cal O}(10^{-2})$.
\begin{figure}[h]
\centering
\includegraphics[width=0.70\linewidth]{Fig2.eps}
\caption{Desirable locations of the focus point in the $(t, m_{h_u}^2)$ space. The straight lines sketch different RG evolutions of $m_{h_u}^2$ for various $m_0^2$s. $t_T$ corresponds to the assumed stop decoupling scale ($Q=5 {\,\textrm{TeV}}$). $\epsilon$ needs to be as small as possible.}
\label{fig:desirable}
\end{figure}
Precise focusing {#sec:preciseFP}
================
In this section, we will discuss how to move the FP in the $(t,m_{h_u}^2(t))$ space to the desirable locations presented in the previous section. We intend to argue that, once $m_{h_u}^2$ at $t=t_T$ is made insensitive to $m_0^2$, the $126 {\,\textrm{GeV}}$ Higgs mass happens to be obtained from a $5 {\,\textrm{TeV}}$ stop mass. It would be a way to trim the original idea of the Natural tuning.
Pushing up the focus point to higher energy scale
-------------------------------------------------
As $\tb$ increases, the size of the top quark Yukawa coupling decreases. As a consequence, the factor $[e^{\frac{-3}{4\pi^2}\int^{t_0}_{t}dt^\prime y_t^2}-\frac13]$ in [Eq. (\[FP0\])]{} vanishes at a lower energy scale $t$ ($< t_W$) for a smaller $y_t$. It implies that the FP moves to a lower energy scale for a larger $\tb$ [@FMM1; @Akula]. The numerical analysis including $y_b$ and $y_\tau$, Figs. \[fig:MSSM\]-(a) and (b) confirm such a behavior of the FP. Since we intend to move the FP in the higher energy direction, a large $\tb$ is not helpful.
A much larger top quark Yukawa coupling $y_t(t)$ at [*higher*]{} energy scales can move the FP to a new location at a higher energy scale. Actually, $y_t(t)$ can be easily raised at higher energy scales e.g. by introducing a new Yukawa coupling of the Higgs boson. For instance, let us consider a coupling between $h_u$ and a new singlet $S$ in the next-to-MSSM (NMSSM) [@NMSSMreview]: [$$\begin{split}
W_S=\lambda Sh_uh_d + \cdots .
\end{split}$$]{} In this case, the RG equations of $y_t$ and $\lambda$ are given by $$\begin{aligned}
\label{RGnmssm}
&&8\pi^2\frac{dy_t^2}{dt}=y_t^2\left[6y_t^2+\lambda^2-\frac{16}{3}g_3^2-3g_2^2-\frac{13}{15}g_1^2\right] ,\\
&&8\pi^2\frac{d\lambda^2}{dt}=\lambda^2\left[4\lambda^2+3y_t^2-3g_2^2-\frac35 g_1^2\right]\end{aligned}$$ for small $\tb$. Because of the additional positive contribution by $\lambda^2$ to the RG equation of $y_t$, $y_t^2$ becomes larger at high energy scales than in the absence of $\lambda$. Moreover, the $\lambda$ coupling introduces a positive contribution also to the RG equation for $m_{h_u}^2$: [$$\begin{split} \label{mhuLambda}
16\pi^2\frac{d}{dt}m_{h_u}^2=2\lambda^2\left(X_\lambda + A_\lambda^2\right)
+ 6y_t^2\left(X_t+A_t^2\right)
-6g_2^2M_2^2-\frac65g_1^2M_1^2 ,
\end{split}$$]{} where $X_\lambda\equiv(m_{h_u}^2+m_{h_d}^2+m_S^2)$. It turns out, however, that the FP’s location is too sensitive to $\lambda$. According to our analysis, $\lambda$ should be smaller than at least $0.1$. Otherwise, the FP moves too far away in the high energy direction. For example, $\lambda=0.6$ and $\tb=3$ moves the location of the FP to the $10^{13} {\,\textrm{GeV}}$ energy scale. Hence, the parameter window satisfying the $126 {\,\textrm{GeV}}$ Higgs mass and the Landau pole constraint in the NMSSM, $0.6\lesssim\lambda\lesssim 0.7$ and $1<\tb\lesssim 3$ [@nmssmWindow], cannot be compatible with the FP scenario. As seen in this example, extensions of the MSSM Higgs sector with a new sizable Yukawa coupling, e.g., for raising the Higgs mass could ruin the FP scenario.[^4] The RG effect of the $\lambda$ coupling on $y_t$ can be reduced just by assuming that $S$ is superheavy and so decoupled at a very high energy scale. One well-motivated superheavy particle is the RH neutrino ($N^c$), which is introduced to explain the smallness of the active neutrino mass through the seesaw mechanism [@seesaw] by the superpotential, [$$\begin{split} \label{RHnu}
W_N=y_Nl_3h_uN^c + \frac12M_NN^cN^c ,
\end{split}$$]{} where $l_3$ is a lepton doublet in the MSSM. We assume that the Majorana mass of $N^c$ is $M_N\approx 2\times 10^{14} {\,\textrm{GeV}}$. If the RH neutrino is embedded in a multiplet of a GUT with the $B-L$ charge, [Eq. (\[RHnu\])]{} can be naturally obtained from the nonrenormalizable term in GUTs, $W\supset \langle H_G\rangle\langle H_G\rangle N^cN^c/M_P$, where $\langle H_G\rangle$ and $M_P$ are a VEV of a GUT breaking Higgs boson ($\sim 10^{16} {\,\textrm{GeV}}$) and the reduced Planck mass ($\approx 2.4\times 10^{18} {\,\textrm{GeV}}$), respectively. For $M_N\sim 10^{14}~{\rm GeV}$, the Yukawa coupling $y_N$ should be of order unity to get a neutrino mass of order $0.1 {\,\textrm{eV}}$. Here, we suppose that only one Yukawa coupling with $h_u$, $y_N$ is of order unity: for simplicity, we assume that other Yukawa couplings of $h_u$ to other RH neutrinos are small enough. Accordingly, other RH neutrinos should be relatively lighter than $M_N$. Since $N^c$ would be decoupled at a very high energy scale ($Q= M_N\approx 2\times 10^{14} {\,\textrm{GeV}}$), its RG effect on $y_t$ could be mild, and the FP would move relatively slowly as $y_N$ is varied. Consequently, $m_{h_u}^2$ at $t=t_T$ could become less sensitive to $m_0^2$ [@RHnuFP]. If the heaviest RH neutrino were lighter than $\sim 10^{13} {\,\textrm{GeV}}$, its RG effect on $y_t$ would be negligible because the required Yukawa coupling becomes too small.
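The order-of-unity requirement on $y_N$ quoted above follows from simple seesaw arithmetic; the check below assumes $v_u\approx 174 {\,\textrm{GeV}}$ (appropriate for moderate-to-large $\tb$), which is our choice and not stated in the text.

```python
# Seesaw estimate m_nu ~ y_N^2 v_u^2 / M_N for y_N = 1 and M_N = 2e14 GeV
# (v_u ~ 174 GeV assumed).
vu, yN, MN = 174.0, 1.0, 2.0e14      # GeV
m_nu_eV = yN**2 * vu**2 / MN * 1e9   # convert GeV -> eV
print(f"m_nu ~ {m_nu_eV:.2f} eV")    # ~ 0.15 eV, i.e. of order 0.1 eV
```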
Similar to [Eq. (\[mhuLambda\])]{}, the RG evolution of $m_{h_u}^2$ between $Q=M_G$ and $Q= M_N$ is described by $$\begin{aligned}
\label{mHu2I}
16\pi^2\frac{d}{dt}m_{h_u}^2&=&2y_N^2\left(X_N+A_N^2\right)+6y_t^2\left(X_t+A_t^2\right)
-6g_2^2M_2^2-\frac65 g_1^2M_1^2 , \end{aligned}$$ where the $y_N^2X_N$ \[$=y_N^2(m_{h_u}^2+m_{N^c}^2+m_{l_3}^2)$\] and $y_N^2A_N^2$ terms are additional positive contributions coming from the RH neutrino. On the other hand, the RG equations for $m_{u^c_3}^2$ and $m_{q_3}^2$ maintain the same forms as those in the absence of the RH neutrino, Eqs. (\[RG2\]) and (\[RG3\]). They are affected only through the modified value of $y_t^2\left(X_t+A_t^2\right)$, which appears also in [Eq. (\[mHu2I\])]{}. For the complete form of the RG equations, refer to the Appendix. Because of the $y_N^2\left(X_N+A_N^2\right)$ terms in [Eq. (\[mHu2I\])]{}, $m_{h_u}^2/m_{u^c_3}^2$ and $m_{h_u}^2/m_{q_3}^2$ decrease more rapidly from $Q=M_G$ to $Q= M_N$ than in the case without the RH neutrino. Below $Q= M_N$, however, the RH neutrino becomes decoupled, and so $m_{h_u}^2$, $m_{u^c_3}^2$, and $m_{q_3}^2$ obey the same RG equations as Eqs. (\[RG1\])–(\[RG3\]).
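Numerically, this two-segment running can be organized as in the sketch below; the callables `rges_with_N` and `rges_mssm` are hypothetical names for the beta functions with and without the RH neutrino (our notation, not the paper's), with the RH-neutrino quantities simply frozen in the second stage.

```python
# Two-segment running across the seesaw threshold (a sketch in our own notation).
from scipy.integrate import solve_ivp

def run_across_seesaw(y_gut, t_gut, t_N, t_stop, rges_with_N, rges_mssm):
    # Stage 1: M_GUT -> M_N with the y_N-extended beta functions, cf. Eq. (mHu2I).
    seg1 = solve_ivp(rges_with_N, [t_gut, t_N], y_gut, rtol=1e-9)
    # Stage 2: M_N -> stop scale with y_N, A_N, m_{N^c}^2 frozen out.
    seg2 = solve_ivp(rges_mssm, [t_N, t_stop], seg1.y[:, -1], rtol=1e-9)
    return seg2.y[:, -1]
```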
Considering [Eq. (\[RGsol1\])]{}, one can see that the RG solution of $m_{h_u}^2$ [*valid only below*]{} $Q= M_N$ ($t<t_I$) should be written as [$$\begin{split} \label{solI}
&\qquad~~~ m_{h_u}^2(t) = m_{h_uI}^2+\frac{X_I}{2}
\left[e^{\frac{3}{4\pi^2}\int^{t}_{t_I}dt^\prime y_t^2}-1\right]
+ \cdots
\\
&=\frac{X_I}{2}\left[e^{\frac{-3}{4\pi^2}\int^{t_I}_{t}dt^\prime y_t^2}-\left(1-\frac{2m_{h_uI}^2}{m_{h_uI}^2+m_{u^c_3I}^2+m_{q_3I}^2}\right)\right]+\cdots ,
\end{split}$$]{} where $\{m_{h_uI}^2, m_{u^c_3I}^2, m_{q_3I}^2\}$ denote the values of $\{m_{h_u}^2, m_{u^c_3}^2, m_{q_3}^2\}$ at $Q= M_N$, respectively, and $X_I\equiv m_{h_uI}^2+m_{u^c_3I}^2+m_{q_3I}^2$. Note that “$\cdots$” in [Eq. (\[solI\])]{} does not contain the dependence of $\{m_{h_uI}^2, m_{u^c_3I}^2, m_{q_3I}^2\}$. Comparing with [Eq. (\[RGsol1\])]{}, $\{m_{h_u0}^2, m_{u^c_30}^2, m_{q_30}^2\}$ and $X_0$ are replaced by $\{m_{h_uI}^2, m_{u^c_3I}^2, m_{q_3I}^2\}$ and $X_I$ in [Eq. (\[solI\])]{}. On the contrary, $y_t^2$ in [Eq. (\[solI\])]{} is the same as $y_t^2$ of [Eq. (\[RGsol1\])]{} for $t<t_I$, because $y_t^2$ should be set to explain the top quark mass at low energy and undergoes the same RG evolution as the case of [Eq. (\[RGsol1\])]{}. The RH neutrino makes $y_t^2$ larger only above $Q= M_N$. Since $m_{h_uI}^2/m_{u^c_3I}^2$ and $m_{h_uI}^2/m_{q_3I}^2$ are more suppressed at $Q= M_N$ by the RH neutrino effect above $Q= M_N$, $1-2m_{h_uI}^2/(m_{h_uI}^2+m_{u^c_3I}^2+m_{q_3I}^2)$ or $1-2m_{h_uI}^2/X_I$ in [Eq. (\[solI\])]{} is larger than that evaluated at $Q= M_N$ in the absence of the RH neutrino. As a result, ${\rm exp}[\frac{-3}{4\pi^2}\int^{t_I}_{t}dt^\prime y_t^2]-(1-2m_{h_uI}^2/X_I)$ vanishes at a $t$ larger than $t_W$. It implies that [*a FP must still exist and appear at a scale higher than $t_W$*]{}. Therefore, we can move the FP to around $t=t_T$ using a sizable $y_N$. We will discuss it again later.
Uplifting the focus point
-------------------------
Toward the desirable FP location, we need to somehow lift up the FP in the $(t,m_{h_u}^2(t))$ space as mentioned before. As a trial, let us turn on a small $A_0$ in [Eq. (\[RGsol4\])]{}, keeping $m_{h_u0}^2=m_{u^c_30}^2=m_{q_30}^2=m_0^2$. Then [Eq. (\[appxH\])]{} yields $m_{h_u}^2(t_W)\approx -A_0^2/9$. So the FP moves in the opposite direction to our desire. From Eqs. (\[RGsol1\]) and (\[F(t\_T)\]), increase of $m_{1/2}^2$ also moves the FP in the negative direction. Because of the experimental gluino mass constraint ($M_3\gtrsim 1.4~{\rm TeV}$), however, one cannot decrease $m_{1/2}^2$ sufficiently.
Indeed, the largest negative contribution to $m_{h_u}^2$ comes from the gluino mass $M_3$, as seen from Eqs. (\[F\])–(\[F(t\_T)\]): [Eq. (\[F\])]{} is dominated by the $g_3^2M_3$ and $g_3^2M_3^2$ terms in Eqs. (\[GA\]) and (\[GX2\]), which eventually give a negative $F(t_T)$ as seen in [Eq. (\[F(t\_T)\])]{}. A too large negative $m_{h_u}^2$ at the EW scale should be fine-tuned with $|\mu|^2$ to yield the desired size of $m_Z^2$. One way to compensate the negative gluino mass effect on $m_{h_u}^2$ is to cancel it with the positive contribution from the wino mass effect, sacrificing the gaugino mass unification, $M_3^2\lesssim M_2^2$ at the GUT scale [@M3/M2; @M3/M2model]: such nonuniversal gaugino masses at the GUT scale could improve the FP behavior but also soften significantly the limits on the gluino mass. Alternatively, a fine-tuning between $m_0^2$ and $m_{1/2}^2$ could also leave a light enough $m_{h_u}^2$, as seen in Eqs. (\[RGsol1\]) and (\[F(t\_T)\]): a FP achieved through such a fine-tuning can remain insensitive e.g. to the scaling of $(m_0^2,m_{1/2}^2)\rightarrow \lambda^2(m_0^2,m_{1/2}^2)$, keeping the ratio between $m_0^2$ and $m_{1/2}^2$ [@DQW]. However, the idea of Natural tuning is lost in this mechanism.
In this paper, we propose to consider the two-loop gauge effects by the first and second generations of hierarchically heavier sfermions, maintaining the gaugino mass unification. Their two-loop Yukawa interactions are extremely suppressed by their tiny Yukawa couplings. For simplicity, we suppose a universal heavy mass for them ($\equiv\widetilde{m}^2$). If $\widetilde{m}^2\gg m_{1/2}^2$, the RG running of $\widetilde{m}^2$ is negligible. Then the gauge contributions to the RG equations for the soft masses of the Higgs boson and sfermions are modified as [@2-loop; @Natural/2-loop] [$$\begin{split} \label{1-2loops}
&16\pi^2\frac{d}{dt}m_f^2=-8\sum_{i=3,2,1}C^f_i\left(g_i^2M_i^2
-\frac{\widetilde{m}^2}{4\pi^2}g_i^4\right) + \cdots
\\
&~~ =-8\sum_{i=3,2,1}C^f_i\left[\left(\frac{m_{1/2}}{g_0^2}\right)^2g_i^6
-\frac{\widetilde{m}^2}{4\pi^2}g_i^4\right] + \cdots ,
\end{split}$$]{} where $f=h_u,~u^c_3,~q_3$, etc., and $C_i^f$ denotes the Casimir for $f$. With the universal soft mass condition, the contributions by the “$D$-term” potential to [Eq. (\[1-2loops\])]{} vanish. Since $g_i^2M_i^2$s are always accompanied by $-\frac{\widetilde{m}^2}{4\pi^2}g_i^4$ in Eqs. (\[RG1\])–(\[RG3\]), they all should be modified into $g_i^2M_i^2-\frac{\widetilde{m}^2}{4\pi^2}g_i^4$. As a result, the heavy gluino effect can be compensated and made milder by the $\widetilde{m}^2$ terms [@Natural/splitZprime]. If $\widetilde{m}$ is much heavier than the gluino mass, moreover, the two-loop $\widetilde{m}^2$ term can be comparable to, or even dominate over, the gluino contribution. Thus, a heavy enough $\widetilde{m}^2$ could raise $m_{h_u}^2$ up even to a positive value at $t=t_T$. Note that $\widetilde{m}^2$ does not appear in $X_0$ in [Eq. (\[RGsol1\])]{}: the heavier sfermions’ effects on Eqs. (\[RG1\])–(\[RG4\]) via the Yukawa interactions are extremely tiny. So $\widetilde{m}^2$ does not touch the FP mechanism. Indeed, no Yukawa couplings or $\tb$ are involved in $g_i^2M_i^2-\frac{\widetilde{m}^2}{4\pi^2}g_i^4$. Since both contributions originate from the gauge interactions, their relation could be more easily realized in a UV model [@ongoing] than the relation between $m_{1/2}^2$ and $m_0^2$. Note that the $\widetilde{m}^2$ terms leave the $A$-term RG equation [Eq. (\[RG4\])]{} intact. For the full expressions of the semianalytic solutions, refer to the Appendix.
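A rough one-scale estimate shows when the two-loop piece overtakes the gluino piece; the inputs $\alpha_3\approx 0.08$ and $M_3\approx 2.4 {\,\textrm{TeV}}$ near $Q\approx 5 {\,\textrm{TeV}}$ are our assumptions for illustration. The resulting crossover sits right at the $15$–$20 {\,\textrm{TeV}}$ range adopted in the next subsection.

```python
# Estimate of the m~ value above which the two-loop piece in Eq. (1-2loops)
# exceeds the gluino piece:  g_3^2 M_3^2 < (m~^2/4 pi^2) g_3^4  <=>  m~ > 2 pi M_3 / g_3.
import math
alpha3, M3 = 0.08, 2.4e3                                  # assumed inputs (GeV)
g3 = math.sqrt(4.0*math.pi*alpha3)
print(f"m~ crossover ~ {2*math.pi*M3/g3/1e3:.0f} TeV")    # ~ 15 TeV
```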
The hierarchical mass pattern between the first/second and the third generations can be realized by employing the two different SUSY breaking mediations, e.g. the gravity or gauge mediation and U(1)$^\prime$ mediation. For instance, the first two generations of matter could carry nonzero (but opposite) U(1)$^\prime$ charges and they could receive additional U(1)$^\prime$ SUSY breaking mediation effects proportional to their charge squareds [@Zprime] for their hierarchically heavier masses [@splitZprime; @Natural/splitZprime]. Their desired relation could be achieved from the hierarchy between $g_0$ and the U(1)$^\prime$ gauge coupling, and also the messengers’ masses with a common SUSY breaking source. In such a setup, a relation between $\widetilde{m}^2$ and $m_{1/2}^2$ could also be obtained. Since the third generation of sfermions do not carry U(1)$^\prime$ charges, its soft masses are determined only by the gravity mediation effect. $A_0$ can also remain small enough to avoid unwanted color breaking minimum at low energies [@CCB]. We will propose a simple model realizing a desired relation between them later. To summarize our discussion so far, in Table \[tab:FPmove\] we present the FP’s movements for the various variations of parameters. We can move the FP into the desirable positions of Fig. \[fig:desirable\] by using e.g. $y_N$ and $\widetilde{m}^2$.
\begin{table}[t]
\begin{center}
\begin{tabular}{c||c|c|c|c|c}
Variations & ${\rm tan}\beta \Uparrow$ & $y_t^2$, $\lambda^2$, $y_N^2 \Uparrow$ & $A_0^2 \Uparrow$ & $m_{1/2}^2 \Uparrow$ & $\widetilde{m}^2 \Uparrow$ \\
\hline
Focus point & $\Leftarrow$ & $\Rightarrow$ & $\Downarrow$ & $\Downarrow$ & $\Uparrow$
\end{tabular}
\end{center}
\caption{Movements of the FP in the $(t,m_{h_u}^2)$ space under increases of the various parameters.}
\label{tab:FPmove}
\end{table}
Numerical results
-----------------
Let us attempt to reduce the fine-tuning by introducing a superheavy RH neutrino and taking heavy soft masses for the first two generations of sfermions. Figs. \[fig:FPn\_5\]-(a) and (b) show the numerical results for the RG evolutions of $m_{h_u}^2$ for $m_0^2=(9 {\,\textrm{TeV}})^2$, $(7 {\,\textrm{TeV}})^2$, and $(5 {\,\textrm{TeV}})^2$, when $\{y_{NI}^2=0.8,~\widetilde{m}^2=(15 {\,\textrm{TeV}})^2\}$ and $\{y_{NI}^2=1.0,~\widetilde{m}^2=(20 {\,\textrm{TeV}})^2\}$, respectively. Here, $y_{NI}$ means $y_N$ evaluated at the RH neutrino decoupling scale ($Q= M_N\approx 2\times 10^{14} {\,\textrm{GeV}}$). $y_{N}^2$ of $y_{NI}^2=0.8$ ($1.0$) reaches $0.95$ ($1.2$) at the GUT scale, while its RG evolution becomes frozen below $Q= M_N$. In both cases, we set $\tb=5$ and $m_{1/2}=A_0=1 {\,\textrm{TeV}}$ with $\alpha_{G}=1/24$. Note that $m_{1/2}$ and $A_0$ are U(1)$_R$ breaking parameters. Thus, e.g. if U(1)$_R$ breaking scale is relatively lower than the SUSY breaking scale, they can be smaller than other soft SUSY breaking parameters, $m_0^2$ and $\widetilde{m}^2$ as desired. In Ref. [@Li], conformal sequestering was considered to suppress them. In “pure gravity mediation,” $m_{1/2}$ and $A_0$ are suppressed at the tree level [@puregravity]. Below the seesaw scale, $t=t_I\approx 25.3$ \[$Q\approx 2\times 10^{14} {\,\textrm{GeV}}$\], the RH neutrino is decoupled. Thus, $m_{h_u}^2$s in Figs. \[fig:FPn\_5\]-(a) and (b) follow the RG equations without the RH neutrino below $t=t_I$, while they are governed by the full RG equations including the RH neutrino between $t=t_0$ and $t=t_I$. For the analyses in Figs. \[fig:FPn\_5\]-(a) and (b), we used the full RG equations in the Appendix with the boundary conditions, $m_{h_u}^2=m_{u^c_3}^2=\cdots=m_{h_d}^2=\cdots=m_{N^c}^2=m_0^2$ and $m_{u^c_{1,2}}^2=m_{q_{1,2}}^2 \cdots =\widetilde{m}^2$.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|ccc||c|ccc}
 & \multicolumn{3}{c||}{$\tb=5$, ~$y_{NI}^2=0.8$, ~$\widetilde{m}=15 {\,\textrm{TeV}}$} & & \multicolumn{3}{c}{$\tb=5$, ~$y_{NI}^2=1.0$, ~$\widetilde{m}=20 {\,\textrm{TeV}}$} \\
\hline
${\bf m_0^2}$ & $({\bf 9} {\,\textrm{TeV}})^2$ & $({\bf 7} {\,\textrm{TeV}})^2$ & $({\bf 5} {\,\textrm{TeV}})^2$ & ${\bf m_0^2}$ & $({\bf 9} {\,\textrm{TeV}})^2$ & $({\bf 7} {\,\textrm{TeV}})^2$ & $({\bf 5} {\,\textrm{TeV}})^2$ \\
\hline
$m_{q_3}^2(t_T)$ & $(7.3 {\,\textrm{TeV}})^2$ & $(5.6 {\,\textrm{TeV}})^2$ & $(3.7 {\,\textrm{TeV}})^2$ & $m_{q_3}^2(t_T)$ & $(6.9 {\,\textrm{TeV}})^2$ & $(5.0 {\,\textrm{TeV}})^2$ & $(2.8 {\,\textrm{TeV}})^2$ \\
$m_{u^c_3}^2(t_T)$ & $(5.7 {\,\textrm{TeV}})^2$ & $(4.3 {\,\textrm{TeV}})^2$ & $(2.8 {\,\textrm{TeV}})^2$ & $m_{u^c_3}^2(t_T)$ & $(5.3 {\,\textrm{TeV}})^2$ & $(3.8 {\,\textrm{TeV}})^2$ & $(1.9 {\,\textrm{TeV}})^2$ \\
${\bf m_{h_u}^2(t_T)}$ & $({\bf 0.9} {\,\textrm{TeV}})^2$ & $({\bf 0.5} {\,\textrm{TeV}})^2$ & $-({\bf 0.3} {\,\textrm{TeV}})^2$ & ${\bf m_{h_u}^2(t_T)}$ & $-({\bf 0.2} {\,\textrm{TeV}})^2$ & $({\bf 0.4} {\,\textrm{TeV}})^2$ & $({\bf 0.6} {\,\textrm{TeV}})^2$
\end{tabular}
\end{center}
\caption{Values of $\{m_{q_3}^2,m_{u^c_3}^2,m_{h_u}^2\}$ at $t=t_T$ for various trial $m_0^2$s, when $\tb=5$ and $m_{1/2}=A_0=1 {\,\textrm{TeV}}$.}
\label{tab:FPn_5}
\end{table}
In Fig. \[fig:FPn\_5\]-(a) \[(b)\], the FP appears at a slightly lower \[higher\] scale than the stop decoupling scale ($t=t_T\approx 0.92$). Since $m_{h_u}^2$ is well focused in both cases, $m_{h_u}^2(t_T)$ is quite insensitive to the various trial $m_0^2$s as seen in Table \[tab:FPn\_5\]: for $(5 {\,\textrm{TeV}})^2< m_0^2 < (9 {\,\textrm{TeV}})^2$ at the GUT scale, $m_{h_u}^2$ just changes from $-(0.3 {\,\textrm{TeV}})^2$ \[$(0.6 {\,\textrm{TeV}})^2$\] to $(0.9 {\,\textrm{TeV}})^2$ \[$-(0.2 {\,\textrm{TeV}})^2$\] at the stop decoupling scale. Hence, for precise focusing, it is required that [$$\begin{split}
0.8~\lesssim ~y_{NI}^2~\lesssim ~1.0 \quad {\rm and} \quad (15 {\,\textrm{TeV}})^2~\lesssim~\widetilde{m}^2~\lesssim~ (20 {\,\textrm{TeV}})^2 ,
\end{split}$$]{} when $\tb=5$ and $m_{1/2}=A_0= 1 {\,\textrm{TeV}}$. Under the situation that $m_{h_u}^2$ at $t=t_T$ is insensitive to $m_0^2$ and stop masses, $m_0^2$ can happen to be around $(8 {\,\textrm{TeV}})^2$ at the GUT scale, which leads to $5 {\,\textrm{TeV}}$ stop masses and the $126 {\,\textrm{GeV}}$ Higgs mass at the EW scale. However, if a larger $y_{NI}^2$ is taken, e.g. $y_{NI}^2=1.4$, the FP emerges around $t\approx 3$ ($Q\approx 40 {\,\textrm{TeV}}$). For $\widetilde{m}^2\gtrsim (24 {\,\textrm{TeV}})^2$ and $y_{NI}^2=1.0$, the EW symmetry breaking does not arise, because $m_{h_u}^2(t_T) > (1 {\,\textrm{TeV}})^2$. Hence, the above range of $y_{N}$ and $\widetilde{m}^2$ for a desirable FP needs to be supported by a UV model. Once $M_N$ is fixed by a GUT as explained above, however, the above range of $y_{NI}^2$ could be regarded as another Natural tuning, since $y_N^2$ can be determined by the active neutrino mass. The tuning issue introduced for the desired $\widetilde{m}^2$ could be converted to a model-building problem [@ongoing].
Similarly, Figs. \[fig:FPn\_50\]-(a), (b), and Table \[tab:FPn\_50\] present the results of $m_{h_u}^2$ for $m_0^2=(9 {\,\textrm{TeV}})^2$, $(7 {\,\textrm{TeV}})^2$, and $(5 {\,\textrm{TeV}})^2$, when $\tb=50$ and $m_{1/2}= A_0=1 {\,\textrm{TeV}}$ with $\alpha_{G}=1/24$. Here, we take $\{y_{NI}^2=1.0,~\widetilde{m}^2=(15 {\,\textrm{TeV}})^2\}$ and $\{y_{NI}^2=1.2,~\widetilde{m}^2=(20 {\,\textrm{TeV}})^2\}$ in Figs. \[fig:FPn\_50\]-(a) and (b), respectively. $y_{N}^2$ of $y_{NI}^2=1.0$ ($1.2$) reaches $1.25$ ($1.6$) at the GUT scale. Thus, the parameter ranges required for precise focusing are [$$\begin{split}
1.0~\lesssim ~y_{NI}^2~\lesssim ~1.2
\quad {\rm and} \quad (15 {\,\textrm{TeV}})^2~\lesssim~\widetilde{m}^2~\lesssim~ (20 {\,\textrm{TeV}})^2 ,
\end{split}$$]{} when $\tb=50$ and $m_{1/2}= A_0=1 {\,\textrm{TeV}}$. Particularly, $\{y_{NI}^2=1.2,~\widetilde{m}^2=(20 {\,\textrm{TeV}})^2\}$ leads to a quite exact focusing, and so $m_{h_u}^2(t_T)$ is almost invariant under variation of $m_0^2$. Again, $m_0^2\approx (8 {\,\textrm{TeV}})^2$ at the GUT scale happens to yield $5 {\,\textrm{TeV}}$ stop masses and eventually the $126 {\,\textrm{GeV}}$ Higgs boson mass. Around $m_0^2=(8 {\,\textrm{TeV}})^2$, the fine-tuning measure is estimated as [$$\begin{split}
\Delta_{m_0^2}
=\left|\frac{\partial ~{\rm log} ~m_Z^2}{\partial ~{\rm log} ~m_0^2}\right|
~\approx~ 66 ~~~{\rm and}~~~ 306
\end{split}$$]{} for $\{y_{NI}^2=1.0,~\widetilde{m}^2=(15 {\,\textrm{TeV}})^2\}$ and $\{y_{NI}^2=1.2,~\widetilde{m}^2=(20 {\,\textrm{TeV}})^2\}$, respectively. They are remarkably small compared to [Eq. (\[FTmeasure0\])]{}. Even for $\{y_{NI}^2=1.0,~\widetilde{m}^2=(10 {\,\textrm{TeV}})^2,~(20 {\,\textrm{TeV}})^2\}$, $\Delta_{m_0^2}$ turns out to be just around $65-67$. However, it is rather sensitive to $y_{NI}^2$: e.g. for $\{y_{NI}^2=0.8, ~1.2,~\widetilde{m}^2=(15 {\,\textrm{TeV}})^2\}$, $\Delta_{m_0^2}$ turns out to be $438$ and $290$, respectively. With the hierarchy $\widetilde{m}/m_{1/2}=15-20$, $\Delta_{m_0^2}$ can thus reduce to ${\cal O}(10^2)$ or smaller at one-loop level.[^5]
As mentioned before, the case that the FP emerges at a scale slightly lower than $t_T$ yields a smaller fine-tuning.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|ccc||c|ccc}
 & \multicolumn{3}{c||}{$\tb=50$, ~$y_{NI}^2=1.0$, ~$\widetilde{m}=15 {\,\textrm{TeV}}$} & & \multicolumn{3}{c}{$\tb=50$, ~$y_{NI}^2=1.2$, ~$\widetilde{m}=20 {\,\textrm{TeV}}$} \\
\hline
${\bf m_0^2}$ & $({\bf 9} {\,\textrm{TeV}})^2$ & $({\bf 7} {\,\textrm{TeV}})^2$ & $({\bf 5} {\,\textrm{TeV}})^2$ & ${\bf m_0^2}$ & $({\bf 9} {\,\textrm{TeV}})^2$ & $({\bf 7} {\,\textrm{TeV}})^2$ & $({\bf 5} {\,\textrm{TeV}})^2$ \\
\hline
$m_{q_3}^2(t_T)$ & $(6.3 {\,\textrm{TeV}})^2$ & $(4.8 {\,\textrm{TeV}})^2$ & $(3.1 {\,\textrm{TeV}})^2$ & $m_{q_3}^2(t_T)$ & $(5.9 {\,\textrm{TeV}})^2$ & $(4.2 {\,\textrm{TeV}})^2$ & $(2.1 {\,\textrm{TeV}})^2$ \\
$m_{u^c_3}^2(t_T)$ & $(5.9 {\,\textrm{TeV}})^2$ & $(4.4 {\,\textrm{TeV}})^2$ & $(2.9 {\,\textrm{TeV}})^2$ & $m_{u^c_3}^2(t_T)$ & $(5.5 {\,\textrm{TeV}})^2$ & $(3.9 {\,\textrm{TeV}})^2$ & $(2.1 {\,\textrm{TeV}})^2$ \\
${\bf m_{h_u}^2(t_T)}$ & $({\bf 1.2} {\,\textrm{TeV}})^2$ & $({\bf 0.8} {\,\textrm{TeV}})^2$ & $({\bf 0.4} {\,\textrm{TeV}})^2$ & ${\bf m_{h_u}^2(t_T)}$ & $({\bf 0.7} {\,\textrm{TeV}})^2$ & $({\bf 0.7} {\,\textrm{TeV}})^2$ & $({\bf 0.7} {\,\textrm{TeV}})^2$
\end{tabular}
\end{center}
\caption{Values of $\{m_{q_3}^2,m_{u^c_3}^2,m_{h_u}^2\}$ at $t=t_T$ for various trial $m_0^2$s, when $\tb=50$ and $m_{1/2}=A_0=1 {\,\textrm{TeV}}$.}
\label{tab:FPn_50}
\end{table}
Once $\{m_{h_u}^2,m_{h_d}^2\}$ are determined at low energy, $\mu$ should be properly adjusted to give $m_Z^2\approx (91 {\,\textrm{GeV}})^2$ as seen in [Eq. (\[extremeCondi\])]{}. Actually, the RG equation for $\mu$ is decoupled from those of $\{m_{q_3}^2,m_{u_3^c}^2,m_{h_u}^2, {\rm etc.}\}$ at one-loop level, and so its evolution does not affect our previous discussions. For the case of a small enough $\Delta_{m_0^2}$, $\Delta_{\mu}$ ($=|2\frac{\mu^2}{m_Z^2}\frac{\partial m_Z^2}{\partial \mu^2}|$) could become dominant over it [@Deltamu; @Kowalska]. For $m_{h_u}^2(t_T)< (1 {\,\textrm{TeV}})^2$, however, $|\mu|^2$ should be smaller than $(1 {\,\textrm{TeV}})^2$. Thus, $\mu^2/m_Z^2$ \[$\approx -m_{h_u}^2(t_W)/m_Z^2$\] in $\Delta_{\mu}$ is not excessively large ($<100$). Moreover, $\Delta_{\mu}$ is closely associated with the mechanism that $\mu$ is generated. If $\mu$ is generated at an intermediate scale (rather than the GUT scale), $\partial m_Z^2/\partial \mu^2$ can reduce a bit, which further decreases $\Delta_{\mu}$.
U(1)$^\prime$ mediation and Phenomenological constraints {#sec:model}
========================================================
As seen above, the hierarchy of $\widetilde{m}/m_{1/2}\sim {\cal O}(10)$ is essential for a successful FP scenario. It can be realized e.g. by employing also the U(1)$^\prime$ mediated SUSY breaking [@Zprime]. Let us consider the following interaction among vectorlike superfields: $$\begin{aligned}
W = (M+\theta^2 F)X X^c + y_1 X\Phi\Psi^c + y_2 X^c\Phi^{c} \Psi
+ M_{\Phi}\Phi \Phi^{c} + M_\Psi \Psi\Psi^c ,\end{aligned}$$ where $M$ and $F$ denote the scalar and $F$-components of a spurion superfield ($\Sigma$) parametrizing the SUSY breaking effect. $M_{\Phi,\Psi}$ ($\sim M_{G}$) and $y_{1,2}$ are dimensionful and dimensionless parameters, respectively. For the above superpotential, one can assign e.g. U(1)$_R$ charges of 2 and 1 to $\Sigma$ and $\{\Phi,\Phi^c;\Psi,\Psi^c\}$, respectively. $\{X,X^c\}$, which are neutral under U(1)$_R$, play the role of the messenger for SUSY breaking effects on the MSSM sector. While $\{X, X^c\}$ are U(1)$^\prime$ charged but SM singlet superfields, $\{\Phi, \Phi^{c}\}$ are superfields carrying both U(1)$^\prime$ and SM gauge charges. $\{\Psi,\Psi^c\}$ carry only SM gauge quantum numbers. In the U(1)$^\prime$ mediated SUSY breaking scenario [@Zprime], the U(1)$^\prime$ gaugino mass ($\equiv M_{\widetilde Z^\prime}$) is of order $(g_{\widetilde Z^\prime}^2/16\pi^2)F/M$. On the other hand, the soft squared masses of the U(1)$^\prime$ charged scalars, i.e., the first and second generations of sfermions in our case, are generated from $M_{\widetilde Z^\prime}$, $\widetilde m^2 \sim (q_i^2g_{\widetilde Z^\prime}^2/16\pi^2) M_{\widetilde Z^\prime}^2$. $m_0^2$ can be induced just through the ordinary gravity mediated SUSY breaking effect, which is always there. Thus, the soft squared masses for the third generation of sfermions are given by $m_0^2$.
Since the SM charged superfields have Yukawa interactions with the messengers, the threshold correction to the wave function renormalization for $\Psi^c$ has the following form: $$\begin{aligned}
\Delta Z_{\Psi^c} \sim \frac{y_1^2}{16\pi^2} {\rm log} |M+\theta^2 F|^2 . \end{aligned}$$ It contributes to the MSSM gaugino masses: $$\begin{aligned}
m_{1/2}\sim -\frac{g_{\rm SM}^2}{16\pi^2}\,\Delta Z_{\Psi^c}\Big|_{\theta^2 F}
={\cal O}\left(\frac{g_{\rm SM}^2}{16\pi^2}\,\frac{y_1^2}{16\pi^2}\,\frac{F}{M}\right)
={\cal O}\left(\frac{\widetilde m}{4\pi}\right) .\end{aligned}$$ We regard it as the dominant contribution to the MSSM gaugino masses. Hence, in this setup, we can achieve the desired hierarchy, $\widetilde m/m_{1/2}\sim {\cal O}(4\pi)$.
According to the “effective SUSY” (or “more minimal SUSY”), the masses of the first two generations of sfermions are required to be about $5$–$20 {\,\textrm{TeV}}$ in order to avoid the SUSY flavor and SUSY $CP$ problems, while the third ones and gauginos are lighter than $1 {\,\textrm{TeV}}$ for naturalness of the Higgs boson [@effSUSY]. In our case, the third generations of sfermions are heavier than $1 {\,\textrm{TeV}}$, but the naturalness problem can be addressed depending on the FP scenario. As in the effective SUSY, the hierarchically heavy masses for the first two generations of sfermions ($15$–$20 {\,\textrm{TeV}}$) with $CP$ violating phases of ${\cal O}(0.1)$ can solve the SUSY flavor and SUSY $CP$ problems. In Ref. [@Natural/2-loop], it was pointed out that such heavy masses for the first two generations of sfermions drive the stop mass squared too small or even negative at the EW scale via RG evolutions. As seen in Tables \[tab:FPn\_5\] and \[tab:FPn\_50\], however, such a thing does not occur. It is because the gluino mass is quite heavy in our case. Moreover, the initial value of stop squared masses at the GUT scale, $m_0^2$ can be quite large without a serious fine-tuning only if $m_{h_u}^2(t)$ is well focused near the stop mass scale.
Since all the sfermions are very heavy in this model, the pair annihilation cross section of the lightest neutralino is quite suppressed, and so it would overclose the Universe. However, this problem could be resolved, e.g. if a sufficient amount of entropy is somehow produced after thermal freeze-out of the neutralino [@RHnuFP]. In this paper, we do not discuss this issue in detail. Instead, let us discuss phenomenological constraints coming from flavor violations in more detail.
In the squark mass matrix, the diagonal components, $(1,1)$ and $(2,2)$, are almost degenerate with a squared mass of $(15$–$20 {\,\textrm{TeV}})^2$, e.g., by the U(1)$^\prime$ SUSY breaking mediation, while the $(3,3)$ is filled dominantly by the gravity mediation effect, which is quite suppressed compared to the $(1,1)$ and $(2,2)$ components. In the other components, nonzero values can be generated by a U(1)$^\prime$ breaking effect. (We do not specify a U(1)$^\prime$ breaking mechanism here.) After U(1)$^\prime$ breaking and diagonalization in the quark sector, nonzero $(1,2)$, $(2,1)$, and $(i,3)$, $(3,i)$ components can also be induced.
The $(1,2)$ and $(2,1)$ components affect, e.g., $K$-$\bar K$ mixing. The amplitude of $K$-$\bar K$ mixing by the squark mixing is roughly estimated as [@book2; @FV] [$$\begin{split} \label{KKbar}
{\cal M}_{K\bar{K}}\approx
\frac{4\alpha_3^2}{\widetilde m_q^2}
\left(\frac{\Delta\widetilde m_q^2}{\widetilde m_q^2}\right)^2 ,
\end{split}$$]{} where $\widetilde m_q^2\approx (20 {\,\textrm{TeV}})^2$, and $\Delta\widetilde m_q^2$ denotes the off-diagonal component of the squark mass matrix. Note that RG runnings of the heavy masses for the first two generations of sfermions are negligible [@Natural/2-loop; @Natural/splitZprime], and so their low energy values are almost the same as those at the GUT scale. Since the SM still explains the observed data well, [Eq. (\[KKbar\])]{} should be smaller than the SM prediction, ${\cal M}_{K\bar{K}}^{\rm SM}\approx\alpha_2^2\sin^2\theta_c\cos^2\theta_c
(m_c^2/M_W^4)$, where $\theta_c$ stands for the Cabibbo mixing angle. The condition ${\cal M}_{K\bar{K}}\ll{\cal M}_{K\bar{K}}^{\rm SM}$ yields [$$\begin{split}
\left(\frac{\Delta \widetilde m_q^2}{\widetilde m_q^2}\right)\ll 1.6\times 10^{-1}\times
\left(\frac{\widetilde m_q}{20 {\,\textrm{TeV}}}\right) .
\end{split}$$]{} If the mixing among the d-type quarks is given fully by the CKM (or a similar order mixing matrix) and the elements induced by gravity mediation are of order TeV$^2$, this constraint can be satisfied.[^6] Unlike the quark sector, the lepton sector requires large mixing to explain the observed neutrino oscillations. Thus, although $(1,1)$ and $(2,2)$ components of the slepton mass matrices acquire very large squared masses \[$\approx (15$–$20 {\,\textrm{TeV}})^2$\] from the U(1)$^\prime$ mediation effect, other components can also receive large squared masses after diagonalization of the fermion mass matrices. Nonzero off-diagonal components in the slepton matrix can induce lepton flavor violations (LFV), which is absent in the SM. The branching ratio for $\mu^-\to e^-\gamma$ by such a slepton mixing is estimated as [@FV] [$$\begin{split}
&\frac{{\rm BR}(\mu^-\to e^-\gamma)}{{\rm BR}(\mu^-\to e^-\nu_\mu\bar{\nu}_e)}
=\frac{12\pi\alpha^3}{G_F^2\widetilde{m}^4_l}
\left\{\left|I_3(x)\left(\delta^l_{21}\right)_{LL}
+\frac{M_{\tilde\gamma}}{m_\mu}I_1(x)\left(\delta^l_{21}\right)_{LR}\right|^2
+L\leftrightarrow R\right\}
\\
&~~ \approx 6.7\times 10^{-13}\times\left[\frac{(20 {\,\textrm{TeV}})^4}{\widetilde{m}^4_l}\right]
\left\{\left|\frac{1}{12}\left(\delta^l_{21}\right)_{LL}
+\frac{M_{\tilde\gamma}}{2m_\mu}\left(\delta^l_{21}\right)_{LR}\right|^2
+L\leftrightarrow R\right\} ,
\end{split}$$]{} where the functions of $x$, $I_{3}(x)$ and $I_1(x)$ approach to $1/12$ and $1/2$, respectively, for $x\equiv M_{\tilde\gamma}^2/\widetilde{m}_e^2\ll 1$. $\widetilde{m}_l$ is the mass of the first or second generation of SU(2)$_L$ doublet (i.e., LH) slepton. $\left(\delta^l_{21}\right)_{LR}$ is associated with the $A$-term vertex proportional to a very small Yukawa coupling. It is at most of order $m_\mu/\widetilde{m}_e$, which suppresses the second term, because the photino mass $M_{\tilde\gamma}$ would be smaller than $1 {\,\textrm{TeV}}$ in our case. This process is possible through, e.g., the $\tilde{\nu}_{1,2}$-chargino and $\tilde{e}_{1,2}$-neutralino loops. Even if the slepton mixing $\left(\delta^l_{21}\right)_{LL}$ is of order unity, sleptons of 20 TeV are heavy enough to meet the current bound, ${\rm BR}(\mu^-\to e^-\gamma) < 5.7\times 10^{-13}$ [@muegamma].
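For orientation, the estimate above can be evaluated in the extreme case of maximal LL mixing (our arithmetic; the LR and RR contributions are neglected here, as argued in the text):

```python
# Evaluate the mu -> e gamma estimate for (delta^l_21)_LL = 1, m~_l = 20 TeV,
# and negligible LR/RR terms.
BR = 6.7e-13 * ((20e3/20e3)**4) * (1.0/12.0)**2
print(f"BR(mu -> e gamma) ~ {BR:.1e}   (current bound: 5.7e-13)")  # ~ 4.7e-15
```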
Similarly, such heavy slepton masses suppress also $\tau^-\to e^-\gamma$ \[${\rm BR}(\tau^-\to e^-\gamma) < 3.3\times 10^{-8}$ [@PDG]\] and $\tau^-\to \mu^-\gamma$ \[${\rm BR}(\tau^-\to \mu^-\gamma) < 4.4\times 10^{-8}$\], which are actually much less stringent, because the heavy first two generations of sleptons are still involved in those loops. Even if the first two generations of sleptons are quite heavy, however, $\tau$ can still decay with a sizable rate through the $\tilde{\nu}_{3L}$-chargino and $\tilde{e}_{3L}$-neutralino loops without a slepton mixing insertion, provided that the $\tau$–$e$ or $\tau$–$\mu$ mixing in the fermion sector is large [@Hisano]. So it is desirable to assume that the PMNS matrix comes dominantly from the neutrino sector [@Natural/splitZprime], when only the first two generations of sleptons are quite heavy. Then additional large off-diagonal components of the sneutrino mass matrix, which are induced after diagonalization of the neutrino mass matrix, can suppress the unwanted $\tau^-\to e^-\gamma$ and $\tau^-\to \mu^-\gamma$.
Now we propose a model, in which the PMNS matrix results from mixing of the neutrino sector. Let us introduce extra singlet fields. Their charge assignments under U(1)$^\prime$ and U(1)$_R$ are listed in Table \[tab:GC\].
\begin{table}[t]
\begin{center}
\begin{tabular}{c|ccc|ccc}
Superfields & $l_{1,2}$ & $e^c_{1,2}$ & $l_3,e^c_3,\nu^c_{1,2,3}$ & $S_{1,2}$ & $S_{1,2}^c$ & $Z_{1,2}$ \\
\hline
U(1)$^\prime$ & $\pm 2$ & $\mp 2$ & $0$ & $\mp 2$ & $\pm 1$ & $\pm 1$ \\
U(1)$_{R}$ & $1$ & $1$ & $1$ & $1$ & $1$ & $0$
\end{tabular}
\end{center}
\caption{U(1)$^\prime$ and U(1)$_R$ charge assignments of the lepton and singlet superfields.}
\label{tab:GC}
\end{table}
One can see that the charged lepton mass matrix should have a diagonal form at the renormalizable level because of the U(1)$^\prime$ and U(1)$_R$ symmetries. Through the U(1)$^{\prime}$ mediated SUSY breaking mechanism, sfermions with nonzero U(1)$^\prime$ charges receive quite heavy soft masses. Hence, as discussed above, LFV can adequately be suppressed by U(1)$^\prime$. Note that the RH neutrinos, $\nu^c_{1,2,3}$ carry only the U(1)$_R$ \[and U(1)$_{B-L}$\] charge\[s\]. So they can freely be mixed. Note that the mixing in the RH (s)neutrino sector is almost irrelevant to LFV, while RH neutrinos’ mixing still contributes to the PMNS matrix.
The superpotential of the neutrino sector consistent with U(1)$^\prime\times$U(1)$_R$ is written as [$$\begin{split} \label{W_N}
W_N=& \sum_{i=1,2,3}\left[y_{\nu}^il_3h_u\nu^c_i +\frac12M^{ij}\nu_i^c\nu_j^c
+ \left(\lambda_1^{i} Z_2 S_1^c + \lambda_2^{i} Z_1 S_2^c\right)\nu^c_i\right]
\\
&+ \sum_{k=1,2}\left[y_S^k l_{k}h_uS_k + \lambda_Z^k Z_kS_kS_k^c\right]
+ M_SS_1S_2 + M_{S^c}S_1^cS_2^c ,
\end{split}$$]{} where $M^{ij}$ ($\{M_S,M_{S^c}\}$) denotes dimensionful parameters of order $10^{14} {\,\textrm{GeV}}$ or smaller ($10^{16} {\,\textrm{GeV}}$ or smaller), while $y$s and $\lambda$s are dimensionless ones. \[$M^{ij}$ breaks U(1)$_{B-L}$.\] In terms of [Eq. (\[W\_N\])]{}, $N^c$ in [Eq. (\[RHnu\])]{} can be identified as $(y_{\nu}^1\nu^c_1+y_{\nu}^2\nu^c_2+y_{\nu}^3\nu^c_3)/\sqrt{\sum_{i}(y_{\nu}^i)^2}$, and $y_N$ as $\sqrt{(y_{\nu}^1)^2+(y_{\nu}^2)^2+(y_{\nu}^3)^2}$. The other two components orthogonal to $N^c$ have no direct couplings to the MSSM lepton doublets. They obtain such couplings via the mediation of $\{S_{1,2},S_{1,2}^c\}$ after $\widetilde{Z}_{1,2}$ get GUT scale VEVs, breaking U(1)$^\prime$, and $\{S_{1,2},S_{1,2}^c\}$ are integrated out. We assume that the resulting effective Dirac Yukawa couplings are somewhat smaller than $y_N$. The sizable (effective) Dirac Yukawa couplings could radiatively generate the mixing soft mass squareds such as $(\Delta\widetilde{m}_{31})_{LL}$, $(\Delta\widetilde{m}_{32})_{LL}$, etc. for sneutrinos via the RH neutrino-Higgsino loops above the seesaw scale.[^7] As discussed above, however, such mixing terms cannot give rise to sizable LFV, because the heavy soft masses for sleptons should always be involved there. After integrating out the RH neutrinos $\nu^c_{1,2,3}$, the general results of the type-I seesaw mechanism can eventually be reproduced. Unlike the charged lepton sector, the neutrinos can thus fully be mixed below the seesaw scale, yielding the desired form of the PMNS matrix in principle. In a similar way, one can achieve the CKM mixing of the quarks by introducing extra vectorlike quarks at the GUT scale, which play the role of the mediators $\{S_{1,2},S_{1,2}^c\}$. However, the absence of the extra vectorlike charged leptons guarantees the almost diagonal mass matrix for the SM charged leptons even at low energies.
Conclusion {#sec:conclusion}
==========
According to the recent analysis based on three-loop calculations, the radiative correction by $5 {\,\textrm{TeV}}$ stop masses can support the $126 {\,\textrm{GeV}}$ Higgs mass without a large stop mixing effect. The $5 {\,\textrm{TeV}}$ stop decoupling scale is much higher than the FP scale determined in the original FP scenario. As a result, $m_{h_u}^2$ evaluated at low energy becomes sensitive to $m_0^2$ chosen at the GUT scale, and so to the low energy value of stop mass, unlike the original FP scenario. Moreover, the present high gluino mass bound ($\gtrsim 1.4 {\,\textrm{TeV}}$) results in a too large negative $m_{h_u}^2$ at low energy, which gives rise to a serious fine-tuning problem in the MSSM Higgs sector.
In this paper, we have discussed how the location of the FP changes under various variations of parameters. In particular, we noted that the FP can move to the desirable location under increases of [*both*]{} the Yukawa coupling of a superheavy RH neutrino to the Higgs, and the masses of the first and second generations of sfermions. On the other hand, the “$\lambda$ coupling” in the NMSSM should be more suppressed than $0.1$ to be consistent with the FP scenario, if it is introduced.
We have shown that an order one Dirac Yukawa coupling ($\sim 1.0$) of the superheavy RH neutrino ($\sim 10^{14} {\,\textrm{GeV}}$) at the seesaw scale can move the FP to the desired stop decoupling scale, and two-loop gauge interactions by the hierarchically heavy masses ($15-20 {\,\textrm{TeV}}$) of the first two generations of sfermions can effectively compensate the heavy gluino effects in the RG evolution of $m_{h_u}^2$. Here, we set the U(1)$_R$ breaking soft parameters, $m_{1/2}=A_0=1 {\,\textrm{TeV}}$, at the GUT scale. The gaugino mass unification is maintained in this setup. Such heavy masses of the RH neutrino and the first two generations of sfermions can also provide a natural explanation of the small active neutrino mass via the seesaw mechanism, and suppress the flavor violating processes in SUSY models. At the new location of the FP, $m_{h_u}^2$ can be insensitive to $m_0^2$ or trial heavy stop squared masses, remarkably improving the naturalness of the small EW scale. Under this setup, the $126 {\,\textrm{GeV}}$ Higgs mass can be naturally explained by an accidentally selected $m_0^2$ of about $(8 {\,\textrm{TeV}})^2$, which gives $5 {\,\textrm{TeV}}$ stop mass at low energy.
B.K. thanks Department of Physics and Astronomy in Rutgers University for the hospitality during his visit to Rutgers University. B.K. is supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Grant No. 2013R1A1A2006904, and also in part by Korea Institute for Advanced Study (KIAS) grant funded by the Korean government. C.S.S. is supported in part by DOE Awards No. DOE-SC0010008, No. DOE-ARRA-SC0003883, and No. DOE-DE-SC0007897.
Appendix {#sec:Appendix}
========
In the Appendix, we present the full RG equations utilized in our analyses and some semianalytic solutions on which the discussions in the main text are based. The notations here follow those of the main text of this paper.
The full RG equations
---------------------
The RG equations for the gauge couplings, $g_{3,2,1}$ and gaugino masses, $M_{3,2,1}$ are integrable. The RG solutions for them are given by [@book] $$\begin{aligned}
\label{gaugeSol}
g_i^2(t)=\frac{g_0^2}{1-\frac{g_0^2}{8\pi^2}b_i(t-t_0)}~,
~~~~ {\rm and}~~~~~ \frac{M_i(t)}{g_i^2(t)}=\frac{m_{1/2}}{g_0^2} ~, \end{aligned}$$ where $b_i$ ($i=3,2,2$) denotes the beta function coefficients for the case of the MSSM field contents, $(b_3,b_2,b_1)=(-3,1,\frac{33}{5})$. $t$ parametrizes the renormalization scale $Q$, $t-t_0={\rm log}\frac{Q}{M_{G}}$. The relevant superpotential in this paper is [$$\begin{split} \label{apdxSuperPot}
W\supset y_tq_3h_uu^c_3 + y_bq_3h_dd^c_3 + y_\tau l_3h_de_3^c
+ y_N l_3h_uN^c + \frac{1}{2}M_NN^cN^c + \mu h_uh_d,
\end{split}$$]{} where $q_3$ ($l_3$) and $\{u^c_3,d^c_3\}$ ($e^c_3$) stand for the third generations of quark (lepton) doublet and singlets. The Majorana mass of the RH neutrino $N^c$ is assumed to be $M_N\approx 2\times 10^{14} {\,\textrm{GeV}}$. Thus, below the energy scale of $M_N$, the RH neutrino $N^c$ is decoupled from dynamics. The one-loop RG equations for the above renormalizable couplings are given by $$\begin{aligned}
&&8\pi^2\frac{dy_t^2}{dt}=y_t^2\left[6y_t^2+y_b^2+y_N^2-\frac{16}{3}g_3^2-3g_2^2-\frac{13}{15}g_1^2\right] ,\\
&&8\pi^2\frac{dy_b^2}{dt}=y_b^2\left[y_t^2+6y_b^2+y_\tau^2-\frac{16}{3}g_3^2-3g_2^2-\frac{7}{15}g_1^2\right] ,\\
&&8\pi^2\frac{dy_\tau^2}{dt}=y_\tau^2\left[3y_b^2+4y_\tau^2+y_N^2-3g_2^2-\frac95 g_1^2\right] ,\\
&&8\pi^2\frac{dy_N^2}{dt}=y_N^2\left[3y_t^2+y_\tau^2+4y_N^2-3g_2^2-\frac35 g_1^2\right] ,\\
&&8\pi^2\frac{d\mu^2}{dt}=\mu^2\left[3y_t^2+3y_b^2+y_\tau^2+y_N^2-3g_2^2-\frac35 g_1^2\right] ,\end{aligned}$$ and the RG equations of the $A$-term coefficients corresponding to the Yukawa couplings of [Eq. (\[apdxSuperPot\])]{} are $$\begin{aligned}
8\pi^2\frac{dA_t}{dt}&=&6y_t^2A_t+y_b^2A_b+y_N^2A_N-\frac{16}{3}g_3^2M_3-3g_2^2M_2-\frac{13}{15} g_1^2M_1 ,\\
8\pi^2\frac{dA_b}{dt}&=&y_t^2A_t+6y_b^2A_b+y_\tau^2A_\tau-\frac{16}{3}g_3^2M_3-3g_2^2M_2-\frac{7}{15} g_1^2M_1 ,\\
8\pi^2\frac{dA_\tau}{dt}&=&3y_b^2A_b+4y_\tau^2A_\tau+y_N^2A_N-3g_2^2M_2-\frac95 g_1^2M_1 ,\\
8\pi^2\frac{dA_N}{dt}&=&3y_t^2A_t+y_\tau^2A_\tau+4y_N^2A_N-3g_2^2M_2-\frac35 g_1^2M_1 .\end{aligned}$$ Below the scale of $M_N$, the RG evolutions of $y_N$ and $A_N$ become frozen, and they should be decoupled from the above equations.
The RG evolutions for the soft squared masses are governed by the following equations: $$\begin{aligned}
16\pi^2\frac{dm_{h_u}^2}{dt}&=&6y_t^2\left(X_t+A_t^2\right)+2y_N^2\left(X_N+A_N^2\right)-6g_2^2M_2^2-\frac65 g_1^2M_1^2 +\frac{\widetilde{m}^2}{4\pi^2}\left[6g_2^4+\frac65 g_1^4\right] ,\\
16\pi^2\frac{dm_{u^c_3}^2}{dt}&=&4y_t^2\left(X_t+A_t^2\right)-\frac{32}{3}g_3^2M_3^2-\frac{32}{15} g_1^2M_1^2 +\frac{\widetilde{m}^2}{4\pi^2}\left[\frac{32}{3}g_3^4+\frac{32}{15} g_1^4\right] ,\\
16\pi^2\frac{dm_{q_3}^2}{dt}&=&2y_t^2\left(X_t+A_t^2\right)+2y_b^2\left(X_b+A_b^2\right)-\frac{32}{3}g_3^2M_3^2-6g_2^2M_2^2-\frac{2}{15} g_1^2M_1^2 \\
&&+\frac{\widetilde{m}^2}{4\pi^2}\left[\frac{32}{3}g_3^4+6g_2^4+\frac{2}{15} g_1^4\right] ,\\
16\pi^2\frac{dm_{h_d}^2}{dt}&=&6y_b^2\left(X_b+A_b^2\right)+2y_\tau^2\left(X_\tau+A_\tau^2\right)-6g_2^2M_2^2-\frac65 g_1^2M_1^2 +\frac{\widetilde{m}^2}{4\pi^2}\left[6g_2^4+\frac65 g_1^4\right] ,\\
16\pi^2\frac{dm_{d^c_3}^2}{dt}&=&4y_b^2\left(X_b+A_b^2\right)-\frac{32}{3}g_3^2M_3^2-\frac{8}{15} g_1^2M_1^2 +\frac{\widetilde{m}^2}{4\pi^2}\left[\frac{32}{3}g_3^4+\frac{8}{15} g_1^4\right] ,\\
16\pi^2\frac{dm_{e^c_3}^2}{dt}&=&4y_\tau^2\left(X_\tau+A_\tau^2\right)-\frac{24}{5} g_1^2M_1^2 +\frac{\widetilde{m}^2}{4\pi^2}\,\frac{24}{5} g_1^4 ,\\
16\pi^2\frac{dm_{l_3}^2}{dt}&=&2y_\tau^2\left(X_\tau+A_\tau^2\right)+2y_N^2\left(X_N+A_N^2\right)-6g_2^2M_2^2-\frac65 g_1^2M_1^2 +\frac{\widetilde{m}^2}{4\pi^2}\left[6g_2^4+\frac65 g_1^4\right] ,\\
16\pi^2\frac{dm_{N^c}^2}{dt}&=&4y_N^2\left(X_N+A_N^2\right) ,\end{aligned}$$ where $X_t$, $X_b$, $X_\tau$, and $X_N$ are defined as $X_t\equiv m_{h_u}^2+m_{u^c_3}^2+m_{q_3}^2$, $X_b\equiv m_{h_d}^2+m_{d^c_3}^2+m_{q_3}^2$, $X_\tau\equiv m_{h_d}^2+m_{e^c_3}^2+m_{l_3}^2$, and $X_N\equiv m_{h_u}^2+m_{N^c}^2+m_{l_3}^2$, respectively. The $\widetilde{m}^2$ terms denote the contributions coming from the two-loop gauge interactions by the first and second generations of sfermions, which are assumed to be hierarchically heavier than the third ones. The RG running of $\widetilde{m}^2$ is negligible [@Natural/2-loop; @Natural/splitZprime], and so its low energy value is almost the same as that at the GUT scale. Here we suppose a universal soft mass for the first two generations of sfermions, which eliminates the contributions by the “$D$-term” potential from the above equations. Since these effects are comparable to the one-loop gaugino mass terms, we take them into account. $m_{N^c}^2$ and $X_N$ as well as $y_N$ and $A_N$ are dropped out from the above equations below $Q= M_N$.
Semianalytic RG solutions
-------------------------
Let us present our semianalytic solutions to the RG equations. When ${\rm tan}\beta$ is small enough and the RH neutrino is decoupled, the RG equations for the soft parameters $m_{h_u}^2$, $m_{u^c_3}^2$, $m_{q_3}^2$, and $A_t$ simplify approximately to $$\begin{aligned}
16\pi^2\frac{dm_{h_u}^2}{dt}&=&6y_t^2\left(X_t+A_t^2\right)-6g_2^2M_2^2-\frac65 g_1^2M_1^2
+\frac{\widetilde{m}^2}{4\pi^2}\left[6g_2^4+\frac65g_1^4\right] ,
\label{apdxRG1} \\
16\pi^2\frac{dm_{u^c_3}^2}{dt}&=&4y_t^2\left(X_t+A_t^2\right)-\frac{32}{3}g_3^2M_3^2-\frac{32}{15} g_1^2M_1^2
+\frac{\widetilde{m}^2}{4\pi^2}\left[\frac{32}{3}g_3^4+\frac{32}{15} g_1^4 \right] ,
\label{apdxRG2} \\
16\pi^2\frac{dm_{q_3}^2}{dt}&=&2y_t^2\left(X_t+A_t^2\right)-\frac{32}{3}g_3^2M_3^2-6g_2^2M_2^2-\frac{2}{15} g_1^2M_1^2
+\frac{\widetilde{m}^2}{4\pi^2}\left[\frac{32}{3}g_3^4+6g_2^4+\frac{2}{15} g_1^4\right] ,
\qquad~~ \label{apdxRG3} \\
8\pi^2\frac{dA_t}{dt}&=&6y_t^2A_t-\frac{16}{3}g_3^2M_3-3g_2^2M_2-\frac{13}{15} g_1^2M_1
\equiv 6y_t^2A_t - G_A .
\label{apdxRG4}\end{aligned}$$ Summation of Eqs. (\[apdxRG1\]), (\[apdxRG2\]), and (\[apdxRG3\]) yields the RG equation for $X_t$: [$$\begin{split} \label{apdxX}
\frac{dX_t}{dt} = \frac{3y_t^2}{4\pi^2}\left(X_t + A_t^2\right)
-\frac{1}{4\pi^2} G_X^2 .
\end{split}$$]{} In Eqs. (\[apdxRG4\]) and (\[apdxX\]), $G_A$ and $G_X^2$ are defined as $$\begin{aligned}
&&\qquad\qquad\quad~~ G_A(t)\equiv
\left(\frac{m_{1/2}}{g_0^2}\right)\left[\frac{16}{3}g_3^4+3g_2^4+\frac{13}{15}g_1^4\right] ,
\\
&&G_X^2(t)\equiv
\left(\frac{m_{1/2}}{g_0^2}\right)^2\left[\frac{16}{3}g_3^6+3g_2^6+\frac{13}{15}g_1^6\right]-\frac{\widetilde{m}^2}{4\pi^2}\left[\frac{16}{3}g_3^4+3g_2^4+\frac{13}{15}g_1^4\right] ,\end{aligned}$$ respectively, assuming $\frac{M_i(t)}{g_i^2(t)}=\frac{m_{1/2}}{g_0^2}$ ($i=3,2,1$).
The solutions of $A_t$ and $X_t$ are given by $$\begin{aligned}
&&\qquad~~ A_t(t)=e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}
\left[A_0-\frac{1}{8\pi^2}\int^t_{t_0}dt^\prime G_A
e^{\frac{-3}{4\pi^2}\int^{t'}_{t_0}dt^{\prime\prime} y_t^2}
\right] ,
\label{solA}
\\
&&X_t(t)=e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}
\left[X_0+\int^t_{t_0}dt^\prime
\left(\frac{3}{4\pi^2}y_t^2A_t^2-\frac{1}{4\pi^2}G_X^2\right)
e^{\frac{-3}{4\pi^2}\int^{t'}_{t_0}dt^{\prime\prime}y_t^2}
\right] ,
\label{solX}\end{aligned}$$ where $A_0$ and $X_0$ denote the GUT scale values of $A_t$ and $X_t$, $A_0\equiv A_t(t=t_0)$, and $X_0\equiv X_t(t=t_0)=m_{h_u0}^2+m_{u^c_30}^2+m_{q_30}^2$.
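In practice, Eqs. (\[solA\]) and (\[solX\]) can be evaluated by simple quadrature once $y_t(t)$ is known. The Python/SciPy sketch below (again with assumed, illustrative GUT-scale inputs $A_0$ and $X_0$) obtains $y_t(t)$ from its one-loop equation in the small-${\rm tan}\beta$ limit, builds the integrating factor $e^{\frac{3}{4\pi^2}\int y_t^2}$, and then evaluates $A_t(t)$ and $X_t(t)$ directly from the closed-form expressions; the result can be cross-checked against a direct numerical integration of Eqs. (\[apdxRG1\])–(\[apdxRG4\]).

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

PI2_4, PI2_8 = 4*np.pi**2, 8*np.pi**2
b = {3: -3.0, 2: 1.0, 1: 33.0/5.0}
g0sq, m_half, mtilde2 = 4*np.pi/25.0, 1.0e3, (15.0e3)**2     # illustrative inputs

gsq = lambda i, t: g0sq/(1.0 - g0sq*b[i]*t/PI2_8)            # Eq. (gaugeSol), t_0 = 0

# one-loop top Yukawa in the small-tan(beta) limit (GUT value is an assumption)
dyt = lambda t, y: y*(6*y**2 - (16/3)*gsq(3, t) - 3*gsq(2, t) - (13/15)*gsq(1, t))/PI2_8
t = np.linspace(0.0, np.log(5.0e3/2.0e16), 4000)
yt = solve_ivp(dyt, (t[0], t[-1]), [0.55], t_eval=t, rtol=1e-9).y[0]

E = np.exp(3.0*cumulative_trapezoid(yt**2, t, initial=0.0)/PI2_4)   # integrating factor

G_A  = (m_half/g0sq)*((16/3)*gsq(3, t)**2 + 3*gsq(2, t)**2 + (13/15)*gsq(1, t)**2)
G_X2 = ((m_half/g0sq)**2*((16/3)*gsq(3, t)**3 + 3*gsq(2, t)**3 + (13/15)*gsq(1, t)**3)
        - mtilde2/PI2_4*((16/3)*gsq(3, t)**2 + 3*gsq(2, t)**2 + (13/15)*gsq(1, t)**2))

A0, X0 = 7.0e3, 3*(7.0e3)**2                                 # GUT-scale values (placeholders)
A_t = E*(A0 - cumulative_trapezoid(G_A/E, t, initial=0.0)/PI2_8)                       # Eq. (solA)
X_t = E*(X0 + cumulative_trapezoid((3*yt**2*A_t**2 - G_X2)/(PI2_4*E), t, initial=0.0)) # Eq. (solX)
print(f"A_t(5 TeV) = {A_t[-1]:.1f} GeV,  X_t(5 TeV) = {X_t[-1]:.3e} GeV^2")
```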
With Eqs. (\[apdxX\]), (\[solA\]), and (\[solX\]), one can solve Eqs. (\[apdxRG1\]), (\[apdxRG2\]), and (\[apdxRG3\]): $$\begin{aligned}
&&m_{h_u}^2(t)=m_{h_u0}^2+\frac{X_0}{2}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right]
+\frac12 F(t)
\nonumber \\
&&\qquad -\left(\frac{m_{1/2}}{g_0^2}\right)^2\left[\frac32\left\{g_2^4(t)-g_0^4\right\}
+\frac{1}{22}\left\{g_1^4(t)-g_0^4\right\}\right]
\label{apdxSol1} \\
&&\qquad +\left(\frac{\widetilde{m}^2}{4\pi^2}\right)\left[3\left\{g_2^2(t)-g_0^2\right\}
+\frac{1}{11}\left\{g_1^2(t)-g_0^2\right\}\right] ,
\nonumber \\
&&m_{u^c_3}^2(t)=m_{u^c_30}^2+\frac{X_0}{3}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right]
+\frac13 F(t)
\nonumber \\
&&\qquad +\left(\frac{m_{1/2}}{g_0^2}\right)^2\left[\frac89\left\{g_3^4(t)-g_0^4\right\}
-\frac{8}{99}\left\{g_1^4(t)-g_0^4\right\}\right]
\label{apdxSol2} \\
&&\qquad - \left(\frac{\widetilde{m}^2}{4\pi^2}\right)\left[\frac{16}{9}\left\{g_3^2(t)-g_0^2\right\}
-\frac{16}{99}\left\{g_1^2(t)-g_0^2\right\}\right] ,
\nonumber \\
&&m_{q_3}^2(t)=m_{q_30}^2+\frac{X_0}{6}\left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2}-1\right]
+\frac16 F(t)
\nonumber \\
&&\qquad +\left(\frac{m_{1/2}}{g_0^2}\right)^2\left[\frac89\left\{g_3^4(t)-g_0^4\right\}
-\frac32\left\{g_2^4(t)-g_0^4\right\}-\frac{1}{198}\left\{g_1^4(t)-g_0^4\right\}\right]
\label{apdxSol3} \\
&&\qquad -\left(\frac{\widetilde{m}^2}{4\pi^2}\right)\left[\frac{16}{9}\left\{g_3^2(t)-g_0^2\right\}
-3\left\{g_2^2(t)-g_0^2\right\}-\frac{1}{99}\left\{g_1^2(t)-g_0^2\right\}\right] ,
\nonumber \end{aligned}$$ where $F(t)$ is defined as [$$\begin{split} \label{apdxF}
&\qquad F(t)\equiv e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2} \int^t_{t_0}dt^\prime ~\frac{3}{4\pi^2}y_t^2A_t^2 ~e^{\frac{-3}{4\pi^2}\int^{t'}_{t_0}dt^{\prime\prime}y_t^2}
\\
&-\frac{1}{4\pi^2} \left[e^{\frac{3}{4\pi^2}\int^t_{t_0}dt^\prime y_t^2} \int^t_{t_0}dt^\prime ~G_X^2 ~e^{\frac{-3}{4\pi^2}\int^{t'}_{t_0}dt^{\prime\prime}y_t^2}
-\int^t_{t_0}dt^\prime~G_X^2 \right] .
\end{split}$$]{} Note that $F(t)$ in [Eq. (\[apdxF\])]{} is independent of the initial values for the squared masses, $m_{h_u0}^2$, $m_{u^c_30}^2$, and $m_{q_30}^2$. Using Eqs. (\[gaugeSol\]), one can obtain the following useful results: $$\begin{aligned}
&& \int^t_{t_0}dt^\prime g_i^2M_i^2=\frac{4\pi^2}{b_i}\left(\frac{m_{1/2}}{g_0^2}\right)^2\left\{g_i^4(t)-g_0^4\right\} ,
\\
&& \int^t_{t_0}dt^\prime g_i^2M_i=\frac{8\pi^2}{b_i}\left(\frac{m_{1/2}}{g_0^2}\right)\left\{g_i^2(t)-g_0^2\right\} ,
\\
&& \int^t_{t_0}dt^\prime g_i^4=\frac{8\pi^2}{b_i}\left\{g_i^2(t)-g_0^2\right\} . \end{aligned}$$
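These identities are straightforward to confirm symbolically. The short SymPy sketch below checks each of them (for a generic beta-function coefficient $b_i$) by differentiating the claimed right-hand side with respect to $t$ and comparing it with the integrand, and by verifying that it vanishes at $t=t_0$.

```python
import sympy as sp

t, t0, g0, m12 = sp.symbols('t t_0 g_0 m_{1/2}', positive=True)
bi = sp.Symbol('b_i', nonzero=True)

gsq = g0**2/(1 - g0**2*bi*(t - t0)/(8*sp.pi**2))   # g_i^2(t) from Eq. (gaugeSol)
Mi  = (m12/g0**2)*gsq                              # gaugino mass M_i(t)

checks = [
    (gsq*Mi**2, (4*sp.pi**2/bi)*(m12/g0**2)**2*(gsq**2 - g0**4)),
    (gsq*Mi,    (8*sp.pi**2/bi)*(m12/g0**2)*(gsq - g0**2)),
    (gsq**2,    (8*sp.pi**2/bi)*(gsq - g0**2)),
]
for integrand, claimed in checks:
    dt_ok = sp.simplify(sp.diff(claimed, t) - integrand) == 0   # derivative matches integrand
    bc_ok = sp.simplify(claimed.subs(t, t0)) == 0               # vanishes at t = t_0
    print(dt_ok and bc_ok)                                      # expect True for each identity
```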
[99]{}
For a review see, for instance, M. Drees, R. Godbole and P. Roy, “Theory and phenomenology of sparticles: An account of four-dimensional N=1 supersymmetry in high energy physics,” Hackensack, USA: World Scientific (2004) 555 p., and references therein.
J. L. Feng, P. Kant, S. Profumo and D. Sanford, Phys. Rev. Lett. [**111**]{} (2013) 131802 \[arXiv:1306.2318 \[hep-ph\]\].
G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Lett. B [**716**]{} (2012) 1 \[arXiv:1207.7214 \[hep-ex\]\]; S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B [**716**]{} (2012) 30 \[arXiv:1207.7235 \[hep-ex\]\]. For a review, see U. Ellwanger, C. Hugonie and A. M. Teixeira, Phys. Rept. [**496**]{} (2010) 1 \[arXiv:0910.1785 \[hep-ph\]\].
A. Delgado, C. Kolda, J. P. Olson and A. de la Puente, Phys. Rev. Lett. [**105**]{}, 091802 (2010) \[arXiv:1005.1282 \[hep-ph\]\]; G. G. Ross and K. Schmidt-Hoberg, Nucl. Phys. B [**862**]{}, 710 (2012) \[arXiv:1108.1284 \[hep-ph\]\].
B. Kyae and J. -C. Park, Phys. Rev. D [**86**]{} (2012) 031701 \[arXiv:1203.1656 \[hep-ph\]\]; B. Kyae and J. -C. Park, Phys. Rev. D [**87**]{} (2013) 075021 \[arXiv:1207.3126 \[hep-ph\]\]; B. Kyae and C. S. Shin, JHEP [**1306**]{} (2013) 102 \[arXiv:1303.6703 \[hep-ph\]\]; B. Kyae, Phys. Rev. D [**89**]{} (2014) 075016 \[arXiv:1401.1878 \[hep-ph\]\].
ATLAS collaboration, ATLAS-CONF-2013-024; S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Eur. Phys. J. C [**73**]{} (2013) 2677 \[arXiv:1308.1586 \[hep-ex\]\].
J. L. Feng, K. T. Matchev and T. Moroi, Phys. Rev. Lett. [**84**]{} (2000) 2322 \[hep-ph/9908309\].
J. L. Feng, K. T. Matchev and T. Moroi, Phys. Rev. D [**61**]{} (2000) 075005 \[hep-ph/9909334\].
See also K. L. Chan, U. Chattopadhyay and P. Nath, Phys. Rev. D [**58**]{}, 096004 (1998) \[hep-ph/9710473\].
ATLAS collaboration, ATLAS-CONF-2013-061.
H. Abe, T. Kobayashi and Y. Omura, Phys. Rev. D [**76**]{} (2007) 015002 \[hep-ph/0703044 \[HEP-PH\]\]; D. Horton and G. G. Ross, Nucl. Phys. B [**830**]{} (2010) 221 \[arXiv:0908.0857 \[hep-ph\]\]; J. E. Younkin and S. P. Martin, Phys. Rev. D [**85**]{} (2012) 055028 \[arXiv:1201.2989 \[hep-ph\]\]; H. Abe, J. Kawamura and H. Otsuka, PTEP [**2013**]{} (2013) 013B02 \[arXiv:1208.5328 \[hep-ph\]\]; I. Gogoladze, F. Nasir and Q. Shafi, Int. J. Mod. Phys. A [**28**]{} (2013) 1350046 \[arXiv:1212.2593 \[hep-ph\]\].
T. T. Yanagida and N. Yokozaki, Phys. Lett. B [**722**]{} (2013) 355 \[arXiv:1301.1137 \[hep-ph\]\]; T. T. Yanagida and N. Yokozaki, JHEP [**1311**]{} (2013) 020 \[arXiv:1308.0536 \[hep-ph\]\].
A. Delgado, M. Quiros and C. Wagner, arXiv:1402.1735 \[hep-ph\].
K. Kadota and K. A. Olive, Phys. Rev. D [**80**]{} (2009) 095015 \[arXiv:0909.3075 \[hep-ph\]\].
M. Asano, T. Moroi, R. Sato and T. T. Yanagida, Phys. Lett. B [**708**]{} (2012) 107 \[arXiv:1111.3506 \[hep-ph\]\].
N. Arkani-Hamed and H. Murayama, Phys. Rev. D [**56**]{} (1997) 6733 \[hep-ph/9703259\].
J. -H. Huh and B. Kyae, Phys. Lett. B [**726**]{} (2013) 729 \[arXiv:1306.1321 \[hep-ph\]\].
A. G. Cohen, D. B. Kaplan and A. E. Nelson, Phys. Lett. B [**388**]{} (1996) 588 \[hep-ph/9607394\].
J. L. Feng and D. Sanford, Phys. Rev. D [**86**]{} (2012) 055015 \[arXiv:1205.2372 \[hep-ph\]\]. See also S. Zheng, arXiv:1312.4105 \[hep-ph\].
S. R. Coleman and E. J. Weinberg, Phys. Rev. D [**7**]{} (1973) 1888.
M. S. Carena, M. Quiros and C. E. M. Wagner, Nucl. Phys. B [**461**]{} (1996) 407 \[hep-ph/9508343\].
J. R. Ellis, K. Enqvist, D. V. Nanopoulos and F. Zwirner, Mod. Phys. Lett. A [**1**]{} (1986) 57; R. Barbieri and G. F. Giudice, Nucl. Phys. B [**306**]{} (1988) 63.
S. Akula, M. Liu, P. Nath and G. Peim, Phys. Lett. B [**709**]{}, 192 (2012) \[arXiv:1111.4589 \[hep-ph\]\]; S. Akula, B. Altunkaynak, D. Feldman, P. Nath and G. Peim, Phys. Rev. D [**85**]{}, 075001 (2012) \[arXiv:1112.3645 \[hep-ph\]\].
M. Masip, R. Munoz-Tapia and A. Pomarol, Phys. Rev. D [**57**]{} (1998) R5340 \[hep-ph/9801437\]; R. Barbieri, L. J. Hall, A. Y. Papaioannou, D. Pappadopulo and V. S. Rychkov, JHEP [**0803**]{} (2008) 005 \[arXiv:0712.2903 \[hep-ph\]\]; L. J. Hall, D. Pinner and J. T. Ruderman, JHEP [**1204**]{} (2012) 131 \[arXiv:1112.2703 \[hep-ph\]\]; E. Hardy, J. March-Russell and J. Unwin, JHEP [**1210**]{} (2012) 072 \[arXiv:1207.1435 \[hep-ph\]\].
P. Minkowski, Phys. Lett. B [**67**]{} (1977) 421; M. Gell-Mann, P. Ramond and R. Slansky, Conf. Proc. C [**790927**]{} (1979) 315 \[arXiv:1306.4669 \[hep-th\]\]; T. Yanagida, Conf. Proc. C [**7902131**]{} (1979) 95; R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. [**44**]{} (1980) 912.
I. Jack, D. R. T. Jones, S. P. Martin, M. T. Vaughn and Y. Yamada, Phys. Rev. D [**50**]{} (1994) 5481 \[hep-ph/9407291\].
B. Kyae and C. S. Shin, work in progress.
P. Langacker, G. Paz, L. -T. Wang and I. Yavin, Phys. Rev. Lett. [**100**]{} (2008) 041802 \[arXiv:0710.1632 \[hep-ph\]\]; P. Langacker, G. Paz, L. -T. Wang and I. Yavin, Phys. Rev. D [**77**]{} (2008) 085033 \[arXiv:0801.3693 \[hep-ph\]\].
K. S. Jeong, J. E. Kim and M. -S. Seo, Phys. Rev. D [**84**]{} (2011) 075008 \[arXiv:1107.5613 \[hep-ph\]\].
J. E. Camargo-Molina, B. Garbrecht, B. O’Leary, W. Porod and F. Staub, arXiv:1405.7376 \[hep-ph\]; N. Blinov and D. E. Morrissey, JHEP [**1403**]{} (2014) 106 \[arXiv:1310.4174 \[hep-ph\]\]; D. Chowdhury, R. M. Godbole, K. A. Mohan and S. K. Vempati, JHEP [**1402**]{} (2014) 110 \[arXiv:1310.1932 \[hep-ph\]\]; J. E. Camargo-Molina, B. O’Leary, W. Porod and F. Staub, arXiv:1310.1260 \[hep-ph\].
R. Ding, T. Li, F. Staub and B. Zhu, arXiv:1312.5407 \[hep-ph\].
M. Ibe and T. T. Yanagida, Phys. Lett. B [**709**]{} (2012) 374 \[arXiv:1112.2462 \[hep-ph\]\]; J. L. Evans, M. Ibe, K. A. Olive and T. T. Yanagida, Eur. Phys. J. C [**73**]{} (2013) 2468 \[arXiv:1302.5346 \[hep-ph\]\].
F. Staub, Comput. Phys. Commun. 181 (2010) 1077-1086 \[arXiv:0909.2863\]; [*ibid*]{} 185 (2014) 1773-1790 \[arXiv:1309.7223\].
W. Porod, Comput. Phys. Commun. 153 (2003) 275-315 \[hep-ph/0301101\]; W. Porod, F.Staub, Comput. Phys. Commun. 183 (2012) 2458-2469 \[arXiv:1104.1573\].
S. Antusch, L. Calibbi, V. Maurer, M. Monaco and M. Spinrath, JHEP [**01**]{} (2013) 187 \[JHEP [**1301**]{} (2013) 187\] \[arXiv:1207.7236\].
See also K. Kowalska, L. Roszkowski, E. M. Sessolo and S. Trojanowski, arXiv:1402.1328 \[hep-ph\].
J. Terning, “Modern supersymmetry: Dynamics and duality,” (International series of monographs on physics. 132), Oxford University Press, USA (March 29, 2009).
F. Gabbiani, E. Gabrielli, A. Masiero and L. Silvestrini, Nucl. Phys. B [**477**]{} (1996) 321 \[hep-ph/9604387\]. For a recent discussion, see M. Arana-Catania, S. Heinemeyer and M. J. Herrero, Phys. Rev. D [**88**]{} (2013) 1, 015026 \[arXiv:1304.2783 \[hep-ph\]\].
A. Delgado and M. Quiros, Phys. Rev. D [**85**]{} (2012) 015001 \[arXiv:1111.0528 \[hep-ph\]\].
J. Adam [*et al.*]{} \[MEG Collaboration\], Phys. Rev. Lett. [**110**]{} (2013) 201801 \[arXiv:1303.0754 \[hep-ex\]\].
J. Beringer [*et al.*]{} \[Particle Data Group Collaboration\], Phys. Rev. D [**86**]{} (2012) 010001.
G. -C. Cho, N. Haba and J. Hisano, Phys. Lett. B [**529**]{} (2002) 117 \[hep-ph/0112163\].
[^1]: email: [email protected]
[^2]: email: [email protected]
[^3]: To be precise, a $3$–$5 {\,\textrm{TeV}}$ stop mass is needed for a 126 GeV Higgs mass at three-loop level when $A_0=0$. According to Ref. [@3-loop], parametric uncertainty in the top quark mass ($m_t^{\rm pole}=173.3\pm 1.8 {\,\textrm{GeV}}$) results in uncertainty of 0.5 to 2 GeV in the Higgs mass. Among public codes providing the two-loop results, moreover, inconsistencies of up to 4 GeV are observed. In this paper, we adopt the three-loop result of Ref. [@3-loop]. To be conservative, however, we will take 5 TeV as the stop mass needed for the 126 GeV Higgs mass, although a stop mass lighter than 5 TeV turns out to further decrease the fine-tuning.
[^4]: With a relatively lighter stop mass ($\lesssim 1 {\,\textrm{TeV}}$), the (singlet) extensions of the MSSM can significantly reduce the fine-tuning by adding an additional tree level [@NMSSMreview; @singletEXT] or a radiative Higgs mass [@extensions].
[^5]: Using the public codes, “SARAH4.2.2” [@SARAH] and “SPheno3.3.2” [@SPheno] after properly modifying them, one could estimate also other fine-tuning measures at two-loop level: e.g. $\Delta_{\alpha}=\{106,32,75,543,71\}$ for $\alpha=\{m_0^2,\widetilde{m}^2,m_{1/2},A_0,\mu\}$, when $y_{NI}=0.8$ and $\widetilde{m}^2= (15 {\,\textrm{TeV}})^2$ with $\alpha_{\rm GUT}\approx 1/25$, $m_{1/2}=1 {\,\textrm{TeV}}$, and $m_0^2=A_0^2=(7 {\,\textrm{TeV}})^2$. $A_0$ of $7 {\,\textrm{TeV}}$ leads to a relatively large $\Delta_{A_0}$. In this case, the stop mixing effect on the Higgs mass is still negligible \[$(A_t/\widetilde{m}_t)^2\approx 0.07$\] at low energies, yielding $m_H^2\approx(126 {\,\textrm{GeV}})^2$. The mass spectra for the neutralino, charginos, and gluino are $\{454 {\,\textrm{GeV}}, 505 {\,\textrm{GeV}}, 519 {\,\textrm{GeV}}, 945 {\,\textrm{GeV}}\}$, $\{496 {\,\textrm{GeV}}, 944 {\,\textrm{GeV}}\}$, and $2.8 {\,\textrm{TeV}}$, respectively, with $\mu\approx 510 {\,\textrm{GeV}}$.
[^6]: In fact, even $\widetilde{m}_q^2\approx (10 {\,\textrm{TeV}})^2$ is enough to avoid the SUSY flavor and SUSY $CP$ problems in the quark sector [@Delgado].
[^7]: If the U(1)$^\prime$ breaking scale and the mass scale of $S_{1,2}^{(c)}$ are lower than the seesaw scale, they are not radiatively generated at all even with sizable Dirac neutrino Yukawa couplings.
---
abstract: 'This article is concerned with the linearisation around a dark soliton solution of the nonlinear Schrödinger equation. Crucially, we present analytic expressions for the four linearly-independent zero eigenvalue solutions (also known as Goldstone modes) to the linearised problem. These solutions are then used to construct a Greens matrix which gives the first-order spatial response due to some perturbation. Finally we apply this Greens matrix to find the correction to the dark-soliton wavefunction of a Bose-Einstein condensate in the presence of fluctuations.'
address: 'Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA'
author:
- 'Andrew G. Sykes'
bibliography:
- 'bib\_soliton.bib'
title: Exact solutions to the four Goldstone modes around a dark soliton of the nonlinear Schrödinger equation
---
Introduction
============
The nonlinear Schrödinger (NLS) equation is a ubiquitous nonlinear wave equation with a range of applications including the propagation of light within a waveguide [@soliton_book_agrawal; @soliton_book_hasegawa], the behaviour of deep water waves [@nls_deepwater], and the mean-field theory of Bose-Einstein condensates [@pitaevskii_becbook]. However, in many practical situations the NLS represents only the zeroth order approximation to the system, and for this reason, the response of an NLS system to small perturbations is important [@kivshar_malomed]. The aim of this article is a mathematical formalism, based on the four linearly independent Goldstone modes of the linearised problem, with which one can treat the spatial consequences of such perturbations. Currently relevant examples of such perturbative mechanisms include the loss and/or dephasing of coherent light traveling through an optical fibre, and the presence of quantum and/or thermal noise in Bose-Einstein condensates [@franzeskakis].
The problem under consideration has received a significant amount of attention in the previous literature [@keener_mclaughlin_soliton_perturbation1; @keener_mclaughlin_soliton_perturbation2; @kaup_newell_solitons; @herman_soliton_perturbation; @kaup_soliton_perturbation; @konotop_incorrect; @kivshar_soliton_perturbation], and in fact it would seem that a general approach toward such problems has been established within the community since the late 1970’s. Briefly, the approach focuses on finding eigenfunctions of a differential operator which is obtained by linearising the NLS around an analytic soliton solution. The majority of the earlier work was concerned more with the bright solitons found in the self-focusing NLS [@zakharov_shabat1]. Progress on the dark soliton of the self-defocusing NLS caught up with its bright counterpart in the mid-to-late 1990’s, with the introduction of a complete set of so-called “squared Jost solutions” [@chinese_bdg_solutions]. The crux for the dark soliton solutions involved dealing with the nonvanishing boundary conditions; this issue was avoided in the earlier work of Ref. [@konotop_incorrect], which forces a vanishing boundary condition onto the perturbation for theoretical convenience. Reference [@kivshar_soliton_perturbation] develops a method based on separating out the internal soliton dynamics from that of the boundary conditions; however, such a separation is approximate at best [@chinese_bdg_solutions; @burtsev_camassa]. These squared Jost solutions of Ref. [@chinese_bdg_solutions] elegantly provided the desired eigenfunctions for all real eigenvalues, except the case where the eigenvalue is zero (in this case the eigenfunctions are commonly referred to as the Goldstone modes). In this limit as the eigenvalue tends toward zero, the squared Jost solutions collapse down to just two linearly independent solutions (the linearised differential operator is ultimately a fourth order differential equation and should therefore yield four linearly independent solutions). This fact was noted in Refs. [@chinese_bdg_solutions; @bilaspavloff_darksoliton], and two additional generalised eigenvectors were introduced to cope with the absence of the remaining two solutions. With the inclusion of these generalised eigenvectors it was shown that one had a complete set of functions. The main results of Ref. [@chinese_bdg_solutions] have led to several other publications [@chinese_increment1; @shengmeiao; @chinese_greenfunction; @chinesemultisolitons; @chinese_increment2] of a similar vein.
The issue has recently seen an influx of interest coming from the community of scientists involved with ultra-cold quantum gases. The original observation of dark-soliton excitations of Bose-Einstein condensates within elongated trapping geometries came in 1999 [@darksoliton_first] and continues to accrue an impressive number of citations. Sophisticated numerical techniques have been employed in Refs. [@ds_mishmash_carr2; @ds_martin_ruostekoski; @ds_martin_ruostekoski2] which investigate the lifetimes of dark solitons in the presence of quantum and thermal noise (special attention was paid to the high temperature regime in Ref. [@ds_greeks_prl]). Analytic approaches toward the same problem were put forth earlier in Refs. [@ds_dziarmaga1; @ds_dziarmaga2], and included the effects of the anomalous modes associated with phase diffusion of the Bose-Einstein condensate (originally considered in Ref. [@bec_you_lewenstein]) as well as diffusion in the position of the dark soliton (these anomalous modes are given in equations \[eq:om1\]–\[eq:om2\] of this article). Conflicting interpretations of the ensemble density evolution sparked debate as to whether the soliton exhibits decay or diffusion in the presence of noise [@mishmash_comment]. Another mechanism put forth as being responsible for the decay of dark solitons is the effective three-body contact interaction considered in [@ds_muryshev; @ds_gangardt_kamenev]. These authors argue that the soliton is protected against decay by the integrability of the system under two-body collisions. This integrability must be broken to observe soliton decay, a hypothesis which is supported by the claims of [@ds_dziarmaga1; @ds_dziarmaga2]. The inclusion of three-body interactions destroys the integrability in the system. Further experiments in the field have successfully verified much of the fundamental interest in solitons such as their particle-like properties and mutual transparency under collision [@ds_expt; @ds_expt2].
The overall goal of the present paper differs slightly from much of the previous literature: specifically, we emphasise that the time evolution of the soliton parameters is not addressed in this article (see Ref. [@chinese_bdg_solutions; @chinese_greenfunction] for a treatment of this problem). Rather we concern ourselves solely with the first order correction to the spatial profile of the soliton. This correction is found by solving a nonhomogeneous fourth order differential equation (see equation \[eq:nonhomogeneous\_equation\] in section \[basic\_formalism\] of this paper). It is true that this correction can in principle be dealt with using the complete set of Ref. [@chinese_bdg_solutions]; however, this can be very difficult in general. Indeed it is expressly stated in Ref. [@chinese_bdg_solutions] (see the final paragraph of the introduction) that the first order correction is difficult to obtain via their method. We present here a much simpler method based on analytic solutions for all four linearly independent Goldstone modes. It is the introduction of these analytic expressions for the two, previously unpublished, Goldstone modes which allows us to proceed in this way. The four solutions are related to the four fundamental symmetries of the NLS. These symmetries are phase symmetry, translational symmetry, Galilean symmetry, and dilaton symmetry. Aside from the method's aesthetic appeal, Ref. [@me_dave_matt] describes a physical system (which necessitated the author's interest in this field) where, due to the numerical nature of the perturbing function (denoted $g(x)$ in equation \[eq:nonhomogeneous\_equation\] of the current paper, but denoted $f(z)$ in Ref. [@me_dave_matt]), the method of Ref. [@chinese_bdg_solutions] was rendered useless.
The paper is organised as follows: In section \[basic\_formalism\] we set up the problem by linearising the NLS equation around a dark soliton. In section \[nonzero\_eigenvalues\] we look at the squared Jost solutions of Ref. [@chinese_bdg_solutions] and discuss their importance as eigenfunctions of the linearised problem. In section \[zero\_eigenvalues\] we look at how these squared Jost solutions behave in the limit as the eigenvalue tends to zero. After establishing the fact that (in this zero eigenvalue limit) the squared Jost solutions give only two of the four possible eigenvectors, we give exact analytic solutions for all four eigenvectors. In section \[green\_section\] we use these eigenvectors to construct a Greens matrix for the differential operator of the linearised problem. In section \[example1\] we illustrate the use of this Greens matrix in solving a practical example (specifically the correction to the dark-soliton wavefunction of a Bose-Einstein condensate, in the presence of fluctuations).
Basic formalism {#basic_formalism}
===============
The usual nonlinear Schrödinger equation (with a defocusing nonlinearity), in its dimensionless form, is $$-i\partial_t\psi-\frac{1}{2}\partial_z^2\psi+
|\psi|^2\psi=0,\label{eq:nls1}$$ which, after a Galilean boost of the coordinates, ($x\equiv z-vt$) becomes $$-i\partial_t\psi-\frac{1}{2}\partial_x^2\psi+iv\partial_x\psi+
|\psi|^2\psi=0.\label{eq:nls2}$$ An interesting solution to equation \[eq:nls2\] under non-vanishing boundary conditions is Tsuzuki’s single soliton solution [@tsuzuki1971; @zakharov_shabat]. In this case, the function can be separated into the product $\psi(x,t)=e^{-it}\psi_0(x)$ (the Galilean shift is important for this separation). The solution is then $$\psi_0(x)=\cos(\theta)\tanh\left(x_c\right)+i\sin(\theta),
\label{eq:soliton}$$ where $v=\sin(\theta)$ is the velocity of the soliton, and we have introduced a position coordinate $x_c=x\cos(\theta)$ for notational convenience. The boundary condition in use is $|\psi|\rightarrow1$ as $x\rightarrow\pm\infty$ (i.e. $|\psi_0|^2$ is normalised to unity far away from the soliton).
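Substituting $\psi=e^{-it}\psi_0$ into equation \[eq:nls2\] shows that the profile obeys the stationary equation $-\frac{1}{2}\psi_0''+iv\psi_0'+\left(|\psi_0|^2-1\right)\psi_0=0$, which is easily confirmed on a grid. The following Python/NumPy sketch (with an arbitrary illustrative choice of $\theta$) evaluates the residual of this equation using finite differences.

```python
import numpy as np

theta = 0.3                              # arbitrary soliton angle, 0 <= theta < pi/2
v = np.sin(theta)
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
xc = x*np.cos(theta)
psi0 = np.cos(theta)*np.tanh(xc) + 1j*np.sin(theta)     # Eq. (eq:soliton)

d1 = np.gradient(psi0, dx)               # second-order finite differences
d2 = np.gradient(d1, dx)
residual = -0.5*d2 + 1j*v*d1 + (np.abs(psi0)**2 - 1.0)*psi0
print(np.max(np.abs(residual[5:-5])))    # small, limited only by the grid resolution
```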
Now let us consider a perturbation to this NLS system of the form: $$-i\partial_t\psi-\frac{1}{2}\partial_x^2\psi+iv\partial_x\psi+
|\psi|^2\psi=\epsilon F[\psi,\bar{\psi}],\label{eq:nls3}$$ where $0<\epsilon\ll1$ and $F[\psi,\bar{\psi}]$ represents some process responsible for the departure from the ideal NLS and $\bar{\phantom{X}}$ denotes a complex conjugate. In a similar vein to Tsuzuki’s solution of the unperturbed equation, we seek a separable solution in the form $$\psi(x,t)=e^{-it}\left[\psi_0(x)+\epsilon\psi_1(x,T_0,T_1,\ldots)+\epsilon^2\psi_2(x,T_0,T_1,\ldots)\right]
\label{eq:expansion}$$ where the coordinates $T_n=\epsilon^nt$, for $n=0,1,2,\ldots$, introduce a multiple-time-scale analysis. In the limit as $\epsilon\rightarrow0$ the coordinates $T_0,T_1,\ldots$ may be regarded as being independent. As an aside, we note that a solution to equation \[eq:nls3\] in the form of equation \[eq:expansion\] is certainly not guaranteed; however, the ansatz may be appropriate in certain scenarios. To aid any reader who is interested in the application of this work in determining whether or not the ansatz of equation \[eq:expansion\] is appropriate in a particular case, we outline a few basic points.
- When $\epsilon=0$ the system is a perfect NLS system and the function $\psi$ is given by Tsuzuki’s single soliton solution. Changes in $\psi$ occur over a length scale $x_c\approx1$ and a time scale $t\approx1$.
- For finite $\epsilon$ the system will acquire an additional dynamical evolution which occurs over a timescale $\epsilon t\approx1$ [@kaup_newell_solitons], as well as a new spatial profile (given by the spatial dependence of $\psi_1$) which is an $O(\epsilon)$ correction to $\psi_0(x)$.
Continuing on with the formalism, we expand the time derivative as $\partial_t=\partial_{T_0}+\epsilon\partial_{T_1}+\ldots$ and look for a solution of $\psi_1$ under the assumption that the rapid-time evolution (if any exists) is complete, that is $\partial_{T_0}\psi_1=0$. Inserting equation \[eq:expansion\] into equation \[eq:nls3\] and keeping only the terms which are linear in $\epsilon$ we get $$\fl
\left[-\frac{1}{2}D_x^2+ivD_x+2|\psi_0|^2-1\right]\psi_1+\psi_0^2\bar{\psi}_1=
F\left[\psi_0e^{-it},\bar{\psi}_0e^{it}\right]e^{it},\label{eq:X1}$$ where $D_\alpha\equiv\frac{d\phantom{\alpha}}{d\alpha}$. Crucially, for this particular approach to be relevant, the right-hand side of equation \[eq:X1\] should not depend on the rapid-time variable $T_0$. The severity of this condition is unclear in general; however, at least in the case of one-dimensional Bose-Einstein condensates (where the author first encountered this kind of problem), this condition is certainly true. The problem then is finding a solution for the perturbation $\psi_1$. This is given by the following fourth-order, nonhomogeneous differential equation: $$\mathcal{H}_x\left[\begin{array}{c}
\psi_1(x,T_1)\\ \bar{\psi}_1(x,T_1)
\end{array}\right]=
\left[\begin{array}{c}
g(x,T_1)\\ \bar{g}(x,T_1)
\end{array}\right]\label{eq:nonhomogeneous_equation}$$ where $$\fl
\mathcal{H}_x=\left[\begin{array}{cc}
-\frac{1}{2}D_x^2+ivD_x+2|\psi_0(x)|^2-1 &
\psi_0(x)^2 \\
\bar{\psi}_0(x)^2 &
-\frac{1}{2}D_x^2-ivD_x+2|\psi_0(x)|^2-1
\end{array}
\right].\label{eq:Hoperator}$$ The function $g$ is the right hand side of equation \[eq:X1\], and can only depend on the slow-time variable $T_1$. We will refer to the linear operator $\mathcal{H}_x$ as the linearised operator. The eigenfunctions of this operator play an important part in the solution to equation \[eq:nonhomogeneous\_equation\].
Eigenfunctions of the linearised operator
=========================================
Non-zero eigenvalues {#nonzero_eigenvalues}
--------------------
In this section we briefly review some previous literature on this problem [@chinese_bdg_solutions; @chinese_greenfunction; @chinese_increment1; @chinese_increment2]. Specifically we look for solutions to $$\mathcal{H}_x\left[\begin{array}{c}
u_E(x)\\ v_E(x)
\end{array}\right]=E\left[\begin{array}{c}
u_E(x)\\ -v_E(x)
\end{array}\right]\label{eq:eigenequation}$$ for a fixed $E\neq0$. Four linearly independent functions $u^j_E$ and $v^j_E$ can be found by searching the previous literature [@chinese_bdg_solutions], $$\begin{aligned}
u^j_{E}=e^{ik_jx}\left[k_j/2+E/k_j+i\cos(\theta)\tanh\left(x_c\right)\right]^2 \label{eq:bdg_solns1a}\\
v^j_{E}=e^{ik_jx}\left[k_j/2-E/k_j+i\cos(\theta)\tanh\left(x_c\right)\right]^2 \label{eq:bdg_solns1b}\end{aligned}$$ where $j=1,2,3,4$ and $k_j$ is one of the four roots of the polynomial equation $\left[E+k\sin(\theta)\right]^2=k^2(k^2/4+1)$. It is worthwhile to note that two of the roots ($k_1$ and $k_2$ say) are real, while two of the roots ($k_3$ and $k_4$ say) are complex. The complex roots mean $u^{3,4}_E$ and $v^{3,4}_E$ diverge exponentially as $x$ tends to either positive or negative infinity and for this reason are usually excluded on the grounds that they are unphysical.
Equations \[eq:bdg\_solns1a\]–\[eq:bdg\_solns1b\] can be thought of as the radiative eigenvectors of $\mathcal{H}_x$. Plane wave excitations moving through the system essentially see the dark soliton as a reflectionless potential and emerge on the other side with nothing more than a phase shift.
Zero eigenvalues {#zero_eigenvalues}
----------------
As well as the radiative eigenvectors of the previous subsection, one also has a discrete set of eigenvectors associated with the symmetries of equation \[eq:nls1\]. These are nonradiative eigenvectors and are commonly referred to as Goldstone modes. They have zero energy, but they have physical effects such as changing the phase of the soliton, shifting its spatial position, or dilating its profile. We thus turn our attention to solving the homogeneous problem, $$\mathcal{H}_x\left[\begin{array}{c}
\omega(x)\\ \bar{\omega}(x)
\end{array}\right]=\left[\begin{array}{c}
0\\ 0
\end{array}\right],\label{eq:zero}$$ to find these Goldstone modes. The fact that equation \[eq:eigenequation\] is solved for $E\neq0$ would seem to indicate that solutions to equation \[eq:zero\] could be found simply by taking the limit $E\rightarrow0$. Unfortunately this isn’t the case: as $E\rightarrow0$ the four solutions of equations \[eq:bdg\_solns1a\]–\[eq:bdg\_solns1b\] collapse down into just two linearly independent solutions, $$\begin{aligned}
\left[\begin{array}{c}
\omega_1(x)\\ \bar{\omega}_1(x)
\end{array}\right]&=
\left[\begin{array}{c}
i\left(\cos(\theta)\tanh\left(x_c\right)+i\sin(\theta)\right)\\
-i\left(\cos(\theta)\tanh\left(x_c\right)-i\sin(\theta)\right)
\end{array}\right]=
\left[\begin{array}{c}
i\psi_0\\ -i\bar{\psi}_0
\end{array}\right]\nonumber
\\
\left[\begin{array}{c}
\omega_2(x)\\ \bar{\omega}_2(x)
\end{array}\right]&=
\left[\begin{array}{c}
\operatorname{sech}^2\left(x_c\right)\\
\operatorname{sech}^2\left(x_c\right)
\end{array}\right]\nonumber\end{aligned}$$ and so we find that two of the solutions are absent from the previous literature. This point has not gone unnoticed, and the usual strategy for dealing with these absent solutions is to find generalised eigenvectors which satisfy $$\mathcal{H}_x\left[\begin{array}{c}
\Omega(x)\\ \bar{\Omega}(x)
\end{array}\right]=\left[\begin{array}{c}
\omega(x)\\ \bar{\omega}(x)
\end{array}\right].\label{eq:generalised}$$ The previous literature contains expressions for two such generalised eigenvectors (see for example, appendix A of Ref. [@bilaspavloff_darksoliton]) and it is the union of the $\mathcal{H}_x$ and $\mathcal{H}_x^2$ null-spaces which is then used to form a complete set of functions.
Rather than adopt this approach based on generalised eigenvectors, we write down expressions for all four linearly independent solutions to equation \[eq:zero\] $$\begin{aligned}
\fl
\omega_1(x)=-\sin(\theta)+i\cos(\theta)\tanh(x_c)
\label{eq:om1}\\
\fl
\omega_2(x)=\operatorname{sech}^2(x_c)\label{eq:om2}\\
\fl
\omega_3(x)=\operatorname{sech}^2(x_c)\left[2x_c-x_c\cosh(2x_c)+
(3/2)\sinh(2x_c)\right]\tan(\theta)
+\nonumber\\
2i\left[x_c\tanh(x_c)-1\right]\label{eq:om3}\\
\fl
\omega_4(x)=\operatorname{sech}^2(x_c)\left\{x_c\left(10-4\cos^2(\theta)-8\sin(\theta)
\sin(\theta-2ix_c)\right)+\right.\nonumber\\
\left.\cosh(x_c)\left[i\sin(2\theta-3ix_c)-5i\sin(2\theta-ix_c)\right]
+6\sinh(2x_c)\right\}\label{eq:om4}.\end{aligned}$$ These four expressions form the key result of this paper ($\omega_1$ and $\omega_2$ have appeared in the previous literature; however, to the best of our knowledge, $\omega_3$ and $\omega_4$ have not). These expressions do not follow from the finite $E$ eigenvectors; rather, they are related to the four fundamental symmetries of the NLS: $\omega_1$ $\leftrightarrow$ phase symmetry, $\omega_2$ $\leftrightarrow$ translational symmetry, $\omega_3$ $\leftrightarrow$ Galilean symmetry, and $\omega_4$ $\leftrightarrow$ dilaton symmetry. A brief summary of these symmetries is given below: Assuming that $\phi_0(x,t)$ is a solution of equation \[eq:nls1\] and $\alpha$ is any real constant, then
- *phase symmetry* tells us that $\phi_0'(x,t)\equiv e^{i\alpha}\phi_0(x,t)$ will also be a solution,
- *translational symmetry* tells us that $\phi_0'(x,t)\equiv \phi_0(x-\alpha,t)$ will also be a solution,
- *Galilean symmetry* tells us that $\phi_0'(x,t)\equiv e^{i(\alpha x-\frac{\alpha^2}{2}t)}\phi_0(x-\alpha t,t)$ will also be a solution,
- *dilaton symmetry* tells us that $\phi_0'(x,t)\equiv\alpha\phi_0(\alpha x,\alpha^2t)$ will also be a solution.
In order to show the linear independence of equations \[eq:om1\]–\[eq:om4\] we calculate the Wronskian $$\fl
\left|\begin{array}{cccc}
\omega_1(x) & \omega_2(x) & \omega_3(x) & \omega_4(x) \\
\omega_1'(x) & \omega_2'(x) & \omega_3'(x) & \omega_4'(x) \\
\omega_1''(x) & \omega_2''(x) & \omega_3''(x) & \omega_4''(x) \\
\omega_1'''(x) & \omega_2'''(x) & \omega_3'''(x) & \omega_4'''(x) \\
\end{array}
\right|=512\cos^5(\theta)\operatorname{sech}^4(x_c)\sin^4(\theta-ix_c),\nonumber$$ and we see that, provided $0\leq\theta<\pi/2$, the solutions are linearly independent. In the case where $\theta=\pi/2$ the soliton has vanished from the system and the problem becomes trivial.
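Equations \[eq:om1\]–\[eq:om4\] can also be checked numerically: discretise $x$, apply the first row of $\mathcal{H}_x$ to $[\omega,\bar{\omega}]^T$ with finite differences (the second row is simply the complex conjugate of the first), and confirm that the residual is at the level of the discretisation error. The Python/NumPy sketch below does this for an arbitrary $\theta$; it is intended purely as a verification aid, and the residuals are normalised to the size of each mode since $\omega_3$ and $\omega_4$ grow with $|x|$.

```python
import numpy as np

theta = 0.3
v, c = np.sin(theta), np.cos(theta)
x = np.linspace(-8.0, 8.0, 8001)
dx = x[1] - x[0]
xc = c*x
psi0 = c*np.tanh(xc) + 1j*v
sech = 1.0/np.cosh(xc)

omegas = {
    "omega_1 (phase)":       -v + 1j*c*np.tanh(xc),
    "omega_2 (translation)": sech**2,
    "omega_3 (Galilean)":    (sech**2*(2*xc - xc*np.cosh(2*xc) + 1.5*np.sinh(2*xc))*np.tan(theta)
                              + 2j*(xc*np.tanh(xc) - 1.0)),
    "omega_4 (dilaton)":     (sech**2*(xc*(10 - 4*c**2 - 8*v*np.sin(theta - 2j*xc))
                              + np.cosh(xc)*(1j*np.sin(2*theta - 3j*xc) - 5j*np.sin(2*theta - 1j*xc))
                              + 6*np.sinh(2*xc))),
}

def first_row(w):
    """First row of H_x acting on [w, conj(w)]^T, via finite differences."""
    d1 = np.gradient(w, dx)
    d2 = np.gradient(d1, dx)
    return -0.5*d2 + 1j*v*d1 + (2*np.abs(psi0)**2 - 1.0)*w + psi0**2*np.conj(w)

for name, w in omegas.items():
    r = first_row(w)[20:-20]                     # drop points affected by the boundary stencil
    print(name, np.max(np.abs(r))/np.max(np.abs(w[20:-20])))
```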
Constructing a Greens matrix {#green_section}
============================
Returning our attention to the solution of equation \[eq:nonhomogeneous\_equation\], we use the zero-eigenvalue solutions given in equations \[eq:om1\]–\[eq:om4\] to construct a Greens matrix for the linearised operator. The minimum requirement for this Greens matrix is that it satisfies the following condition: $$\mathcal{H}_x\tilde{G}(x,s)=\mathbb{I}_2 \delta(x-s),\label{eq:green}$$ where $\mathbb{I}_2$ is the $2\times2$ identity matrix and $\tilde{G}$ denotes the $2\times2$ Greens matrix. The general solution to equation \[eq:nonhomogeneous\_equation\] will then be given by $$\left[\begin{array}{c}
\psi_1(x)\\ \bar{\psi}_1(x)
\end{array}\right]=\int_{-\infty}^\infty \tilde{G}(x,s)\left[\begin{array}{c}
g(s)\\ \bar{g}(s)
\end{array}\right]ds .\label{general_solution}$$ Additional requirements, given by the symmetry and boundary conditions of the specific problem, will completely determine $\tilde{G}$.
We write $\tilde{G}$ as, $$\tilde{G}(x,s)=\sum_{j=1}^4\left\{\begin{array}{ll}
\left[\begin{array}{c} \omega_j(x)\\ \bar{\omega}_j(x)\end{array}\right]
\left[\begin{array}{cc} \bar{\kappa}_j(s) & \kappa_j(s)\end{array}\right] & \quad s<x
\\
\left[\begin{array}{c} \omega_j(x)\\ \bar{\omega}_j(x)\end{array}\right]
\left[\begin{array}{cc} \bar{\lambda}_j(s) & \lambda_j(s)\end{array}\right] & \quad x<s
\end{array}
\right.$$ and equation \[eq:green\] gives rise to the following conditions at $x=s$: $$\begin{aligned}
&\lim_{x\to s^+}\tilde{G}(x,s)=\lim_{x\to s^-}\tilde{G}(x,s)\\
&\left[\lim_{x\to s^+}D_x\tilde{G}(x,s)\right]-\left[\lim_{x\to s^-}D_x\tilde{G}(x,s)\right]=-2\mathbb{I}_2.\end{aligned}$$ These conditions manifest in the following simultaneous equations for $\kappa_j$ and $\lambda_j$; $$\begin{aligned}
\kappa_1(s)-\lambda_1(s)=&\frac{1}{2}\sec^2(\theta)\omega_3(s),\label{eq:bc1}\\
\kappa_2(s)-\lambda_2(s)=&\frac{1}{4}\sec(\theta)\tan(\theta)\omega_3(s)+\frac{1}{16}\sec^3(\theta)\omega_4(s),\label{eq:bc2}\\
\kappa_3(s)-\lambda_3(s)=&-\frac{1}{2}\sec^2(\theta)\omega_1(s)-\frac{1}{4}\sec(\theta)\tan(\theta)\omega_2(s),\label{eq:bc3}\\
\kappa_4(s)-\lambda_4(s)=&-\frac{1}{16}\sec^3(\theta)\omega_2(s).\label{eq:bc4}\end{aligned}$$ The symmetry of $\tilde{G}$ \[namely $\tilde{G}(x,s)=\tilde{G}^\dagger(s,x)$, where $\dagger$ denotes the complex conjugate\] yields a further condition: $$\sum_{j=1}^4\bar{\lambda}_j(s)\omega_j(x)=\sum_{j=1}^4\kappa_j(x)\bar{\omega}_j(s).\label{eq:bc_sym}$$ Because $\tilde{G}(x,s)$ must also be a solution to the adjoint problem $\tilde{G}(x,s)\mathcal{H}^\dagger_s=\mathbb{I}_2\delta(x-s)$ (where $\mathcal{H}_s^\dagger$ acts to the left), we see that $\kappa_j$ and $\lambda_j$ must be linear combinations of the $\omega_j$. Thus we look for 32 real constants, $\kappa_i^j$ and $\lambda_i^j$ (where $i,j=1,2,3,4$) which appropriately define $$\begin{aligned}
\kappa_i(s)=\sum_{j=1}^4\kappa_i^j\omega_j(s),\\
\lambda_i(s)=\sum_{j=1}^4\lambda_i^j\omega_j(s).\end{aligned}$$ Equations \[eq:bc1\]–\[eq:bc4\] then become $$\begin{aligned}
\kappa_1^j-\lambda_1^j&=\delta_{j3}\frac{1}{2}\sec^2(\theta),\\
\kappa_2^j-\lambda_2^j&=\delta_{j3}\frac{1}{4}\sec(\theta)\tan(\theta)+\delta_{j4}\frac{1}{16}\sec^3(\theta),\\
\kappa_3^j-\lambda_3^j&=-\delta_{j1}\frac{1}{2}\sec^2(\theta)-\delta_{j2}\frac{1}{4}\sec(\theta)\tan(\theta),\\
\kappa_4^j-\lambda_4^j&=-\delta_{j2}\frac{1}{16}\sec^3(\theta),\end{aligned}$$ (where $\delta_{jk}$ is the Kronecker delta) while equation \[eq:bc\_sym\] becomes $$\fl
\lambda_2^1=\kappa_1^2,\quad
\lambda_3^1=\kappa_1^3,\quad
\lambda_3^2=\kappa_2^3,\quad
\lambda_4^1=\kappa_1^4,\quad
\lambda_4^2=\kappa_2^4,\quad
\lambda_4^3=\kappa_3^4.$$ We can also set $\lambda_1^1=\lambda_2^2=\lambda_3^3=\lambda_4^4=0$, since these diagonal elements only affect the final solution for $\psi_1(x)$ by adding a constant times $\omega_j(x)$, which is of no physical interest because it merely transforms the solution within one of the four previously-mentioned symmetry groups. This leaves us with 26 equations for the 32 unknowns; the remaining 6 equations are provided by the boundary conditions on $\psi_1(x)$.
Example problem
===============
1D Bose-Einstein condensate in the presence of fluctuations {#example1}
-----------------------------------------------------------
Thermal and quantum fluctuations in a Bose-Einstein condensate cause a small-but-finite population of non-condensed particles. When a soliton is present in the system, these non-condensed particles bunch up in the low-density region around the soliton [@ds_law2003; @damski2006]. Without paying close attention to the specific details of this non-condensed density, we assign $g(x)$ \[of equation \[eq:nonhomogeneous\_equation\]\] the following fairly generic form: $$g(x)=\cos^4(\theta)\left[A\tanh\left(x_c\right)\operatorname{sech}^2\left(x_c\right)+iB\operatorname{sech}^2\left(x_c\right)\right]
\label{eq:gx1}$$ where $A$ and $B$ are real constants \[$g(x)$ is shown in Fig. \[fig:gx1\] with $A=B=1$\].
Note that we have chosen $g(x)$ to have the same symmetry as $\psi_0$ (that is, the real part is odd, while the imaginary part is even) and that $g(x)$ decays at the same rate as $1-|\psi_0|^2$. As boundary conditions on $\psi_1$ we simply impose that $\psi_1(x)\rightarrow$constant and $D_x\psi_1(x)\rightarrow0$ as $x\rightarrow\infty$, together with the basic symmetry requirements $\operatorname{Re}\left[\psi_1(x)\right]=-\operatorname{Re}\left[\psi_1(-x)\right]$ and $\operatorname{Im}\left[\psi_1(x)\right]=\operatorname{Im}\left[\psi_1(-x)\right]$.
Divergences in $\psi_1$ as $x\rightarrow\infty$ can be avoided by the conditions; $$\lambda_1^4=-\kappa_1^4,\quad\lambda_2^4=-\kappa_2^4,\quad\lambda_3^4=-\kappa_3^4,$$ and the symmetry is ensured by the conditions; $$\lambda_1^2=0,\quad\lambda_1^3=-\frac{1}{4}\sec^2(\theta),\quad\lambda_2^3=-\frac{1}{8}\sec(\theta)\tan(\theta).$$ These six additional conditions give us the Greens matrix, $$\begin{aligned}
\fl
\tilde{G}_{11}(x>s)=
\frac{\sec^2(\theta)}{4}\omega_1(x)\bar{\omega}_3(s)+
\frac{\sec(\theta)\tan(\theta)}{8}\omega_2(x)\bar{\omega}_3(s)+
\frac{\sec^3(\theta)}{32}\omega_2(x)\bar{\omega}_4(s)-\nonumber\\
\frac{\sec^2(\theta)}{4}\omega_3(x)\bar{\omega}_1(s)-
\frac{\sec(\theta)\tan(\theta)}{8}\omega_3(x)\bar{\omega}_2(s)-
\frac{\sec^3(\theta)}{32}\omega_4(x)\bar{\omega}_2(s),\\
\fl
\tilde{G}_{11}(x<s)=
-\frac{\sec^2(\theta)}{4}\omega_1(x)\bar{\omega_3}(s)-
\frac{\sec(\theta)\tan(\theta)}{8}\omega_2(x)\bar{\omega}_3(s)-
\frac{\sec^3(\theta)}{32}\omega_2(x)\bar{\omega}_4(s)+\nonumber\\
\frac{\sec^2(\theta)}{4}\omega_3(x)\bar{\omega}_1(s)+
\frac{\sec(\theta)\tan(\theta)}{8}\omega_3(x)\bar{\omega}_2(s)+
\frac{\sec^3(\theta)}{32}\omega_4(x)\bar{\omega}_2(s),\\
\fl
\tilde{G}_{12}(x>s)=
\frac{\sec^2(\theta)}{4}\omega_1(x){\omega}_3(s)+
\frac{\sec(\theta)\tan(\theta)}{8}\omega_2(x){\omega}_3(s)+
\frac{\sec^3(\theta)}{32}\omega_2(x){\omega}_4(s)-\nonumber\\
\frac{\sec^2(\theta)}{4}\omega_3(x){\omega}_1(s)-
\frac{\sec(\theta)\tan(\theta)}{8}\omega_3(x){\omega}_2(s)-
\frac{\sec^3(\theta)}{32}\omega_4(x){\omega}_2(s),\\
\fl
\tilde{G}_{12}(x<s)=
-\frac{\sec^2(\theta)}{4}\omega_1(x){\omega}_3(s)-
\frac{\sec(\theta)\tan(\theta)}{8}\omega_2(x){\omega}_3(s)-
\frac{\sec^3(\theta)}{32}\omega_2(x){\omega}_4(s)+\nonumber\\
\frac{\sec^2(\theta)}{4}\omega_3(x){\omega}_1(s)+
\frac{\sec(\theta)\tan(\theta)}{8}\omega_3(x){\omega}_2(s)+
\frac{\sec^3(\theta)}{32}\omega_4(x){\omega}_2(s),\end{aligned}$$
\
$\tilde{G}_{21}$ and $\tilde{G}_{22}$ are easily deduced from the symmetry of $\tilde{G}$. The expression for $\psi_1$ then follows, $$\begin{aligned}
\fl
\psi_1(x)=\frac{1}{4}\operatorname{sech}^2(x_c)\Big[2x_c\big(A\cos(2\theta)+B\sin(2\theta)\big)+
\sin(\theta)\big(2B\cos(\theta)-\big.\Big.\nonumber\\
\Big.\big. A\sin(\theta)\big)\sinh(2x_c)\Big]+
\frac{i}{2}\cos(\theta)\big[A\sin(\theta)-2B\cos(\theta)\big]\label{eq:X1bec}\end{aligned}$$ and $\psi_1$ is plotted in Fig. \[fig:X1\]. One can easily check that equation \[eq:X1bec\] is indeed a solution to equation \[eq:nonhomogeneous\_equation\] with $g(x)$ defined by equation \[eq:gx1\].
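This check is easily automated. The Python/NumPy sketch below (with the illustrative values $\theta=0.3$ and $A=B=1$) applies the first row of $\mathcal{H}_x$ to equation \[eq:X1bec\] using finite differences and compares the result with $g(x)$ of equation \[eq:gx1\]; the discrepancy should be limited only by the grid resolution.

```python
import numpy as np

theta, A, B = 0.3, 1.0, 1.0
v, c = np.sin(theta), np.cos(theta)
x = np.linspace(-10.0, 10.0, 8001)
dx = x[1] - x[0]
xc = c*x
psi0 = c*np.tanh(xc) + 1j*v
sech2 = 1.0/np.cosh(xc)**2

g = c**4*(A*np.tanh(xc)*sech2 + 1j*B*sech2)                       # Eq. (eq:gx1)
psi1 = (0.25*sech2*(2*xc*(A*np.cos(2*theta) + B*np.sin(2*theta))
                    + v*(2*B*c - A*v)*np.sinh(2*xc))
        + 0.5j*c*(A*v - 2*B*c))                                   # Eq. (eq:X1bec)

d1 = np.gradient(psi1, dx)
d2 = np.gradient(d1, dx)
lhs = -0.5*d2 + 1j*v*d1 + (2*np.abs(psi0)**2 - 1.0)*psi1 + psi0**2*np.conj(psi1)
print(np.max(np.abs(lhs - g)[20:-20]))     # should be at the level of the finite-difference error
```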
Conclusion and discussion
=========================
In this article we have introduced four exact analytic solutions to the NLS equation linearised around a dark soliton \[equation \[eq:zero\]\]. These solutions are given in equations \[eq:om1\]–\[eq:om4\]. These four solutions provide a possible means of bypassing the need to solve for the spatial perturbative correction (denoted $\psi_1(x)$ in this paper) using the complete set of finite $E$ eigenfunctions \[given in equations \[eq:bdg\_solns1a\]–\[eq:bdg\_solns1b\]\] supplemented with generalised eigenfunctions for the nullspace of $\mathcal{H}_x$ (a procedure which appears to be commonplace in the previous literature in spite of its apparent difficulty [@bilaspavloff_darksoliton; @chinese_bdg_solutions]). To illustrate this point, we constructed a Green’s matrix which can be used to find a solution to equation \[eq:nonhomogeneous\_equation\] once boundary conditions have been defined. We applied the technique to the problem of thermal and/or quantum fluctuations within a Bose-Einstein condensate.
It is interesting to note that, of the four solutions presented in equations \[eq:om1\]–\[eq:om4\], only two of them \[$\omega_1(x)$ and $\omega_2(x)$\] remain bounded in the limit as $x\rightarrow\infty$. The other two, $\omega_3(x)$ and $\omega_4(x)$, are linearly diverging and exponentially diverging respectively. This raises the question of which perturbing functions \[$g(x)$ in equation \[eq:nonhomogeneous\_equation\]\] are amenable to the use of the Greens matrix defined by equation \[eq:green\], particularly when the boundary conditions require $\psi_1$ to be bounded. Certainly in the example problem of Section \[example1\], where the perturbing function itself is strongly localised around the soliton, satisfying the boundary conditions does not seem to be an issue, since the integral in equation \[general\_solution\] is able to contain the divergences associated with $\omega_3$ and $\omega_4$. It is also possible to contain divergences by exploiting even or odd symmetries of $g(x)$: since $\omega_3$ and $\omega_4$ have even and odd symmetries in their real and imaginary parts, the integration in equation \[general\_solution\] can once again avoid undesired divergences. Intuitively one might expect (due to the fact that the only interesting parts of equations \[eq:om1\]–\[eq:om4\] are in the region close to the soliton) that any perturbing function which has a considerable nonzero component far away from the soliton would require the use of the radiative solutions given in equations \[eq:bdg\_solns1a\]–\[eq:bdg\_solns1b\], and one would follow the procedure of Ref. [@chinese_bdg_solutions]. However, a general theory on this issue is currently lacking.
AGS wishes to thank Alan Bishop, Avadh Saxena, and David Roberts for useful discussions.
References {#references .unnumbered}
==========
---
author:
- |
Janko Gravner$^*$, Damien Pitman$^*$, and Sergey Gavrilets$^{\dag\ddag}$\
$^*$Department of Mathematics, University of California, Davis, CA 95616,\
$^{\dag}$Departments of Ecology and Evolutionary Biology and Mathematics,\
University of Tennessee, Knoxville, TN 37996, USA.\
$^\ddag$corresponding author. Phone: 865-974-8136, fax: 865-974-3067,\
email: [email protected]
title: 'Percolation on fitness landscapes: effects of correlation, phenotype, and incompatibilities'
---
[**Abstract**]{}We study how correlations in the random fitness assignment may affect the structure of fitness landscapes. We consider three classes of fitness models. The first is a continuous phenotype space in which individuals are characterized by a large number of continuously varying traits such as size, weight, color, or concentrations of gene products which directly affect fitness. The second is a simple model that explicitly describes genotype-to-phenotype and phenotype-to-fitness maps allowing for neutrality at both phenotype and fitness levels and resulting in a fitness landscape with tunable correlation length. The third is a class of models in which particular combinations of alleles or values of phenotypic characters are “incompatible” in the sense that the resulting genotypes or phenotypes have reduced (or zero) fitness. This class of models can be viewed as a generalization of the canonical Bateson-Dobzhansky-Muller model of speciation. We also demonstrate that the discrete $NK$ model shares some signature properties of models with high correlations. Throughout the paper, our focus is on the percolation threshold, on the number, size and structure of connected clusters, and on the number of viable genotypes.\
[**Key words**]{}: fitness landscapes, percolation, nearly neutral networks, genetic incompatibilities
Introduction
============
The notion of fitness landscapes, introduced by the theoretical evolutionary biologist Sewall Wright in [-@wri32] (see also @kau93 [@gav04]), has proved extremely useful both in biology and well outside of it. In the standard interpretation, a fitness landscape is a relationship between a set of genes (or a set of quantitative characters) and a measure of fitness (e.g. viability, fertility, or mating success). In Wright’s original formulation the set of genes (or quantitative characters) is the property of an individual. However, the notion of fitness landscapes can be generalized to the level of a mating pair, or even a population of individuals [@gav04].
To date, most empirical information on fitness landscapes in biological applications has come from studies of RNA (e.g., @sch95 [@huy96b; @fon98b]), proteins (e.g., @lip91 [@mar96; @ros97]), viruses (e.g., @bur99 [@bur04]), bacteria (e.g., @ele03 [@woo06]), and artificial life (e.g., @len99 [@wil01c]). The three paradigmatic landscapes — rugged, single-peak, and flat — emphasizing particular features of fitness landscapes have been the focus of most of the earlier theoretical work (reviewed in @kau93 [@gav04]). These landscapes have found numerous applications with regards to the dynamics of adaptation (e.g., @kau87 [@kau93; @orr06a; @orr06b]) and neutral molecular evolution (e.g., @der91).
More recently, it was realized that the dimensionality of most biologically interesting fitness landscapes is enormous and that this huge dimensionality brings some new properties which one does not observe in low-dimensional landscapes (e.g. in two- or three-dimensional geographic landscapes). In particular, multidimensional landscapes are generically characterized by the existence of neutral and nearly neutral networks (also referred to as holey fitness landscapes) that extend throughout the landscapes and that can dramatically affect the evolutionary dynamics of the populations [@gav97; @gav97b; @rei97b; @gav04; @rei01a; @rei01b; @rei02].
An important property of fitness landscapes is their correlation pattern. A common measure for the strength of dependence is the [*correlation function*]{} $\rho$ measuring the correlation of fitnesses of pairs of individuals at a distance (e.g., Hamming) $d$ from each other in the genotype (or phenotype) space: $$\label{rho}
\rho(d)=\frac{\operatorname{cov}[w(.),w(.)]_d}{\operatorname{var}(w)}$$ [@eig89]. Here, the term in the numerator is the covariance of fitnesses of two individuals conditioned on them being at distance $d$, and $\operatorname{var}(w)$ is the variance in fitness over the whole fitness landscape. For uncorrelated landscapes, $\rho(d)=0$ for $d > 0$. In contrast, for highly correlated landscapes, $\rho(d)$ decreases with $d$ very slowly.
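In practice $\rho(d)$ can be estimated by Monte Carlo sampling of genotype pairs at Hamming distance $d$. The short Python/NumPy sketch below does this for a small uncorrelated landscape with i.i.d. fitnesses, for which $\rho(d)\approx0$ at every $d>0$; the same estimator applies unchanged to the correlated models introduced later in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12                                   # number of loci
w = rng.random(2**n)                     # i.i.d. fitness assignment (uncorrelated landscape)

def rho(d, samples=50_000):
    """Monte Carlo estimate of rho(d): fitness covariance at Hamming distance d / variance."""
    x = rng.integers(0, 2**n, size=samples)
    flips = np.array([rng.choice(n, size=d, replace=False) for _ in range(samples)])
    y = x ^ np.bitwise_or.reduce(1 << flips, axis=1)      # flip d distinct loci
    return np.cov(w[x], w[y])[0, 1]/np.var(w)

print([round(rho(d), 3) for d in range(1, 5)])            # close to 0 for every d > 0
```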
The aim of this paper is to extend our previous work [@gav97b] in a number of directions paying special attention to the question of how correlations in the random fitness assignment may affect the structure of genotype and phenotype spaces. For the resulting random fitness landscapes, we shed some light on issues such as the number of viable genotypes, number of connected clusters of viable genotypes and their size distribution, existence thresholds, and number of possible fitnesses.
To this end, we introduce a variety of models, which could be divided into two essentially different classes: those with local correlations, and those with global correlations. As we will see, techniques used to analyze these models, and answers we obtain, differ significantly. We use a mixture of analytical and computational techniques; it is perhaps necessary to point out that these models are very far from trivial, and one is quickly led to outstanding open problems in probability theory and computer science.
We start (in Section 2) by briefly reviewing some results from [@gav97b]. In Section 3 we generalize these results for the case of a continuous phenotype space when individuals are characterized by a large number of continuously varying traits such as size, weight, color, or the concentrations of some gene products. The latter interpretation of the phenotype space may be particularly relevant given the rise of proteomics and the growing interest in gene regulatory networks.
The main idea behind our local correlations model studies in Section 4 is fitness assignment [*conformity*]{}. Namely, one randomly divides the genotype space into components which are forced to have the same phenotype; then, each different phenotype is independently assigned a random fitness. This leads to a simple two-parameter model, in which one parameter determines the density of viable genotypes, and the other the correlations between them. We argue that the probability of existence of a giant cluster (which swallows a positive proportion of all viable genotypes) is a non-monotone function of the correlation parameter and identify the critical surface at which this probability jumps almost from 0 to 1. In Section 4 we also investigate the effects of interaction between conformity structure and fitness assignment.
Section 5 introduces our basic global correlation model, one in which genotypes are eliminated due to random pairwise [*incompatibilities*]{} between alleles. This is equivalent to a random version of the [SAT]{} problem, which is the canonical constraint satisfaction problem in computer science. In general, a [SAT]{} problem involves a set of Boolean variables and their negations that are strung together with [OR]{} symbols into [*clauses*]{}. The [*clauses*]{} are joined by [AND]{} symbols into a [*formula*]{}. A [SAT]{} problem asks one to decide whether the variables can be assigned values that make the formula true. An important special case, $K$-[SAT]{}, has the length of each clause fixed at $K$. Arguably, [SAT]{} is the most important class of problems in complexity theory. In fact, the general [SAT]{} was the first known NP-complete problem and was established as such by S. Cook in 1971 (@Coo). Even considerable simplifications, such as [$3$-SAT]{} (see Section 5.4), remain NP-complete, although [$2$-SAT]{} (see Section 5.1) can be solved efficiently by a simple algorithm. See e.g. [@KV] for a comprehensive presentation of the theory. Difficulties in analyzing random [SAT]{} problems, in which formulas are chosen at random, in many ways mirror their complexity classes, but even random [$2$-SAT]{} presents significant challenges [@dlV; @BKL2]. In our present interpretation, the main reason for these difficulties is that correlations are so high that the expected number of viable genotypes may be exponentially large, while at the same time the probability that even one viable genotype exists is very low. In Section 5, we further illuminate this issue by showing that connected viable clusters must contain fairly large sub-cubes, and that the number of such clusters is, in a proper interpretation, finite. The relevance to both types of models for discrete and continuous phenotype spaces is also discussed, with particular emphasis on the existence of viable phenotypes in the presence of incompatibilities. Section 5 also contains a brief review of the existing theory on higher order incompatibilities.
In Section 6 we demonstrate how the discrete $NK$ model shares some signature properties of models with high correlations. In Section 7 we summarize our results and discuss their biological relevance. The proofs of our major results are relegated to Appendices A–E.
The basic case: binary hypercube and independent binary fitness
===============================================================
We begin with a brief review of the basic setup, from [@gav97b] and [@gav04]. The [*binary hypercube*]{} consists of all $n$–long arrays of bits, or [*alleles*]{}, that is ${{\mathcal G}}=\{0, 1\}^n$. This is our [*genotype space*]{}. Genotypes are linked by edges induced by bit-flips, i.e., [*mutations*]{} at a single locus, for example, for $n=4$, a sequence of mutations might look like $$0000\leftrightarrow 1000\leftrightarrow 1001\leftrightarrow 1101\leftrightarrow 1100.$$ The (Hamming) [*distance*]{} $d(x,y)$ between $x\in {{\mathcal G}}$ and $y\in {{\mathcal G}}$ is the number of coordinates in which $x$ and $y$ differ or, equivalently, the least number of mutations which connect $x$ and $y$.
The [*fitness*]{} of each genotype $x$ is denoted by $w(x)$. We will describe several ways to prescribe the fitness $w$ at random, according to some probability measure $P$ on the $2^{2^n}$ possible assignments. Then we say that an event $A_n$ happens [*asymptotically almost surely*]{} ([a. a. s.]{}) if $P(A_n)\to 1$ as $n\to\infty$. Typically, $A_n$ will capture some important property of (random) clusters of genotypes.
We commonly assume that $w(x)\in \{0,1\}$ so that $x$ is either viable ($w(x)=1$) or inviable ($w(x)=0$). As a natural starting point, [@gav97b] considered uncorrelated landscapes, in which $w(x)$ is chosen to be 1 with probability $p_v$, for each $x$ independently of others. We assume this setup for the rest of this section and note that this is a well-studied problem in mathematical literature, although it presents considerable technical difficulties and some issues are still not completely resolved.
Given a particular fitness assignment, viable genotypes form a subset of ${{\mathcal G}}$, which is divided into connected [*components*]{} or [*clusters*]{}. For example, with $n=4$, if $0000$ is viable, but its 4 neighbors $1000$, $0100$, $0010$, and $0001$ are not, then it is isolated in its own cluster.
Perhaps the most basic result determines the [*connectivity threshold*]{} [@Tom]: when $p_v>1/2$, the set of all viable genotypes is connected a. a. s. By contrast, when $p_v<1/2$, the set of viable genotypes is [*not*]{} connected [[a. a. s.]{}]{} This is easily understood, as the connectedness is closely linked to isolated genotypes, whose expected number is $2^np_v(1-p_v)^n$. This expectation makes a transition from exponentially large to exponentially small at $p_v=1/2$. The events $\{x$ is isolated$\}$, $x\in {{\mathcal G}}$, are only weakly correlated, which implies that when $p_v<1/2$ there are exponentially many isolated genotypes with high probability, while when $p_v>1/2$, a separate argument shows that the event that the set of viable genotypes contains no isolated vertex but is not connected becomes very unlikely for large $n$. This is perhaps the clearest instance of the [*local method*]{}: a local property (no isolated genotypes) is [a. a. s.]{} equivalent to a global one (connectivity).
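The location of the threshold can be read off directly from the expectation $2^np_v(1-p_v)^n$; the short calculation below is our own numerical illustration, not part of the argument.

```python
def expected_isolated(n, p_v):
    """Expected number of viable genotypes all of whose n neighbors are inviable."""
    return 2**n * p_v * (1 - p_v)**n

for p_v in (0.45, 0.55):
    for n in (50, 100, 200):
        print(f"p_v = {p_v}, n = {n}: E[# isolated] = {expected_isolated(n, p_v):.3g}")
# For p_v = 0.45 the expectation grows like (1.1)^n; for p_v = 0.55 it decays like (0.9)^n.
```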
Connectivity is clearly too much to ask for, as $p_v$ above $1/2$ is not biologically realistic. Instead, one should look for a weaker property which has a chance of occurring at small $p_v$. Such a property is [*percolation*]{}, a. k. a. existence of the [*giant component*]{}. For this, we scale $p_v={\lambda}_v/n$, for a constant ${\lambda}_v$. When ${\lambda}_v>1$, the set of viable genotypes percolates, that is, it a. a. s. contains a component of at least $c\cdot n^{-1} 2^n$ genotypes, with all other components of at most polynomial (in $n$) size. When ${\lambda}_v<1$, the largest component is a. a. s. of size $Cn$. Here and below, $c$ and $C$ are some constants. These are results from [@BKL2].
The local method that correctly identifies the percolation threshold is a little more sophisticated than the one for the connectivity threshold, and uses branching processes with Poisson offspring distribution — hence we introduce notation Poisson(${\lambda}$) for a Poisson distribution with mean ${\lambda}$. Viewed from, say, genotype $0\dots0$, the binary hypercube locally approximates a tree with uniform degree $n$. Thus viable genotypes approximate a branching process in which every node has the number of successors distributed binomially with parameters $n-1$ and $p$, hence this random number has mean about ${\lambda}_v$ and is approximately Poisson(${\lambda}_v$). When ${\lambda}_v>1$, such a branching process survives forever with probability $1-\delta>0$, where $\delta=\delta({\lambda}_v)$, and $\delta({\lambda})$ is given by the implicit equation $$\label{delta}
\delta=e^{{\lambda}(\delta-1)}.$$ (e.g., @AN). Large trees of viable genotypes created by the branching processes which emanate from viable genotypes merge into a very large (“giant”) connected set. On the other hand, when ${\lambda}_v<1$ the branching process dies out with probability 1.
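The extinction probability $\delta({\lambda})$ has no closed form, but it is easy to compute by iterating the fixed-point equation (\[delta\]); the following sketch is ours.

```python
import math

def extinction_prob(lam, tol=1e-12, max_iter=10**5):
    """Extinction probability of a branching process with Poisson(lam) offspring:
    the smallest root of delta = exp(lam*(delta - 1)) in (0, 1]."""
    if lam <= 1:
        return 1.0                      # subcritical or critical: dies out surely
    delta = 0.0                         # iterating from 0 converges to the smallest root
    for _ in range(max_iter):
        new = math.exp(lam * (delta - 1))
        if abs(new - delta) < tol:
            return new
        delta = new
    return delta

for lam in (1.2, 1.5, 2.0, 3.0):
    print(f"lambda = {lam}: delta = {extinction_prob(lam):.4f}")
```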
The condition ${\lambda}_v>1$ for the existence of the giant component can be loosely rewritten as $$\label{basic}
p_v > \frac{1}{n}.$$ This shows that the larger the dimensionality $n$ of the genotype space, the smaller values of the probability of being viable $p_v$ will result in the existence of the giant component. See [@gav97b; @gav97; @gav04; @ski04; @pig06] for discussions of biological significance and implications of this important result.
Percolation in a continuous phenotype space
===========================================
In this section we will assume that individuals are characterized by $n$ continuous traits (such as size, weight, color, or concentrations of particular gene products). To be precise, we let ${{\mathcal P}}=[0,1]^n$ be the [*phenotype space*]{}.
We begin with the extension of the notion of independent viability. The most straightforward analogue of the discrete genotype space considered in the previous section involves Poisson point location in $\cal{P}$, obtained by generating a Poisson($\lambda$) random variable $N$, and then choosing points $x_1,\dots,x_N\in {{\mathcal P}}$ uniformly at random. These will be interpreted as [*peaks*]{} of equal height in the fitness landscape. Another parameter is a small $r>0$, which can be interpreted as measuring how harsh the environment is: any phenotype within $r$ of one of the peaks is declared viable and any phenotype not within $r$ of one of the peaks is declared inviable. For simplicity, we will assume “within $r$” to mean that “every coordinate differs by at most $r$,” i.e., distance is measured in the ($n$-dimensional) $\ell^\infty$ norm $||\cdot||_\infty$. Note that this makes the set of viable genotypes correlated, albeit the range of correlations is limited to $2r$.
Our most basic question is whether a positive proportion of viable phenotypes is connected together into a giant cluster. Note that the probability $p_v$ that a random point in ${{\mathcal P}}$ is viable is equal to the probability that there is a “peak” within $r$ from this point. Therefore, $$p_v=1-\exp\left[-\lambda (2r)^n\right]\approx \lambda (2r)^n.$$ This is also the expected combined volume of viable phenotypes.
We will consider peaks $x_i$ and $x_j$ to be [*neighbors*]{} if they share a viable phenotype, that is, if their $r$-neighborhoods overlap, or equivalently, if $||x_i-x_j||_\infty<2r$. Two viable phenotypes $y_1$ and $y_2$ are [*connected*]{} if they are, respectively, within $r$ of peaks $x_1$ and $x_2$, and $x_1$ and $x_2$ are connected to each other via a chain of neighboring peaks.
By the standard branching process comparison, the necessary condition for the existence of a giant cluster is that a “peak” $x$ is connected to more than one other “peak” on the average. All peaks within $2r$ of the focal peak are connected to the latter. Therefore, if $\mu$ is the expected number of peaks connected to $x$, then $$\mu= \lambda \cdot (4r)^n,$$ and $\mu>1$ is necessary for percolation. As demonstrated by [@Pen] (for a different choice of the norm, but the proof is the same), this condition becomes sufficient when $n$ is large. Note that the expected number $\lambda$ of peaks can be written as $\mu\cdot (4r)^{-n}$.
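For concreteness, the two quantities that control this model, the viability probability $p_v$ and the mean number $\mu$ of peaks connected to a given peak, can be evaluated directly; the parameter values below are purely illustrative.

```python
import math

def continuous_model(n, r, mu):
    """Given dimension n, tolerance r, and target mean mu of connected peaks,
    return the Poisson rate lambda and the viability probability p_v."""
    lam = mu / (4 * r)**n                      # lambda = mu * (4r)^(-n)
    p_v = 1 - math.exp(-lam * (2 * r)**n)      # P(a random phenotype is viable)
    return lam, p_v

n, r = 10, 0.05
for mu in (0.5, 1.0, 2.0):
    lam, p_v = continuous_model(n, r, mu)
    print(f"mu = {mu}: lambda = {lam:.3g}, p_v = {p_v:.3g}, 1/2^n = {2**-n:.3g}")
# At the threshold mu = 1, p_v is approximately 2^(-n): percolation is possible even
# though a random phenotype is very unlikely to be viable.
```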
If $\mu>1$ and fixed, then [a. a. s.]{} a positive proportion of all peaks (that is, $cN$ peaks, where $c=c(\mu)>0$) are connected in one “giant” component, while the remaining connected components are all of size ${{\mathcal O}}(\log N)$. On the other hand, if $\mu<1$, all components are [a. a. s.]{} of size ${{\mathcal O}}(\log N)$.
The condition $\mu>1$ for the existence of the giant component of viable phenotypes can be loosely rewritten as $$\label{cont}
p_v > \frac{1}{2^n}.$$ This shows that viable phenotypes are likely to form a large connected cluster even when one is [*very*]{} unlikely to hit one of them at random, if $n$ is even moderately large. The same conclusion and the same threshold are valid if instead of $n$-cubes we use $n$-spheres of a constant radius.
The percolation threshold in the continuous phenotype space given by inequality (\[cont\]) is much smaller than that in the discrete genotype space, which is given by inequality (\[basic\]). An intuitive reason for this is that continuous space offers a viable point a much greater opportunity to be connected to a large cluster. Indeed, in the discrete genotype space there are $n$ neighbors for each genotype. In contrast, in the continuous phenotype space, the ratio of the volume of the space where neighboring peaks can be located (which has radius $2r$) to the volume of the focal $n$-cube (which has radius $r$) is $2^n$.
Percolation in a correlated landscape with phenotypic neutrality
================================================================
The standard paradigm in biology is that the relationship between genotype and fitness is mediated by phenotype (i.e., observable characteristics of individuals). Both the genotype-to-phenotype and phenotype-to-fitness maps are typically not one-to-one. Here, we formulate a simple model capturing these properties which also results in a correlated fitness landscape. Below we will call mutations that do not change phenotype [*conformist*]{}. These mutations represent a subset of [*neutral*]{} mutations that do not change fitness.
We propose the following two-step model. To begin the [*first step*]{}, we make each [*pair*]{} of genotypes $x$ and $y$ in a binary hypercube ${{\mathcal G}}$ independently [*conformist*]{} with probability $p_{d(x,y)}$ where $d(x,y)$ is the Hamming distance between $x$ and $y$. We then declare $x$ and $y$ to belong to the same [*conformist cluster*]{} if they are linked by a chain of conformist pairs. This version of long-range percolation model (cf., @Ber [@Bis]) divides the set of genotypes ${{\mathcal G}}$ into conformist clusters. We postulate that all genotypes in the same conformist cluster have the same phenotype. Therefore, genetic changes represented by a change from one member of a conformist cluster to another (i.e., single or multiple mutations) are phenotypically neutral.
In the [*second step*]{}, we make each conformist cluster independently viable with probability $p_v={\lambda}_v/n$. This generates a random set of viable genotypes, and we aim to investigate when this set has a large connected component.
For example, the “genotype” can be a linear RNA sequence. This sequence folds into a 2-dimensional molecule which has a particular structure (or “shape”), and corresponds to our “phenotype.” Finally, the molecule itself has a particular function, e.g., to bind to a specific part of the cell or to another molecule. A measure of how well this can be accomplished is represented by our “fitness.”
The distribution of conformist clusters depends on the probabilities $p_1, p_2, p_3, \dots $ which determine how the conformity probability varies with distance. Here we will study the case when $p_1=p_e>0,p_2=p_3=...=0$ [@Hag]. It is then very convenient for the mathematical analysis that a pair $x$ and $y$ can be conformist only when they are linked by an edge — therefore we can talk about [*conformist edges*]{} or equivalently [*conformist mutations*]{}. (Note however that it is possible that nearest neighbors $x$ and $y$ are in the same conformist cluster even if the edge between them is non-conformist.)
Figure 1 illustrates our 2-step procedure on a four-dimensional example.
We expect that a more general model with $p_i$ declining fast enough with $i$ is just a smeared version of this basic one, and its properties are not likely to differ from those of the simpler model. We conjecture that for our purposes, “fast enough” decrease should be exponential with a rate logarithmically increasing in the dimension $n$, e.g. for large $k$, $$p_k \le \exp(-\alpha(\log n)k),$$ for some $\alpha>1$. (This is expected to be so because in this case the expected number of neighbors of the focal genotype is finite.)
We observe that the first step of our procedure is an edge version of the percolation model discussed in the second section, with a similar giant component transition [@BKL1]. Namely, let $p_1=p_e=\lambda_e/n$. Then, if ${\lambda}_e>1$, there is a. a. s. one giant conformist cluster of size $c\cdot 2^n$, with all others of size at most $Cn$. In contrast, if ${\lambda}_e<1$ all conformist clusters are of size at most $Cn$. Note that the number of conformist clusters is always on the order $2^n$. In fact, even the number of “non-conformist” (i.e., isolated) clusters is a. a. s. asymptotic to $e^{-\lambda_e} 2^n$, as $P(x\ \text{is isolated})=(1-\lambda_e/n)^n$.
[Figure 1 (file 4q.ps): the two-step construction (conformist clusters, then cluster viability) illustrated on a four-dimensional hypercube.]
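The first step of the construction (declaring each hypercube edge conformist independently with probability ${\lambda}_e/n$ and merging genotypes into conformist clusters) is easy to simulate for moderate $n$ with a union-find structure. The sketch below is our own illustration, not the code used for the figures; it reports the fraction of genotypes in the largest conformist cluster.

```python
import random

def conformist_clusters(n, lam_e, seed=0):
    """Simulate conformist edges on {0,1}^n with p_e = lam_e/n; return cluster labels."""
    rng = random.Random(seed)
    size = 1 << n
    parent = list(range(size))

    def find(v):                          # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    p_e = lam_e / n
    for x in range(size):
        for i in range(n):
            y = x ^ (1 << i)              # flip locus i
            if x < y and rng.random() < p_e:
                rx, ry = find(x), find(y)
                if rx != ry:
                    parent[rx] = ry       # the edge (x, y) is conformist: merge clusters
    return [find(x) for x in range(size)]

n = 14
for lam_e in (0.5, 1.5):
    labels = conformist_clusters(n, lam_e)
    sizes = {}
    for lab in labels:
        sizes[lab] = sizes.get(lab, 0) + 1
    print(f"lam_e = {lam_e}: largest conformist cluster fraction = {max(sizes.values()) / 2**n:.3f}")
```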
Denote by $x{\leftrightsquigarrow}y$ (resp. $x{{\,\,\leftrightsquigarrow\!\!\!\!\!\!\!\!/\;\,\,\,}}y$) the event that $x$ and $y$ are (resp. are not) in the same conformist cluster. First, we note that the probability $P(x {\leftrightsquigarrow}y)$ that two genotypes belong to the same conformist cluster depends on the Hamming distance $d(x,y)$ between them, and on $p_e=\lambda_e/n$. In particular, we show in Appendix A that, if ${\lambda}_e<1$ and $d(x,y)=k$ is fixed, then $$\label{Px-y}
k!p_e^k (1 - O(n^{-2})) \leq P(x {\leftrightsquigarrow}y) \leq k!p_e^k (1 + O(n^{-1} \log{n})).$$ The dominant contribution $k!p_e^k$ is simply the expected number of conformist pathways between $x$ and $y$ that are of shortest possible length.
It is also important to note that, for every $x\in {{\mathcal G}}$, the probability $P( x$ is viable$)=p_v$, therefore it does not depend on $p_e$. Moreover, for $x,y\in {{\mathcal G}}$, $$\begin{aligned}
&P(x\text{ and }y\text{ viable})-p_v^2\\
&=P(x\text{ and }y\text{ viable},x{\leftrightsquigarrow}y)+ P(x\text{ and }y\text{ viable},x{{\,\,\leftrightsquigarrow\!\!\!\!\!\!\!\!/\;\,\,\,}}y)-p_v^2\\
&=p_vP(x{\leftrightsquigarrow}y)+ p_v^2\cdot P(x{{\,\,\leftrightsquigarrow\!\!\!\!\!\!\!\!/\;\,\,\,}}y)-p_v^2\\
&=p_v(1-p_v)P(x{\leftrightsquigarrow}y)\ge 0.
\end{aligned}$$ Therefore, the correlation function (\[rho\]) is $$\rho(x,y)=P(x{\leftrightsquigarrow}y),$$ which clearly increases with $p_e$ and, thus, with $\lambda_e$. Therefore, this model has tunable positive correlations controlled by the parameter ${\lambda}_e$, whose value does not affect the expected number of viable genotypes. The correlation function $\rho(x,y)$ decreases exponentially with distance $d(x,y)$ when ${\lambda}_e<1$, and is bounded below when ${\lambda}_e>1$. Nevertheless, as we will see below, we can effectively use local methods for all values of ${\lambda}_e$.
Threshold surface for percolation
---------------------------------
Proceeding by the local branching process heuristics, we reason that a surviving node on the branching tree can have two types of descendants: those that are connected by conformist mutations and those that are in different conformist clusters and thus independently viable. Therefore the number of descendants is approximately Poisson(${\lambda}_e+{\lambda}_v$). This can only work when ${\lambda}_e<1$, as otherwise the correlations are global.
If ${\lambda}_e>1$, we need to eliminate the entire conformist giant component, which is [a. a. s.]{} inviable. Locally, we condition on the (supercritical) branching process of the supposed descendant to die out. Such conditioned process is a subcritical branching process, with Poisson $({\lambda}_e\delta)$ distribution of successors [@AN] where $\delta=\delta(\lambda_e)$ is given by the equation (\[delta\]). This gives the conformist contribution, to which we add the independent Poisson$({\lambda}_v\delta)$ contribution.
[Figure 2: simulated critical curves $\lambda_v^{m}$ and $\lambda_v^{M}$ together with the limiting curve $\zeta$; left frame: the basic model of this section, right frame: the model with correlations between conformity and viability.]
To have a convenient summary of the conclusions above, assume that ${\lambda}_e$ is fixed and let $\zeta({\lambda}_e)$ be the smallest ${\lambda}_v$ which [a. a. s.]{} ensures the giant component, i.e., $$\zeta({\lambda}_e)=\inf\{{\lambda}_v: \text{a cluster of at least }cn^{-1} 2^n
\text{ viable genotypes exists a.~a.~s.\ for some } c>0\}.$$ One would expect that for ${\lambda}_v<\zeta({\lambda}_e)$ all components are [a. a. s.]{} of size at most $Cn$. The asymptotic critical curve is given by ${\lambda}_v=\zeta({\lambda}_e)$, where $$\label{pheno}
\zeta({\lambda})=
\begin{cases}
1-{\lambda}&\qquad\text{if } {\lambda}\in [0,1],\\
\frac 1{\delta}-{\lambda}&\qquad\text{if } {\lambda}\in [1,\infty).
\end{cases}$$
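Once $\delta({\lambda}_e)$ is computed, the limiting curve (\[pheno\]) is explicit; a short sketch of ours:

```python
import math

def delta(lam, tol=1e-12):
    """Extinction probability of a Poisson(lam) branching process."""
    if lam <= 1:
        return 1.0
    d = 0.0
    while True:
        new = math.exp(lam * (d - 1))
        if abs(new - d) < tol:
            return new
        d = new

def zeta(lam_e):
    """Critical lambda_v above which a giant cluster of viable genotypes is expected."""
    if lam_e <= 1:
        return 1.0 - lam_e
    return 1.0 / delta(lam_e) - lam_e

for lam_e in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"lam_e = {lam_e}: zeta = {zeta(lam_e):.4f}")
```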
Having only a heuristic proof of this, we resort to computer simulations for confirmation. For this, we indicate global connectivity with the event $A$ that a genotype within distance 2 of $0\dots 0$ is connected (through viable genotypes) to a genotype within distance 2 of $1\dots 1$. We make this choice because the distance 2 is the smallest that works with asymptotic certainty. Indeed, the genotypes $0\dots0$ and $1\dots1$ are likely to be inviable. Even the number of viable genotypes within distance one of each of these is only of constant order, so even in the percolation regime the probability of connectivity between a viable genotype within distance one of $0\dots0$ and a viable one within distance one of $1\dots1$ does not converge to 1 but is of a nontrivial constant order. By contrast, there are about $n^2$ vertices within distance 2 of $0\dots0$ among which of order $n$ are viable.
When ${\lambda}_v>\zeta({\lambda}_e)$ the probability of the event $A$ should therefore be (exponentially) close to 1. On the other hand, when ${\lambda}_v<\zeta({\lambda}_e)$ the probability that a connected component within distance 2 of either $0\dots0$ or $1\dots1$ extends for distance of the order $n$ is exponentially small. We further define the critical curves $$\begin{aligned}
&\text{${\lambda}_v^{m}=\;$the smallest ${\lambda}_v$ for which
$P(A)>0.1$,}\\
&\text{${\lambda}_v^{M}=\;$the largest ${\lambda}_v$ for which
$P(A)<0.9$.}
\end{aligned}$$
We approximated ${\lambda}_v^m$ and ${\lambda}_v^{M}$ for $n=10, \dots, 20$ and ${\lambda}_e$ ranging from $0$ to $2$ in steps of $0.1$, with 1000 independent realizations for each choice of $n$, ${\lambda}_e$, and ${\lambda}_v$. We used the linear cluster algorithm described in [@Sed]. The results are depicted in Figure 2. Unfortunately, simulations above $n\approx 20$ are not feasible.
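The simulation procedure just described can be reproduced, for somewhat smaller $n$, with standard tools: union-find for the conformist clusters and breadth-first search through viable genotypes for the event $A$. The code below is a simplified sketch of our own (it uses plain union-find and BFS rather than the algorithm of [@Sed]) and is intended only to convey the structure of the computation.

```python
import random
from collections import deque

def event_A(n, lam_e, lam_v, rng):
    """One realization: is some viable genotype within Hamming distance 2 of 0...0
    connected, through viable genotypes, to one within distance 2 of 1...1?"""
    size, full = 1 << n, (1 << n) - 1
    parent = list(range(size))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    # Step 1: conformist edges with probability lam_e/n, merged into clusters.
    p_e = lam_e / n
    for x in range(size):
        for i in range(n):
            y = x ^ (1 << i)
            if x < y and rng.random() < p_e:
                rx, ry = find(x), find(y)
                if rx != ry:
                    parent[rx] = ry

    # Step 2: each conformist cluster is independently viable with probability lam_v/n.
    p_v = lam_v / n
    cluster_viable = {}
    def viable(x):
        r = find(x)
        if r not in cluster_viable:
            cluster_viable[r] = rng.random() < p_v
        return cluster_viable[r]

    near0 = [x for x in range(size) if bin(x).count("1") <= 2]
    near1 = [x for x in range(size) if bin(x ^ full).count("1") <= 2]
    targets = {y for y in near1 if viable(y)}
    sources = [x for x in near0 if viable(x)]
    if not sources or not targets:
        return False

    seen, queue = set(sources), deque(sources)     # BFS through viable genotypes only
    while queue:
        x = queue.popleft()
        if x in targets:
            return True
        for i in range(n):
            y = x ^ (1 << i)
            if y not in seen and viable(y):
                seen.add(y)
                queue.append(y)
    return False

def estimate_P_A(n, lam_e, lam_v, trials=200, seed=1):
    rng = random.Random(seed)
    return sum(event_A(n, lam_e, lam_v, rng) for _ in range(trials)) / trials

print(estimate_P_A(n=12, lam_e=1.0, lam_v=0.5))
```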
From Figure 2 we observe that:
- Even for low $n$, both critical curves approximate well the overall shape of the theoretical limit curve $\zeta$.
- ${\lambda}_v^{m}$ and ${\lambda}_v^{M}$ get closer faster than they converge to $\zeta$. Consequently, one can expect that $P(A)$ makes a very sharp jump from near 0 to near 1 even for moderate $n$.
- For ${\lambda}_e<1$, ${\lambda}_v^{m}$ tends to be above the limit curve. This is not really surprising, as the local argument always gives an upper bound on the probability $P(A)$ of event $A$. Further, the approximation of ${\lambda}_v^m$ deteriorates near ${\lambda}_e=2$, which stems from the possibility of survival of the giant component in this regime.
What is clear from the heuristics and simulations is that conformist mutations, and thus correlations, significantly affect the probability of long-range connectivity in the genotype space. The effect is not monotone: the most advantageous choice is when the correlations are at the point of phase transition between local and global.
To understand intuitively why percolation occurs most easily with ${\lambda}_e \approx 1$, it helps to think of the model as a branching process on clusters rather than on genotypes. For a genotype on a viable cluster, there are a number of neighboring clusters, and each of these is viable with probability $p_v$. If $\lambda_e < 1$, then the probability that any two of the neighboring genotypes are in the same cluster is $o(1)$, so there are asymptotically exactly $n$ clusters neighboring the present cluster. Consequently, the overall number of descendants will be greater if the size of these clusters is greater on average, which is exactly what happens as $\lambda_e$ increases towards 1. If $\lambda_e > 1$, then a positive proportion of the neighboring genotypes are in the giant cluster. This giant cluster is likely to be inviable, so the parameter $\lambda_v$ must be greater to compensate for its loss.
Correlations between conformity and viability
---------------------------------------------
In the previous model, the viability probability $p_v$ was independent of the conformity structure. Mainly to investigate the robustness of our conclusions, we consider a simple generalization in which there are either positive or negative correlations between conformity and fitness. While more sophisticated models are possible, the one below is chosen for its amenability to relatively simple analysis.
Assume now that conformist clusters are formed as before (i.e., with edges being conformist with probability $p_e=\lambda_e/n$), are still independently viable, but now the probability of their viability depends on their size. We will consider the simple case when an isolated genotype (one might call it [*non-conformist*]{}) is viable with probability $p_0={\lambda}_0/n$, while a conformist cluster of size larger than 1 is viable with probability $p_1={\lambda}_1/n$.
In this case $$P(x\text{ is viable})=(1-p_e)^np_0+(1-(1-p_e)^n)p_1\sim \frac 1n\left(
e^{-{\lambda}_e}{\lambda}_0+(1-e^{-{\lambda}_e}){\lambda}_1\right).$$ Moreover, by a similar calculation as before, $$\begin{aligned}
&P(x\text{ and }y\text{ viable})-P(x\text{ viable})^2\\
&=p_1(1-p_1)P(x{\leftrightsquigarrow}y)+P(x\text{ non-conformist})^2p_e(p_0-p_1)^2\cdot 1_{\{d(x,y)=1\}}.
\end{aligned}$$ Here, the last factor is the indicator of the set $\{(x,y), d(x,y)=1\}$, which equals $1$ if $d(x,y)=1$ and $0$ otherwise. Therefore, for $d(x,y)\ge 2$, the correlation function (\[rho\]) is $$\rho(x,y)\sim\frac {{\lambda}_1}{e^{-{\lambda}_e}{\lambda}_0+(1-e^{-{\lambda}_e}){\lambda}_1}P(x{\leftrightsquigarrow}y),$$ which is smaller than before iff ${\lambda}_1<{\lambda}_0$. However, it has the same asymptotic properties unless ${\lambda}_1=0$.
Assume first that ${\lambda}_e<1$. The local analysis now leads to a [*multi-type*]{} branching process [@AN] with three types: NC (non-conformist node), CI (non-isolated node independently viable, so no conformist edge is accounted for), and CC (non-isolated node viable by conformity, so a conformist edge is accounted for).
Note first that a genotype is non-conformist with probability about $e^{-{\lambda}_e}$. Hence a node of any of the three types creates a Poisson($e^{-{\lambda}_e}{\lambda}_0$) number of type NC descendants, and a Poisson($(1-e^{-{\lambda}_e}){\lambda}_1$) number of type CI descendants. In addition, type CI creates a Poisson(${\lambda}_e$), conditioned on being nonzero, number of descendants of type CC, and type CC creates a Poisson(${\lambda}_e$) number of descendants of type CC. Thus the matrix of expectations, in which the $ij$th entry is the expectation of the number of type $j$ descendants from type $i$, is $$M=
\begin{bmatrix}
e^{-{\lambda}_e}{\lambda}_0 & \left(1- e^{-{\lambda}_e}\right){\lambda}_1 & 0\\
e^{-{\lambda}_e}{\lambda}_0 & \left(1- e^{-{\lambda}_e}\right){\lambda}_1 & {\lambda}_e/(1-e^{-{\lambda}_e})\\
e^{-{\lambda}_e}{\lambda}_0 & \left(1- e^{-{\lambda}_e}\right){\lambda}_1 & {\lambda}_e \end{bmatrix}\quad .$$ When ${\lambda}_e>1$, ${\lambda}_e$ needs to be replaced by ${\lambda}_e\delta$, and ${\lambda}_1$ by ${\lambda}_1\delta$, where $\delta=\delta({\lambda}_e)$ is given by (\[delta\]).
It follows from the theory of multi-type branching processes [@AN] that the critical surface for survival of such a process is given by $\det(M-I)=0$, where $I$ is the $3\times 3$ identity matrix.
The simplest case is when only non-conformist genotypes may be viable, i.e., ${\lambda}_1=0$. In this case the critical surface is given by ${\lambda}_0 e^{-{\lambda}_e}=1$ (Pitman, unpub.). Not surprisingly, the critical ${\lambda}_0$ to achieve global connectivity strictly increases with ${\lambda}_e$, which is the result of negative correlations between conformity and viability.
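As a numerical cross-check of this special case, one can compute the largest eigenvalue (Perron root) of $M$ directly, which is equivalent to the determinant condition for a nonnegative mean matrix. The sketch below is ours, uses numpy, and assumes ${\lambda}_e<1$ so that $M$ has the form displayed above.

```python
import numpy as np

def mean_matrix(lam_e, lam_0, lam_1):
    """Mean offspring matrix for the three types NC, CI, CC (lam_e < 1 assumed)."""
    q = np.exp(-lam_e)                 # probability that a genotype is non-conformist
    return np.array([
        [q * lam_0, (1 - q) * lam_1, 0.0],
        [q * lam_0, (1 - q) * lam_1, lam_e / (1 - q)],
        [q * lam_0, (1 - q) * lam_1, lam_e],
    ])

def supercritical(lam_e, lam_0, lam_1):
    """Survival is possible iff the largest eigenvalue of M exceeds 1."""
    M = mean_matrix(lam_e, lam_0, lam_1)
    return max(abs(np.linalg.eigvals(M))) > 1.0

# Only non-conformist genotypes viable (lam_1 = 0): the critical surface
# lam_0 * exp(-lam_e) = 1 gives critical lam_0 = exp(0.5) ~ 1.65 for lam_e = 0.5.
for lam_0 in (1.4, 1.9):
    print(lam_0, supercritical(lam_e=0.5, lam_0=lam_0, lam_1=0.0))
```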
The other extreme is when non-conformist genotypes are inviable, i.e., ${\lambda}_0=0$. As an easy computation demonstrates, the critical curve is now given by ${\lambda}_1=\zeta({\lambda}_e)$, where $$\label{phenocorr}
\zeta({\lambda})=
\begin{cases}
\frac{1-{\lambda}}{{\lambda}e^{-{\lambda}}+1-e^{-{\lambda}}} &\qquad\text{if } {\lambda}\in [0,1],\\
\frac{\delta^{-1} -{\lambda}}{ {\lambda}e^{-{\lambda}}+1-e^{-{\lambda}\delta}}&\qquad\text{if } {\lambda}\in [1,\infty),
\end{cases}$$ where $\delta=\delta({\lambda})$ is given by (\[delta\]). Note that $\zeta({\lambda})\to \infty$ as ${\lambda}\to 0$. We carried out exactly the same simulations as before. These are also featured in Figure 2 (right frame), and again confirm our local heuristics. We conclude that positive correlations between viability and conformity tend to lead to a V-shaped critical curve, whose sharpness at the critical conformity ${\lambda}_e=1$ increases with the size of the correlations. In short, correlations help more if the viability probability increases with the size of conformist clusters.
Percolation in incompatibility models
=====================================
In the model considered in the previous section correlations rapidly decreased with distance. This property made local analysis possible. The models we introduce now are fundamentally different in the sense that correlations are so high that the local method gives a wrong answer.
In the previous sections, in constructing fitness landscapes we were assigning fitness to individual genotypes or phenotypes. Here, we make certain assumptions about “fitness” of particular combinations of alleles or the values of phenotypic characters. Specifically, we will assume that some of these combinations are “incompatible” in the sense that the resulting genotypes or phenotypes have reduced (or zero) fitness [@orr95; @orr96; @gav04]. The resulting models can be viewed as a generalization of the Bateson-Dobzhansky-Muller model [@orr95; @orr96; @orr97; @orr01; @gav96b; @gav97; @gav97b; @gav03d; @gav04; @coy04] which represents a canonical model of speciation.
Diallelic loci
--------------
We begin by assuming that viability of a genotype is determined by a set $F$ of pairwise incompatibilities. $F$ is thus a subset of $4\cdot \binom{n}{2}$ pairs $(u_i, v_j)$, where $1\le i<j\le n$ and $u,v\in\{0,1\}$. In this nonstandard notation, $(0_1,0_2)\in F$, for example, means that allele $0$ at locus $1$ and allele $0$ at locus $2$ are incompatible. In general, if $(u_i, v_j)\in F$, all genotypes with $u$ in position $i$ and $v$ in position $j$ are inviable. A genotype $x$ is then inviable if and only if there exist $i$ and $j$, with $i<j$, so that $u$ and $v$ are, respectively, the alleles of $x$ at loci $i$ and $j$, and $(u_i, v_j)\in F$. For example, if $F_1=\{(0_1, 0_2), (1_2, 0_3), (1_1, 1_2)\}$, viable genotypes may have $011$, $100$, and $101$ as their first three alleles. For $F_2=F_1\cup \{(0_1, 1_3), (1_1, 0_2)\}$, no viable genotype remains.
Incompatibility $(0_1, 0_2)$ is equivalent to two implications: $0_1\implies 1_2$ and $0_2\implies 1_1$ or to the single [OR]{} statement $1_1$ [OR]{} $1_2$. In this interpretation, the problem of whether, for a given list of incompatibilities $F$, there is a viable genotype is known as the [$2$-SAT]{} problem [@KV]. The associated [*digraph*]{} $D_F$ is a graph on the $2n$ vertices $u_i$, $i=1,\dots, n$, $u=0,1$, with oriented edges determined by the implications. A well-known theorem [@KV] states that a viable genotype exists iff $D_F$ contains no oriented cycle from $0_i$ to $1_i$ and back to $0_i$ for any $i=1,\dots, n$. For example, for the incompatibilities $F_2$ as above, one such cycle is $0_1\to1_2\to 1_3\to 1_1\to 1_2\to 0_1$.
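This criterion is easy to apply to small examples. The sketch below is our own encoding (an incompatibility $(u_i,v_j)$ is written as the pair `((i, u), (j, v))`); it builds the implication digraph $D_F$ and uses plain breadth-first reachability rather than a linear-time strongly-connected-components algorithm. It reproduces the conclusions for $F_1$ and $F_2$ above.

```python
from collections import defaultdict, deque

def viable_genotype_exists(n, incompatibilities):
    """incompatibilities: list of ((i, u), (j, v)) meaning that allele u at locus i
    and allele v at locus j cannot co-occur (loci numbered 1..n, alleles 0/1)."""
    graph = defaultdict(list)
    for (i, u), (j, v) in incompatibilities:
        graph[(i, u)].append((j, 1 - v))   # u at locus i  implies  not v at locus j
        graph[(j, v)].append((i, 1 - u))   # v at locus j  implies  not u at locus i

    def reaches(src, dst):
        seen, queue = {src}, deque([src])
        while queue:
            x = queue.popleft()
            if x == dst:
                return True
            for y in graph[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        return False

    # No viable genotype iff, for some locus i, 0_i and 1_i lie on a common oriented cycle.
    return not any(reaches((i, 0), (i, 1)) and reaches((i, 1), (i, 0))
                   for i in range(1, n + 1))

F1 = [((1, 0), (2, 0)), ((2, 1), (3, 0)), ((1, 1), (2, 1))]
F2 = F1 + [((1, 0), (3, 1)), ((1, 1), (2, 0))]
print(viable_genotype_exists(3, F1))   # True: 011, 100, 101 are viable
print(viable_genotype_exists(3, F2))   # False: no viable genotype remains
```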
Now assume that each possible incompatibility is adjoined to $F$ at random, independently with probability $$p=\frac c{2n}.$$ (We use the generic notation $p$ for a probability parameter in all our models, even though the nature of probabilistic assignments differs from model to model.)
[**Existence of viable genotypes.**]{}Let $N$ be the number of viable genotypes. Then
- if $c>1$, then a. a. s. $N=0$.
- if $c<1$, then a. a. s. $N>0$.
This result first appeared in the computer science literature in the 90’s (see @dlV for a review), and it is an extension of the celebrated Erdős–Rényi random graph results [@Bol; @JLR] to the oriented case.
Note that the expectation $E(N)=2^n(1-p)^{\binom{ n}{2}}\approx 2^ne^{-cn/4}$, which grows exponentially whenever $c<4\log 2\approx 2.77$. Neglecting correlations would therefore suggest a wrong threshold for $N>0$. The local method (e.g., used in @gav04 [Chapter 6]) is even farther off, as it suggests an [a. a. s.]{} giant component when $p<(1-{\epsilon})\log n/n$ for any ${\epsilon}>0$.
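For small $n$ these quantities can be checked by brute force: draw each of the $4\binom{n}{2}$ possible incompatibilities with probability $c/(2n)$, count the viable genotypes by enumeration, and compare with $E(N)$. The Monte Carlo sketch below is our own illustration; at such small $n$ the asymptotic regime is of course only hinted at.

```python
import itertools, random

def random_incompatibilities(n, c, rng):
    """Each of the 4*binom(n,2) possible pairwise incompatibilities, independently
    with probability c/(2n); loci are indexed 0..n-1 here."""
    p = c / (2 * n)
    return [(i, u, j, v)
            for i, j in itertools.combinations(range(n), 2)
            for u, v in itertools.product((0, 1), repeat=2)
            if rng.random() < p]

def count_viable(n, F):
    """Brute-force count of genotypes violating no incompatibility in F."""
    return sum(all(not (x[i] == u and x[j] == v) for (i, u, j, v) in F)
               for x in itertools.product((0, 1), repeat=n))

n, c, trials = 12, 2.0, 20
rng = random.Random(0)
counts = [count_viable(n, random_incompatibilities(n, c, rng)) for _ in range(trials)]
print("E(N) =", 2**n * (1 - c / (2 * n))**(n * (n - 1) // 2))
print("empirical mean of N:", sum(counts) / trials,
      "  fraction of trials with N > 0:", sum(cnt > 0 for cnt in counts) / trials)
```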
[**The number of viable genotypes.**]{}Assume that $c<1$. Sophisticated, but not mathematically rigorous methods based on [*replica symmetry*]{} [@MZ; @BMW] from statistical physics suggest that, as $n\to\infty$, $\lim n^{-1}\log N$ varies almost linearly between $\log 2\approx 0.69$ (for small $c$, when, as we prove below, this limit is $\log 2+{{\mathcal O}}(c)$) and about $0.38$ (for $c$ close to $1$). One can however prove that $n^{-1}\log N$ is for large $n$ sharply concentrated around its mean [@dlV].
Upper and lower bounds on $N$ can also be obtained rigorously. For example, if $X$ is the number of incompatibilities which involve [*disjoint*]{} pairs of loci (i.e., those for which every locus is represented at most once among the incompatibilities), then $N\le \exp(n\log 2+X\log(3/4))$, as each of the $X$ incompatibilities reduces the number of viable genotypes by the factor $3/4$. If we imagine adding incompatibilities one by one at random until there are about $cn$ of them, then after we have $k$ incompatibilities on disjoint pairs of loci the waiting time (measured by the number of incompatibilities added) for a new disjoint one is geometric with expectation $\binom{n} {2}/\binom{n-2k} {2}$. Therefore, $X$ is [a. a. s.]{} at least $Kn$, where $K$ solves the approximate equation $$\binom{n} {2} \left(\sum_{k=0}^{Kn}\frac 1{ \binom{n-2k} {2}} \right)\sim cn,$$ or $$\int_{0}^{Kn}\frac 1{(n-2k)^2}\, dk \sim \frac cn,$$ which reduces to $K=c/(1+2c)$. This implies the upper bound $$\label{up_bound}
\limsup \frac 1n\log N\le \frac {1}{1+2c}\log 2+\frac {c}{1+2c}\log 3.$$
A lower bound is even easier to obtain. Namely, the probability that a fixed location (i.e., locus) $i$ does not appear in $F$ is $(1-p)^{4(n-1)}
\to e^{-2c}$, and then it is easy to see that the number of loci represented in $F$ is asymptotically $(1-e^{-2c})n$. As the other loci are neutral (in the sense that changing their alleles does not affect fitness), $n^{-1}\log N$ is asymptotically at least $e^{-2c}\log 2$. Clearly, this gives a lower bound on the exponential size of any cluster of viable genotypes.
If this were an accurate bound, it would imply that the space of genotypes is rather simple, in that almost all of its entropy would come from neutral loci. Appendix B presents two arguments which demonstrate that this is not the case. The derivations there are somewhat technical, but do provide more insight into random pair incompatibilities.
[**The structure of clusters.**]{}The derivations in Appendix B show that every viable genotype is connected through mutation to a fairly substantial viable sub-cube. In this sub-cube, alleles on at most a proportion $r_u(c)<1$ of loci are fixed (to 0 or 1) while the remaining proportion $1-r_u(c)$ could be varied without effect on fitness. Note from Figure 4 in the Appendix B that $1-r_u(c)\ge 0.3$ for all $c$, and that such a phenomenon is extremely unlikely on uncorrelated landscapes. Note also that, for $c<1$, $N\ge 2^{(1-r_u(c))n}$ [a. a. s.]{} and so the lower bound on $N$ can be written as $$\label{low_bound}
\liminf\frac 1n\log N\ge (1-r_u(c))\log 2.$$
[**The number of clusters.**]{}The natural next question concerns the number of clusters $R$ when $c<1$. This again has quite a surprising answer, unparalleled in landscapes with rapidly decaying correlations. Namely, $R$ is [*stochastically bounded*]{}, that is, for every ${\epsilon}>0$ there exists a $z=z({\epsilon})$ such that $P(R\le z)>1-{\epsilon}$ for all $n$. As there is some confusion in the literature as to whether it is even possible to get more than one cluster [@BMW], Appendix C presents a sketch of the results which will appear in Pitman (unpub.). There we also show that the limiting probability of a unique cluster is $\sqrt{(1-c)e^c}$.
Asymptotically, a unique cluster has a better than even chance of occurring for $c$ below about $0.9$, and is [*very*]{} likely to occur for small $c$, though of course not [a. a. s.]{} so. To confirm this, we ran simulations for $n=20$ and $c$ ranging from $0.01$ to $1$ in steps of $0.01$ (again 1000 trials in each case) and obtained the distribution of the number of clusters depicted in Figure 3. The results suggest that the convergence to the limiting distribution is rather slow for $c$ close to 1, and that the likelihood of a unique cluster increases for low $n$.
[Figure 3: simulated distribution of the number of clusters of viable genotypes for $n=20$ and $c$ between $0.01$ and $1$.]
To summarize, in the presence of random pairwise incompatibilities, the set of viable genotypes is, when nonempty, divided into a stochastically bounded number of connected clusters, where a unique cluster is usually the most likely possibility. These clusters are all of exponentially large size (with bounds given by equations (\[up\_bound\]) and (\[low\_bound\])); in fact, they all contain sub-cubes of dimension at least $(1-r_u(c))n$. However, the proportion of viable genotypes among all $2^n$ genotypes is exponentially small, by equation (\[up\_bound\]).
Multiallelic loci
-----------------
Here we assume that at each locus there can be $a\ (\ge 2)$ alleles (cf., @Rei). In this case, the genotype space is the generalized hypercube ${{\mathcal G}}_a=\{0,\dots, a-1\}^n$. For $a=3$ this could be interpreted as the genotype space of diploid organisms without [*cis-trans*]{} effects [@gav97b], $a=4$ corresponds to DNA sequences, and $a=20$ corresponds to proteins. Much larger values of $a$ can correspond to a number of alleles at a protein coding locus and we will see later that there is not much difference between this model and a natural continuous space model.
We will assume that each pair of alleles at distinct loci, out of the total number of $a^2\binom{n}{2}$ such pairs, is independently incompatible with probability $$p=\frac{c}{2n}.$$ The main question we are interested in here is for which values of $c$ viable genotypes exist [[a. a. s.]{}]{}
Clearly, if $N$ is the number of viable genotypes, then the expectation $$E(N)=a^n(1-p)^{\binom{n}{2}}\approx\exp(n \log a-{\textstyle\frac 14}cn),$$ and so there are [a. a. s.]{} no viable genotypes when $c>4\log a$. On the other hand, clearly there are viable genotypes (with all positions filled by 0’s and 1’s) when $c<1$. It turns out that the first of these trivial bounds is much closer to the critical value when $a$ is large. Before we proceed, however, we state a sharp threshold result from [@Mol]: there exists a function $\gamma=\gamma(n,a)$ so that for every ${\epsilon}>0$,
- if $c>\gamma+{\epsilon}$, then a. a. s. $N=0$.
- if $c<\gamma-{\epsilon}$, then a. a. s. $N>0$.
In words, for a fixed $a$, the probability of the event that $N\ge 1$ transitions sharply from large to small as $np$ varies. As it is not proved that $\lim_{n\to\infty}\gamma(n,a)$ exists, it is in principle possible that the place of this sharp transition fluctuates as $n$ increases (although it must of course remain within $[1, 4\log a]$).
Our main result in this section is $$\label{gamma}
\gamma=4\log a-o(1), \text{ as }a\to\infty.$$ This somewhat surprising result is proved in Appendix D by the second moment method, as developed in [@AM] and [@AP].
Continuous phenotype spaces
---------------------------
Here we extend the model of pair incompatibilities for the case of continuous phenotypic space $\cal{P}$. Again, we have a small $r>0$ as a parameter. For each of $(i,j)$, $i<j$, we consider independent Poisson point location $\Pi_{ij}$ in the unit square $[0,1]\times[0,1]$, of rate ${\lambda}=c/(2n)$. (Equivalently, choose Poisson(${\lambda}$) number of points uniformly at random in $[0,1]\times[0,1]$.) Then we declare $a\in {{\mathcal P}}$ inviable if there exist $i<j$ so that $(a_i,a_j)$ is within $r$ of $\Pi_{ij}$. Again, we use the two-dimensional $\ell^\infty$ norm for distance. Our procedure can be visualized as throwing a random number of $(n-2)$-dimensional square tubes of inviable phenotypes into the phenotype space.
Our main result here is that the existence threshold is on the order $c\approx -\log r/r^{2}$. Namely, we prove in the Appendix E that there exists a constant $C>0$ so that for small enough $r$,
- if $c>4\frac{-\log r}{r^2}$, then a. a. s. $N=0$.
- if $c<\frac{-\log r-C}{r^2}$, then a. a. s. $N>0$.
Complex incompatibilities
-------------------------
Here we assume that incompatibilities involve $K\ (\geq 2)$ diallelic loci [@orr96; @gav04]. The question whether a viable combination of genes exists is then equivalent to the [$K$-SAT]{} problem [@KV]. Even for $K=3$, this is an NP-complete problem [@KV], so there is no known polynomial algorithm to answer this question. The random case, which we now describe, is also much harder to analyze than the [$2$-SAT]{} one. Let $F$ be a random set to which any of the $2^K\binom n K$ incompatibilities belongs independently with probability $$p=\frac {K!}{2^K}\cdot \frac c{n^{K-1}}.$$ Here $c=c(K)$ is a constant, and the above form has been chosen to make the number of incompatibilities in $F$ asymptotically $cn$. (Note also the agreement with the definition of $p$ in Section 5.1 when $K=2$.) For a fixed $K$, it has been proved [@Fri] that the probability that a viable genotype exists drops sharply from near 1 to near 0 as $c$ increases. However, the location of the drop has not been proved to converge as $n\to\infty$. Instead, a lot of effort has been invested in obtaining good bounds. For example [@AP], for $K=3$, $c<3.42$ implies [[a. a. s.]{}]{} existence of a viable genotype, while $c>4.51$ implies [a. a. s.]{} nonexistence (the sharp constant is estimated to be about $4.48$; see e.g. @BMW). For $K=4$ the best current bounds are $7.91$ and $10.23$. For large $K$, the transition occurs at $c=2^K\log 2-{{\mathcal O}}(K)$ [@AP].
Techniques from statistical physics [@BMW] strongly suggest that, for $K\ge 3$, there is another phase transition, which for $K=3$ occurs at about $c=3.96$. For smaller $c$, the viable genotypes are conjectured to be contained in a [*single*]{} cluster. For larger $c$, the space of viable genotypes (if nonempty) is divided into exponentially many connected clusters.
Perhaps more relevant to genetic incompatibilities is the following [*mixed*]{} model (commonly known as [$(2+p)$-SAT]{}; see @MZ). Assume that every 2-incompatibility is present with probability $c_2/(2n)$, while every 3-incompatibility is present with probability $3c_3/(4n^2)$. The normalizations are chosen so that the numbers of the two types of incompatibilities are asymptotically $c_2 n$ and $c_3 n$, respectively.
If $c_2$ (resp. $c_3$) is very small, then the respective incompatibility set affects a very small proportion of loci, therefore $c_3$ (resp. $c_2$) determines whether a viable genotype is likely to exist. Intuitively, one also expects that 2-incompatibilities should be more important than 3-incompatibilities as one of the former type excludes more genotypes than one of the latter type. A careful analysis confirms this. First observe that $c_2>1$ implies [a. a. s.]{} non-existence of a viable genotype. The surprise [@MZ; @AKKK] is that if $c_3$ is small enough, $c_2<1$ implies [a. a. s.]{} existence of viable genotypes, so the 3-incompatibilities do not change the threshold. This is established in [@MZ] by a physics argument for $c_3<0.703$, while [@AKKK] gives a rigorous argument for $c_3<2/3$. Therefore, even if their numbers are on the same scale, if the more complex incompatibilities are rare enough compared to the pairwise ones, their contribution to the structure of the space of viable genotypes is not essential.
Notes on neutral clusters in the discrete [*NK*]{} model
========================================================
The model considered here is a special case of the discretized NK model [@kau93], introduced in [@NE]. This model features $n$ diallelic loci, each of which interacts with $K$ other loci. To have a concrete example, assume that the loci are arranged on a circle, so that $n+1\equiv 1$, $n+2\equiv 2$, etc., and let the interaction [*neighborhood*]{} of the $i$’th locus consist of itself and the $K$ loci to its right, $i+1, \dots, i+K$. For a given genotype $x\in{{\mathcal G}}=\{0,1\}^n$, the neighborhood configuration of the $i$’th locus is then given by ${{\mathcal N}}_i(x)= (x_i, x_{i+1}, \dots, x_{i+K})\in \{0,1\}^{K+1}$. To each locus and to each possible configuration in its neighborhood we independently assign a binary fitness contribution. To be more precise, we choose the $2^{K+1}n$ numbers $v_i(y)$, $i=1, \dots, n$ and $y\in \{0,1\}^{K+1}$, to be independently 0 or 1 with equal probability, and interpret $v_i(y)$ as the fitness contribution of locus $i$ when its neighborhood configuration is $y$. The fitness of a genotype $x$ is then the sum of contributions from each locus: $$w(x)=\sum_{i=1}^n v_i({{\mathcal N}}_i(x)).$$ In [@kau93], the values $v_i$ were taken from a continuous distribution. In [@NE], these values were integers in the range $[0,F-1]$, so that our model is the special case $F=2$. [*Neutral clusters*]{} are connected components of genotypes with the same fitness.
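The discretized $NK$ fitness with circular neighborhoods is straightforward to code; the sketch below is ours and implements the binary-contribution case $F=2$, with 0-based locus indices taken modulo $n$.

```python
import random
from itertools import product

def random_nk_tables(n, K, rng):
    """For each locus i, a random 0/1 contribution v_i(y) for each of the 2^(K+1)
    possible neighborhood configurations y = (x_i, x_{i+1}, ..., x_{i+K})."""
    return [{cfg: rng.randint(0, 1) for cfg in product((0, 1), repeat=K + 1)}
            for _ in range(n)]

def nk_fitness(x, tables, K):
    """w(x): sum over loci of the contribution of each circular neighborhood."""
    n = len(x)
    return sum(tables[i][tuple(x[(i + d) % n] for d in range(K + 1))]
               for i in range(n))

rng = random.Random(0)
n, K = 10, 1
tables = random_nk_tables(n, K, rng)
x = tuple(rng.randint(0, 1) for _ in range(n))
y = x[:3] + (1 - x[3],) + x[4:]            # flip the allele at one locus
print(nk_fitness(x, tables, K), nk_fitness(y, tables, K))
```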
The $K=0$ case is easy but nevertheless illustrative. Namely, a mutation at locus $i$ will not change fitness iff $v_i(0)=v_i(1)$; let $D$ be the number of such loci. Then $D\sim n/2$ [a. a. s.]{}, the number of different fitness values is $n-D+1$, each neutral cluster is a sub-cube of dimension $D$, and there are exactly $2^{n-D}$ neutral clusters.
The next simplest situation is when $K=1$. Let $D_1$ be the number of loci $i$ for which $v_i$ is constant. Then $D_1\sim n/8$ [a. a. s.]{}, and each neutral cluster contains a sub-cube of dimension $D_1$. Moreover, let $D_2$ be the number of loci $i$ for which $v_i(00)=v_i(01)\ne v_i(10)=v_i(11)$. Note that any two genotypes that differ at such a locus $i$ must belong to different neutral clusters, and so the number of different neutral clusters is at least $2^{D_2}$. Thus there are exponentially many of them, as again $D_2\sim n/8$ [[a. a. s.]{}]{} This division of the genotype space into exponentially many clusters of exponential size persists for every $K$, although the distribution of the numbers and sizes of these clusters is not well understood (see @NE for simulations for $n=20$).
Finally, we mention that the question of whether a genotype with the maximal possible fitness $n$ exists for a given $K$ is in many ways related to issues in incompatibility models [@CJK].
Discussion
==========
In this section we summarize our major findings and provide their biological interpretation.
The previous work on neutral and nearly neutral networks in multidimensional fitness landscapes has concentrated exclusively on genotype spaces in which each individual (or a group of individuals) is characterized by a discrete set of genes. However many features of biological organisms that are actually observable and/or measurable are described by continuously varying variables such as size, weight, color, or concentration. A question of particular biological interest is whether (nearly) neutral networks are as prominent in a continuous phenotype space as they are in the discrete genotype space. Our results provide an affirmative answer to this question. Specifically, we have shown that in a simple model of random fitness assignment, viable phenotypes are likely to form a large connected cluster even if their overall frequency is very low provided the dimensionality of the phenotype space, $n$, is sufficiently large. In fact, the percolation threshold for the probability of being viable scales with $n$ as $1/2^n$ and, thus, decreases much faster than $1/n$ which is characteristic of the analogous discrete genotype space model.
Earlier work on nearly neutral networks has been limited to consideration of the relationship between genotype and fitness. Any phenotypic properties that usually mediate this relationship in real biological organisms have been neglected. In Section 4, we proposed a novel model in which phenotype is introduced explicitly. In our model, the relationships both between genotype and phenotype and between phenotype and fitness are of many-to-one type, so that neutrality is present at both the phenotype and fitness levels. Moreover, this model results in a correlated fitness landscape in which the correlation function can be found explicitly. We studied the effects of phenotypic neutrality and correlation between fitnesses on the percolation threshold and showed that the conditions most conducive to the formation of the giant component arise when the correlations are at the point of phase transition between local and global. To explore the robustness of our conclusions, we then looked at a simplistic but mathematically illuminating model in which there is a correlation between conformity (i.e., phenotypic neutrality) and fitness. This model supported our conclusions.
In Section 5, we studied a number of models that have been recently proposed and explored within the context of studying speciation. In these models, fitness is assigned to particular gene/trait combinations, and the fitness of the whole organism depends on the presence or absence of incompatible combinations of genes or traits. In these models, the correlations of fitnesses are so high that local methods lead to wrong conclusions. First, we established the connection between these models and $K$-[SAT]{} problems, prominent in computer science. Then we analyzed the conditions for the existence of viable genotypes, their number, as well as the structure and the number of clusters of viable genotypes. These questions have not been studied previously. Among other things, we showed that the number of clusters is stochastically bounded and that each cluster contains a very large sub-cube. The majority of our results are for the case of pairwise incompatibilities between diallelic loci, but we also looked at multiple alleles and complex incompatibilities. Moreover, we generalized some of our results to continuous phenotype spaces.
At the end, we provided some additional results on the size, number and structure of neutral clusters in the discrete $NK$ model.
Some more general lessons of our work are that
- Correlations may help or hinder connectivity in fitness landscapes. Even when correlations are positive and tunable by a single parameter, it may be advantageous (for higher connectivity) to increase them only to a limited extent.
- Averages (i.e., expected values) can easily lead to wrong conclusions, especially when correlations are strong. Nevertheless, they may still be useful with a crafty choice of relevant statistics.
- Very high correlations may fundamentally change the structure of connected clusters. For example, clusters may look locally more like cubes than trees and their number may be reduced dramatically.
- Necessary analytical techniques may be unexpected and quite sophisticated; for example, they may require detailed understanding of random graphs, spin-glass machinery, or decision algorithms.
[ACKNOWLEDGMENTS. This work was supported by the Defense Advanced Research Projects Agency (DARPA), by National Institutes of Health (grant GM56693), by the National Science Foundation (grants DMS-0204376 and DMS-0135345), and by Republic of Slovenia’s Ministry of Science (program P1-285).]{}
Achlioptas, D., Kirousis, L. M., Kranakis, E., and Krizanc, D. (2001). Rigorous results for $(2+p)$-[SAT]{}. , 265:109–129.
Achlioptas, D. and Moore, C. (2004). Random k-[SAT]{}: two moments suffice to cross a sharp threshold. , 17:947–973.
Achlioptas, D. and Peres, Y. (2004). The threshold for random $k$-[SAT]{} is $2\sp k\log 2-o(k)$. , 17:947–973.
Athreya, K. and Ney, P. (1971). . Springer-Verlag (reprinted by Dover 2004).
Barbour, A. D., Holst, L., and Janson, S. (1992). . Oxford University Press.
Berger, N. (2004). A lower bound for the chemical distance in sparse long-range percolation models. .
Biroli, G., Monasson, R., and Weigt, M. (2000). A variational description of the ground state structure in random satisfiability problems. , 14:551–568.
Biskup, M. (2004). On the scaling of the chemical distance in long-range percolation models. , 32:2938–2977.
Bollobás, B. (2001). . Cambridge University Press.
Bollobás, B., Kohayakawa, Y., and Łuczak, T. (1992). The evolution of random subgraphs of the cube. , 3:55–90.
Bollobás, B., Kohayakawa, Y., and Łuczak, T. (1994). On the evolution of random [Boolean]{} functions. In [*Extremal problems for finite sets (Visegrád, 1991)*]{}, pages 137–156. Bolyai Society Mathematical Studies, 3, János Bolyai Mathematical Society, Budapest.
Boufkhad, Y. and Dubois, O. (1999). Length of prime implicants and number of solutions of random [CNF]{} formulae. , 215:1–30.
Burch, C. L. and Chao, L. (1999). Evolution by small steps and rugged landscapes in the [RNA]{} virus phi 6. , 151:921–927.
Burch, C. L. and Chao, L. (2004). Epistasis and its relationship to canalization in the [RNA]{} virus phi 6. , 167:559–567.
Choi, S.-S., Jung, K., and Kim, J. H. (2005). Phase transition in a random [NK]{} landscape model. In [*Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, [Washington, DC]{}*]{}, pages 1241–1248. ACM Press.
Cook, S. A. (1971). The complexity of theorem proving procedures. In [*Proceedings of the Third Annual ACM Symposium on the Theory of Computing*]{}, pages 151–158. ACM.
Coyne, J. and Orr, H. A. (2004). . Sinauer Associates, Inc., Sunderland, Massachusetts.
de la Vega, W. F. (2001). Random [2-SAT]{}: results and problems. , 265:131–146.
Derrida, B. and Peliti, L. (1991). Evolution in flat landscapes. , 53:255–282.
Eigen, M., Mc[C]{}askill, J., and Schuster, P. (1989). The molecular quasispecies. , 75:149–263.
Elena, S. F. and Lenski, R. E. (2003). Evolution experiments with microorganisms: The dynamics and genetic bases of adaptation. , 4:457–469.
Fontana, W. and Schuster, P. (1998). Continuity in evolution: on the nature of transitions. , 280:1451–1455.
Friedgut, E. (1999). Necessary and sufficient conditions for sharp thresholds of graph properties, and the $k$-[SAT]{} problem. , 12:1017–1054.
Gavrilets, S. (1997). Evolution and speciation on holey adaptive landscapes. , 12:307–312.
Gavrilets, S. (2003). Models of speciation: what have we learned in 40 years? , 57:2197–2215.
Gavrilets, S. (2004). . Princeton University Press, Princeton, NJ.
Gavrilets, S. and Gravner, J. (1997). Percolation on the fitness hypercube and the evolution of reproductive isolation. , 184:51–64.
Gavrilets, S. and Hastings, A. (1996). Founder effect speciation: a theoretical reassessment. , 147:466–491.
Häggström, O. (2001). Coloring percolation clusters at random. , 96:213–242.
Huynen, M. A., Stadler, P. F., and Fontana, W. (1996). Smoothness within ruggedness: the role of neutrality in adaptation. , 93:397–401.
Janson, S., Łuczak, T., and Rucinski, A. (2000). . Wiley.
Kauffman, S. A. (1993). . Oxford University Press, Oxford.
Kauffman, S. A. and Levin, S. (1987). Towards a general theory of adaptive walks on rugged landscapes. , 128:11–45.
Korte, B. and Vygen, J. (2005). . Springer, 3rd edition.
Lenski, R. E., Ofria, C., Collier, T. C., and Adami, C. (1999). Genome complexity, robustness and genetic interactions in digital organisms. , 400:661–664.
Lipman, D. J. and Wilbur, W. J. (1991). Modeling neutral and selective evolution of protein folding. , 245:7–11.
Martinez, M. A., Pezo, V., Marlière, P., and Wain-Hobson, S. (1996). Exploring the functional robustness of an enzyme by [*in vitro*]{} evolution. , 15:1203–1210.
Molloy, M. (2003). Models for random constraint satisfaction problems. , 32:935–949.
Monasson, R. and Zecchina, R. (1997). Statistical mechanics of the random [K]{}-satisfiability model. , 56:1357–1370.
Newman, M. E. J. and Engelhardt, R. (1998). Effects of selective neutrality on the evolution of molecular species. , 265:1333–1338.
Orr, H. A. (1995). The population genetics of speciation: the evolution of hybrid incompatibilities. , 139:1803–1813.
Orr, H. A. (1997). Dobzhansky, [Bateson]{}, and the genetics of speciation. , 144:1331–1335.
Orr, H. A. (2006a). The distribution of fitness effects among beneficial mutations in [Fisher]{}’s geometric model of adaptation. , 238:279–285.
Orr, H. A. (2006b). The population genetics of adaptation on correlated fitness landscapes: The block model. , 60:1113–1124.
Orr, H. A. and Orr, L. H. (1996). Waiting for speciation: the effect of population subdivision on the waiting time to speciation. , 50:1742–1749.
Orr, H. A. and Turelli, M. (2001). The evolution of postzygotic isolation: accumulating [Dobzhansky]{}-[Muller]{} incompatibilities. , 55:1085–1094.
Palasti, I. (1971). On the threshold distribution function of cycles in a directed random graph. , 6:67–73.
Penrose, M. D. (1996). Continuum percolation and [Euclidean]{} minimal spanning trees in high dimensions. , 6:528–544.
Pigliucci, M. (2006). . University of Chicago Press, Chicago.
Reidys, C. M. (2006). Combinatorics of genotype-phenotype maps: an [RNA]{} case study. In Percus, A., Istrate, G., and Moore, C., editors, [ *Computational Complexity and Statistical Physics*]{}, pages 271–284. Oxford University Press.
Reidys, C. M., Forst, C. V., and Schuster, P. (2001). Replication and mutation on neutral networks. , 63:57–94.
Reidys, C. M. and Stadler, P. F. (2001). Neutrality in fitness landscapes. , 117:321–350.
Reidys, C. M. and Stadler, P. F. (2002). Combinatorial landscapes. , 44:3–54.
Reidys, C. M., Stadler, P. F., and Schuster, P. (1997). Generic properties of combinatory maps: neutral networks of [RNA]{} secondary structures. , 59:339–397.
Rost, B. (1997). Protein structures sustain evolutionary drift. , 2:S19–S24.
Schuster, P. (1995). How to search for [RNA]{} structures. theoretical concepts in evolutionary biotechnology. , 41:239–257.
Sedgewick, R. (1997). Addison-Wesley.
Skipper, R. A. (2004). The heuristic role of [Sewall Wright]{}’s 1932 adaptive landscape diagram. , 71:1176–1188.
Toman, E. (1979). The geometric structure of random boolean functions. , 35:111–132.
Wilke, C. O., Wang, J. L., Ofria, C., Lenski, R. E., and Adami, C. (2001). Evolution of digital organisms at high mutation rates leads to survival of the flattest. , 412:331–333.
Woods, R., Schneider, D., Winkworth, C. L., Riley, M. A., and Lenski, R. E. (2006). Tests of parallel molecular evolution in a long-term experiment with [[*Escherichia coli*]{}]{}. , 103:9107–9112.
Wright, S. (1932). The roles of mutation, inbreeding, crossbreeding and selection in evolution. In Jones, D. F., editor, [*Proceedings of the Sixth International Congress on Genetics*]{}, volume 1, pages 356–366, Austin, Texas.
Appendix {#appendix .unnumbered}
========
Appendix A. Proof of equation (\[Px-y\]). {#appendix-a.-proof-of-equationpx-y. .unnumbered}
-----------------------------------------
To prove equation (5), we assume that $\lambda_e<1$ and show that for a fixed $k$ (which does not grow with $n$), the event that $x$ and $y$ at distance $k$ are in the same conformist cluster is most likely to occur because $x$ and $y$ are connected via the shortest possible path. Indeed, the dominant term $k!p_e^k$ is the expected number of conformist pathways between $x$ and $y$ that are of shortest possible length $k$. This easily follows from the observation that on a shortest path there is no opportunity to backtrack; each mutation must be toward the other genotype. We can assume that $x$ is the all 0’s genotype and $y$ is the genotype with 1’s in the first $k$ positions and 0’s elsewhere. There are $k!$ orders in which the 1’s can be added.
To obtain the lower bound we use inclusion-exclusion on the probability that $x {\leftrightsquigarrow}y$ through a shortest path. Let $\mathcal{I}_l=\mathcal{I}_l(x,y)$ be the set of all paths of length $l$ between $x$ and $y$. Then $$P(x {\leftrightsquigarrow}y) \geq \sum_{\alpha \in \mathcal{I}_k} P(A_\alpha) -
\sum_{\alpha \neq \beta \in \mathcal{I}_k} P(A_\alpha\cap A_\beta)$$ where $A_\alpha$ is the event that a particular path $\alpha$ consists entirely of conformist edges. Notice that two distinct paths of the same length differ by at least two edges. Thus, we get the following upper bound $$\sum_{\alpha, \beta} P(A_\alpha\cap A_\beta) < (k!)^2 p_e^{k+2},$$ and the lower bound in (5) follows.
The upper bound is a little more difficult to obtain (it is only here that we use $\lambda_e<1$) and we need some notation. Each genotype can be identified with the set of 1’s that it contains, so for any two genotypes $u$ and $v$ we let $u \bigtriangleup v$ denote the set of loci on which they differ. Notice that if $|u \bigtriangleup v|$ is even (resp. odd) then every path between $u$ and $v$ is of even (resp. odd) length, because each mutation which alters the allele at a locus not in $u \bigtriangleup v$ must later be compensated for.
To estimate the expected number of conformist pathways, we will need to bound the number of paths of length $l$ between $x$ and $y$. This number is at most $$k!\binom{l}{m}m!n^{m}\quad \text{ where }\quad m=\frac{l-k}{2}.$$ We show this via the methods of [@BKL1], who obtain an estimate for the number of cycles of a given length through a fixed vertex of the cube.
Given a path, say $x=v_0,v_1,\ldots,v_l=y$, between $x$ and $y$, let us associate the sequence $(\epsilon_1i_1,\ldots,\epsilon_l i_l)$ where $$v_j \bigtriangleup v_{j-1}=\{i_j\}
\quad\text{and}\quad
\epsilon_j=
\left\{
\begin{array}{l}
+1\qquad\text{ if } v_j=v_{j-1}\cup\{i_j\} \\
-1\qquad\text{ if } v_j=v_{j-1}\setminus\{i_j\}
\end{array}
\right.$$ $j=1,\ldots,l$. Since distinct paths will have distinct sequences we can bound the number of paths by finding an upper bound for the number of sequences.
Note that there must be $m+k$ positive entries, which occur at $\binom{l}{m+k}=\binom{l}{m}$ possible locations. The absolute values of $m$ of these entries are chosen freely from $\{1,\dots, n\}$, while the remaining $k$ must be the integers $1,\ldots,k$. There are $n^mk!$ ways to do this. We are free to order the $m$ negative entries and the bound follows.
We now assume that $d(x,y)$ is even and relabel $d(x,y)=2k$. We omit the similar calculation for odd distances. Define $b=-3k/(2\log{\lambda}_e)$ and $t=\lfloor b\log n\rfloor$. Then the expected number of conformist paths between $x$ and $y$ of length greater than $2k$ can be bounded as follows: $$\begin{aligned}
\sum_{l\geq k+1} \sum_{\mathcal{I}_{2l}} p_e^{2l}&=&
\sum_{k+1\leq l< t}
\sum_{\mathcal{I}_{2l}}p_e^{2l}+\sum_{l\geq t}
\sum_{\mathcal{I}_{2l}}p_e^{2l} \\
&<&\sum_{k+1\leq l< t} \binom{2l}{l-k}n^{l-k}(l-k)!(2k)!p_e^{2l}
+\sum_{l\geq t}n^{2l}p_e^{2l} \\
&=&\sum_{k+1\leq l< t}
(2l)^{l-k}n^{l-k}p_e^{2(l-k)}(2k)!p_e^{2k}
+\sum_{l\geq t}{\lambda}_e^{2l} \\
&<&(2k)!p_e^{2k}\sum_{l\geq k+1}(2b{\lambda}_e p_e\log n)^{l-k}+O({\lambda}_e^{2b\log n})
\\
&=&k (2k)!p_e^{2k} O(p_e\log{n})+O(n^{2b\log {\lambda}_e}) \\
&=&k (2k)!p_e^{2k} O\left( n^{-1} \log{n} \right) .\end{aligned}$$
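To summarize the two estimates in one line (our own bookkeeping, for even distance $2k$; the odd case is analogous): for fixed $k$, $$P(x {\leftrightsquigarrow}y)=(2k)!\,p_e^{2k}\Big(1+O\big(p_e^{2}\big)+O\big(n^{-1}\log n\big)\Big),$$ with the implied constants depending on $k$ only.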
Appendix B. Cluster structure under random pair incompatibilities. {#appendix-b.-cluster-structure-under-random-pair-incompatibilities. .unnumbered}
------------------------------------------------------------------
Here we show that, under the random pairwise incompatibilities model introduced in Section 5.1, connected clusters include large subcubes. The basic idea comes from [@BD]. A configuration $a\in \{0,1,*\}^n$ specifies a sub-cube of ${{\mathcal G}}$, if the $*$’s are thought of as places which could be filled by either a 0 or a 1. The number of non-$*$’s is the [*length*]{} of $a$. Call $a$ an [*implicant*]{} if the entire sub-cube specified by $a$ is viable.
We present two arguments, beginning with the one which works better for small $c$. Let the auxiliary random variable $X$ be the number of pairs of loci $(i,j)$, $i<j$, for which:
- (E1) There is exactly one incompatibility involving alleles on $i$ and $j$.
- (E2) There is no incompatibility involving an allele on either $i$ or $j$, and an allele on $k\notin\{i,j\}$.
Assume, without loss of generality, that the incompatibility which satisfies (E1) is $(1_i, 1_j)$. Then the fitness of all genotypes which have any of the allele assignments $0_i0_j$, $0_i1_j$ and $1_i0_j$, and agree on other loci, is the same. Note also that all pairs of loci which satisfy (E1) and (E2) must be disjoint. Therefore, if $x$ is any viable genotype, its cluster contains an implicant with the number of $*$’s at least $X$ plus the number of free loci. To determine the size of $X$, note that the expectation $$E(X)={\binom{n}{2}}4p(1-p)^3(1-p)^{8(n-2)}\sim ce^{-4c}n$$ and furthermore, by an equally easy computation, $$E(X^2)-E(X)^2={{\mathcal O}}(n),$$ so that $X\sim ce^{-4c}n$ [a. a. s.]{} It follows that every cluster [a. a. s.]{} contains at least $\exp\big(((e^{-2c}+ce^{-4c})\log 2-{\epsilon})n\big)$ viable genotypes, for any ${\epsilon}>0$.
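For orientation only (this is our own arithmetic, not needed in the argument), at $c=1/2$ the exponent evaluates to $$e^{-2c}+ce^{-4c}\Big|_{c=1/2}=e^{-1}+\tfrac{1}{2}e^{-2}\approx 0.368+0.068=0.436,$$ so [a. a. s.]{} every cluster contains at least roughly $2^{0.43\,n}$ viable genotypes.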
The second argument is a refinement of the one in [@BD] and works better only for larger $c$. Call an implicant $a$ a [*prime implicant (PI)*]{} if at any locus $i$, replacement of either $0_i$ or $1_i$ by $*_i$ results in a non-implicant. Moreover, we call $a$ a [*least prime implicant (LPI)*]{} if it is a PI, and the following two conditions are satisfied. First, if all the $*$’s are changed to 0’s, then no change from $1_i$ to $0_i$ results in a viable genotype. Second, no change of $*_i1_j$ to $1_i*_j$, where $i<j$, results in an implicant.
Now, every viable genotype must have an LPI in its cluster. To see this, assume we have a PI for which the first condition is not satisfied. Make the indicated change, then replace some 0’s and 1’s by $*$’s until you get a prime implicant. If the second condition is violated, make the resulting switch, then again make some replacements by $*$’s until you arrive at a PI. Either of these two operations moves within the same cluster, keeps the number of 1’s nonincreasing, and moves their positions to the left. Therefore, the procedure must at some point end, resulting in an LPI in the same cluster.
For a sub-cube $a$ to be an LPI, the following conditions need to be satisfied:
- Every non-$*$ has to be compatible with every other non-$*$, and with both 0 and 1 on each of the $*$’s.
- Any of the four 0,1 combinations on any pair of $*$’s must be compatible.
- (LPI1) Pick an $i$ with allele 1, that is, a $1_i$. Then $0_i$ must be incompatible with at least one non-$*$, or at least one 0 on a $*$. Furthermore, if $0_i$ has an incompatibility with a 0 on a $*$ to its left, it has to have another incompatibility, either with a non-$*$, or with a 0 or a 1 on a $*$.
- (LPI2) Pick a $0_i$. Then $1_i$ must be incompatible with a non-$*$, or a 0 or a 1 on a $*$.
The first two conditions make $a$ an implicant, and the last two an LPI. Note also that these conditions are independent.
Let now $X$ be the number of LPI of length $rn$. We will identify a function $L_4=L_4(r,c)$ such that $$\frac 1n\log E(X)\le L_4.$$ Let $$L_1=L_1({\beta},p,z)=z({\beta}\log p+(1-{\beta})\log(1-p)-{\beta}\log{\beta}-(1-{\beta})\log(1-{\beta})).$$ This is the exponential rate for the probability that $zn$ Bernoulli trials with success probability $p$ produce a fraction ${\beta}$ of successes (that is, ${\beta}zn$ successes), i.e., this probability is $\approx \exp(L_1n)$. Further, if $\kappa, {\epsilon},{\delta}\in(0,1)$ are fixed, then among sub-cubes with $rn$ non-$*$’s and ${\alpha}n$ 1’s (${\alpha}\le r$), the proportion which have ${\epsilon}n$ 1’s in $[\kappa n, n]$ and ${\delta}n$ $*$’s in $[1,\kappa n]$ has exponential rate $$\begin{aligned}
L_2=&L_2(r,c,\kappa, {\alpha}, {\epsilon}, {\delta})\\
=&L_1(({\alpha}-{\epsilon})/\kappa, {\alpha}, \kappa)+L_1({\epsilon}/(1-\kappa), {\alpha}, 1-\kappa)\\
&+L_1({\delta}/(\kappa-{\alpha}+{\epsilon}), 1-r, \kappa-{\alpha}+{\epsilon})+ L_1((1-r-{\delta})/(1-\kappa-{\epsilon}), 1-r, 1-\kappa-{\epsilon}).
\end{aligned}$$ (Here all four first arguments in $L_1$ are in $[0, 1]$, or else the rate is $-\infty$.)
The expected number of LPI, with $r,\kappa, {\epsilon},{\delta}$ given as above, has exponential rate at most (and this is only an upper bound) $$\begin{aligned}
L_3=&L_3(r,c,\kappa, {\alpha}, {\epsilon}, {\delta})\\
=&-(1-r)\log(1-r)-{\alpha}\log{\alpha}-(r-{\alpha})\log(r-{\alpha})\\
&-c(1-r/2)^2\\
&+(r-{\alpha})\log(1-\exp(-c(1-r/2)))\\
&+({\alpha}-{\epsilon})\log(1-\exp(-c/2))+{\epsilon}\log(1-\exp(-c/2)-{\textstyle\frac 12}{\delta}c\exp(-c(1-r/2)))\\
&+L_2(r,c,\kappa, {\alpha}, {\epsilon}, {\delta}).
\end{aligned}$$ The next to last line is obtained from (LPI1), as ${\epsilon}n$ 1’s must have ${\delta}n$ $*$’s on their left.
It follows that $L_4$ can be obtained by $$L_4(r,c)=\inf_\kappa\sup_{{\alpha}, {\epsilon},{\delta}} L_3(r,c,\kappa, {\alpha}, {\epsilon}, {\delta}).$$ If $L_4(r,c)<0$, then [a. a. s.]{} all LPI (for this $c$) have length at most $rn$. Numerical computations show that this gives a better bound than $1-e^{-2c}-ce^{-4c}$ for $c\ge 0.38$. Let us denote the best upper bound from the two estimates by $r_u(c)$. This function is computed numerically and plotted in Figure 3.
[Figure 3: the upper bound $r_u(c)$, plotted as a function of $c$.]
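The following is a minimal numerical sketch of this computation (it is not the code used to produce the figure or the constant $0.38$; the grid resolution, the treatment of boundary cases, and all function names are our own choices, and the argument $c$ of $L_2$ is dropped because it does not enter its formula). It evaluates $L_4(r,c)$ by a crude grid search; $r_u(c)$ is then obtained by comparing the largest $r$ with $L_4(r,c)\ge 0$ against the first bound $1-e^{-2c}-ce^{-4c}$.

```python
import numpy as np

def xlogx(x):
    # x*log(x) with the convention 0*log(0) = 0
    return 0.0 if x <= 0.0 else x * np.log(x)

def wlog(w, x):
    # w*log(x); the term is simply absent when its weight w is 0
    if w == 0.0:
        return 0.0
    return w * np.log(x) if x > 0.0 else -np.inf

def L1(beta, p, z):
    # rate of a fraction beta of successes among z*n Bernoulli(p) trials
    if z <= 0.0 or not (0.0 <= beta <= 1.0) or not (0.0 < p < 1.0):
        return -np.inf
    return z * (beta * np.log(p) + (1.0 - beta) * np.log(1.0 - p)
                - xlogx(beta) - xlogx(1.0 - beta))

def L2(r, kappa, alpha, eps, delta):
    d = [kappa, 1.0 - kappa, kappa - alpha + eps, 1.0 - kappa - eps]
    if min(d) <= 0.0:
        return -np.inf
    return (L1((alpha - eps) / d[0], alpha, d[0])
            + L1(eps / d[1], alpha, d[1])
            + L1(delta / d[2], 1.0 - r, d[2])
            + L1((1.0 - r - delta) / d[3], 1.0 - r, d[3]))

def L3(r, c, kappa, alpha, eps, delta):
    if not (0.0 < alpha < r < 1.0 and 0.0 <= eps <= alpha and 0.0 <= delta <= 1.0 - r):
        return -np.inf
    q = np.exp(-c * (1.0 - r / 2.0))
    return (-xlogx(1.0 - r) - xlogx(alpha) - xlogx(r - alpha)
            - c * (1.0 - r / 2.0) ** 2
            + wlog(r - alpha, 1.0 - q)
            + wlog(alpha - eps, 1.0 - np.exp(-c / 2.0))
            + wlog(eps, 1.0 - np.exp(-c / 2.0) - 0.5 * delta * c * q)
            + L2(r, kappa, alpha, eps, delta))

def L4(r, c, grid=15):
    # inf over kappa of sup over (alpha, eps, delta), on a coarse grid
    best = np.inf
    for kappa in np.linspace(0.05, 0.95, grid):
        sup = -np.inf
        for alpha in np.linspace(1e-3, r - 1e-3, grid):
            for eps in np.linspace(0.0, alpha, grid):
                for delta in np.linspace(0.0, 1.0 - r, grid):
                    sup = max(sup, L3(r, c, kappa, alpha, eps, delta))
        best = min(best, sup)
    return best

if __name__ == "__main__":
    # the sign of L4 indicates whether r = 0.9 is an admissible bound at c = 1
    print(L4(0.9, 1.0))
```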
Appendix C. Number of clusters under random pair incompatibilities {#appendix-c.-number-of-clusters-under-random-pair-incompatibilities .unnumbered}
------------------------------------------------------------------
In this section we briefly explain why the number of clusters under random pair incompatibilities is asymptotically a function of a Poisson random variable. There is a clear way to separate the genotype space into disconnected clusters. For example, if $F_1=\{(0_1,0_2), (1_2,0_3),(1_1,1_2)\}$, we see that every viable genotype has one of these two allele configurations on the first two loci: $C=0_11_2$ or $\overline{C}=1_10_2$. Since there are no viable genotypes with $0_10_2$ or $1_11_2$, there is no way to mutate from the viable genotypes with $0_11_2$ to the viable genotypes with $1_10_2$ without passing through an inviable genotype. However, if we add one incompatibility to $F_1$ to make $F_2=F_1\cup\{(0_1,1_2)\}$, then there are no longer any viable genotypes with the alleles $0_11_2$ and we return to a single cluster of viable genotypes.
Notice that the digraph $D_{F_1}$ contains the directed cycle $1_1 \to 0_2 \to 1_1$ and equivalently the directed cycle $1_2 \to 0_1 \to 1_2$. $D_{F_2}$ also contains these cycles, but there are now paths between them as well: $0_2 \to 0_1$ and $1_1 \to 1_2$.
Formally, a pair of complementary allele configurations $(C,\overline{C})$ on a set of $k \geq 2$ loci is defined to be a [*splitting pair*]{} if the digraph $D_F$ contains a directed cycle (in any order) on the alleles in $C$ (and equivalently on those in $\overline{C}$, which consist of reversed alleles in $C$) and does not contain a path between the alleles in $C$ and the alleles in $\overline{C}$. It should be clear from the example $F_1$ above that the existence of a splitting pair will create a barrier in the genotype space through which it is not possible to pass by mutations on viable genotypes. In fact, it is proved in Pitman (unpub.) that any two viable genotypes $u$ and $v$ will be disconnected in the fitness landscape if and only if the loci on which they differ contain a splitting pair.
Thus, the existence of viable genotypes on either side of a splitting pair (with each configuration of complementary alleles) ensures disconnected clusters. If there are $k$ splitting pairs in the formula $F$, and there are viable genotypes with each of the allele configurations in each of the splitting pairs, then there are $2^k$ clusters of viable genotypes. The restriction that there be viable genotypes on either side is asymptotically unlikely to make a difference, as we can fix one of the $2^k$ configurations of alleles and [a. a. s.]{} find a viable genotype on the remaining loci. Therefore the number of clusters of viable genotypes is [a. a. s.]{} equal to $2^X$, where $X$ is the number of splitting pairs, provided that $X$ is stochastically bounded; we will see shortly that the expectation $E(X)$ is indeed bounded. In fact, the next paragraph suggests that $X$ converges to a Poisson limiting distribution. (A detailed discussion of this issue will appear in Pitman (unpub.).)
It follows from [@Pal] or [@Bol] that the number of directed cycles of length $k$ in $D_F$ is asymptotically Poisson$(\lambda_k)$ with $\lambda_k = (2k)^{-1}c^k$. In particular, the expected number of splitting pairs converges to $\lambda=-\frac{1}{2} (\ln(1-c)+c)$. Moreover, the probability that there is no splitting pair converges to the product of the probabilities that the cycle of each length is absent [@Pal], which is $$\prod_{k=2}^\infty \exp{\left(-\frac{c^k}{2k}\right)} =
\exp{\left(\frac{ \ln{(1-c)}+c}{2}\right)} = [(1-c)e^c]^{\frac{1}{2}}.$$ In particular, this gives the limiting probability of a unique cluster.
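As a quick numerical illustration of the last two formulas (our own arithmetic): at $c=1/2$, $$\lambda=-\tfrac{1}{2}\big(\ln\tfrac{1}{2}+\tfrac{1}{2}\big)\approx 0.097,\qquad \big[(1-c)e^{c}\big]^{1/2}=\big(\tfrac{1}{2}e^{1/2}\big)^{1/2}\approx 0.908,$$ so the limiting probability of a unique cluster is roughly $0.91$.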
Appendix D. Proof of equation (\[gamma\]). {#appendix-d.-proof-of-equationgamma. .unnumbered}
------------------------------------------
In this section we assume that genotypes have multiallelic loci, which are subject to random pair incompatibilities. The model introduced in Section 5.2 is the most natural, but is not best suited for our second moment approach. Instead, we will work with the equivalent modified model with $m$ pair incompatibilities, each chosen independently at random, and the first and the second member of each pair chosen independently from the $an$ available alleles. We will assume that $m=\frac 14ca^2n$, label $c'=\frac 14c$, and denote, as usual, the resulting set of incompatibilities by $F$.
To see that these two models are equivalent for our purposes, first note that the number of incompatibilities which are [*not legitimate*]{}, in the sense that the two alleles are chosen from the same locus, is stochastically bounded in $n$. (In fact, it converges in distribution to a Poisson($c'a^2$) random variable.) Moreover, by the Poisson approximation to the birthday problem [@BHJ], the number of pairs of choices which result in the same incompatibility in this model is asymptotically Poisson($c'a^2/2$). In short, then, the procedure results in $m-{{\mathcal O}}(1)$ distinct legitimate incompatibilities. If $m$ in the modified model is increased to, say, $m'=m+n^{2/3}$, then the two models can be coupled so that the incompatibilities in the original model are included in those in the modified model. As the existence of a viable genotype becomes less likely when $m$ is increased, this demonstrates that (\[gamma\]) will follow once we show the following for the modified model: for every ${\epsilon}>0$ there exists a large enough $a$ so that $c'<\log a-{\epsilon}$ implies that $N\ge 1$ [a. a. s.]{}
To show this, we introduce the auxiliary random variable $$X=\sum_{\sigma \in {{\mathcal G}}_a}\prod_{I\in F}\left(w_01_{\{|I\cap\sigma|=0\}}+
w_11_{\{|I\cap\sigma|=1\}}\right),$$ where $1_A$ is the indicator of the set $A$. The size of the intersection $I\cap\sigma$ is computed by transforming both the incompatibility $I$ and the genotype $\sigma$ to sets of (indexed) alleles, and the weights $w_0$ and $w_1$ will be chosen later. To understand the statistic $X$ intuitively, note that when $w_0=w_1=1$, the product is exactly the indicator of the event that $\sigma$ is viable, and $X$ is then the number of viable genotypes $N$. In general, $X$ gives different scores to different viable genotypes; the crucial fact to note is that $X>0$ iff $N>0$. Therefore $$P(N>0)= P(X>0)\ge (E(X))^2/E(X^2),$$ which is how the second moment method is used [@AM].
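For completeness, the inequality invoked here is the usual second moment bound, obtained from the Cauchy-Schwarz inequality applied to $X=X\,1_{\{X>0\}}$: $$E(X)^2=E\big(X\,1_{\{X>0\}}\big)^2\leq E(X^2)\,P(X>0).$$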
As $$\begin{aligned}
&P(|\sigma\cap I|=0)=\left(\frac {a-1}a\right)^2, \\
&P(|\sigma\cap I|=1)=\frac {2(a-1)}{a^2}, \\
\end{aligned}$$ we have $$E(X)=a^n\left(w_0\left(\frac {a-1}a\right)^2+w_1\frac {2(a-1)}{a^2}\right)^m.$$ Moreover $$E(X^2)=\sum_{k=0}^n a^n \binom{n}{k}(a-1)^k(w_0^2 P(00)+2w_0w_1P(01)+w_1^2P(11)),$$ where $P(01)$ is the probability that $I$ has intersection of size $0$ with $\sigma=0_1\dots0_k0_{k+1}\dots 0_n$ and of size $1$ with $\tau=1_1\dots1_k0_{k+1}\dots 0_n$, and $P(00)$ and $P(11)$ are defined analogously. Thus, if $k={\alpha}n$, $$\begin{aligned}
&P(00)=\left(1-\frac{1+{\alpha}}a\right)^2,\\
&P(01)=\frac{2{\alpha}}a\left(1-\frac{1+{\alpha}}a\right),\\
&P(11)=\frac{2(1-{\alpha})}a\left(1-\frac{1+{\alpha}}a\right)+2\left(\frac{\alpha}a\right)^2.
\end{aligned}$$ Let $\Lambda=\Lambda_{a, w_0, w_1}({\alpha})$ be the $n$’th root of the $k=({\alpha}n)$’th term in the sum for $E(X^2)$, divided by $E(X)^2$. Hence $$\begin{aligned}
\Lambda=&\frac{(a-1)^{\alpha}}{a\cdot {\alpha}^{\alpha}(1-{\alpha})^{1-{\alpha}}}\\
&\times \frac
{\left( w_0^2\left(1-\frac{1+{\alpha}}a\right)^2+4w_0w_1\frac{{\alpha}}a\left(1-\frac{1+{\alpha}}a\right)
+2w_1^2\left(\frac{(1-{\alpha})}a\left(1-\frac{1+{\alpha}}a\right)+\left(\frac{\alpha}a\right)^2\right)
\right)^{c'a^2}}
{\left(w_0\left(\frac {a-1}a\right)^2+w_1\frac {2(a-1)}{a^2}\right)^{2c'a^2}}.
\end{aligned}$$ Let ${\alpha}^*=(a-1)/a$. A short computation shows that $\Lambda=1$ when ${\alpha}={\alpha}^*$.
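The computation behind this claim is short enough to record here. At ${\alpha}={\alpha}^*$ one has $1-\frac{1+{\alpha}^*}a=\big(\frac{a-1}a\big)^2$, ${\alpha}^*/a=(a-1)/a^2$ and $(1-{\alpha}^*)/a=1/a^2$, so the base of the numerator in the second factor becomes $$w_0^2\frac{(a-1)^4}{a^4}+4w_0w_1\frac{(a-1)^3}{a^4}+4w_1^2\frac{(a-1)^2}{a^4}=\left(w_0\left(\frac {a-1}a\right)^2+w_1\frac {2(a-1)}{a^2}\right)^2,$$ i.e. exactly the square of the base of the denominator, while the first factor equals $1$ because $({\alpha}^*)^{{\alpha}^*}(1-{\alpha}^*)^{1-{\alpha}^*}=(a-1)^{{\alpha}^*}/a$. Hence $\Lambda({\alpha}^*)=1$ for any choice of $w_0,w_1$.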
If $\Lambda>1$ for some ${\alpha}$, then $E(X^2)/(E(X))^2$ increases exponentially and the method fails (as we will see below, this always happens when $w_0=w_1=1$, i.e., when $X=N$). On the other hand, if $\Lambda<1$ for ${\alpha}\ne{\alpha}^*$, and $\frac{d^2\Lambda}{d{\alpha}^2}({\alpha}^*)<0$, then Lemma 3 from [@AM] implies that $E(X^2)/(E(X))^2\le C$ for some constant $C$, which in turn implies that $P(N>0)\ge 1/C$. The sharp threshold result then finishes off the proof of (\[gamma\]).
Our aim then is to show that $w_0$ and $w_1$ can be chosen so that, for $c'=\log a-{\epsilon}$, $\Lambda$ has the properties described in the above paragraph. We have thus reduced the proof of (\[gamma\]) to a calculus problem.
Certainly the necessary condition is that $\frac{d\Lambda}{d{\alpha}}({\alpha}^*)=0$, and $$\frac{d\Lambda}{d{\alpha}}({\alpha}^*)=-\frac 2{a^3}(w_0(a-1)-w_1(a-2))^2,$$ so we choose $w_0=a-2$ and $w_1=a-1$. (Only the quotient between $w_0$ and $w_1$ matters, so a single equation is enough.) This simplifies $\Lambda$ to $$\Lambda=\Lambda_a({\alpha})=\frac{(a-1)^{\alpha}}{a{\alpha}^{\alpha}(1-{\alpha})^{1-{\alpha}}}
\cdot
\frac
{\left(\left({\alpha}-\frac{a-1}a\right)^2-\frac{(a-1)^4}{a^2}\right)^{c'a^2}}
{\left(\frac{(a-1)^2}a\right)^{2c'a^2}}.$$ Let $\varphi=\log\Lambda$. We need to demonstrate that $\varphi<0$ for ${\alpha}\in[0,{\alpha}^*)\cup ({\alpha}^*, 1]$ and that $\varphi''({\alpha}^*)<0$. A further simplification can be obtained by using $x-Cx^2\le \log(1+x)\le x$ (valid for all nonnegative $x$), which enables us to transform $\varphi$ (without changing the notation) to $$\varphi({\alpha})=c'\frac{a^4}{(a-1)^4}\left({\alpha}-\frac{a-1}a\right)^2
-{\alpha}\log{\alpha}-(1-{\alpha})\log(1-{\alpha})+{\alpha}\log(a-1)-\log a.$$ Now $$\varphi''({\alpha})=2c'\frac{a^4}{(a-1)^4}-\frac 1{{\alpha}(1-{\alpha})}.$$ So automatically, for $c'$ large but $c'=o(a)$, $\varphi''({\alpha}^*)<0$ for large $a$. Moreover, $\varphi$ cannot have another local maximum when $\varphi''>0$. If $\varphi({\alpha})\ge 0$ for some ${\alpha}\ne{\alpha}^*$, then this must happen for an ${\alpha}$ in one of the two intervals $[0, 1/(2c')+{{\mathcal O}}((c')^{-2})]$ or $[1- 1/(2c')-{{\mathcal O}}((c')^{-2}), 1]$. Now, $\varphi$ has a unique maximum at ${\alpha}^*$ in the second interval. In the first interval, a short computation shows that $$\varphi({\alpha})=-{\epsilon}-{\alpha}\log a+{{\mathcal O}}\left(\frac{\log\log a}{\log a}\right),$$ which is negative for large $a$. This ends the proof.
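As a check on the second-derivative condition used in the proof: since ${\alpha}^*(1-{\alpha}^*)=(a-1)/a^2$, $$\varphi''({\alpha}^*)=2c'\frac{a^4}{(a-1)^4}-\frac{a^2}{a-1}<0\quad\Longleftrightarrow\quad c'<\frac{(a-1)^3}{2a^2},$$ and the right-hand side grows like $a/2$, so the condition indeed holds for $c'=\log a-{\epsilon}$ once $a$ is large.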
This method yields nontrivial lower bounds for $\gamma$ for all $a\ge 3$, cf. Table 1.
Table 1: lower bounds on $\gamma$ compared with $4\log a$.

| $a$ | lower bound on $\gamma$ | $4\log a$ |
|----:|------------------------:|----------:|
| 3 | 1.679 | 4.395 |
| 4 | 2.841 | 5.546 |
| 5 | 3.848 | 6.438 |
| 6 | 4.714 | 7.168 |
| 7 | 5.467 | 7.784 |
| 8 | 6.128 | 8.318 |
| 9 | 6.715 | 8.789 |
| 10 | 7.242 | 9.211 |
| 20 | 10.672 | 11.983 |
| 30 | 12.608 | 13.605 |
| 40 | 13.944 | 14.756 |
| 50 | 14.960 | 15.649 |
| 100 | 18.017 | 18.421 |
| 200 | 20.982 | 21.194 |
| 300 | 22.663 | 22.816 |
| 400 | 23.846 | 23.966 |
| 500 | 24.759 | 24.859 |
Appendix E. Existence of viable phenotypes. {#appendix-e.-existence-of-viable-phenotypes. .unnumbered}
-------------------------------------------
In this section we describe a comparison between models from Sections 5.2 and 5.3 that will yield the result in Section 5.3. We begin by assuming that $a=1/r$ is an integer, which we can do without loss of generality. Divide the $i$’th coordinate interval $[0,1]$ into $a$ disjoint intervals $I_{i0},\dots, I_{i,{a-1}}$ of length $r$. For a phenotype $x\in {{\mathcal P}}$ let $\Delta(x)\in {{\mathcal G}}_a$ be determined so that $\Delta(x)_i=j$ iff $x_i\in I_{ij}$.
Note that, as soon as $I_{i_1j_1}\times I_{i_2j_2}$ contains a point in ${{\mathcal P}}_{i_1i_2}$, no $x$ with $\Delta(x)_{i_1}=j_1$ and $\Delta(x)_{i_2}=j_2$ is viable. This happens independently for each such Cartesian product, with probability $1-\exp(-{\lambda}r^2)\ge cr^2/(2n)$. Therefore, using the result from Section 5.2, when $cr^2>4\log a=-4\log r$, there is [a. a. s.]{} no viable genotype.
On the other hand, let $I^{\epsilon}$ be the closed ${\epsilon}$-neighborhood of the interval $I$ in $[0,1]$ (the set of points within ${\epsilon}$ of $I$), and consider the events that $I_{i_1j_1}^{r/2}\times I_{i_2j_2}^{r/2}$ contains a point in $\Pi_{i_1i_2}$. These events are independent if we restrict $j_1,j_2$ to even integers. Moreover, each has probability $1-\exp(-4{\lambda}r^2)\sim 4cr^2/(2n)$, for large $n$. It again follows from Section 5.2 that a viable genotype $x$ with $\Delta(x)_i$ even for all $i$ [a. a. s.]{} exists as soon as $4cr^2<4(\log (a/2)-o(1))=-4\log r-4\log 2-o(1)$.
---
author:
- |
[Haifeng Hu$^{1}$ and Kaijun Zhang$^{2,*}$]{}\
1\. Center for Partial Differential Equations, East China Normal University, Minhang, Shanghai 200241, P.R. China\
2\. School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, P.R. China
title: '[Stability and semi-classical limit in a semiconductor full quantum hydrodynamic model with non-flat doping profile]{} '
---
**Abstract.** [We present new results on the stability and semi-classical limit in a semiconductor full quantum hydrodynamic (FQHD) model with non-flat doping profile. The FQHD model can be used to analyze the thermal and quantum influences on the transport of carriers (electrons or holes) in semiconductor devices. Motivated by the underlying physics, we consider the initial-boundary value problem for this model on a one-dimensional bounded domain and adopt the ohmic contact boundary condition and the vanishing bohmenian-type boundary condition. Firstly, the existence and asymptotic stability of a stationary solution are proved by the Leray-Schauder fixed-point theorem, the Schauder fixed-point theorem and a refined energy method. Secondly, we show semi-classical limit results for both stationary solutions and global solutions by elaborate energy estimates and a compactness argument. The strong convergence rates of the related asymptotic sequences of solutions are also obtained.]{}\
[**Keywords.** Full quantum hydrodynamic model, dispersive velocity term, non-flat doping profile, asymptotic stability, semi-classical limit, semiconductor.]{}
[**2010 Mathematics Subject Classification.** 35A01, 35B40, 35M33, 35Q40, 76Y05, 82D37.]{}
Introduction {#Sect.1}
============
In the mathematical modeling of nano-scale semiconductor devices (e.g. HEMTs, MOSFETs, RTDs and superlattice devices), quantum effects (like particle tunneling through potential barriers and particle buildup in quantum wells) take place and cannot be simulated by classical hydrodynamic models. Therefore, the quantum hydrodynamic (QHD) equations play an important and dominant role in describing the transport of electrons or holes under the self-consistent electric field.
The QHD conservation laws have the same form as the classical hydrodynamic equations (for simplicity, we treat the flow of electrons in the self-consistent electric field for unipolar devices):
\[qhdcl\] $$\begin{gathered}
{\partial_{t}^{}}n+{\partial_{x_k}^{}}j_k=0, \label{qhdcl1}\\
{\partial_{t}^{}}j_l+{\partial_{x_k}^{}}(u_kj_l-P_{kl})=n{\partial_{x_l}^{}}\phi-\frac{j_l}{\tau_m},\qquad l=1,2,3, \label{qhdcl2}\\
{\partial_{t}^{}}e+{\partial_{x_k}^{}}(u_ke-u_lP_{kl}+q_k)=j_k{\partial_{x_k}^{}}\phi+C_e, \label{qhdcl3}\\
\lambda^2\Delta\phi=n-D(x), \label{qhdcl4}\end{gathered}$$
where $n>0$ is the electron density, $\bm{u}=(u_1,u_2,u_3)$ is the velocity, $\bm{j}=(j_1,j_2,j_3)$ is the momentum density, $\bm{P}=(P_{kl})$ is the stress tensor, $\phi$ is the self-consistent electrostatic potential, $e$ is the energy density, $\bm{q}=(q_1,q_2,q_3)$ is the heat flux. Indices $k,l$ equal $1,2,3$, and repeated indices are summed over using the Einstein convention. Equation expresses conservation of electron number, expresses conservation of momentum, and expresses conservation of energy. The last terms in and represent electron scattering (the collision terms may include the effects of electron-phonon and electron-impurity collisions, intervalley and interband scattering), which is modeled by the standard relaxation time approximation with momentum and energy relaxation times $\tau_m>0$ and $\tau_e>0$. The energy relaxation term $C_e$ is given by $$C_e=-\frac{1}{\tau_e}\Bigg(\frac{1}{2}n|\bm{u}|^2+\frac{3}{2}n(\theta-\theta_{L})\Bigg),$$ where $\theta>0$ is the electron temperature and $\theta_{L}>0$ is the temperature of the semiconductor lattice in energy units. The transport equations $\sim$ are coupled to Poisson’s equation for the self-consistent electrostatic potential, where $\lambda>0$ is the Debye length, $D=N_d-N_a$ is the doping profile, $N_d>0$ is the density of donors, and $N_a>0$ is the density of acceptors.
The QHD equations $\sim$ are derived as a set of nonlinear conservation laws by a moment expansion of the Wigner-Boltzmann equation [@W32] and an expansion of the thermal equilibrium Wigner distribution function to $O({\varepsilon}^2)$, where ${\varepsilon}>0$ is the scaled Planck constant. However, to close the moment expansion at the first three moments, we must define, for example, $\bm{j}$, $\bm{P}$, $e$ and $\bm{q}$ in terms of $n$, $\bm{u}$ and $\theta$. According to the closure assumption [@G94], up to order $O({\varepsilon}^2)$, we define the momentum density $\bm{j}$, the stress tensor $\bm{P}=(P_{kl})$, the energy density $e$ and the heat flux $\bm{q}$ as follows: $$\begin{gathered}
\bm{j}=n\bm{u},\qquad P_{kl}=-n\theta\delta_{kl}+\frac{{\varepsilon}^2}{2}n{\partial_{x_k}^{}}{\partial_{x_l}^{}}\ln n,\\
e=\frac{3}{2}n\theta+\frac{1}{2}n|\bm{u}|^2-\frac{{\varepsilon}^2}{4}n\Delta\ln n,\qquad \bm{q}=-\kappa\nabla\theta-\frac{3{\varepsilon}^2}{4}n\Delta\bm{u},\end{gathered}$$ with the Kronecker symbol $\delta_{kl}$ and the heat conductivity $\kappa>0$. The quantum correction to the stress tensor was first stated in the semiconductor context by Ancona and Iafrate [@AI89] and Ancona and Tiersten [@AT87]. Since $$\frac{{\varepsilon}^2}{2}\mathrm{div}(n(\nabla\otimes\nabla)\ln n)={\varepsilon}^2n\nabla\Bigg(\frac{\Delta\sqrt{n}}{\sqrt{n}}\Bigg),$$ it can be interpreted as a force including the Bohm potential ${\varepsilon}^2\Delta\sqrt{n}/\sqrt{n}$ [@FZ93]. The quantum correction to the energy density was first derived by Wigner [@W32]. The heat conduction term consists of a classical Fourier law $-\kappa\nabla\theta$ plus a new quantum contribution $-3{\varepsilon}^2n\Delta\bm{u}/4$ which can be interpreted as a dispersive heat flux [@G95; @JMM06]. For details on the more general quantum models for semiconductor devices, one can refer to the references [@J01; @MRS90; @ZH16book].
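A short computation, which may be useful to keep in mind for the one-dimensional analysis below, verifies this identity in one space dimension (writing $n=w^2$): $$\frac{{\varepsilon}^2}{2}\big(n(\ln n)_{xx}\big)_x={\varepsilon}^2\big(ww_{xx}-w_x^2\big)_x={\varepsilon}^2\big(ww_{xxx}-w_xw_{xx}\big)={\varepsilon}^2n\Bigg(\frac{(\sqrt{n})_{xx}}{\sqrt{n}}\Bigg)_x.$$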
Interestingly, most quantum terms cancel out in the energy equation . In fact, by substituting the above expressions for $C_e$, $\bm{j}$, $\bm{P}$, $e$ and $\bm{q}$ into , a computation yields the multi-dimensional full quantum hydrodynamic (FQHD) model for semiconductors as follows.
\[fqhd\] $$\begin{gathered}
n_t+\mathrm{div}(n\bm{u})=0, \label{fqhd1}\\
(n\bm{u})_t+\mathrm{div}(n\bm{u}\otimes\bm{u})+\nabla(n\theta)-{\varepsilon}^2n\nabla\Bigg(\frac{\Delta\sqrt{n}}{\sqrt{n}}\Bigg)=n\nabla\phi-\frac{n\bm{u}}{\tau_m}, \label{fqhd2}\\
n\theta_t+n\bm{u}\cdot\nabla\theta+\frac{2}{3}n\theta\,\mathrm{div}\bm{u}-\frac{2}{3}\kappa\Delta\theta-\frac{{\varepsilon}^2}{3}\mathrm{div}(n\Delta\bm{u})=\Big(\frac{2}{3\tau_m}-\frac{1}{3\tau_e}\Big)n|\bm{u}|^2-\frac{n(\theta-\theta_{L})}{\tau_e}, \label{fqhd3}\\
\lambda^2\Delta\phi=n-D(x). \label{fqhd4}\end{gathered}$$
Comparing with the classical full hydrodynamic (FHD) model, the new feature of the FQHD model is the Bohm potential term $$-{\varepsilon}^2n\nabla\Bigg(\frac{\Delta\sqrt{n}}{\sqrt{n}}\Bigg)$$ in the momentum equation and the dispersive velocity term $$-\frac{{\varepsilon}^2}{3}\mathrm{div}(n\Delta\bm{u})$$ in the energy equation . Both of them are called quantum correction terms (or dispersive terms) and belong to the third-order derivative terms of the system .
Recently, the study concerning the semiconductor quantum models and the related quantum systems has become popular. Jüngel and Li [@JL04; @JL04-1] investigated the one-dimensional unipolar isentropic QHD model with the Dirichlet-Neumann boundary condition and the flat doping profile. The authors proved the existence, uniqueness and exponential stability of the subsonic stationary solution for the quite general pressure-density function. Nishibata and Suzuki [@NS08] reconsidered this QHD model with isothermal simplification and the vanishing bohmenian-type boundary condition. The authors generalized Jüngel and Li’s results to the non-flat doping profile case and also discussed the semi-classical limit for both stationary and global solutions. Hu, Mei and Zhang [@HMZ16] generalized Nishibata and Suzuki’s results to the bipolar case with non-constant but flat doping profile. Huang, Li and Matsumura [@HLM06] proved the existence, exponential stability and semi-classical limit of stationary solution of Cauchy problem for the one-dimensional isentropic unipolar QHD model. Li and Yong [@LY17] studied the nonlinear diffusion phenomena on the Cauchy problem of the one-dimensional isentropic bipolar QHD model. The authors proved the algebraic stability of the diffusion waves.
In multi-dimensional case, Jüngel [@J98] first considered the unipolar stationary isothermal and isentropic QHD model for potential flows on a bounded domain. The existence of solutions was proved under the assumption that the electric energy was small compared to the thermal energy, where Dirichlet boundary conditions were addressed. This result was then generalized to bipolar case by Liang and Zhang [@LZ07]. Unterreiter [@U97] proved the existence of the thermal equilibrium solution of the bipolar isentropic QHD model confined to a bounded domain by variational analysis, and the semi-classical limit is carried out recovering the minimizer of the limiting functional. This result recently was developed by Di Michele, Mei, Rubino and Sampalmieri [@MMRS17] to a new model of the bipolar isentropic hybrid quantum hydrodynamics. Regarding the unipolar QHD model for irrotational fluid in spatial periodic domain, the global existence of the dynamic solutions and the exponential convergence to their equilibria were artfully proved by Li and Marcati in [@LM04]. Remarkably, the weak solutions with large initial data for the quantum hydrodynamic system in multiple dimensions were further obtained by Antonelli and Marcati in [@AM09; @AM12]. Li, Zhang and Zhang [@LZZ08] investigated the large-time behavior of solutions to the initial value problem of the isentropic QHD model in the whole space $\mathbb{R}^3$ and obtained the algebraic time-decay rate, and further showed in [@ZLZ08] the semi-classical and relaxation limits of global solutions. Recently, Pu and Guo [@PG16] studied the Cauchy problem of a quantum hydrodynamic equations with viscosity and heat conduction in the whole space $\mathbb{R}^3$. The global existence around a constant steady-state and semi-classical limit of the global solutions were shown by the energy method. This result was developed by Pu and Xu [@PX17], the authors obtained the optimal convergence rates to the constant equilibrium solution by the pure energy method and negative Sobolev space estimates.
However, all of these research results more or less have some limitations from both physical and mathematical points of view. Actually, in practical applications, the semiconductor quantum models should be treated under the following physically motivated settings which make the mathematical analysis more difficult:
- The model system should be considered on a bounded domain $\Omega$ and be supplemented by physical boundary conditions.
- In realistic semiconductor devices, the doping profile will be a non-flat function of the spatial variable. For instance, it has two steep slops in $n^+-n-n^+$ diodes [@G94]. Therefore, we should assume the continuity and positivity only to cover the actual devices. Namely, $$\label{nonflat}
D\in C(\overline{\Omega}),\qquad \inf_{x\in\overline{\Omega}}D(x)>0.$$
- Studying the FQHD model, which includes the quantum corrected energy equation, is essential for understanding the quantum transport of hot carriers in semiconductor devices. The new feature is that one has to investigate both thermal and quantum effects in a model system that is more complex than the various simplified models.
In this paper, under the above physical principle settings (1)$\sim$(3), we will study the FQHD model in one space dimension with $\tau_m=\tau_e=\kappa=\lambda=1$. Namely, we consider
\[1dfqhd\] $$\begin{gathered}
{n}_t+{j}_x=0, \label{1dfqhd1}\\
{j}_t+\Big(\frac{{j}^2}{{n}}+{n}{\theta}\Big)_x-{\varepsilon}^2{n}\Bigg(\frac{\big(\sqrt{{n}}\big)_{xx}}{\sqrt{{n}}}\Bigg)_x={n}{\phi}_x-{j}, \label{1dfqhd2}\\
{n}{\theta}_t+{j}{\theta}_x+\frac{2}{3}{n}{\theta}\Big(\frac{{j}}{{n}}\Big)_x-\frac{2}{3}{\theta}_{xx}-\frac{{\varepsilon}^2}{3}\Big({n}\Big(\frac{{j}}{{n}}\Big)_{xx}\Big)_x=\frac{{j}^2}{3{n}}-{n}({\theta}-\theta_{L}), \label{1dfqhd3}\\
{\phi}_{xx}={n}-D(x),\qquad t>0,\ x\in\Omega:=(0,1),\label{1dfqhd4}\end{gathered}$$
with the initial condition $$\label{ic}
({n},{j},{\theta})(0,x)=({n}_0,{j}_0,{\theta}_0)(x),$$ and the boundary conditions
\[bc\] $$\begin{gathered}
{n}(t,0)={n}_{l},\qquad {n}(t,1)={n}_{r},\label{bc-a}\\
\big(\sqrt{{n}}\big)_{xx}(t,0)=\big(\sqrt{{n}}\big)_{xx}(t,1)=0,\label{bc-b}\\
{\theta}(t,0)={\theta}_{l},\qquad {\theta}(t,1)={\theta}_{r},\label{bc-c}\\
{\phi}(t,0)=0,\qquad {\phi}(t,1)=\phi_r,\label{bc-d}\end{gathered}$$
where the boundary data ${n}_{l},{n}_{r},{\theta}_{l},{\theta}_{r}$ and $\phi_r$ are positive constants. The vanishing bohmenian-type boundary condition means that the quantum Bohm potential vanishes on the boundary, which is derived in [@G94; @P99] and is also physically reasonable. The other boundary conditions in are called ohmic contact boundary condition. In order to establish the existence of a classical solution, we further assume the initial data $({n}_0,{j}_0,{\theta}_0)$ is compatible with the boundary data $\sim$ and ${n}_t(t,0)={n}_t(t,1)=0$, namely, $$\begin{gathered}
{n}_0(0)={n}_l,\quad {n}_0(1)={n}_r,\quad {\theta}_0(0)=\theta_l,\quad {\theta}_0(1)=\theta_r,\notag\\
{j}_{0x}(0)={j}_{0x}(1)=\big(\sqrt{{n}_0}\big)_{xx}(0)=\big(\sqrt{{n}_0}\big)_{xx}(1)=0.\label{compatibility}\end{gathered}$$
An explicit formula of the electrostatic potential $$\begin{aligned}
\label{efep}
\phi(t,x)=&\Phi[{n}](t,x) \notag\\
:=&\int_0^x\int_0^y\big({n}(t,z)-D(z)\big)dzdy+\Bigg(\phi_r-\int_0^1\int_0^y\big({n}(t,z)-D(z)\big)dzdy\Bigg)x,\end{aligned}$$ follows from and .
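One can check directly that this formula produces the solution of the Poisson equation with the prescribed boundary values: differentiating twice in $x$ and evaluating at the endpoints gives $$\partial_x^2\Phi[{n}](t,x)={n}(t,x)-D(x),\qquad \Phi[{n}](t,0)=0,\qquad \Phi[{n}](t,1)=\phi_r.$$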
In consideration of the solvability of the system , the following properties
\[psc\] $$\begin{gathered}
\inf_{x\in\Omega}{n}>0,\qquad \inf_{x\in\Omega}{\theta}>0,\label{pc}\\
\inf_{x\in\Omega} S[{n},{j},{\theta}]>0,\qquad \text{where}\ S[{n},{j},{\theta}]:={\theta}-\frac{{j}^2}{{n}^2}\label{sc}\end{gathered}$$
attract our main interest. The condition means the positivity of the electron density and temperature. The other one is called the subsonic condition . Apparently, if we want to construct the solution in the physical region where the conditions hold, then the initial data must satisfy the same conditions $$\label{ipsc}
\inf_{x\in\Omega}{n}_{0}>0,\qquad \inf_{x\in\Omega}{\theta}_0>0, \qquad\inf_{x\in\Omega}S[{n}_{0},{j}_{0},{\theta}_0]>0.$$
The strength of the boundary data, which is defined by $$\label{delta}
\delta:=|n_l-n_r|+|\theta_l-\theta_{L}|+|\theta_r-\theta_{L}|+|\phi_r|,$$ plays a crucial role in the proofs of our main results in what follows.
The first aim in this paper is to investigate the existence, uniqueness and asymptotic stability of the stationary solution satisfying the following boundary value problem,
\[1dsfqhd\] $$\begin{gathered}
{\tilde{j}}_x=0, \label{1dsfqhd1}\\
S[{\tilde{n}},{\tilde{j}},{\tilde{\theta}}]\,{\tilde{n}}_x+{\tilde{n}}{\tilde{\theta}}_x-{\varepsilon}^2{\tilde{n}}\Bigg(\frac{\big(\sqrt{{\tilde{n}}}\big)_{xx}}{\sqrt{{\tilde{n}}}}\Bigg)_x={\tilde{n}}{\tilde{\phi}}_x-{\tilde{j}}, \label{1dsfqhd2}\\
{\tilde{j}}{\tilde{\theta}}_x-\frac{2}{3}{\tilde{j}}{\tilde{\theta}}\big(\ln{\tilde{n}}\big)_x-\frac{2}{3}{\tilde{\theta}}_{xx}-\frac{{\varepsilon}^2}{3}\Big({\tilde{n}}\Big(\frac{{\tilde{j}}}{{\tilde{n}}}\Big)_{xx}\Big)_x=\frac{{\tilde{j}}^2}{3{\tilde{n}}}-{\tilde{n}}({\tilde{\theta}}-\theta_{L}), \label{1dsfqhd3}\\
{\tilde{\phi}}_{xx}={\tilde{n}}-D(x),\qquad x\in\Omega,\label{1dsfqhd4}\end{gathered}$$
and
\[sbc\] $$\begin{gathered}
{\tilde{n}}(0)={n}_{l},\qquad {\tilde{n}}(1)={n}_{r},\label{sbc-a}\\
\big(\sqrt{{\tilde{n}}}\big)_{xx}(0)=\big(\sqrt{{\tilde{n}}}\big)_{xx}(1)=0,\label{sbc-b}\\
{\tilde{\theta}}(0)={\theta}_{l},\qquad {\tilde{\theta}}(1)={\theta}_{r},\label{sbc-c}\\
{\tilde{\phi}}(0)=0,\qquad {\tilde{\phi}}(1)=\phi_r.\label{sbc-d}\end{gathered}$$
The second aim in the present paper is to study the singular limit as the scaled Planck constant ${\varepsilon}>0$ tends to zero in both the stationary problem $\sim$ and the transient problem $\sim$. Formally, we let ${\varepsilon}=0$ in the model system and its stationary counterpart , respectively, we then obtain the following limit systems. The limit transient system can be written as
\[1dfhd\] $$\begin{gathered}
{n^{0}}_t+{j^{0}}_x=0, \label{1dfhd1}\\
{j^{0}}_t+\Big(\frac{({j^{0}})^2}{{n^{0}}}+{n^{0}}{\theta^{0}}\Big)_x={n^{0}}{\phi^{0}}_x-{j^{0}}, \label{1dfhd2}\\
{n^{0}}{\theta^{0}}_t+{j^{0}}{\theta^{0}}_x+\frac{2}{3}{n^{0}}{\theta^{0}}\Big(\frac{{j^{0}}}{{n^{0}}}\Big)_x-\frac{2}{3}{\theta^{0}}_{xx}=\frac{({j^{0}})^2}{3{n^{0}}}-{n^{0}}({\theta^{0}}-\theta_{L}), \label{1dfhd3}\\
{\phi^{0}}_{xx}={n^{0}}-D(x),\qquad t>0,\ x\in\Omega,\label{1dfhd4}\end{gathered}$$
and is supplemented by the same initial and ohmic contact boundary conditions with and , $$\label{0ic}
({n^{0}},{j^{0}},{\theta^{0}})(0,x)=({n}_0,{j}_0,{\theta}_0)(x),$$ and
\[0bc\] $$\begin{gathered}
{n^{0}}(t,0)={n}_{l},\qquad {n^{0}}(t,1)={n}_{r},\label{0bc-a}\\
{\theta^{0}}(t,0)={\theta}_{l},\qquad {\theta^{0}}(t,1)={\theta}_{r},\label{0bc-b}\\
{\phi^{0}}(t,0)=0,\qquad {\phi^{0}}(t,1)=\phi_r.\label{0bc-c}\end{gathered}$$
We call the limit system the full hydrodynamic (FHD) model for semiconductor devices. The limit stationary system is the stationary version of the FHD model ; it can be written as
\[1dsfhd\] $$\begin{gathered}
{\tilde{j}^{0}}_x=0, \label{1dsfhd1}\\
S[{\tilde{n}^{0}},{\tilde{j}^{0}},{\tilde{\theta}^{0}}]\,{\tilde{n}^{0}}_x+{\tilde{n}^{0}}{\tilde{\theta}^{0}}_x={\tilde{n}^{0}}{\tilde{\phi}^{0}}_x-{\tilde{j}^{0}}, \label{1dsfhd2}\\
{\tilde{j}^{0}}{\tilde{\theta}^{0}}_x-\frac{2}{3}{\tilde{j}^{0}}{\tilde{\theta}^{0}}\big(\ln{\tilde{n}^{0}}\big)_x-\frac{2}{3}{\tilde{\theta}^{0}}_{xx}=\frac{({\tilde{j}^{0}})^2}{3{\tilde{n}^{0}}}-{\tilde{n}^{0}}({\tilde{\theta}^{0}}-\theta_{L}), \label{1dsfhd3}\\
{\tilde{\phi}^{0}}_{xx}={\tilde{n}^{0}}-D(x),\qquad x\in\Omega,\label{1dsfhd4}\end{gathered}$$
and is supplemented by the same ohmic contact boundary condition with ,
\[0sbc\] $$\begin{gathered}
{\tilde{n}^{0}}(0)={n}_{l},\qquad {\tilde{n}^{0}}(1)={n}_{r},\label{0sbc-a}\\
{\tilde{\theta}^{0}}(0)={\theta}_{l},\qquad {\tilde{\theta}^{0}}(1)={\theta}_{r},\label{0sbc-b}\\
{\tilde{\phi}^{0}}(0)=0,\qquad {\tilde{\phi}^{0}}(1)=\phi_r.\label{0sbc-c}\end{gathered}$$
Throughout the rest of this paper, we will use the following notations. For a nonnegative integer $l\geq0$, $H^l(\Omega)$ denotes the $l$-th order Sobolev space in the $L^2$ sense, equipped with the norm $\|\cdot\|_l$. In particular, $H^0=L^2$ and $\|\cdot\|:=\|\cdot\|_0$. For a nonnegative integer $k\geq0$, $C^k(\overline{\Omega})$ denotes the $k$-times continuously differentiable function space, equipped with the norm $|f|_k:=\sum_{i=0}^k\sup_{x\in\overline{\Omega}}|{\partial_{x}^{i}}f(x)|$. The positive constants $C$, $C_1$, $\cdots$ only depend on ${n}_{l}$, $\theta_L$ and $|D|_0$. If the constants $C$, $C_1$, $\cdots$ additionally depend on some other quantities $\alpha$, $\beta$, $\cdots$, we write $C(\alpha,\beta,\cdots)$, $C_1(\alpha,\beta,\cdots)$, $\cdots$. The notations $\mathfrak{X}_m^l$, $\mathfrak{Y}_m^l$ and $\mathfrak{Z}$ denote the function spaces defined by $$\begin{gathered}
\mathfrak{X}_m^l([0,T]):=\bigcap_{k=0}^m C^k([0,T];H^{l+m-k}(\Omega)),\\
\mathfrak{Y}_m^l([0,T]):=\bigcap_{k=0}^{[m/2]} C^k([0,T];H^{l+m-2k}(\Omega)),\quad\text{for}\ m,l=0,1,2,\cdots,\\
\mathfrak{Z}([0,T]):=C^2([0,T];H^2(\Omega)).\end{gathered}$$
The limit problems $\sim$ and $\sim$ have been studied by Nishibata and Suzuki [@NS09]. The authors obtain the existence, uniqueness and asymptotic stability of the stationary solution. The corresponding results are stated in the following lemmas.
\[lem1\] Let the doping profile and the boundary data satisfy conditions and . For arbitrary positive constants $n_l$ and $\theta_{L}$, there exist three positive constants $\delta_1$, $c$ and $C$ such that if $\delta\leq\delta_1$, then the BVP $\sim$ has a unique solution $({\tilde{n}^{0}},{\tilde{j}^{0}},{\tilde{\theta}^{0}},{\tilde{\phi}^{0}})$ satisfying the condition in the space $C^2(\overline{\Omega})\times C^2(\overline{\Omega})\times H^3(\Omega)\times C^2(\overline{\Omega})$. Moreover, the stationary solution satisfies the estimates $$\label{0se}
0<c\leq{\tilde{n}^{0}},{\tilde{\theta}^{0}},S[{\tilde{n}^{0}},{\tilde{j}^{0}},{\tilde{\theta}^{0}}]\leq C,\quad |{\tilde{j}^{0}}|+\|{\tilde{\theta}^{0}}-\theta_{L}\|_3\leq C\delta,\quad |({\tilde{n}^{0}},{\tilde{\phi}^{0}})|_2\leq C.$$
\[lem2\] Let the doping profile and the boundary data satisfy conditions and . Assume that the initial data $({n}_0,{j}_0,{\theta}_0)\in\big[H^2(\Omega)\big]^3$ and satisfies the conditions and . For arbitrary positive constants $n_l$ and $\theta_{L}$, there exist three positive constants $\delta_2$, $\gamma_1$ and $C$ such that if $\delta+\|({n}_0-{\tilde{n}^{0}},{j}_0-{\tilde{j}^{0}},{\theta}_0-{\tilde{\theta}^{0}})\|_2\leq\delta_2$, then the IBVP $\sim$ has a unique global solution $({n^{0}},{j^{0}},{\theta^{0}},{\phi^{0}})$ satisfying the condition in the space $\mathfrak{X}_2([0,\infty))\times\big[\mathfrak{X}_1^1([0,\infty))\cap H_{loc}^2(0,\infty;L^2(\Omega))\big]\times\big[\mathfrak{Y}_2([0,\infty))\cap H_{loc}^1(0,\infty;H^1(\Omega))\big]\times\mathfrak{Z}([0,\infty))$. Moreover, the solution verifies the additional regularity ${\phi^{0}}-{\tilde{\phi}^{0}}\in\mathfrak{X}_2^2([0,\infty))$ and the decay estimate $$\begin{gathered}
\label{0de}
\|({n^{0}}-{\tilde{n}^{0}},{j^{0}}-{\tilde{j}^{0}},{\theta^{0}}-{\tilde{\theta}^{0}})(t)\|_2+\|({\phi^{0}}-{\tilde{\phi}^{0}})(t)\|_4\\
\leq C\|({n}_{0}-{\tilde{n}^{0}},{j}_{0}-{\tilde{j}^{0}},{\theta}_{0}-{\tilde{\theta}^{0}})\|_2\,e^{-\gamma_1 t},\quad\forall t\in [0,\infty).\end{gathered}$$
Now, we are in the position to state the main results in this paper. Firstly, the existence and uniqueness of the quantum stationary solution is summarized in the following theorem.
\[thm1\] Suppose that the doping profile and the boundary data satisfy conditions and . For arbitrary positive constants $n_l$ and $\theta_{L}$, there exist three positive constants $\delta_3$, ${\varepsilon}_1(\leq1)$ and $C$ such that if $\delta\leq\delta_3$ and $0<{\varepsilon}\leq{\varepsilon}_1$, then the BVP $\sim$ has a unique solution $({\tilde{n}^{{\varepsilon}}},{\tilde{j}^{{\varepsilon}}},{\tilde{\theta}^{{\varepsilon}}},{\tilde{\phi}^{{\varepsilon}}})\in H^4(\Omega)\times H^4(\Omega)\times H^3(\Omega)\times C^2(\overline{\Omega})$ satisfying the condition and the uniform estimates
\[145.1\] $$\begin{gathered}
0<b^2\leq{\tilde{n}^{{\varepsilon}}}\leq B^2,\quad 0<\frac{1}{2}\theta_L\leq{\tilde{\theta}^{{\varepsilon}}}\leq\frac{3}{2}\theta_L, \label{m145.1a}\\
\|{\tilde{n}^{{\varepsilon}}}\|_2+\|({\varepsilon}{\partial_{x}^{3}}{\tilde{n}^{{\varepsilon}}},{\varepsilon}^2{\partial_{x}^{4}}{\tilde{n}^{{\varepsilon}}})\|+|{\tilde{\phi}^{{\varepsilon}}}|_2\leq C,\label{m145.1b}\\
|{\tilde{j}^{{\varepsilon}}}|+\|{\tilde{\theta}^{{\varepsilon}}}-\theta_{L}\|_3\leq C\delta,\label{m145.1c}\end{gathered}$$
where the positive constants $B$ and $b$ are defined as follows $$\label{75.1}
B:=\frac{3}{2}\sqrt{n_l}\,e^{2|D|_0/\theta_{L}},\quad b:=\frac{1}{2}\sqrt{n_l}\,e^{-(B^2+2|D|_0)/\theta_{L}}.$$
The asymptotic stability of the quantum stationary solution is stated in the next theorem.
\[thm2\] Assume that the doping profile and the boundary data satisfy conditions and . Let the initial data $({n}_0,{j}_0,{\theta}_0)\in H^4(\Omega)\times H^3(\Omega)\times H^2(\Omega)$ and satisfies the conditions and . For arbitrary positive constants $n_l$ and $\theta_{L}$, there exist four positive constants $\delta_4$, ${\varepsilon}_2$, $\gamma_2$ and $C$ such that if $0<{\varepsilon}\leq{\varepsilon}_2$ and $\delta+\|({n}_0-{\tilde{n}^{{\varepsilon}}},{j}_0-{\tilde{j}^{{\varepsilon}}},{\theta}_0-{\tilde{\theta}^{{\varepsilon}}})\|_2+\|({\varepsilon}{\partial_{x}^{3}}({n}_0-{\tilde{n}^{{\varepsilon}}}),{\varepsilon}{\partial_{x}^{3}}({j}_0-{\tilde{j}^{{\varepsilon}}}),{\varepsilon}^2{\partial_{x}^{4}}({n}_0-{\tilde{n}^{{\varepsilon}}}))\|\leq\delta_4$, then the IBVP $\sim$ has a unique global solution $({n^{{\varepsilon}}},{j^{{\varepsilon}}},{\theta^{{\varepsilon}}},{\phi^{{\varepsilon}}})$ satisfying the condition in $\big[\mathfrak{Y}_4([0,\infty))\cap H_{loc}^2(0,\infty;H^1(\Omega))\big]\times\big[\mathfrak{Y}_3([0,\infty))\cap H_{loc}^2(0,\infty;L^2(\Omega))\big]\times\big[\mathfrak{Y}_2([0,\infty))\cap H_{loc}^1(0,\infty;H^1(\Omega))\big]\times\mathfrak{Z}([0,\infty))$. Moreover, the solution verifies the additional regularity ${\phi^{{\varepsilon}}}-{\tilde{\phi}^{{\varepsilon}}}\in\mathfrak{Y}_4^2([0,\infty))$ and the decay estimate $$\begin{aligned}
&\|({n^{{\varepsilon}}}-{\tilde{n}^{{\varepsilon}}},{j^{{\varepsilon}}}-{\tilde{j}^{{\varepsilon}}},{\theta^{{\varepsilon}}}-{\tilde{\theta}^{{\varepsilon}}})(t)\|_2 \notag\\
&+\|({\varepsilon}{\partial_{x}^{3}}({n^{{\varepsilon}}}-{\tilde{n}^{{\varepsilon}}}),{\varepsilon}{\partial_{x}^{3}}({j^{{\varepsilon}}}-{\tilde{j}^{{\varepsilon}}}),{\varepsilon}^2{\partial_{x}^{4}}({n^{{\varepsilon}}}-{\tilde{n}^{{\varepsilon}}}))(t)\|+\|({\phi^{{\varepsilon}}}-{\tilde{\phi}^{{\varepsilon}}})(t)\|_4 \notag\\
&\leq C\Big(\|({n}_0-{\tilde{n}^{{\varepsilon}}},{j}_0-{\tilde{j}^{{\varepsilon}}},{\theta}_0-{\tilde{\theta}^{{\varepsilon}}})\|_2 \notag\\
&\qquad\qquad\quad+\|({\varepsilon}{\partial_{x}^{3}}({n}_0-{\tilde{n}^{{\varepsilon}}}),{\varepsilon}{\partial_{x}^{3}}({j}_0-{\tilde{j}^{{\varepsilon}}}),{\varepsilon}^2{\partial_{x}^{4}}({n}_0-{\tilde{n}^{{\varepsilon}}}))\|\Big)\,e^{-\gamma_2 t},\quad\forall t\in [0,\infty).\label{de}\end{aligned}$$
It is naturally expected that the solution $({n^{{\varepsilon}}},{j^{{\varepsilon}}},{\theta^{{\varepsilon}}},{\phi^{{\varepsilon}}})$ of the quantum system approaches the solution $({n^{0}},{j^{0}},{\theta^{0}},{\phi^{0}})$ of the limit system as ${\varepsilon}$ tends to zero. To justify this expectation, we first consider the convergence of the stationary solutions. Precisely, we show that the quantum stationary solution $({\tilde{n}^{{\varepsilon}}},{\tilde{j}^{{\varepsilon}}},{\tilde{\theta}^{{\varepsilon}}},{\tilde{\phi}^{{\varepsilon}}})$ of the BVP $\sim$ converges to the limit stationary solution $({\tilde{n}^{0}},{\tilde{j}^{0}},{\tilde{\theta}^{0}},{\tilde{\phi}^{0}})$ of the BVP $\sim$ as ${\varepsilon}$ tends to zero. Then, we further study the convergence of the global solutions. The former result is summarized in the following theorem.
\[thm3\] Suppose that the same conditions in Lemma \[lem1\] and Theorem \[thm1\] hold. For arbitrary positive constants $n_l$ and $\theta_{L}$, there exist two positive constants $\delta_5$ and $C$ such that if $\delta\leq\delta_5$, then for all $0<{\varepsilon}\leq{\varepsilon}_1$ (where ${\varepsilon}_1$ is given in Theorem \[thm1\]) the following convergence estimate
\[ss39.1\] $$\label{ss39.1a}
\|{\tilde{n}^{{\varepsilon}}}-{\tilde{n}^{0}}\|_1+|{\tilde{j}^{{\varepsilon}}}-{\tilde{j}^{0}}|+\|{\tilde{\theta}^{{\varepsilon}}}-{\tilde{\theta}^{0}}\|_2+\|{\tilde{\phi}^{{\varepsilon}}}-{\tilde{\phi}^{0}}\|_3\leq C{\varepsilon},$$ holds true. Furthermore, $$\label{ss39.1b}
\big\|\big({\partial_{x}^{2}}({\tilde{n}^{{\varepsilon}}}-{\tilde{n}^{0}}),{\varepsilon}{\partial_{x}^{3}}{\tilde{n}^{{\varepsilon}}},{\varepsilon}^2{\partial_{x}^{4}}{\tilde{n}^{{\varepsilon}}},{\partial_{x}^{3}}({\tilde{\theta}^{{\varepsilon}}}-{\tilde{\theta}^{0}}),{\partial_{x}^{4}}({\tilde{\phi}^{{\varepsilon}}}-{\tilde{\phi}^{0}})\big)\big\|\rightarrow0,\quad\text{as}\ {\varepsilon}\rightarrow0.$$
The semi-classical limit of the transient problem is stated in the next theorem.
\[thm4\] Assume that the same conditions in Lemma \[lem2\] and Theorem \[thm2\] hold. For arbitrary positive constants $n_l$ and $\theta_{L}$, there exist four positive constants $\delta_6$, $\gamma_3$, $\gamma_4$ and $C$ such that if $$\begin{gathered}
\label{sic}
{\varepsilon}+\delta+\|({n}_0-{\tilde{n}^{{\varepsilon}}},{j}_0-{\tilde{j}^{{\varepsilon}}},{\theta}_0-{\tilde{\theta}^{{\varepsilon}}})\|_2\\
+\big\|\big({\varepsilon}{\partial_{x}^{3}}({n}_0-{\tilde{n}^{{\varepsilon}}}),{\varepsilon}{\partial_{x}^{3}}({j}_0-{\tilde{j}^{{\varepsilon}}}),{\varepsilon}^2{\partial_{x}^{4}}({n}_0-{\tilde{n}^{{\varepsilon}}})\big)\big\|\leq\delta_6,\end{gathered}$$ then the following convergence estimates
\[gs\] $$\label{gs164.5}
\|({n^{{\varepsilon}}}-{n^{0}},{j^{{\varepsilon}}}-{j^{0}},{\theta^{{\varepsilon}}}-{\theta^{0}})(t)\|_1+\|({\phi^{{\varepsilon}}}-{\phi^{0}})(t)\|_3\leq Ce^{\gamma_3 t}{\varepsilon}^{1/2},\quad\forall t\in[0,\infty),$$ and $$\label{gs1}
\sup_{t\in[0,\infty)}\Big(\|({n^{{\varepsilon}}}-{n^{0}},{j^{{\varepsilon}}}-{j^{0}},{\theta^{{\varepsilon}}}-{\theta^{0}})(t)\|_1+\|({\phi^{{\varepsilon}}}-{\phi^{0}})(t)\|_3\Big)\leq C{\varepsilon}^{\gamma_4}$$ hold true.
Now, we illustrate the main ideas and the key technical points in the proofs of the above theorems. Firstly, we apply the Schauder fixed-point theorem to solve the stationary problem $\sim$. To this end, we heuristically construct a fixed-point mapping $\mathcal{T}$, see , through a careful observation of the structure of the stationary FQHD model . Roughly speaking, in order to deal with the Bohm potential term in the stationary momentum equation , we introduce a transformation ${\tilde{w}}:=\sqrt{{\tilde{n}}}$ and reduce the stationary momentum equation to a parameter-dependent semilinear elliptic equation of the second order with nonlocal terms by using the vanishing bohmenian-type boundary condition . In order to treat the dispersive velocity term in the stationary energy equation , the desired mapping $\mathcal{T}$ has to be defined by solving two carefully designed nonlocal problems $(P1)$ and $(P2)$ in turn, see and . The unique solvability of both $(P1)$ and $(P2)$ can be proved by using the Leray-Schauder fixed-point theorem and energy estimates. This ensures that the mapping $\mathcal{T}$ is well-defined. During the proof, the main difficulty is to establish the uniform (in ${\varepsilon}$) estimates and .
Secondly, the existence of the global-in-time solution and the asymptotic stability of the stationary solution can be proved by the standard continuation argument based on the local existence and the uniform a priori estimate. Similar to the stationary problem, we also introduce a transformation ${w}:=\sqrt{{n}}$ to conveniently deal with the Bohm potential term in the momentum equation . The local existence result is proved by combining the iteration method with the energy estimates. The unique solvability of the linearized problem $\sim$ used to design the iteration scheme is shown in Appendix by Galerkin method, where we have used the existence result in [@NS08] for a fourth order wave equation. The uniform a priori estimate is established by refined energy method. The proof is very complicated due to the non-flatness of the stationary density and the appearance of the dispersive velocity term in the perturbed energy equation . During the proof, we find that the spatial derivatives of the perturbations $({\psi},{\eta},{\chi},{\sigma})$ can be bounded by the temporal derivatives of the perturbations $({\psi},{\eta},{\chi})$ with the help of the special structure of the perturbed system , see . Therefore, we only need to establish the estimates of the temporal derivatives of the perturbations $({\psi},{\eta},{\chi})$ by using the homogeneous boundary condition . We also find the interplay of the dissipative-dispersive effects in the FQHD model. Roughly speaking, the Bohm potential term in the perturbed momentum equation contributes the quantum dissipation rate $\|{\varepsilon}{\partial_{t}^{k}}{\psi}_{xx}(t)\|$, see . The dispersive velocity term in the perturbed energy equation contributes the extra quantum dissipation rate $\|{\varepsilon}{\partial_{t}^{k}}{\psi}_{tx}(t)\|$, see . The dissipative property of the dispersive velocity term plays a crucial role to close the uniform a priori estimate .
Finally, we justify the semi-classical limit for both the stationary solutions and global solutions by using the energy method and a compactness argument. For stationary solutions, in order to overcome the difficulties arising from the non-flatness of the stationary density, we need to introduce the transformations ${\tilde{z}^{{\varepsilon}}}:=\ln {\tilde{n}^{{\varepsilon}}}$ and ${\tilde{z}^{0}}:=\ln {\tilde{n}^{0}}$. In addition, we also have to technically estimate a bad integral term $I_2$ when establishing the error estimate of the stationary temperature error variable ${\tilde{\varTheta}^{{\varepsilon}}}$. Actually, we find that the quantum stationary current density ${\tilde{j}^{{\varepsilon}}}$ and the limit stationary current density ${\tilde{j}^{0}}$ possess the same explicit formula due to the vanishing bohmenian-type boundary condition. Based on this fact, we can successfully overcome the difficulty in estimating the integral term $I_2$, see . For global solutions, we have to pay more attention to the influences of the quantum corrected energy equation , see and for example; the computations are very complicated. In the proof, the semi-classical limit of the stationary solutions plays an important role.
The paper is organized as follows. In Section \[Sect.2\], we prove the existence and uniqueness of the stationary solution. Section \[Sect.3\] is devoted to the global existence and stability analysis. In Subsection \[Subsect.3.1\], we show the local existence. In Subsections \[Subsect.3.2\]$\sim$\[Subsect.3.5\], we reformulate the problem and establish the uniform a priori estimate. Section \[Sect.4\] is devoted to the verification of the semi-classical limit. In Subsection \[Subsect.4.1\], we discuss the stationary case. In Subsection \[Subsect.4.2\], we study the non-stationary case.
Existence and uniqueness of the stationary solution {#Sect.2}
===================================================
In this section, we show Theorem \[thm1\]. The proof is based on the Schauder fixed-point theorem (see Corollary 11.2 in [@GT98]), the Leray-Schauder fixed-point theorem (see Theorem 11.3 in [@GT98]) and the energy method.
Since the proof is complicated, we divide it into several steps for clarification.
*Step 1. Reformulation of the problem .* It is convenient to make use of the transformation ${\tilde{w}}:=\sqrt{{\tilde{n}}}$. Inserting this transformation into the system , dividing the equation by ${\tilde{w}}^2$, integrating the result over $[0,x)$ and then using the boundary condition , and applying the Green formula to the equation together with the boundary condition , the above procedures yield, after the necessary calculations, the following BVP with a constant current density ${\tilde{j}}$ (which will be determined later, see below),
\[103.2\] $$\begin{gathered}
{\varepsilon}^2{\tilde{w}}_{xx}=h({\tilde{w}},{\tilde{\theta}}), \label{103.2a}\\
\frac{2}{3}{\tilde{\theta}}_{xx}-{\tilde{j}}{\tilde{\theta}}_{x}+\frac{2}{3}{\tilde{j}}{\tilde{\theta}}\big(\ln{\tilde{w}}^2\big)_x-{\tilde{w}}^2({\tilde{\theta}}-\theta_{L})=g({\tilde{w}},{\tilde{\theta}};{\varepsilon}),\qquad x\in\Omega,\label{103.2b}\end{gathered}$$
with boundary conditions
\[103.3\] $$\begin{gathered}
{\tilde{w}}(0)={w}_{l},\qquad {\tilde{w}}(1)={w}_{r},\label{103.3a}\\
{\tilde{\theta}}(0)={\theta}_{l},\qquad {\tilde{\theta}}(1)={\theta}_{r},\label{103.3b}\end{gathered}$$
where
$$\begin{gathered}
F(a_1,a_2,a_3):=\frac{a_2^2}{2a_1^2}+a_3+a_3\ln a_1,\quad{w}_{l}:=\sqrt{{n}_{l}}, \quad{w}_{r}:=\sqrt{{n}_{r}},\label{103.5}\\
{\tilde{\phi}}(x)=G[{\tilde{w}}^2](x):=\int_0^1G(x,y)({\tilde{w}}^2-D)(y)dy+\phi_rx,\quad G(x,y):=\begin{cases}x(y-1),\ x<y\\y(x-1),\ x>y\end{cases},\label{104.2}\\
h({\tilde{w}},{\tilde{\theta}}):={\tilde{w}}\bigg[F({\tilde{w}}^2,{\tilde{j}},{\tilde{\theta}})-F({n}_{l},{\tilde{j}},\theta_l)-{\tilde{\phi}}-\int_0^x{\tilde{\theta}}_{x}\ln{\tilde{w}}^2dy+{\tilde{j}}\int_0^x{\tilde{w}}^{-2}dy\bigg],\label{103.4}\\
g({\tilde{w}},{\tilde{\theta}};{\varepsilon}):=-\frac{1}{3}\frac{{\tilde{j}}^2}{{\tilde{w}}^2}+\frac{{\varepsilon}^2}{3}{\tilde{j}}\bigg(\frac{12{\tilde{w}}_x^3}{{\tilde{w}}^3}-\frac{14{\tilde{w}}_x{\tilde{w}}_{xx}}{{\tilde{w}}^2}+\frac{2{\tilde{w}}_{xxx}}{{\tilde{w}}}\bigg).\label{103}\end{gathered}$$
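For later use we note how the quantum part of $g$ arises: with ${\tilde{n}}={\tilde{w}}^2$ and ${\tilde{j}}_x=0$, the one-dimensional dispersive velocity term satisfies $$-\Bigg({\tilde{n}}\Bigg(\frac{{\tilde{j}}}{{\tilde{n}}}\Bigg)_{xx}\Bigg)_x=-\Bigg({\tilde{j}}\bigg(\frac{6{\tilde{w}}_x^2}{{\tilde{w}}^2}-\frac{2{\tilde{w}}_{xx}}{{\tilde{w}}}\bigg)\Bigg)_x={\tilde{j}}\bigg(\frac{12{\tilde{w}}_x^3}{{\tilde{w}}^3}-\frac{14{\tilde{w}}_x{\tilde{w}}_{xx}}{{\tilde{w}}^2}+\frac{2{\tilde{w}}_{xxx}}{{\tilde{w}}}\bigg),$$ which is exactly the bracket appearing in the definition of $g$ above.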
Next, evaluating the equation at $x=1$ and using the boundary condition , we obtain the current-voltage relation $$\label{ucvr}
F({n}_{r},{\tilde{j}},\theta_r)-F({n}_{l},{\tilde{j}},\theta_l)-\phi_r-\int_0^1{\tilde{\theta}}_{x}\ln{\tilde{w}}^2dy+{\tilde{j}}\int_0^1{\tilde{w}}^{-2}dy=0.$$ It is easy to see that the equation is a quadratic equation in ${\tilde{j}}$. Based on the subsonic condition , we can uniquely solve for ${\tilde{j}}$ provided ${\tilde{w}}$, ${\tilde{\theta}}$ are given and the strength parameter $\delta$ is small enough. Precisely, the constant stationary current density ${\tilde{j}}$ satisfies the following explicit formula $$\begin{aligned}
\label{104.1}
&{\tilde{j}}=J[{\tilde{w}}^2,{\tilde{\theta}}]:=2\Big(\bar{b}+\int_0^1{\tilde{\theta}}_{x}\ln{\tilde{w}}^2dy\Big)K[{\tilde{w}}^2,{\tilde{\theta}}]^{-1},\\
&K[{\tilde{w}}^2,{\tilde{\theta}}]:=\int_0^1{\tilde{w}}^{-2}dy+\sqrt{\Big(\int_0^1{\tilde{w}}^{-2}dy\Big)^2+2\Big(\bar{b}+\int_0^1{\tilde{\theta}}_{x}\ln{\tilde{w}}^2dy\Big)\Big(n_{r}^{-2}-n_{l}^{-2}\Big)},\notag\\
&\bar{b}:=\phi_r-\theta_r+\theta_l-\theta_r\ln n_r+\theta_l\ln n_l.\notag\end{aligned}$$
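To see where this formula comes from (the following bookkeeping is ours), insert the definition of $F$ into the current-voltage relation: it becomes the quadratic equation $$\frac{1}{2}\Big(\frac{1}{{n}_{r}^{2}}-\frac{1}{{n}_{l}^{2}}\Big){\tilde{j}}^2+\Big(\int_0^1{\tilde{w}}^{-2}dy\Big){\tilde{j}}-\Big(\bar{b}+\int_0^1{\tilde{\theta}}_{x}\ln{\tilde{w}}^2dy\Big)=0.$$ Writing the relevant root of $A{\tilde{j}}^2+B{\tilde{j}}-C=0$ in the rationalized form ${\tilde{j}}=2C/\big(B+\sqrt{B^2+4AC}\,\big)$ gives exactly $J[{\tilde{w}}^2,{\tilde{\theta}}]$ above; this form remains well defined also in the symmetric case $n_l=n_r$, where the equation degenerates to a linear one.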
It is obvious that the BVP $\sim$ combined with the explicit formulas and is equivalent to the original BVP $\sim$ under the transformation ${\tilde{n}}={\tilde{w}}^2$ for positive smooth solution $({\tilde{w}},{\tilde{\theta}})$.
*Step 2. Construction of the fixed-point mapping.* From now on, we focus on the unique solvability of the BVP $\sim$. The system is a one-dimensional semilinear nonlocal elliptic system with a singular parameter ${\varepsilon}\in(0,1]$ in the principal part of its first component equation . To solve it, we adopt the conventional framework based on the Schauder fixed-point theorem.
Observing the structure of the system , we can construct the fixed-point mapping appropriately by the following procedure.
Firstly, we introduce a closed convex subset $\mathcal{U}[N_1,N_2]$ in the Banach space $C^2(\overline{\Omega})$ below, where $N_1$ and $N_2$ are positive constants to be determined later (see below), $$\label{120.1}
\mathcal{U}[N_1,N_2]:=\Big\{q\in C^2(\overline{\Omega})\,\Big|\, \|q-\theta_L\|_1\leq N_1\delta,\quad\|q_{xx}\|\leq N_2\delta,\quad q(0)=\theta_l,\ q(1)=\theta_r \Big\}.$$
Next, we define the fixed-point mapping $$\begin{aligned}
\label{110.1}
\mathcal{T}:\ \mathcal{U}[N_1,N_2]&\longrightarrow H^3(\Omega)\notag\\
q&\longmapsto Q\end{aligned}$$ by solving the following two problems in turn. For any fixed $q\in \mathcal{U}[N_1,N_2]$, we firstly solve the problem $(P1)$:
\[109.1\] $$\begin{gathered}
(P1)\qquad {\varepsilon}^2u_{xx}=h(u,q),\quad x\in\Omega,\label{109.1a}\\
u(0)={w}_l,\qquad u(1)={w}_r.\end{gathered}$$
For problem $(P1)$, we have the following claim (Claim 1); its proof will be given in the next step.
Based on Claim 1, for the given function pair $(u,q)$, we further solve the problem $(P2)$:
\[109.2\] $$\begin{gathered}
(P2)\qquad \frac{2}{3}Q_{xx}-JQ_{x}+\frac{2}{3}J_*\big(\ln u^2\big)_x\theta_L+\frac{2}{3}J\big(\ln u^2\big)_x(Q-\theta_L)-u^2(Q-\theta_L)=g(u,q;{\varepsilon}),\quad x\in\Omega,\label{109.2a}\\
Q(0)={\theta}_l,\qquad Q(1)={\theta}_r,\end{gathered}$$
where $J:=J[u^2,q]$ and $J_*:=2\big(\bar{b}+\int_0^1Q_x\ln u^2dx\big)K[u^2,q]^{-1}$. For problem $(P2)$, we also have a claim (Claim 2); its proof will be given in Step 4.
*Step 3. Proof of Claim 1.* Now, we begin to solve $(P1)$. In order to avoid vacuum $u=0$, we consider a truncation problem $(tP)$ induced by $(P1)$:
\[111.1\] $$\begin{gathered}
(tP)\qquad {\varepsilon}^2u_{xx}=h(u_{\alpha\beta},q),\quad x\in\Omega,\label{111.1a}\\
u(0)={w}_l,\qquad u(1)={w}_r,\label{111.1b}\end{gathered}$$
where $$u_{\alpha\beta}:=\max\big\{\beta,\min\{\alpha,u\}\big\},\quad 0<\frac{1}{2}b=:\beta<\alpha:=2B.$$ This problem can be solved by the Leray-Schauder fixed-point theorem. To this end, we define a fixed-point mapping $\mathcal{T}_1: r\mapsto R$ over $H^1(\Omega)$ by solving the linear problem:
\[112.1\] $$\begin{gathered}
{\varepsilon}^2R_{xx}=h(r_{\alpha\beta},q),\quad x\in\Omega,\label{112.1a}\\
R(0)={w}_l,\qquad R(1)={w}_r.\label{112.1b}\end{gathered}$$
In fact, for given $q\in\mathcal{U}[N_1,N_2]$ and $r\in H^1(\Omega)$, the right-hand side satisfies $h(r_{\alpha\beta},q)\in H^1(\Omega)$. Thus, the linear BVP has a unique solution $R=:\mathcal{T}_1r\in H^3(\Omega)$ by the standard theory of elliptic equations. In addition, $\mathcal{T}_1$ is a continuous and compact mapping from $H^1(\Omega)$ into itself. Next, we show that there exists a positive constant $M_1$ such that $\|v\|_1\leq M_1$ for an arbitrary $v\in\big\{f\in H^1(\Omega)\,|\,f=\lambda\mathcal{T}_1f, \ \forall\lambda\in[0,1] \big\}$. We may assume $\lambda>0$ since the case $\lambda=0$ is trivial. It is sufficient to show that $\|v\|_1\leq M_1$ for $v$ satisfying the following problem
\[113.3\] $$\begin{gathered}
{\varepsilon}^2v_{xx}=\lambda h(v_{\alpha\beta},q),\quad x\in\Omega,\label{113.3a}\\
v(0)=\lambda{w}_l,\qquad v(1)=\lambda{w}_r.\label{113.3b}\end{gathered}$$
We perform the procedure $$\label{153.2}
\int_0^1\eqref{113.3a}\times(v-\lambda\bar{w})dx,\quad\text{where}\ \bar{w}(x):=w_l(1-x)+w_rx,$$ and use the Young inequality, the mean value theorem, the formula and the estimate $|h(v_{\alpha\beta},q)|\leq C$, where $C$ is a positive constant which only depends on $\alpha$, $\beta$, $n_l$, $\theta_L$ and $|D|_0$. If $\delta$ is small enough, these computations yield the desired estimate $$\label{153.4}
\|v\|_1\leq C\bigg(1+\frac{1}{{\varepsilon}}\bigg)=:M_1.$$ Based on the estimate , we can directly apply the Leray-Schauder fixed-point theorem to the mapping $\mathcal{T}_1$ and conclude that $\mathcal{T}_1$ has a fixed point $u=\mathcal{T}_1u\in H^3(\Omega)$, which is a strong solution to the truncation problem $(tP)$.
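To make the structure of the mapping $\mathcal{T}_1$ concrete, the following sketch freezes the right-hand side at the truncated previous iterate, solves the resulting linear Dirichlet problem by finite differences, and iterates until the update is small. The right-hand side `h_toy` is only a stand-in for the nonlocal $h$ defined above, and all parameters are illustrative rather than those of the proof.

```python
# Structural sketch of the fixed-point construction for (tP): freeze the
# right-hand side at the truncated previous iterate, solve the linear
# Dirichlet problem eps^2 R_xx = f with finite differences, and iterate.
# h_toy is a stand-in for the nonlocal h of the paper; all data are illustrative.
import numpy as np

def truncate(u, alpha, beta):
    # u_{alpha beta} := max{beta, min{alpha, u}}
    return np.clip(u, beta, alpha)

def solve_dirichlet(f, eps, wl, wr, x):
    # second-order finite differences for eps^2 R'' = f, R(0) = wl, R(1) = wr
    h = x[1] - x[0]
    m = len(x) - 2
    A = np.zeros((m, m))
    i = np.arange(m)
    A[i, i] = -2.0 * eps**2 / h**2
    A[i[:-1], i[:-1] + 1] = eps**2 / h**2
    A[i[1:], i[1:] - 1] = eps**2 / h**2
    rhs = f[1:-1].copy()
    rhs[0] -= eps**2 * wl / h**2
    rhs[-1] -= eps**2 * wr / h**2
    R = np.empty_like(x)
    R[0], R[-1] = wl, wr
    R[1:-1] = np.linalg.solve(A, rhs)
    return R

def h_toy(u, q):
    # stand-in right-hand side; the actual h couples u to the potential G[u^2]
    # and the current J[u^2, q] through nonlocal integrals
    return 0.5 * u * (u**2 - 1.0) - 0.1 * (q - 1.0)

x = np.linspace(0.0, 1.0, 201)
q = 1.0 + 0.05 * np.sin(np.pi * x)        # a temperature iterate from U[N1, N2]
u = 1.0 - 0.1 * x                          # initial guess matching u(0)=1, u(1)=0.9
for it in range(200):                      # Picard iteration for the frozen problem
    u_new = solve_dirichlet(h_toy(truncate(u, 2.0, 0.5), q), 1.0, 1.0, 0.9, x)
    if np.abs(u_new - u).max() < 1e-12:
        break
    u = u_new
print("iterations:", it + 1, "  min/max of u:", u.min(), u.max())
```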
Next, we provide a maximum principle argument for any strong solution $u$ to the truncation problem $(tP)$. This result allows us to remove the truncation in $(tP)$ and to show that the solution $u$ to the truncation problem $(tP)$ is in fact a solution to the problem $(P1)$.
We first establish the upper bound of $u_{\alpha\beta}$. Before doing this, we note that if $\delta$ is small enough, then $q\in\mathcal{U}[N_1,N_2]$ implies $$\label{154.2}
0<\frac{1}{2}\theta_L\leq q(x)\leq\frac{3}{2}\theta_L,\quad\|q_x\|\leq N_1\delta,\quad\|q_{xx}\|\leq N_2\delta.$$ Now, we can establish the upper bound of $u_{\alpha\beta}$ by choosing the appropriate test functions in $H_0^1(\Omega)$. To this end, we define $\bar{n}:=\max\{n_l,n_r\}>0$, and perform the procedure $$\label{154.1}
\int_0^1-\eqref{111.1a}\times\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx,\quad k=1,2,3,\cdots,\quad\text{where}\ (\cdot)_+:=\max\{0,\cdot\}.$$ The computations in terms of this procedure yield that $$\label{56.1}
\int_0^1-{\varepsilon}^2u_{xx}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx=\int_0^1-h(u_{\alpha\beta},q)\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx.$$ The left-side of can be estimated as follows by integration by parts, $$\begin{aligned}
\eqref{56.1}_l&=\int_0^1{\varepsilon}^2u_x\Bigg[\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^k\Bigg]_xdx \notag\\
&=\int_0^12{\varepsilon}^2k\frac{[(u_{\alpha\beta})_x]^2}{u_{\alpha\beta}}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k-1}dx\geq0.\label{57.1}\end{aligned}$$ Based on the expression , the estimate and the Young inequality, we can estimate the right-hand side of as follows, where $\bar{J}:=J[u_{\alpha\beta}^{2},q]$ satisfies the estimate $|\bar{J}|\leq C(\alpha,\beta,N_1)\delta$. Namely, $$\begin{aligned}
\eqref{56.1}_r=&\int_0^1-u_{\alpha\beta}\bigg[F(u_{\alpha\beta}^2,\bar{J},q)-F({n}_{l},\bar{J},\theta_l)-G[u_{\alpha\beta}^{2}]\notag\\
&\qquad\qquad\qquad\qquad-\int_0^xq_{x}\ln u_{\alpha\beta}^2dy+\bar{J}\int_0^xu_{\alpha\beta}^{-2}dy\bigg]\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx\notag\\
=&\int_0^1-u_{\alpha\beta}\bigg[F(u_{\alpha\beta}^2,\bar{J},q)-q\ln\bar{n}+q\ln\bar{n}-F({n}_{l},\bar{J},\theta_l)-G[u_{\alpha\beta}^{2}]\notag\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\ -\int_0^xq_{x}\ln u_{\alpha\beta}^2dy+\bar{J}\int_0^xu_{\alpha\beta}^{-2}dy\bigg]\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx\notag\\
=&-\int_0^1u_{\alpha\beta}q\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k+1}dx\notag\\
&+\int_0^1\bigg(\phi_rx-\int_0^1G(x,y)D(y)dy-\bar{J}\int_0^xu_{\alpha\beta}^{-2}dy+\frac{\bar{J}^2}{2n_l}\bigg)u_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx\notag\\
&+\int_0^1\bigg(\theta_l\ln n_l-q\ln\bar{n}+\theta_l-q+\int_0^xq_x\ln u_{\alpha\beta}^2dy \bigg)u_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx\notag\\
&+\int_0^1\bigg(\int_0^1\underbrace{G(x,y)}_{\leq0}u_{\alpha\beta}^2(y)dy-\frac{\bar{J}^2}{2u_{\alpha\beta}^4}\bigg)u_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx\notag\\
\leq&-\int_0^1\frac{1}{2}\theta_Lu_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k+1}dx+\int_0^1\big(C(N_1)\delta+|D|_0\big)u_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx\notag\\
&+\int_0^1C(N_1)\delta u_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx+0\notag\\
\leq&-\int_0^1\frac{1}{2}\theta_Lu_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k+1}dx+\int_0^12|D|_0u_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx \qquad\text{if}\ \delta\ll1\notag\\
=&-\int_0^1\frac{1}{2}\theta_Lu_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k+1}dx+\int_0^12|D|_0\frac{2}{\theta_L}\frac{\theta_L}{2}u_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^kdx\notag\\
=&-\int_0^1\frac{1}{2}\theta_Lu_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k+1}dx+\int_0^1\frac{1}{2}\theta_Lu_{\alpha\beta}\underbrace{\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^k\frac{4|D|_0}{\theta_L}}_{\text{by Young inequality}}dx\notag\\
\leq&-\int_0^1\frac{1}{2}\theta_Lu_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k+1}dx\notag\\
&\qquad\qquad\qquad+\int_0^1\frac{1}{2}\theta_Lu_{\alpha\beta}\Bigg[\frac{k}{k+1}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k+1}+\frac{1}{k+1}\bigg(\frac{4|D|_0}{\theta_L}\bigg)^{k+1}\Bigg]dx\notag\\
=&-\frac{1}{k+1}\frac{1}{2}\theta_L\int_0^1u_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k+1}dx+\frac{1}{k+1}\frac{1}{2}\theta_L\bigg(\frac{4|D|_0}{\theta_L}\bigg)^{k+1}\int_0^1\underbrace{u_{\alpha\beta}}_{\leq\alpha}dx\notag\\
\leq&\frac{\theta_L}{2(k+1)}\Bigg[-\int_0^1u_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k+1}dx+\alpha\bigg(\frac{4|D|_0}{\theta_L}\bigg)^{k+1}\Bigg]. \label{59.3}\end{aligned}$$ Inserting and into , we have the estimate $$\label{60.2}
\int_0^1\sqrt{\bar{n}}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k+1}dx\leq\int_0^1u_{\alpha\beta}\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+^{k+1}dx\leq\alpha\bigg(\frac{4|D|_0}{\theta_L}\bigg)^{k+1},$$ which implies $$\label{61.2}
\bigg\|\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+\bigg\|_{L^{k+1}(\Omega)}\leq\bigg(\frac{\alpha}{\sqrt{\bar{n}}}\bigg)^{\frac{1}{k+1}}\frac{4|D|_0}{\theta_L},\quad k=1,2,3,\cdots.$$ Let $k\rightarrow\infty$ in , we immediately obtain $$\label{61.3}
\bigg\|\bigg(\ln\frac{u_{\alpha\beta}^2}{\bar{n}}\bigg)_+\bigg\|_{L^{\infty}(\Omega)}\leq\frac{4|D|_0}{\theta_L}.$$ Note that $[\ln(u_{\alpha\beta}^2/\bar{n})]_+$ is nonnegative, then the estimate implies that $$\label{62.1}
u_{\alpha\beta}\leq\sqrt{\bar{n}}e^{2|D|_0/\theta_L}\leq B,\quad\text{if}\ \delta\ll1.$$
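The passage from the $L^{k+1}$-bounds to the $L^\infty$-bound rests on the elementary fact that, on a domain of unit measure, the $L^{p}$-norms of a fixed bounded function increase to its sup-norm as $p\to\infty$, so a $k$-independent bound passes to the limit. A small numerical illustration (with a stand-in nonnegative function) reads:

```python
# On a domain of unit measure the L^p norms of a fixed bounded function
# increase to its sup-norm, so a k-independent L^{k+1} bound yields an
# L^infinity bound.  g below is a stand-in for a function of type (ln u^2/nbar)_+.
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
g = np.maximum(np.log((1.0 + 0.3 * np.sin(np.pi * x))**2), 0.0)
h = x[1] - x[0]
for k in (1, 2, 4, 8, 16, 32, 64):
    lk_norm = (h * np.sum(g**(k + 1)))**(1.0 / (k + 1))
    print("k = %3d   L^%d-norm = %.6f" % (k, k + 1, lk_norm))
print("sup-norm =", g.max())
```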
Using a similar argument, we can establish the lower bound of $u_{\alpha\beta}$. To this end, we define $\underbar{n}:=\min\{n_l,n_r\}>0$, and perform the procedure $$\label{159.1}
\int_0^1-\eqref{111.1a}\times\frac{1}{u_{\alpha\beta}}\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}dx,\quad k=1,2,3,\cdots,\quad\text{where}\ (\cdot)_-:=\min\{0,\cdot\}.$$ The computations in terms of this procedure yield that $$\label{63.1}
\int_0^1-{\varepsilon}^2\frac{u_{xx}}{u_{\alpha\beta}}\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}dx=\int_0^1-\frac{h(u_{\alpha\beta},q)}{u_{\alpha\beta}}\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}dx.$$ The left-side of can be estimated as follows by integration by parts, $$\begin{aligned}
\eqref{63.1}_l&=\int_0^1{\varepsilon}^2u_x\Bigg[\frac{1}{u_{\alpha\beta}}\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}\Bigg]_xdx \notag\\
&=\int_0^1{\varepsilon}^2u_x\bigg(\frac{1}{u_{\alpha\beta}}\bigg)_x\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}dx+\int_0^1{\varepsilon}^2\frac{u_x}{u_{\alpha\beta}}\Bigg[\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}\Bigg]_xdx\notag\\
&=-\int_0^1{\varepsilon}^2\bigg[\frac{(u_{\alpha\beta})_x}{u_{\alpha\beta}}\bigg]^2\Bigg[\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}-2(2k-1)\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-2}\Bigg]dx\notag\\
&\geq0.\label{64.1}\end{aligned}$$ The right-side of can be estimated as follows, $$\begin{aligned}
\eqref{63.1}_r=&\int_0^1-\bigg[F(u_{\alpha\beta}^2,\bar{J},q)-F({n}_{l},\bar{J},\theta_l)-G[u_{\alpha\beta}^{2}]\notag\\
&\qquad\qquad\qquad\qquad-\int_0^xq_{x}\ln u_{\alpha\beta}^2dy+\bar{J}\int_0^xu_{\alpha\beta}^{-2}dy\bigg]\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}dx\notag\\
=&\int_0^1-\bigg[F(u_{\alpha\beta}^2,\bar{J},q)-q\ln\underbar{n}+q\ln\underbar{n}-F({n}_{l},\bar{J},\theta_l)-G[u_{\alpha\beta}^{2}]\notag\\
&\qquad\qquad\qquad\qquad\qquad\qquad-\int_0^xq_{x}\ln u_{\alpha\beta}^2dy+\bar{J}\int_0^xu_{\alpha\beta}^{-2}dy\bigg]\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}dx\notag\\
=&-\int_0^1q\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k}dx\notag\\
&+\int_0^1\Bigg[\phi_rx+\int_0^1G(x,y)(u_{\alpha\beta}^2-D)(y)dy-\bar{J}\int_0^xu_{\alpha\beta}^{-2}dy+\frac{\bar{J}^2}{2n_l}\notag\\
&\qquad\qquad+\theta_l\ln n_l-q\ln\underbar{n}+\theta_l-q+\int_0^xq_x\ln u_{\alpha\beta}^2dy-\frac{\bar{J}^2}{2u_{\alpha\beta}^4}\Bigg]\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}dx\notag\\
\leq&-\int_0^1\frac{1}{2}\theta_L\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k}dx-\int_0^1\big(C(N_1)\delta+B^2+|D|_0\big)\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}dx\notag\\
\leq&-\int_0^1\frac{1}{2}\theta_L\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k}dx-\int_0^1\big(B^2+2|D|_0\big)\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}dx\qquad\text{if}\ \delta\ll1 \notag\\
=&-\int_0^1\frac{1}{2}\theta_L\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k}dx-\int_0^1\big(B^2+2|D|_0\big)\frac{2}{\theta_L}\frac{\theta_L}{2}\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k-1}dx\notag\\
=&-\int_0^1\frac{1}{2}\theta_L\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k}dx+\int_0^1\frac{1}{2}\theta_L\underbrace{\Bigg[-\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-\Bigg]^{2k-1}\frac{2\big(B^2+2|D|_0\big)}{\theta_L}}_{\text{by Young inequality}}dx\notag\\
\leq&-\int_0^1\frac{1}{2}\theta_L\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k}dx\notag\\
&\quad+\int_0^1\frac{1}{2}\theta_L\Bigg\{\frac{2k-1}{2k}\Bigg[-\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-\Bigg]^{(2k-1)\cdot\frac{2k}{2k-1}}+\frac{1}{2k}\bigg[\frac{2\big(B^2+2|D|_0\big)}{\theta_L}\bigg]^{2k}\Bigg\}dx\notag\\
=&\frac{\theta_L}{4k}\Bigg\{-\int_0^1\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-^{2k}dx+\bigg[\frac{2\big(B^2+2|D|_0\big)}{\theta_L}\bigg]^{2k}\Bigg\}. \label{66.3}\end{aligned}$$ Inserting and into , we have the estimate $$\label{67.1}
\bigg\|\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-\bigg\|_{L^{2k}(\Omega)}\leq\frac{2\big(B^2+2|D|_0\big)}{\theta_L},\quad k=1,2,3,\cdots.$$ Let $k\rightarrow\infty$ in , we immediately obtain $$\label{67.2}
\bigg\|\bigg(\ln\frac{u_{\alpha\beta}^2}{\underbar{n}}\bigg)_-\bigg\|_{L^{\infty}(\Omega)}\leq\frac{2\big(B^2+2|D|_0\big)}{\theta_L}.$$ Note that $[\ln(u_{\alpha\beta}^2/\underbar{n})]_-$ is nonpositive, then the estimate implies that $$\label{68.1}
u_{\alpha\beta}\geq\sqrt{\underbar{n}}e^{-(B^2+2|D|_0)/\theta_L}\geq b,\quad\text{if}\ \delta\ll1.$$ Combining with , we have $b\leq u_{\alpha\beta}\leq B$, which means $u_{\alpha\beta}=u$. This gives the existence result for the problem $(P1)$. Differentiating the equation and using the regularity $u\in H^3(\Omega)$, we obtain the desired regularity $u\in H^4(\Omega)$ stated in Claim 1.
Before proving the uniqueness result for the problem $(P1)$, we need to establish the uniform estimate with respect to ${\varepsilon}\in(0,1]$ for any $H^4$-solution $u$ of $(P1)$. To this end, we note that the estimate can be proved in the same way as the derivation of $u_{\alpha\beta}=u$ above.
Furthermore, performing the procedure $$\label{117.1}
\int_0^1\bigg[\eqref{109.1a}\times\frac{1}{u}\bigg]_x\times u_xdx,$$ we have $$\label{117.1r}
\int_0^1{\varepsilon}^2\bigg(\frac{u_{xx}}{u}\bigg)_xu_xdx=\int_0^1\bigg[2\bigg(q-\frac{J^2}{u^4}\bigg)\frac{u_x}{u}+q_x-\varphi_x+\frac{J}{u^2}\bigg]u_xdx,$$ where $J:=J[u^2,q]$ and $\varphi(x):=G[u^2](x)$. The left-side of can be estimated by using integration by parts, $$\label{117.1r-l}
\eqref{117.1r}_l=-\int_0^1{\varepsilon}^2\frac{(u_{xx})^2}{u}dx\leq0.$$ Based on the estimates , and $|J|\leq C(b,B,N_1)\delta$, the right-side of can be estimated as follows provided $\delta$ is small enough, $$\label{117.1r-r}
\eqref{117.1r}_r\geq\frac{\theta_L}{2B}\|u_x\|^2-C(B,\theta_L)\|(q_x,\varphi_x)\|^2-J\bigg(\frac{1}{{w}_{r}}-\frac{1}{{w}_{l}}\bigg)\geq\frac{\theta_L}{2B}\|u_x\|^2-C,$$ where the positive constant $C$ only depends on $n_l$, $\theta_L$ and $|D|_0$. We have used the elliptic estimate $\|\varphi\|_2\leq C(\|u^2-D\|+\|\phi_rx\|)$ in the last inequality of . Inserting and into , we get $$\label{117.3}
\|u_x\|\leq C,$$ where the positive constant $C$ only depends on $n_l$, $\theta_L$ and $|D|_0$ and is independent of ${\varepsilon}\in(0,1]$.
Performing the procedure $$\label{118.0}
\int_0^1\bigg[\eqref{109.1a}\times\frac{1}{u}\bigg]_x\times\bigg(\frac{u_{xx}}{u}\bigg)_xdx,$$ and using integration by parts, we get $$\begin{gathered}
\label{80.1}
\int_0^1{\varepsilon}^2\bigg[\bigg(\frac{u_{xx}}{u}\bigg)_x\bigg]^2dx+\int_0^12S\bigg(\frac{u_{xx}}{u}\bigg)^2dx\\
=-\int_0^1\bigg(\frac{2S}{u}\bigg)_x\frac{u_xu_{xx}}{u}dx-\int_0^1q_{xx}\frac{u_{xx}}{u}dx+\int_0^1(u^2-D)\frac{u_{xx}}{u}dx+\int_0^1\frac{2J}{u^4}u_xu_{xx}dx,\end{gathered}$$ where $S:=q-J^2/u^4$. The left-side of can be estimated as $$\label{80.2}
\eqref{80.1}_l\geq\frac{\theta_L}{2B^2}\|u_{xx}\|^2.$$ The right-side of can be estimated by Hölder, Sobolev and Cauchy-Schwarz inequalities as $$\begin{aligned}
\eqref{80.1}_r&\leq C|u_x|_0\|u_x\|\|u_{xx}\|+C(\|q_{xx}\|+\|u^2-D\|+\|u_x\|)\|u_{xx}\|\notag\\
&\leq C(\|u_x\|^2+2\|u_x\|\|u_{xx}\|)^{1/2}\|u_{xx}\|+C\|u_{xx}\|\notag\\
&\leq\mu\|u_{xx}\|^2+C_\mu(\|u_x\|^2+2\|u_x\|\|u_{xx}\|)+C\|u_{xx}\|\notag\\
&\leq\mu\|u_{xx}\|^2+C_\mu(1+\|u_{xx}\|)\label{81.3}\end{aligned}$$ where we have used the estimate . The positive constant $C_\mu$ only depends on $n_l$, $\theta_L$, $|D|_0$ and $\mu$, where $\mu$ is a small number to be determined later. Inserting and into , and letting $\mu\ll1$, we obtain $\|u_{xx}\|^2\leq C(1+\|u_{xx}\|)$. Solving this inequality with respect to $\|u_{xx}\|$, we obtain the estimate $$\label{82.4}
\|u_{xx}\|\leq C,$$ where the positive constant $C$ only depends on $n_l$, $\theta_L$ and $|D|_0$ and is independent of ${\varepsilon}\in(0,1]$.
Substituting the uniform estimates , and in the equality , we have the estimate ${\varepsilon}\|(u_{xx}/u)_x\|\leq C$. Note that $${\varepsilon}{\partial_{x}^{3}}u={\varepsilon}u\bigg(\frac{u_{xx}}{u}\bigg)_x+{\varepsilon}\frac{u_xu_{xx}}{u}.$$ We immediately get the following uniform estimate $$\label{85.2}
\|{\varepsilon}{\partial_{x}^{3}}u\|\leq C.$$
Furthermore, applying ${\partial_{x}^{2}}$ to the equation and taking the $L^2$-norm of the resultant equality, we finally have the following uniform estimate $$\label{86.1}
\|{\varepsilon}^2{\partial_{x}^{4}}u\|\leq C.$$
From the estimates , , , and , we have established the desired uniform estimate with respect to ${\varepsilon}\in(0,1]$ for any strong solution $u$ to $(P1)$.
Based on the uniform estimate , we can now prove the uniqueness of the solution to $(P1)$ by the energy method. To this end, we assume that $u_1$ and $u_2$ are two solutions to $(P1)$. Let $z_i:=\ln u_i^2$, $J_i:=J[e^{z_i},q]$, $S_i:=q-J_i^2/e^{2z_i}$, $\varphi_i:=G[e^{z_i}]$, $i=1,2$. Taking the difference of $J_1$ and $J_2$, and applying the mean value theorem and to the explicit formula , we have $$\label{138.0}
|J_i|\leq C(b,B,N_1)\delta,\qquad |J_1-J_2|\leq C\delta\|z_x\|,$$ where $z:=z_1-z_2$ and $C$ is a positive constant which only depends on $n_l$, $\theta_L$ and $|D|_0$. Due to the procedure $$\bigg[\eqref{109.1a}\times\frac{1}{u_1}\bigg]_x-\bigg[\eqref{109.1a}\times\frac{1}{u_2}\bigg]_x$$ and the transformation $u_i=e^{z_i/2}$, the difference $z$ satisfies $$\label{95.3}
-\bigg(\frac{J_1^2}{e^{2z_1}}-\frac{J_2^2}{e^{2z_2}}\bigg)z_{1x}+S_2z_x-\frac{{\varepsilon}^2}{2}\bigg[z_{xx}+\frac{z_{1x}^2}{2}-\frac{z_{2x}^2}{2}\bigg]_x=(\varphi_1-\varphi_2)_x-\bigg(\frac{J_1}{e^{z_1}}-\frac{J_2}{e^{z_2}}\bigg).$$ Multiplying by $z_x$, integrating the resultant equality and using the boundary conditions $$\label{94.5}
z_i(0)=\ln{n}_l,\quad z_i(1)=\ln{n}_r,\quad \bigg(z_{ixx}+\frac{z_{ix}^2}{2}\bigg)(0)=\bigg(z_{ixx}+\frac{z_{ix}^2}{2}\bigg)(1)=0$$ to obtain that $$\begin{gathered}
\label{96.1}
\int_0^1S_2z_x^2dx+\int_0^1\frac{{\varepsilon}^2}{2}z_{xx}^2dx+\int_0^1\underbrace{(e^{z_1}-e^{z_2})z}_{\geq0}dx\\
=\int_0^1\bigg(\frac{J_1^2}{e^{2z_1}}-\frac{J_2^2}{e^{2z_2}}\bigg)z_{1x}z_xdx-\int_0^1\frac{{\varepsilon}^2(z_{1x}+z_{2x})}{4}z_xz_{xx}dx-\int_0^1\bigg(\frac{J_1}{e^{z_1}}-\frac{J_2}{e^{z_2}}\bigg)z_xdx.\end{gathered}$$ The left-side of can be estimated as $$\label{96.2}
\eqref{96.1}_l\geq\frac{\theta_L}{4}\|z_x\|^2+\frac{{\varepsilon}^2}{2}\|z_{xx}\|^2,$$ where we have used the estimates , and . The right-side of can be estimated by Hölder, Poincaré and Cauchy-Schwarz inequalities and by the estimates and as $$\begin{aligned}
\eqref{96.1}_r&\leq C\Big(|J_1-J_2|\|z_x\|+|J_2|\|z\|\|z_x\|\Big)+C{\varepsilon}^2\|z_x\|\|z_{xx}\|\notag\\
&\leq C(B,b,N_1)\delta\|z_x\|^2+\frac{{\varepsilon}^2}{4}\|z_{xx}\|^2+C{\varepsilon}^2\|z_x\|^2\notag\\
&=\Big(C(B,b,N_1)\delta+C{\varepsilon}^2\Big)\|z_x\|^2+\frac{{\varepsilon}^2}{4}\|z_{xx}\|^2.\label{97.3}\end{aligned}$$ Substituting and into , and letting $\delta$ and ${\varepsilon}$ be small enough, we see that $\|z\|^2\leq0$. Thus we have shown $u_1\equiv u_2$. This completes the proof of Claim 1.
*Step 4. Proof of Claim 2.* For the given function pair $(u,q)$ in Claim 1, we discuss the unique solvability of the problem $(P2)$, again by the Leray-Schauder fixed-point theorem and the energy method. To this end, we define a fixed-point mapping $\mathcal{T}_2: q_1\mapsto Q_1$ over $H^1(\Omega)$ by solving the linear problem,
\[126.1\] $$\begin{gathered}
\frac{2}{3}Q_{1xx}-JQ_{1x}+\frac{2}{3}J_{1*}\big(\ln u^2\big)_x\theta_L+\frac{2}{3}J\big(\ln u^2\big)_x(q_1-\theta_L)-u^2(Q_1-\theta_L)=g(u,q;{\varepsilon}),\quad x\in\Omega,\label{126.1a}\\
Q_1(0)={\theta}_l,\qquad Q_1(1)={\theta}_r,\end{gathered}$$
where $J:=J[u^2,q]$ and $J_{1*}:=2\big(\bar{b}+\int_0^1q_{1x}\ln u^2dx\big)K[u^2,q]^{-1}$. In fact, for the given $(u,q)$ in Claim 1 and $q_1\in H^1(\Omega)$, the linear problem is uniquely solvable in $H^3(\Omega)$ owing to the standard theory of elliptic equations. By a standard argument, we can further show that the mapping $\mathcal{T}_2$ is continuous and compact from $H^1(\Omega)$ into itself. Hence, it is sufficient to show that there exists a positive constant $M_2$ such that $\|\Theta\|_1\leq M_2$ for any $\Theta\in\big\{f\in H^1(\Omega)\,|\,f=\lambda\mathcal{T}_2f, \ \forall\lambda\in[0,1] \big\}$. We may assume $\lambda>0$ since the case $\lambda=0$ is trivial. Namely, for the function $\Theta$ verifying
\[se127.2\] $$\begin{gathered}
\frac{2}{3}\Theta_{xx}-J\Theta_{x}+\frac{2}{3}\lambda J_*\big(\ln u^2\big)_x\theta_L+\frac{2}{3}\lambda J\big(\ln u^2\big)_x(\Theta-\theta_L)-u^2(\Theta-\lambda\theta_L)=\lambda g(u,q;{\varepsilon}),\quad x\in\Omega, \label{127.2a}\\
\Theta(0)=\lambda{\theta}_l,\qquad \Theta(1)=\lambda{\theta}_r,\label{127.2b}\\
J:=J[u^2,q],\qquad J_*:=2\Big(\bar{b}+\int_0^1\Theta_x\ln u^2dx\Big)K[u^2,q]^{-1},\end{gathered}$$
we need to show the estimate $\|\Theta\|_1\leq M_2$.
Substituting $\Theta_\lambda:=\Theta-\lambda\bar{\theta}$ into the equation , where $\bar{\theta}(x):=\theta_l(1-x)+\theta_rx$, multiplying the resultant equation by $\Theta_\lambda$ and integrating it by parts over the domain $\Omega$ give $$\begin{aligned}
&\frac{2}{3}\|\Theta_{\lambda x}\|^2+b^2\|\Theta_\lambda\|^2\notag\\
\leq&-\frac{2}{3}\lambda\theta_LJ_*\int_0^1\Theta_{\lambda x}\ln u^2dx-\int_0^1\bigg[\frac{2}{3}\lambda J\ln u^2\big(\Theta_\lambda^2\big)_x+J\Theta_{\lambda x}\Theta_\lambda\bigg]dx\notag\\
&-\frac{2}{3}\lambda J\int_0^1\ln u^2\Big[(\lambda\bar{\theta}-\theta_L)\Theta_\lambda\Big]_xdx-\lambda\int_0^1\Big[J\bar{\theta}_x+u^2(\bar{\theta}-\theta_L)+g(u,q;{\varepsilon})\Big]\Theta_\lambda dx\notag\\ \leq&-\frac{4\lambda\theta_L}{3K[u^2,q]}\bigg(\bar{b}+\int_0^1\big(\Theta_{\lambda x}+\lambda\bar{\theta}_x\big)\ln u^2dx\bigg)\int_0^1\Theta_{\lambda x}\ln u^2dx+C(b,B,N_1)\delta\|\Theta_{\lambda}\|_1^2\notag\\
&+\mu\|\Theta_{\lambda}\|_1^2+C(\mu,b,B,N_1)\delta^2\big(\delta^2+\|\lambda\bar{\theta}-\theta_L\|^2\big)+C(\mu,b,B)\delta^2\notag\\
\leq&\underbrace{-\frac{4\lambda\theta_L}{3K[u^2,q]}\bigg(\int_0^1\Theta_{\lambda x}\ln u^2dx\bigg)^2}_{\leq0}+\big[\mu+C(b,B,N_1)\delta\big]\|\Theta_{\lambda}\|_1^2\notag\\
&+C(\mu,b,B,N_1)\delta^2\big(\delta^2+\|\lambda\bar{\theta}-\theta_L\|^2\big)+C(\mu,b,B,\theta_L)\delta^2\notag\\
\leq&\big[\mu+C(b,B,N_1)\delta\big]\|\Theta_{\lambda}\|_1^2+C(\mu,b,B,N_1)\delta^2\big(\delta^2+\|\lambda\bar{\theta}-\theta_L\|^2\big)\notag\\
&+C(\mu,b,B,\theta_L)\delta^2,\label{133.0}\end{aligned}$$ where we have used the expression , the estimates and $|J|\leq C(b,B,N_1)\delta$, and the Young inequality. Taking $\mu$ and $\delta$ small enough in , we have $$\label{133.3}
\|\Theta_{\lambda}\|_1^2\leq C(b,B,\theta_L)\delta^2+C(b,B,N_1)\delta^2\big(\delta^2+\|\lambda\bar{\theta}-\theta_L\|^2\big),$$ which immediately means that $$\begin{aligned}
\|\Theta\|_1=&\|\Theta_\lambda+\lambda\bar{\theta}\|_1\leq\|\Theta_\lambda\|_1+\lambda\|\bar{\theta}\|_1\notag\\
\leq&\sqrt{C(b,B,\theta_L)+C(b,B,N_1)(1+\theta_L^2)}+2\theta_L=:M_2.\label{es134.1}\end{aligned}$$ Thus, the mapping $\mathcal{T}_2$ has a fixed point $Q=\mathcal{T}_2Q\in H^3(\Omega)$ by Leray-Schauder fixed-point theorem and the elliptic regularity theory. Hence, we have shown the existence of the solution $Q$ to the problem $(P2)$.
The uniqueness of the solution $Q$ follows from the energy method. Let $Q_i\in H^1(\Omega)$, $i=1,2$ be two solutions to $(P2)$ corresponding to the same function pair $(u,q)$. Define $\bar{Q}:=Q_1-Q_2$, which satisfies
\[135.2\] $$\begin{gathered}
\frac{2}{3}\bar{Q}_{xx}-J\bar{Q}_{x}+\frac{4\theta_L}{3K[u^2,q]}\Big(\int_0^1\bar{Q}_x\ln u^2dx\Big)\big(\ln u^2\big)_x+\frac{2}{3}J\big(\ln u^2\big)_x\bar{Q}-u^2\bar{Q}=0,\quad x\in\Omega,\label{135.2a}\\
\bar{Q}(0)=\bar{Q}(1)=0.\label{135.2b}\end{gathered}$$
Multiplying the equation by $-\bar{Q}$ and integrating the resultant equality over $\Omega$, in a similar way to the derivation of , we have $$\begin{aligned}
\frac{2}{3}\|\bar{Q}_x\|^2+b^2\|\bar{Q}\|^2\leq&-\frac{4\theta_L}{3K[u^2,q]}\bigg(\int_0^1\bar{Q}_x\ln u^2dx\bigg)^2+C(b,B,N_1)\delta\|\bar{Q}\|_1^2\notag\\
\leq&C(b,B,N_1)\delta\|\bar{Q}\|_1^2.\label{136.0}\end{aligned}$$ Taking $\delta$ small enough in , we see that $\|\bar{Q}\|_1^2\leq0$. Thus we have proven $Q_1\equiv Q_2$.
On the other hand, letting $\Theta=Q$ and $\lambda=1$ in the estimate , we obtain $$\begin{aligned}
\|Q-\theta_L\|_1\leq&\|Q-\bar{\theta}\|_1+\|\bar{\theta}-\theta_L\|_1\notag\\
\leq&C(b,B,\theta_L)\delta+C(b,B,N_1)\delta\big(\delta+\|\bar{\theta}-\theta_L\|\big)+\|\bar{\theta}-\theta_L\|_1\notag\\
\leq&C(b,B,\theta_L)\delta+C(b,B,N_1)\delta^2\notag\\
=&C_1\delta+C_2(b,B,N_1)\delta^2,\label{133.5}\end{aligned}$$ which is exactly the desired estimate . Solving the equation ${\partial_{x}^{k}}\eqref{109.2a}$ with respect to ${\partial_{x}^{k}}Q_{xx}$ for $k=0,1$ and taking the $L^2$-norm directly, we get the desired estimates and with the aid of the estimates , and $|J|, |J_*|\leq C(b,B,N_1)\delta$. Consequently, the proof of Claim 2 is complete.
*Step 5. End of the proof.* Firstly, based on the estimate we can determine the constants $N_1$ and $N_2$ by letting $$\label{N12}
N_1:=2C_1,\quad N_2:=C_3(b,B,2C_1).$$ If $\delta$ is small enough, that is, $$\delta\leq\frac{C_1}{C_2(b,B,2C_1)},$$ then we see from the estimate that $\mathcal{T}$ maps $\mathcal{U}[N_1,N_2]$ into itself. Combining the estimates and with the Sobolev compact embedding theorem, via a standard argument, we see that the mapping $\mathcal{T}$ is continuous in the norm of $C^2(\overline{\Omega})$ and the image $\mathcal{T}\big(\mathcal{U}[N_1,N_2]\big)$ is precompact in $C^2(\overline{\Omega})$. Therefore, applying the Schauder fixed-point theorem to the mapping $\mathcal{T}:\mathcal{U}[N_1,N_2]\rightarrow\mathcal{U}[N_1,N_2]$, we obtain a fixed-point ${\tilde{\theta}}\in\mathcal{U}[N_1,N_2]$ of the mapping $\mathcal{T}$. According to the construction of the mapping $\mathcal{T}$ above, we can easily see that $({\tilde{w}}:=u[{\tilde{\theta}}],{\tilde{\theta}})$ is a desired solution to the BVP $\sim$.
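The choice of $N_1$ and the smallness condition on $\delta$ can be checked arithmetically: with $N_1=2C_1$, the bound $C_1\delta+C_2\delta^2\leq N_1\delta$ from the estimate holds precisely when $\delta\leq C_1/C_2$, which is what makes $\mathcal{T}$ a self-map of $\mathcal{U}[N_1,N_2]$. A tiny sketch with placeholder values for $C_1$ and $C_2$:

```python
# Self-map check behind the choice N_1 = 2 C_1: the estimate (133.5) gives
# ||Q - theta_L||_1 <= C_1 delta + C_2 delta^2, which is <= N_1 delta = 2 C_1 delta
# exactly when delta <= C_1 / C_2.  C1 and C2 are placeholder values.
C1, C2 = 3.0, 12.0
N1 = 2.0 * C1
for delta in (0.05, 0.2, C1 / C2, 0.5):
    bound = C1 * delta + C2 * delta**2
    print("delta = %.3f   C1*d + C2*d^2 = %.4f   N1*d = %.4f   self-map: %s"
          % (delta, bound, N1 * delta, bound <= N1 * delta))
```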
In addition, the solution $({\tilde{n}},{\tilde{j}},{\tilde{\theta}},{\tilde{\phi}})$ to the original BVP $\sim$ is constructed from the solution $({\tilde{w}},{\tilde{\theta}})$ to the BVP $\sim$. In fact, we define a function ${\tilde{n}}:={\tilde{w}}^2$, the constant ${\tilde{j}}:=J[{\tilde{w}}^2,{\tilde{\theta}}]$ and a function ${\tilde{\phi}}:=G[{\tilde{w}}^2]$, where $J[\cdot,\cdot]$ and $G[\cdot]$ are given in and . Then, we see that $({\tilde{n}},{\tilde{j}},{\tilde{\theta}},{\tilde{\phi}})\in H^4(\Omega)\times H^4(\Omega)\times H^3(\Omega)\times C^2(\overline{\Omega})$ is a desired solution to the BVP $\sim$. Moreover, this stationary solution satisfies the condition and the estimate , thanks to the estimates and .
Finally, using the same methods as in Step 3 and Step 4, we can prove the local uniqueness of the stationary solution $({\tilde{n}},{\tilde{j}},{\tilde{\theta}},{\tilde{\phi}})$ to the BVP $\sim$ if the parameters $\delta$ and ${\varepsilon}$ are small enough and the solution additionally satisfies and . Since the computations are standard but tedious, we omit the details.
Asymptotic stability of the stationary solution {#Sect.3}
===============================================
In this section, we show Theorem \[thm2\] by applying the standard continuation argument based on the local existence result and the uniform a priori estimate. To simplify the notations, we remove the superscript ${\varepsilon}$ and denote the solution $({n^{{\varepsilon}}},{j^{{\varepsilon}}},{\theta^{{\varepsilon}}},{\phi^{{\varepsilon}}})$ in Theorem \[thm2\] as $({n},{j},{\theta},{\phi})$.
Local existence {#Subsect.3.1}
---------------
In this subsection, we discuss the existence of the local-in-time solution. The proof is based on the iteration method and the energy estimates.
It is also convenient to make use of the transformation ${w}:=\sqrt{{n}}$ in the IBVP $\sim$. Then, we derive the equivalent IBVP for $({w},{j},{\theta},{\phi})$ as follows
\[a2.2\] $$\begin{gathered}
2{w}{w}_t+{j}_x=0, \label{a2.2-1}\\
{j}_t+2S[{w}^2,{j},{\theta}]{w}{w}_x+\frac{2{j}}{{w}^2}{j}_x+{w}^2{\theta}_x-{\varepsilon}^2{w}^2\bigg(\frac{{w}_{xx}}{{w}}\bigg)_x={w}^2{\phi}_x-{j}, \label{a2.2-2}\\
{w}^2{\theta}_t+{j}{\theta}_x+\frac{2}{3}{w}^2{\theta}\bigg(\frac{{j}}{{w}^2}\bigg)_x-\frac{2}{3}{\theta}_{xx}-\frac{{\varepsilon}^2}{3}\bigg[{w}^2\bigg(\frac{{j}}{{w}^2}\bigg)_{xx}\bigg]_x=-{w}^2({\theta}-{\theta}_{L})+\frac{{j}^2}{3{w}^2}, \label{a2.2-3}\\
{\phi}_{xx}={w}^2-D(x),\qquad t>0,\ x\in\Omega,\label{a2.2-4}\end{gathered}$$
with the initial condition $$\label{a2.3}
({w},{j},{\theta})(0,x)=({w}_0,{j}_0,{\theta}_0)(x),\quad {w}_0:=\sqrt{{n}_0},$$ and the boundary conditions
\[a2.4\] $$\begin{gathered}
{w}(t,0)={w}_{l},\qquad {w}(t,1)={w}_{r},\label{a2.4-1}\\
{w}_{xx}(t,0)={w}_{xx}(t,1)=0,\label{a2.4-2}\\
{\theta}(t,0)={\theta}_{l},\qquad {\theta}(t,1)={\theta}_{r},\label{a2.4-3}\\
{\phi}(t,0)=0,\qquad {\phi}(t,1)=\phi_r.\label{a2.4-4}\end{gathered}$$
In the following discussion, we borrow the ideas of the papers [@NS08; @NS09], which establish local existence theorems for the isothermal QHD model and the FHD model. See also [@KNN99; @KNN03] for general hyperbolic-elliptic coupled systems.
Now we are in a position to state the local existence result.
\[lema2\] Suppose that the initial data $({w}_0,{j}_0,{\theta}_0)\in H^4(\Omega)\times H^3(\Omega)\times H^2(\Omega)$ and the boundary data satisfy the compatibility condition $$\begin{gathered}
{w}_0(0)={w}_l,\quad {w}_0(1)={w}_r,\quad {\theta}_0(0)=\theta_l,\quad {\theta}_0(1)=\theta_r,\notag\\
{j}_{0x}(0)={j}_{0x}(1)={w}_{0xx}(0)={w}_{0xx}(1)=0\label{rcompatibility}\end{gathered}$$ and the condition $$\label{ripsc}
\inf_{x\in\Omega}{w}_{0}>0,\qquad \inf_{x\in\Omega}{\theta}_0>0, \qquad\inf_{x\in\Omega}S[{w}_{0}^2,{j}_{0},{\theta}_0]>0.$$ Then there exists a constant $T_*>0$ such that the IBVP $\sim$ has a unique solution $({w},{j},{\theta},{\phi})\in\big[\mathfrak{Y}_4([0,T_*])\cap H^2(0,T_*;H^1(\Omega))\big]\times\big[\mathfrak{Y}_3([0,T_*])\cap H^2(0,T_*;L^2(\Omega))\big]\times\big[\mathfrak{Y}_2([0,T_*])\cap H^1(0,T_*;H^1(\Omega))\big]\times\mathfrak{Z}([0,T_*])$ satisfying $$\label{rpsc}
\inf_{x\in\Omega}{w}>0,\qquad \inf_{x\in\Omega}{\theta}>0, \qquad\inf_{x\in\Omega}S[{w}^2,{j},{\theta}]>0.$$
To show Lemma \[lema2\], we first study the linear IBVP for the unknowns $(\hat{{w}},\hat{{j}},\hat{{\theta}})$
\[a3.1\] $$\begin{gathered}
2{w}\hat{{w}}_t+\hat{{j}}_x=0, \label{a3.1-1}\\
\hat{{j}}_t+2S[{w}^2,{j},{\theta}]{w}\hat{{w}}_x+\frac{2{j}}{{w}^2}\hat{{j}}_x+{w}^2\hat{{\theta}}_x-{\varepsilon}^2{w}^2\bigg(\frac{\hat{{w}}_{xx}}{{w}}\bigg)_x={w}^2{\phi}_x-{j}, \label{a3.1-2}\\
{w}^2\hat{{\theta}}_t+{j}\hat{{\theta}}_x+\frac{2}{3}\bigg(\frac{\hat{{j}}}{{w}^2}\bigg)_x{w}^2\hat{{\theta}}-\frac{2}{3}\hat{{\theta}}_{xx}-\frac{{\varepsilon}^2}{3}\bigg[{w}^2\bigg(\frac{\hat{{j}}}{{w}^2}\bigg)_{xx}\bigg]_x=-{w}^2(\hat{{\theta}}-{\theta}_{L})+\frac{{j}^2}{3{w}^2}, \label{a3.1-3}\\
{\phi}:=\Phi[{w}^2],\qquad t>0,\ x\in\Omega,\label{a3.1-4}\end{gathered}$$
with the initial condition $$\label{a3.2}
(\hat{{w}},\hat{{j}},\hat{{\theta}})(0,x)=({w}_0,{j}_0,{\theta}_0)(x),$$ and the boundary conditions
\[a3.3\] $$\begin{gathered}
\hat{{w}}(t,0)={w}_{l},\qquad \hat{{w}}(t,1)={w}_{r},\label{a3.3-1}\\
\hat{{w}}_{xx}(t,0)=\hat{{w}}_{xx}(t,1)=0,\label{a3.3-2}\\
\hat{{\theta}}(t,0)={\theta}_{l},\qquad \hat{{\theta}}(t,1)={\theta}_{r},\label{a3.3-3}\end{gathered}$$
where the function ${\phi}$ is defined by . Let the functions $({w},{j},{\theta})$ in the coefficients in satisfy
\[a4.1\] $$\begin{gathered}
({w},{j},{\theta})(0,x)=({w}_0,{j}_0,{\theta}_0)(x),\quad\forall x\in\Omega, \label{a4.1-1}\\
{w}\in\mathfrak{Y}_4([0,T])\cap H^2(0,T;H^1(\Omega)),\quad {j}\in\mathfrak{Y}_3([0,T])\cap H^2(0,T;L^2(\Omega)),\notag\\
{\theta}\in\mathfrak{Y}_2([0,T])\cap H^1(0,T;H^1(\Omega)),\label{a4.1-2}\\
{w}(t,x),\ {\theta}(t,x),\ S[{w}^2,{j},{\theta}](t,x)\geq m,\quad\forall (t,x)\in[0,T]\times\Omega,\label{a4.1-3}\\
\|{w}(t)\|_4^2+\|{j}(t)\|_3^2+\|({w}_t,{\theta})(t)\|_2^2+\|{j}_t(t)\|_1^2+\|({w}_{tt},{\theta}_t)(t)\|^2\notag\\
+\int_0^t\|({w}_{ttx},{j}_{tt},{\theta}_{tx})(\tau)\|^2d\tau\leq M,\quad\forall t\in[0,T],\label{a4.1-4}\end{gathered}$$
where $T$, $m$ and $M$ are positive constants. We denote by $X(T;m,M)$ the set of functions $({w},{j},{\theta})$ satisfying , and we abbreviate $X(T;m,M)$ by $X(\cdot)$ without confusion. Moreover, the function ${\phi}$ has the property that $${\phi}\in\mathfrak{Z}([0,T]),\quad \|{\partial_{t}^{i}}{\phi}(t)\|_2^2\leq M,\quad\forall t\in[0,T], \ i=0,1,2.$$ The next lemma then shows that for suitably chosen constants $T$, $m$ and $M$, the set $X(\cdot)$ is invariant under the mapping $({w},{j},{\theta})\mapsto(\hat{{w}},\hat{{j}},\hat{{\theta}})$ defined by solving the linear IBVP $\sim$. We discuss the solvability of this linear problem in the Appendix. Since the next lemma is proved in the same way as in [@NS08; @NS09], we omit the proof.
\[lema3\] Under the same assumptions in Lemma \[lema2\], there exist positive constants $T$, $m$ and $M$ with the following property: If $({w},{j},{\theta})\in X(\cdot)$, then the linear IBVP $\sim$ admits a unique solution $(\hat{{w}},\hat{{j}},\hat{{\theta}})$ in the same set $X(\cdot)$.
Using Lemma \[lema3\], we can show Lemma \[lema2\].
We define the approximation sequence $\{({w}^k,{j}^k,{\theta}^k)\}_{k=0}^\infty$ by letting $({w}^0,{j}^0,{\theta}^0)=({w}_0,{j}_0,{\theta}_0)$ and solving
\[a5.1\] $$\begin{gathered}
2{w}^k{w}^{k+1}_t+{j}^{k+1}_x=0, \label{a5.1-1}\\
{j}^{k+1}_t+2S[({w}^k)^2,{j}^k,{\theta}^k]{w}^k{w}^{k+1}_x+\frac{2{j}^k}{({w}^k)^2}{j}^{k+1}_x+({w}^k)^2{\theta}^{k+1}_x-{\varepsilon}^2({w}^k)^2\bigg(\frac{{w}^{k+1}_{xx}}{{w}^k}\bigg)_x=({w}^k)^2{\phi}^k_x-{j}^k, \label{a5.1-2}\\
({w}^k)^2{\theta}^{k+1}_t+{j}^k{\theta}^{k+1}_x+\frac{2}{3}\bigg(\frac{{j}^{k+1}}{({w}^k)^2}\bigg)_x({w}^k)^2{\theta}^{k+1}-\frac{2}{3}{\theta}^{k+1}_{xx}-\frac{{\varepsilon}^2}{3}\bigg[({w}^k)^2\bigg(\frac{{j}^{k+1}}{({w}^k)^2}\bigg)_{xx}\bigg]_x\\
=-({w}^k)^2({\theta}^{k+1}-{\theta}_{L})+\frac{({j}^k)^2}{3({w}^k)^2}, \label{a5.1-3}\\
{\phi}^k:=\Phi[({w}^k)^2],\qquad t>0,\ x\in\Omega,\label{a5.1-4}\end{gathered}$$
with the initial condition $$\label{a5.2}
({w}^{k+1},{j}^{k+1},{\theta}^{k+1})(0,x)=({w}_0,{j}_0,{\theta}_0)(x),$$ and the boundary conditions
\[a5.3\] $$\begin{gathered}
{w}^{k+1}(t,0)={w}_{l},\qquad {w}^{k+1}(t,1)={w}_{r},\label{a5.3-1}\\
{w}^{k+1}_{xx}(t,0)={w}^{k+1}_{xx}(t,1)=0,\label{a5.3-2}\\
{\theta}^{k+1}(t,0)={\theta}_{l},\qquad {\theta}^{k+1}(t,1)={\theta}_{r},\label{a5.3-3}\end{gathered}$$
Thanks to Lemma \[lema3\], the sequence $\{({w}^k,{j}^k,{\theta}^k)\}_{k=0}^\infty$ is well defined and contained in $X(\cdot)$. Consequently, $({w}^k,{j}^k,{\theta}^k)$ satisfies the estimates and . Next, applying the standard energy method to the system satisfied by the difference $({w}^{k+1}-{w}^k,{j}^{k+1}-{j}^k,{\theta}^{k+1}-{\theta}^k)$, we see that $\{({w}^k,{j}^k,{\theta}^k)\}_{k=0}^\infty$ is a Cauchy sequence in $\mathfrak{Y}_2([0,T_*])\times\mathfrak{Y}_1([0,T_*])\times\big[\mathfrak{Y}_2([0,T_*])\cap H^1(0,T_*;H^1(\Omega))\big]$ for small enough $0<T_*\leq T$. In showing this fact, we obtain the estimates of the higher-order derivatives in the time variable $t$ and then rewrite them into those in the spatial variable $x$ by using the linear equations. Thus, there exists a function $({w},{j},{\theta})\in\mathfrak{Y}_2([0,T_*])\times\mathfrak{Y}_1([0,T_*])\times\big[\mathfrak{Y}_2([0,T_*])\cap H^1(0,T_*;H^1(\Omega))\big]$ such that $({w}^k,{j}^k,{\theta}^k)\rightarrow({w},{j},{\theta})$ strongly in $\mathfrak{Y}_2([0,T_*])\times\mathfrak{Y}_1([0,T_*])\times\big[\mathfrak{Y}_2([0,T_*])\cap H^1(0,T_*;H^1(\Omega))\big]$ as $k\rightarrow\infty$. Moreover, it holds that $({w},{j})\in\big[\mathfrak{Y}_4([0,T_*])\cap H^2(0,T_*;H^1(\Omega))\big]\times\big[\mathfrak{Y}_3([0,T_*])\cap H^2(0,T_*;L^2(\Omega))\big]$ by the standard argument (see [@NS08; @NS09] for example). Defining ${\phi}:=\Phi[{w}^2]$ through the limit function ${w}$ and the explicit formula , we see that $({w},{j},{\theta},{\phi})$ is the desired solution to the IBVP $\sim$. Notice that this solution also satisfies .
A priori estimate {#Subsect.3.2}
-----------------
To show the asymptotic stability of the stationary solution $({\tilde{w}},{\tilde{j}},{\tilde{\theta}},{\tilde{\phi}})$, we introduce the perturbations around the stationary solution $({\tilde{w}},{\tilde{j}},{\tilde{\theta}},{\tilde{\phi}})$ below $$\begin{aligned}
{\psi}(t,x)&:={w}(t,x)-{\tilde{w}}(x),&{\eta}(t,x)&:={j}(t,x)-{\tilde{j}},\notag\\
{\chi}(t,x)&:={\theta}(t,x)-{\tilde{\theta}}(x),&{\sigma}(t,x)&:={\phi}(t,x)-{\tilde{\phi}}(x).\label{10.1}\end{aligned}$$
Taking the difference between the transient system and the stationary system via the following procedure $$\label{10.0}
\eqref{1dfqhd1}-\eqref{1dsfqhd1},\quad\eqref{1dfqhd2}/{w}^2-\eqref{1dsfqhd2}/{\tilde{w}}^2,\quad\eqref{1dfqhd3}-\eqref{1dsfqhd3},\quad\eqref{1dfqhd4}-\eqref{1dsfqhd4},$$ we can derive the perturbed system for the perturbations $({\psi},{\eta},{\chi},{\sigma})$ as
\[10.2\] $$\begin{gathered}
2({\psi}+{\tilde{w}}){\psi}_t+{\eta}_x=0, \label{10.2a}\\
\bigg(\frac{{j}}{{w}^2}-\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg)_t+\frac{1}{2}\Bigg\{\bigg(\frac{{j}}{{w}^2}\bigg)^2-\bigg(\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg)^2\Bigg\}_x+{\tilde{\theta}}\Big(\ln{w}^2-\ln{\tilde{w}}^2\Big)_x\\
+\big(\ln{w}^2\big)_x{\chi}+{\chi}_x-{\varepsilon}^2\bigg(\frac{{w}_{xx}}{{w}}-\frac{{\tilde{w}}_{xx}}{{\tilde{w}}}\bigg)_x\\
={\sigma}_x-\bigg(\frac{{j}}{{w}^2}-\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg), \label{10.2b}\\
({\psi}+{\tilde{w}})^2{\chi}_t-\frac{2}{3}{\chi}_{xx}+\frac{2}{3}{\tilde{\theta}}{\eta}_x-\frac{4{\tilde{j}}{\tilde{\theta}}}{3{\tilde{w}}}{\psi}_x\\
-\frac{{\varepsilon}^2}{3}\Bigg\{({\psi}+{\tilde{w}})^2\bigg[\frac{{\eta}+{\tilde{j}}}{({\psi}+{\tilde{w}})^2}\bigg]_{xx}-{\tilde{w}}^2\bigg(\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg)_{xx}\Bigg\}_x=H(t,x), \label{10.2c}\\
{\sigma}_{xx}=({\psi}+2{\tilde{w}}){\psi},\label{10.2d}\end{gathered}$$
where the right-side term of the perturbed energy equation is defined by $$\label{11.1}
H(t,x):=\frac{4{\tilde{\theta}}({\psi}+{\tilde{w}})_x}{3({\psi}+{\tilde{w}})}{\eta}-{\tilde{w}}^2{\chi}+H_1(t,x),$$ and $$\begin{aligned}
H_1(t,x):=&\frac{4{\chi}({\psi}+{\tilde{w}})_x}{3({\psi}+{\tilde{w}})}{\eta}-({\psi}+2{\tilde{w}})({\chi}+{\tilde{\theta}}-\theta_L){\psi}\notag\\
&-({\chi}+{\tilde{\theta}})_x{\eta}-{\tilde{j}}{\chi}_x-\frac{2{\eta}_x}{3}{\chi}+\frac{4{\tilde{j}}{\tilde{\theta}}({\psi}+{\tilde{w}})_x}{3{\tilde{w}}^2}{\psi}+\frac{4{\tilde{j}}({\psi}+{\tilde{w}})_x}{3({\psi}+{\tilde{w}})}{\chi}\notag\\
&+\frac{4{\tilde{j}}{\tilde{\theta}}({\psi}+2{\tilde{w}})({\psi}+{\tilde{w}})_x}{3{\tilde{w}}^2({\psi}+{\tilde{w}})}{\psi}+\frac{{\eta}+2{\tilde{j}}}{3({\psi}+{\tilde{w}})^2}{\eta}-\frac{{\tilde{j}}^2({\psi}+2{\tilde{w}})}{3{\tilde{w}}^2({\psi}+{\tilde{w}})^2}{\psi}.\label{11.0}\end{aligned}$$ The initial and the boundary conditions to the system are derived from , and as $$\begin{gathered}
{\psi}(0,x)={\psi}_0(x):={w}_0(x)-{\tilde{w}}(x),\quad{\eta}(0,x)={\eta}_0(x):={j}_0(x)-{\tilde{j}},\notag\\
{\chi}(0,x)={\chi}_0(x):={\theta}_0(x)-{\tilde{\theta}}(x),\label{pic}\end{gathered}$$ and
\[pbc\] $$\begin{gathered}
{\psi}(t,0)={\psi}(t,1)=0,\label{pbc1}\\
{\psi}_{xx}(t,0)={\psi}_{xx}(t,1)=0,\label{pbc2}\\
{\chi}(t,0)={\chi}(t,1)=0,\label{pbc3}\\
{\sigma}(t,0)={\sigma}(t,1)=0.\label{pbc4}\end{gathered}$$
Theorem \[thm1\] and Lemma \[lema2\] ensure the local existence of the solution $({\psi},{\eta},{\chi},{\sigma})$ to the IBVP $\sim$. It is summarized in the next corollary.
\[cor1\] Suppose that $({\psi}_0,{\eta}_0,{\chi}_0)\in H^4(\Omega)\times H^3(\Omega)\times H^2(\Omega)$ and $({\psi}_0+{\tilde{w}},{\eta}_0+{\tilde{j}},{\chi}_0+{\tilde{\theta}})$ satisfies and . Then there exists a constant $T_*>0$ such that the IBVP $\sim$ has a unique solution $({\psi},{\eta},{\chi},{\sigma})\in\big[\mathfrak{Y}_4([0,T_*])\cap H^2(0,T_*;H^1(\Omega))\big]\times\big[\mathfrak{Y}_3([0,T_*])\cap H^2(0,T_*;L^2(\Omega))\big]\times\big[\mathfrak{Y}_2([0,T_*])\cap H^1(0,T_*;H^1(\Omega))\big]\times\mathfrak{Y}_4^2([0,T_*])$ with the property that $({\psi}+{\tilde{w}},{\eta}+{\tilde{j}},{\chi}+{\tilde{\theta}})$ satisfies .
To show the global existence of the solution, the key step is to derive the a priori estimate for the local solution in Corollary \[cor1\]. The next three subsections are devoted to the proof of Proposition \[prop1\], where the following notations are frequently used. $$\label{16.1}
{N_{\varepsilon}(T)}:=\sup_{t\in[0,T]}{n_{\varepsilon}(t)},\quad {n_{\varepsilon}(t)}:=\|({\psi},{\eta},{\chi})(t)\|_2+\|({\varepsilon}{\partial_{x}^{3}}{\psi},{\varepsilon}{\partial_{x}^{3}}{\eta},{\varepsilon}^2{\partial_{x}^{4}}{\psi})(t)\|.$$
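For completeness, the quantity ${n_{\varepsilon}(t)}$ combines ordinary Sobolev norms of $({\psi},{\eta},{\chi})$ with ${\varepsilon}$-weighted norms of the highest derivatives. The small utility below evaluates such a quantity for grid functions; it is an illustrative sketch only (the finite-difference derivatives, the test data, and the way the components are summed are assumptions, chosen up to equivalence of norms).

```python
# Discrete evaluation of a weighted quantity of the type n_eps(t):
# Sobolev norms of (psi, eta, chi) plus the eps-weighted norms of
# eps*psi_xxx, eps*eta_xxx, eps^2*psi_xxxx, using finite differences
# on a uniform grid.  Test data are illustrative.
import numpy as np

def l2(f, h):
    return np.sqrt(h * np.sum(f**2))

def d(f, h):                       # central finite-difference derivative
    return np.gradient(f, h)

def sobolev(f, h, order):          # discrete H^order norm
    total, g = 0.0, f
    for _ in range(order + 1):
        total += l2(g, h)**2
        g = d(g, h)
    return np.sqrt(total)

x = np.linspace(0.0, 1.0, 1001)
h = x[1] - x[0]
eps = 0.1
psi = 0.01 * np.sin(np.pi * x)      # illustrative perturbations
eta = 0.02 * np.sin(2 * np.pi * x)
chi = 0.01 * x * (1 - x)

psi3 = d(d(d(psi, h), h), h)
eta3 = d(d(d(eta, h), h), h)
psi4 = d(psi3, h)
n_eps = (sobolev(psi, h, 2) + sobolev(eta, h, 2) + sobolev(chi, h, 2)
         + l2(eps * psi3, h) + l2(eps * eta3, h) + l2(eps**2 * psi4, h))
print("discrete n_eps =", n_eps)
```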
\[prop1\] Let $({\psi},{\eta},{\chi},{\sigma})$ be a solution to the IBVP $\sim$ which belongs to $\big[\mathfrak{Y}_4([0,T])\cap H^2(0,T;H^1(\Omega))\big]\times\big[\mathfrak{Y}_3([0,T])\cap H^2(0,T;L^2(\Omega))\big]\times\big[\mathfrak{Y}_2([0,T])\cap H^1(0,T;H^1(\Omega))\big]\times\mathfrak{Y}_4^2([0,T])$. Then there exist positive constants $\delta_0$, $C$ and $\gamma$ such that if ${N_{\varepsilon}(T)}+\delta+{\varepsilon}\leq\delta_0$, then the following estimate holds for $t\in[0,T]$, $$\label{127.2}
{n_{\varepsilon}(t)}+\|{\sigma}(t)\|_4\leq Cn_{\varepsilon}(0)e^{-\gamma t},$$ where $C$ and $\gamma$ are two positive constants independent of $\delta$, ${\varepsilon}$ and $T$.
Using the Sobolev inequality, the estimate , the perturbed system and the notation , we can derive some frequently used estimates in the next lemma. Since the proof is straightforward and tedious, we omit the details.
Under the same assumptions as in Proposition \[prop1\], the following estimates hold for $t\in[0,T]$, $$\begin{gathered}
|{\tilde{w}}|_1+\big|\big({\varepsilon}^{1/2}{\tilde{w}}_{xx},{\varepsilon}^{3/2}{\tilde{w}}_{xxx}\big)\big|_0\leq C,\quad |{\tilde{j}}|+|{\tilde{\theta}}-\theta_L|_2\leq C\delta,\label{17.1}\\
|({\psi},{\eta},{\chi})(t)|_1+\big|\big({\varepsilon}^{1/2}{\psi}_{xx},{\varepsilon}^{1/2}{\eta}_{xx},{\varepsilon}^{3/2}{\psi}_{xxx},{\psi}_t,{\eta}_t\big)(t)\big|_0\leq C{N_{\varepsilon}(T)},\label{16.3+16.5}\\
\|{\partial_{t}^{i}}{\sigma}(t)\|_2\leq C\bigg[\|{\partial_{t}^{i}}{\psi}(t)\|+\frac{i(i-1)}{2}{N_{\varepsilon}(T)}\|{\psi}_t(t)\|\bigg],\quad i=0,1,2,\label{17.2}\\
\|{\sigma}_{tx}(t)\|\leq\|{\eta}(t)\|,\quad\|{\sigma}(t)\|_4\leq C\|{\psi}(t)\|_2,\label{17.3}\\
\|{\partial_{x}^{l}}{\eta}_x(t)\|\leq C\|{\psi}_t(t)\|_l,\quad\|{\partial_{x}^{l}}{\psi}_t(t)\|\leq C\|{\eta}_x(t)\|_l,\quad l=0,1,2,\label{32.4+44.2}\\
\|{\eta}_{tx}(t)\|\leq C\|({\psi}_t,{\psi}_{tt})(t)\|,\quad\|{\eta}_{txx}(t)\|\leq C\|({\psi}_{tt},{\psi}_{tx},{\psi}_{ttx})(t)\|,\label{38.2c+38.2d}\end{gathered}$$ where the positive constant $C$ is independent of $\delta$, ${\varepsilon}$ and $T$.
Basic estimate {#Subsect.3.3}
--------------
In this subsection, we derive the following basic estimate.
\[be\] Suppose the same assumptions as in Proposition \[prop1\] hold. Then there exist positive constants $\delta_0$, $c$ and $C$ such that if ${N_{\varepsilon}(T)}+\delta+{\varepsilon}\leq\delta_0$, it holds that for $t\in[0,T]$, $$\label{25.1}
\frac{d}{dt}\Xi(t)+c\Pi(t)\leq C\Gamma(t),$$ where $$\label{25.2}
\Xi(t):=\int_0^1\bigg\{\bigg[\frac{1}{2{w}^2}{\eta}^2+{\tilde{\theta}}{w}^2\Psi\bigg(\frac{{\tilde{w}}^2}{{w}^2}\bigg)+{\varepsilon}^2{\psi}_x^2+\frac{1}{2}{\sigma}_x^2\bigg]+\frac{3{w}^2}{4{\tilde{\theta}}}{\chi}^2-\alpha\bigg(\frac{{j}}{{w}^2}-\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg){\sigma}_x\bigg\}dx,$$ here $\alpha\in(0,1)$ is a small constant which will be determined later and $\Psi(s):=s-1-\ln s$ for $s>0$, $$\label{D} \Pi(t):=\|({\psi},{\varepsilon}{\psi}_x,{\eta},{\chi},{\chi}_x)(t)\|^2,$$ and $$\label{R} \Gamma(t):=\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{3/2}\big)\|({\psi}_x,{\eta}_x)(t)\|^2+{\varepsilon}^3\|({\psi}_{xx},{\eta}_{xx})(t)\|^2.$$ Furthermore, if $\alpha$ is small enough, then the following equivalent relation holds true, $$\label{25.3}
c\|({\psi},{\eta},{\chi},{\varepsilon}{\psi}_x)(t)\|^2\leq\Xi(t)\leq C\|({\psi},{\eta},{\chi},{\varepsilon}{\psi}_x)(t)\|^2,$$ where the constants $c$ and $C$ are independent of $\delta$, ${\varepsilon}$ and $T$.
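The equivalence relation relies on the elementary properties of $\Psi(s)=s-1-\ln s$: it is nonnegative, vanishes only at $s=1$, and behaves like $(s-1)^2/2$ near $s=1$, so the term ${\tilde{\theta}}{w}^2\Psi({\tilde{w}}^2/{w}^2)$ in $\Xi$ controls $\|{\psi}\|^2$ for small perturbations. A quick numerical confirmation of these properties (illustrative only):

```python
# Properties of Psi(s) = s - 1 - ln s used in the equivalence (25.3):
# Psi >= 0, Psi(1) = 0, and Psi(1+e) ~ e^2/2 as e -> 0.
import numpy as np

s = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
psi_vals = s - 1.0 - np.log(s)
print("Psi on sample points:", psi_vals)        # all >= 0, zero only at s = 1
for eps_s in (1e-1, 1e-2, 1e-3):
    ratio = (eps_s - np.log1p(eps_s)) / (0.5 * eps_s**2)   # Psi(1+e) / (e^2/2)
    print("Psi(1+%g) / (e^2/2) = %.6f" % (eps_s, ratio))
```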
Firstly, multiplying the equation by ${\eta}$ and applying Leibniz formula to the resultant equality together with the equations and , we obtain $$\label{18.1}
{\partial_{t}^{}}\bigg[\frac{1}{2{w}^2}{\eta}^2+{\tilde{\theta}}{w}^2\Psi\bigg(\frac{{\tilde{w}}^2}{{w}^2}\bigg)+{\varepsilon}^2{\psi}_x^2+\frac{1}{2}{\sigma}_x^2\bigg]+\frac{1}{{\tilde{w}}^2}{\eta}^2+\frac{2{\tilde{w}}_x}{{w}}{\chi}{\eta}+{\chi}_x{\eta}={\partial_{x}^{}}R_{1}(t,x)+R_2(t,x),$$ where $$\label{18.3a}
R_1(t,x):={\sigma}{\sigma}_{tx}+{\sigma}{\eta}-{\tilde{\theta}}\Big(\ln{w}^2-\ln{\tilde{w}}^2\Big){\eta}+{\varepsilon}^2\bigg[\bigg(\frac{{w}_{xx}}{{w}}-\frac{{\tilde{w}}_{xx}}{{\tilde{w}}}\bigg){\eta}+2{\psi}_t{\psi}_x\bigg],$$ $$\begin{aligned}
R_2(t,x):=&-\frac{{\eta}+2{\tilde{j}}}{2{w}^4}{\eta}{\eta}_x-\frac{1}{2}\Bigg[\bigg(\frac{{j}}{{w}^2}\bigg)^2-\bigg(\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg)^2\Bigg]_x{\eta}\notag\\
&-\frac{2{\psi}_x}{{w}}{\chi}{\eta}+{\tilde{\theta}}_x\Big(\ln{w}^2-\ln{\tilde{w}}^2\Big){\eta}+\frac{({w}+{\tilde{w}}){j}}{{w}^2{\tilde{w}}^2}{\psi}{\eta}+{\varepsilon}^2\frac{{\tilde{w}}_{xx}}{{\tilde{w}}{w}}{\psi}{\eta}_x.\label{18.3b}\end{aligned}$$ Applying the estimates $\sim$ and Cauchy-Schwarz inequality to , we have the following pointwise estimate $$\label{18.4b}
R_2(t,x)\leq C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{3/2}\big)|({\psi},{\eta},{\psi}_x,{\eta}_x)(t,x)|^2.$$
In addition, multiplying the equation by $3{\chi}/(2{\tilde{\theta}})$ and applying Leibniz formula to the resultant equality, we obtain $$\label{19.1}
{\partial_{t}^{}}\bigg(\frac{3{w}^2}{4{\tilde{\theta}}}{\chi}^2\bigg)+\frac{3{\tilde{w}}^2}{2{\tilde{\theta}}}{\chi}^2+\frac{1}{{\tilde{\theta}}}{\chi}_x^2-\frac{2{\tilde{w}}_x}{{w}}{\eta}{\chi}-{\eta}{\chi}_x={\partial_{x}^{}}R_3(t,x)+R_4(t,x),$$ where $$\label{19.2a}
R_3(t,x):=\frac{1}{{\tilde{\theta}}}{\chi}{\chi}_x-{\eta}{\chi}+\frac{{\varepsilon}^2}{2{\tilde{\theta}}}\bigg[{w}^2\bigg(\frac{{j}}{{w}^2}\bigg)_{xx}-{\tilde{w}}^2\bigg(\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg)_{xx}\bigg]{\chi},$$ $$\begin{aligned}
R_4(t,x):=&\frac{2{\psi}_x}{{w}}{\eta}{\chi}-\frac{3{\eta}_x}{4{\tilde{\theta}}}{\chi}^2+\frac{{\tilde{\theta}}_x}{{\tilde{\theta}}^2}{\chi}{\chi}_x+\frac{2{\tilde{j}}}{{\tilde{w}}}{\psi}_x{\chi}+\frac{3}{2{\tilde{\theta}}}H_1(t,x){\chi}\notag\\
&+\frac{{\varepsilon}^2{\tilde{\theta}}_x}{2{\tilde{\theta}}^2}\bigg[{w}^2\bigg(\frac{{j}}{{w}^2}\bigg)_{xx}-{\tilde{w}}^2\bigg(\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg)_{xx}\bigg]{\chi}-\frac{{\varepsilon}^2}{2{\tilde{\theta}}}\bigg[{w}^2\bigg(\frac{{j}}{{w}^2}\bigg)_{xx}-{\tilde{w}}^2\bigg(\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg)_{xx}\bigg]{\chi}_x.\label{19.2b}\end{aligned}$$ Applying the estimates $\sim$ and Cauchy-Schwarz inequality to together with for $k=0$, we have the following pointwise estimate $$\begin{aligned}
R_4(t,x)\leq&C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^3\big)|({\psi},{\eta},{\psi}_x,{\eta}_x)(t,x)|^2\notag\\
&+C{\varepsilon}|({\chi},{\chi}_x)(t,x)|^2+C{\varepsilon}^3|({\psi}_{xx},{\eta}_{xx})(t,x)|^2.\label{20.2b}\end{aligned}$$
Since the stationary density ${\tilde{w}}$ is non-flat, see , we have to capture the dissipation rate of the perturbed density ${\psi}$ in establishing the basic estimate. To this end, multiplying the equation by $-{\sigma}_x$ and applying Leibniz formula to the resultant equality, we obtain $$\label{22.1}
-{\partial_{t}^{}}\bigg[\bigg(\frac{{j}}{{w}^2}-\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg){\sigma}_x\bigg]+{\tilde{\theta}}({w}+{\tilde{w}})\Big(\ln{w}^2-\ln{\tilde{w}}^2\Big){\psi}+\frac{{w}+{\tilde{w}}}{{w}}{\varepsilon}^2{\psi}_x^2+{\sigma}_x^2={\partial_{x}^{}}R_5(t,x)+R_6(t,x),$$ where $$\label{22.2a}
R_5(t,x):={\tilde{\theta}}\Big(\ln{w}^2-\ln{\tilde{w}}^2\Big){\sigma}_x-{\varepsilon}^2\bigg(\frac{{w}_{xx}}{{w}}-\frac{{\tilde{w}}_{xx}}{{\tilde{w}}}\bigg){\sigma}_x+{\varepsilon}^2\frac{{w}+{\tilde{w}}}{{w}}{\psi}_x{\psi},$$ $$\begin{aligned}
R_6:=&-\bigg(\frac{{j}}{{w}^2}-\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg){\sigma}_{tx}+\frac{1}{2}\Bigg[\bigg(\frac{{j}}{{w}^2}\bigg)^2-\bigg(\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg)^2\Bigg]_x{\sigma}_x+\Big(\ln{w}^2\Big)_x{\chi}{\sigma}_x\notag\\
&-{\tilde{\theta}}_x\Big(\ln{w}^2-\ln{\tilde{w}}^2\Big){\sigma}_x+{\chi}_x{\sigma}_x+\bigg(\frac{{j}}{{w}^2}-\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg){\sigma}_x+{\varepsilon}^2\frac{({w}+{\tilde{w}}){\psi}}{{w}^2}{\psi}_x^2\notag\\
&-{\varepsilon}^2\frac{{\tilde{w}}_{xx}({w}+{\tilde{w}})}{{w}{\tilde{w}}}{\psi}^2+{\varepsilon}^2\frac{({w}+{\tilde{w}}){\tilde{w}}_x}{{w}^2}{\psi}_x{\psi}-{\varepsilon}^2\frac{({w}+{\tilde{w}})_x}{{w}}{\psi}_x{\psi}.\label{22.2b}\end{aligned}$$ Similarly, we also have the pointwise estimate $$\begin{aligned}
R_6(t,x)\leq&(\mu+C\delta)|{\sigma}_x(t,x)|^2+C\big({N_{\varepsilon}(T)}+\delta\big)|({\psi},{\psi}_x,{\eta}_x)(t,x)|^2\notag\\
&+C_\mu|({\chi},{\chi}_x,{\eta})(t,x)|^2+C|({\eta},{\sigma}_{tx})(t,x)|^2+C\big({N_{\varepsilon}(T)}+{\varepsilon}\big)|({\psi},{\varepsilon}{\psi}_x)(t,x)|^2,\label{22.3b}\end{aligned}$$ In particular, applying the mean value theorem to the second term on the left-side of , and using the estimates $\sim$, the second and the third terms on the left-side of can be further treated as $$\label{23.0}
\text{(the 2nd and 3rd terms)}\geq c|({\psi},{\varepsilon}{\psi}_x)(t,x)|^2,$$ and the quantity in the first term on the left-side of can be estimated as $$\label{23.2b}
\bigg|-\bigg(\frac{{j}}{{w}^2}-\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg){\sigma}_x\bigg|\leq C|({\psi},{\eta},{\sigma}_x)(t,x)|^2.$$ Substituting and into , letting $\mu$ and ${N_{\varepsilon}(T)}+\delta+{\varepsilon}$ be small enough, we have $$\begin{aligned}
-{\partial_{t}^{}}\bigg[\bigg(\frac{{j}}{{w}^2}-\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg){\sigma}_x\bigg]+c|({\psi},{\varepsilon}{\psi}_x)(t,x)|^2\leq&{\partial_{x}^{}}R_5(t,x)+C|({\sigma}_{tx},{\eta},{\chi},{\chi}_x)(t,x)|^2\notag\\
&+C\big({N_{\varepsilon}(T)}+\delta\big)|({\psi}_x,{\eta}_x)(t,x)|^2.\label{23.1}\end{aligned}$$
Finally, from the following procedure $$\int_0^1\Big[\eqref{18.1}+\eqref{19.1}+\alpha\eqref{23.1}\Big]dx,$$ where $\alpha$ is an arbitrary positive constant to be determined, we obtain $$\begin{gathered}
\label{24.2}
\frac{d}{dt}\Xi(t)+\int_0^1\bigg(\frac{1}{{\tilde{w}}^2}{\eta}^2+\frac{3{\tilde{w}}^2}{2{\tilde{\theta}}}{\chi}^2+\frac{1}{{\tilde{\theta}}}{\chi}_x^2\bigg)dx+c\alpha\|({\psi},{\varepsilon}{\psi}_x)(t)\|^2\\
\leq\int_0^1\Big[R_2(t,x)+R_4(t,x)\Big]dx+C\alpha\|({\sigma}_{tx},{\eta},{\chi},{\chi}_x)(t)\|^2+C\alpha\big({N_{\varepsilon}(T)}+\delta\big)\|({\psi}_x,{\eta}_x)(t)\|^2,\end{gathered}$$ where we have used the fact $$\label{18.4a+20.1b+23.2a}
\int_0^1{\partial_{x}^{}}\Big[R_1(t,x)+R_3(t,x)+\alpha R_5(t,x)\Big]dx=0,$$ which follows from the boundary conditions . Applying the estimates , , , and to the inequality , and then letting $\alpha$ and ${N_{\varepsilon}(T)}+\delta+{\varepsilon}$ be sufficiently small, we obtain the desired estimates and .
Higher order estimates {#Subsect.3.4}
----------------------
This subsection is devoted to the derivation of the higher order estimates. In order to use the homogeneous boundary condition , we first establish the estimates of the temporal derivatives of the perturbations $({\psi},{\eta},{\chi})$. We then find that the spatial derivatives of the perturbations $({\psi},{\eta},{\chi},{\sigma})$ can be bounded by the temporal derivatives of the perturbations $({\psi},{\eta},{\chi})$ with the help of the special structure of the perturbed system . To justify the above-mentioned computations, we need to use mollifier arguments with respect to the time variable $t$, because the regularity of the local solution $({\psi},{\eta},{\chi})$ is insufficient. However, we omit these arguments since they are standard.
It is convenient to introduce the notations $$\begin{gathered}
\label{39.1}
A_{-1}(t):=\|({\psi},{\eta},{\chi},{\chi}_x)(t)\|,\\
A_k(t):=A_{-1}(t)+\sum_{i=0}^k\|({\partial_{t}^{i}}{\psi}_t,{\partial_{t}^{i}}{\psi}_x,{\varepsilon}{\partial_{t}^{i}}{\psi}_{xx})(t)\|,\quad k=0,1.\end{gathered}$$
We differentiate with respect to $x$ and multiply the result by $1/{w}$; similarly, we differentiate with respect to $x$ and multiply the result by $1/{\tilde{w}}$. Furthermore, we take the difference between the two resultant equalities and substitute the equations and into the result. Then, applying the operator ${\partial_{t}^{k}}$ for $k=0,1$, we obtain the equation $$\begin{aligned}
2{\partial_{t}^{k}}{\psi}_{tt}-&2{\tilde{\theta}}{\partial_{t}^{k}}{\psi}_{xx}+{\varepsilon}^2{\partial_{t}^{k}}{\psi}_{xxxx}+2{\partial_{t}^{k}}{\psi}_t-{\tilde{w}}{\partial_{t}^{k}}{\chi}_{xx}-2{\tilde{w}}_{xx}{\partial_{t}^{k}}{\chi}\notag\\
=&\frac{2({\eta}+{\tilde{j}})}{({\psi}+{\tilde{w}})^3}{\partial_{t}^{k}}{\eta}_{xx}-\frac{2({\eta}+{\tilde{j}})^2}{({\psi}+{\tilde{w}})^4}{\partial_{t}^{k}}{\psi}_{xx}+2{\chi}{\partial_{t}^{k}}{\psi}_{xx}+{\psi}{\partial_{t}^{k}}{\chi}_{xx}\notag\\
&+{\varepsilon}^2\frac{(k+1){\psi}_{xx}+2{\tilde{w}}_{xx}}{{\psi}+{\tilde{w}}}{\partial_{t}^{k}}{\psi}_{xx}+{\partial_{t}^{k}}P(t,x)+O_k(t,x),\quad k=0,1,\label{12.3}\end{aligned}$$ where $$\begin{aligned}
P(t,x):=&-({\psi}+{\tilde{w}})({\psi}+2{\tilde{w}}){\psi}-({\tilde{w}}^2-D){\psi}-\frac{2({\psi}+{\tilde{w}})_x^2({\chi}+{\tilde{\theta}})}{({\psi}+{\tilde{w}}){\tilde{w}}}{\psi}\notag\\
&+\frac{2{\tilde{w}}_x^2}{{\tilde{w}}}{\chi}+4{\tilde{w}}_x{\chi}_x-2({\psi}+{\tilde{w}})_x{\sigma}_x\notag\\ &+{\tilde{\theta}}_{xx}{\psi}+\frac{6({\psi}+{\tilde{w}})_x^2({\eta}+{\tilde{j}})^2}{({\psi}+{\tilde{w}})^5{\tilde{w}}^5}\big[{\tilde{w}}^5-({\psi}+{\tilde{w}})^5\big]+\frac{6{\tilde{w}}_x^2({\eta}+2{\tilde{j}})}{{\tilde{w}}^5}{\eta}\notag\\
&-\frac{2({\eta}+{\tilde{j}})^2\big[{\tilde{w}}^4-({\psi}+{\tilde{w}})^4\big]}{({\psi}+{\tilde{w}})^4{\tilde{w}}^4}{\tilde{w}}_{xx}-\frac{2({\eta}+2{\tilde{j}}){\eta}}{{\tilde{w}}^4}{\tilde{w}}_{xx}-\frac{{\varepsilon}^2{\tilde{w}}_{xx}^2}{({\psi}+{\tilde{w}}){\tilde{w}}}{\psi}\\ &+\frac{6({\psi}+2{\tilde{w}})_x({\eta}+{\tilde{j}})^2}{{\tilde{w}}^5}{\psi}_x+\frac{2({\chi}+{\tilde{\theta}})}{{\tilde{w}}}{\psi}_x^2+\frac{2({\psi}+2{\tilde{w}})_x{\chi}}{{\tilde{w}}}{\psi}_x+4({\chi}+{\tilde{\theta}})_x{\psi}_x\notag\\
&-\frac{2}{{\psi}+{\tilde{w}}}{\psi}_t^2+\frac{2}{({\psi}+{\tilde{w}})^3}{\eta}_x^2-\frac{8({\psi}+{\tilde{w}})_x({\eta}+{\tilde{j}})}{({\psi}+{\tilde{w}})^4}{\eta}_x+2\bigg(\frac{2{\tilde{w}}_x{\tilde{\theta}}}{{\tilde{w}}}-{\tilde{\phi}}_x\bigg){\psi}_x, $$ and $$\begin{aligned}
&O_0(t,x):=0,& O_1(t,x):=&-\frac{6({\eta}+{\tilde{j}})}{({\psi}+{\tilde{w}})^4}{\psi}_t{\eta}_{xx}+\frac{2}{({\psi}+{\tilde{w}})^3}{\eta}_t{\eta}_{xx}+\frac{8({\eta}+{\tilde{j}})^2}{({\psi}+{\tilde{w}})^5}{\psi}_t{\psi}_{xx}\notag\\
&&&-\frac{4({\eta}+{\tilde{j}})}{({\psi}+{\tilde{w}})^4}{\eta}_t{\psi}_{xx}+2{\chi}_t{\psi}_{xx}+{\psi}_t{\chi}_{xx}-\frac{{\varepsilon}^2({\psi}_{xx}+2{\tilde{w}}_{xx}){\psi}_{xx}}{({\psi}+{\tilde{w}})^2}{\psi}_t. $$ According to $\sim$, we show the estimate $$\begin{aligned}
\|{\partial_{t}^{k}}P(t)\|+\|O_k(t)\|\leq&C\|({\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\chi},{\partial_{t}^{k}}{\chi}_{x})(t)\|\notag\\
&+C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)\|({\partial_{t}^{k}}{\psi}_{t},{\partial_{t}^{k}}{\psi}_{x},{\partial_{t}^{k}}{\eta})(t)\|,\quad k=0,1,\label{26.1+26.4+27.2+27.3+29.1}\end{aligned}$$ where $C$ is a positive constant independent of $\delta$, ${\varepsilon}$ and $T$. In deriving the $L^2$-norm estimate of $\|{\partial_{t}^{k}}P(t)\|$, we have used the equation and the estimate to deal with the coefficient of the last term in the expression of $P(t,x)$ as $$\Bigg|2\bigg(\frac{2{\tilde{w}}_x{\tilde{\theta}}}{{\tilde{w}}}-{\tilde{\phi}}_x\bigg)\Bigg|=\Bigg|2\bigg[\frac{2{\tilde{j}}^2}{{\tilde{w}}^5}{\tilde{w}}_x-{\tilde{\theta}}_x+{\varepsilon}^2\bigg(\frac{{\tilde{w}}_{xx}}{{\tilde{w}}}\bigg)_x-\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg]\Bigg|\leq C\big(\delta+{\varepsilon}^{1/2}\big).$$
Applying the operator ${\partial_{t}^{k}}$ for $k=0,1$ to , we have $$\begin{gathered}
\label{15.1}
({\psi}+{\tilde{w}})^2{\partial_{t}^{k}}{\chi}_t-\frac{2}{3}{\partial_{t}^{k}}{\chi}_{xx}+\frac{2}{3}{\tilde{\theta}}{\partial_{t}^{k}}{\eta}_x-\frac{4{\tilde{j}}{\tilde{\theta}}}{3{\tilde{w}}}{\partial_{t}^{k}}{\psi}_x\\
={\partial_{x}^{}}\mathcal{V}_k(t,x)+{\partial_{t}^{k}}H(t,x)+L_k(t,x),\quad k=0,1,\end{gathered}$$ where $$\begin{aligned}
\mathcal{V}_k(t,x):=&\frac{{\varepsilon}^2}{3}{\partial_{t}^{k}}\Bigg\{({\psi}+{\tilde{w}})^2\bigg[\frac{{\eta}+{\tilde{j}}}{({\psi}+{\tilde{w}})^2}\bigg]_{xx}-{\tilde{w}}^2\bigg(\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg)_{xx}\Bigg\}\notag\\ =&\frac{{\varepsilon}^2}{3}{\partial_{t}^{k}}{\eta}_{xx}-\frac{2{\varepsilon}^2{\tilde{j}}}{3{\tilde{w}}}{\partial_{t}^{k}}{\psi}_{xx}+{\partial_{t}^{k}}\mathcal{K}(t,x),\quad k=0,1,\label{32.1+33.1}\end{aligned}$$ $$\begin{aligned}
\mathcal{K}(t,x):=\frac{{\varepsilon}^2}{3}\bigg[-\frac{4({\psi}+{\tilde{w}})_x}{{\psi}+{\tilde{w}}}&{\eta}_x+\frac{6({\psi}+{\tilde{w}})_x^2}{({\psi}+{\tilde{w}})^2}{\eta}-\frac{6{\tilde{j}}({\psi}+2{\tilde{w}})({\psi}+{\tilde{w}})_x^2}{({\psi}+{\tilde{w}})^2{\tilde{w}}^2}{\psi}\notag\\
+&\frac{6{\tilde{j}}({\psi}+2{\tilde{w}})_x}{{\tilde{w}}^2}{\psi}_x-\frac{2({\psi}+{\tilde{w}})_{xx}}{{\psi}+{\tilde{w}}}{\eta}+\frac{2{\tilde{j}}({\psi}+{\tilde{w}})_{xx}}{({\psi}+{\tilde{w}}){\tilde{w}}}{\psi}\bigg],\label{32.2a}\end{aligned}$$ $$L_0(t,x):=0,\quad L_1(t,x):=-2({\psi}+{\tilde{w}}){\psi}_t{\chi}_t.$$ For convenience, we further compute ${\partial_{x}^{}}\mathcal{V}_k(t,x)$ as $$\label{34.1}
{\partial_{x}^{}}\mathcal{V}_0(t,x)=\frac{{\varepsilon}^2}{3}{\eta}_{xxx}-\frac{2{\varepsilon}^2{\tilde{j}}}{3{\tilde{w}}}{\psi}_{xxx}+\underbrace{\frac{2{\varepsilon}^2{\tilde{j}}{\tilde{w}}_x}{{\tilde{w}}^2}{\psi}_{xx}+{\partial_{x}^{}}\mathcal{K}(t,x)}_{=:\mathcal{K}_1(t,x)}, $$ and $$\label{37.2}
{\partial_{x}^{}}\mathcal{V}_1(t,x)=\frac{{\varepsilon}^2}{3}{\eta}_{txxx}-\frac{2{\varepsilon}^2({\eta}+{\tilde{j}})}{3({\psi}+{\tilde{w}})}{\psi}_{txxx}+\mathcal{K}_2(t,x),$$ where $$\begin{aligned}
\mathcal{K}_2(t,x):=\frac{{\varepsilon}^2}{3}\bigg[-&\frac{4({\psi}+{\tilde{w}})_x}{{\psi}+{\tilde{w}}}{\eta}_{txx}+\frac{4({\psi}+{\tilde{w}})_x{\eta}_{xx}}{({\psi}+{\tilde{w}})^2}{\psi}_t-\frac{4{\eta}_{xx}}{{\psi}+{\tilde{w}}}{\psi}_{tx}\notag\\ &+\frac{10({\psi}+{\tilde{w}})_x^2}{({\psi}+{\tilde{w}})^2}{\eta}_{tx}-\frac{20({\psi}+{\tilde{w}})_x^2{\eta}_x}{({\psi}+{\tilde{w}})^3}{\psi}_t+\frac{20({\psi}+{\tilde{w}})_x{\eta}_x}{({\psi}+{\tilde{w}})^2}{\psi}_{tx}\notag\\ &-\frac{6({\psi}+{\tilde{w}})_{xx}}{{\psi}+{\tilde{w}}}{\eta}_{tx}+\frac{6({\psi}+{\tilde{w}})_{xx}{\eta}_x}{({\psi}+{\tilde{w}})^2}{\psi}_t-\frac{6{\eta}_x}{{\psi}+{\tilde{w}}}{\psi}_{txx}\notag\\&-\frac{12({\psi}+{\tilde{w}})_x^3}{({\psi}+{\tilde{w}})^3}{\eta}_t+\frac{36({\eta}+{\tilde{j}})({\psi}+{\tilde{w}})_x^3}{({\psi}+{\tilde{w}})^4}{\psi}_t-\frac{36({\eta}+{\tilde{j}})({\psi}+{\tilde{w}})_x^2}{({\psi}+{\tilde{w}})^3}{\psi}_{tx}\notag\\ &+\frac{14({\psi}+{\tilde{w}})_{x}({\psi}+{\tilde{w}})_{xx}}{({\psi}+{\tilde{w}})^2}{\eta}_t-\frac{28({\eta}+{\tilde{j}})({\psi}+{\tilde{w}})_{x}({\psi}+{\tilde{w}})_{xx}}{({\psi}+{\tilde{w}})^3}{\psi}_t\notag\\
&\qquad\qquad\qquad+\frac{14({\eta}+{\tilde{j}})({\psi}+{\tilde{w}})_{xx}}{({\psi}+{\tilde{w}})^2}{\psi}_{tx}+\frac{14({\eta}+{\tilde{j}})({\psi}+{\tilde{w}})_x}{({\psi}+{\tilde{w}})^2}{\psi}_{txx}\notag\\ &-\frac{2({\psi}+{\tilde{w}})_{xxx}}{{\psi}+{\tilde{w}}}{\eta}_t+\frac{2({\eta}+{\tilde{j}})({\psi}+{\tilde{w}})_{xxx}}{({\psi}+{\tilde{w}})^2}{\psi}_t\bigg].\label{37.1} $$ According to $\sim$, we further show the estimates $$\begin{gathered}
\|H(t)\|\leq C\|({\eta},{\chi})(t)\|+C\big({N_{\varepsilon}(T)}+\delta\big)\|({\psi},{\chi}_x)(t)\|,\label{29.2a}\\
\|{\partial_{t}^{}}H(t)\|+\|L_1(t)\|\leq C\|({\eta}_t,{\chi}_t)(t)\|+C\big({N_{\varepsilon}(T)}+\delta\big)\|({\psi}_t,{\psi}_{tt},{\psi}_{tx},{\chi}_{tx})(t)\|,\label{30.1a+30.2}\\
\|\mathcal{K}(t)\|\leq C{\varepsilon}^{3/2}\|({\psi},{\psi}_t,{\psi}_x,{\eta})(t)\|,\label{32.2b}\\
\|{\partial_{t}^{}}\mathcal{K}(t)\|\leq C{\varepsilon}^{3/2}\|{\psi}_t(t)\|+C{\varepsilon}^2\|({\psi}_{tt},{\psi}_{tx},{\eta}_t)(t)\|+C\big({N_{\varepsilon}(T)}+\delta\big){\varepsilon}\|{\varepsilon}{\psi}_{txx}(t)\|,\label{33.2b}\\
\|\mathcal{K}_1(t)\|\leq C{\varepsilon}^{1/2}\|({\psi},{\eta})(t)\|+C{\varepsilon}^{3/2}\|({\psi}_t,{\psi}_x)(t)\|+C{\varepsilon}^2\|({\psi}_{tx},{\psi}_{xx})(t)\|,\label{34.2c}\\
\|\mathcal{K}_2(t)\|\leq C{\varepsilon}^{1/2}\|({\psi}_t,{\eta}_t)(t)\|+C{\varepsilon}^{3/2}\|({\psi}_{tt},{\psi}_{tx})(t)\|+C{\varepsilon}\|({\varepsilon}{\psi}_{ttx},{\varepsilon}{\psi}_{txx})(t)\|,\label{37.1b}\end{gathered}$$ where the positive constant $C$ is independent of $\delta$, ${\varepsilon}$ and $T$.
The following lemma is important for our strategy of establishing the a priori estimate . It shows that the spatial derivatives of the perturbations $({\psi},{\eta},{\chi})$ can be controlled by their temporal derivatives.
\[lem5\] Under the same assumptions as in Proposition \[prop1\], the following equivalent relation holds for $t\in[0,T]$, $$\label{46.5}
c\big(A_1(t)+\|{\chi}_t(t)\|\big)\leq{n_{\varepsilon}(t)}\leq C\big(A_1(t)+\|{\chi}_t(t)\|\big),$$ where the two positive constants $c$ and $C$ are independent of $\delta$, ${\varepsilon}$ and $T$.
We only prove the right-side inequality in , since the left-side inequality in can be established by a similar method with considerably simpler computations. According to the definitions of ${n_{\varepsilon}(t)}$, $A_{-1}(t)$, $A_0(t)$ and $A_1(t)$, and using the estimates $\sim$, with $k=0$, and , the equation with $k=0$, the equation with $k=0$ and the equality , we have $$\begin{aligned}
{n_{\varepsilon}(t)}=&\|({\psi},{\eta},{\chi})(t)\|_2+\|({\varepsilon}{\partial_{x}^{3}}{\psi},{\varepsilon}{\partial_{x}^{3}}{\eta},{\varepsilon}^2{\partial_{x}^{4}}{\psi})(t)\|\notag\\
\leq&CA_1(t)+\|{\varepsilon}^2{\partial_{x}^{4}}{\psi}(t)\|+\|{\chi}_{xx}(t)\|+\|{\psi}_{xx}(t)\|+\|{\varepsilon}{\partial_{x}^{3}}{\psi}(t)\|\notag\\
=&CA_1(t)+\|{\chi}_{xx}(t)\|+\|{\psi}_{xx}(t)\|+\|{\varepsilon}{\partial_{x}^{3}}{\psi}(t)\|\notag\\
&+\Big\|\big[\eqref{12.3}_r|_{k=0}-\big(2{\psi}_{tt}-2{\tilde{\theta}}{\psi}_{xx}+2{\psi}_t-{\tilde{w}}{\chi}_{xx}-2{\tilde{w}}_{xx}{\chi}\big)\big](t)\Big\|\notag\\
\leq&CA_1(t)+\|{\chi}_{xx}(t)\|+\|{\psi}_{xx}(t)\|+\|{\varepsilon}{\partial_{x}^{3}}{\psi}(t)\|\notag\\
&+C\|({\psi}_t,{\psi}_{tt},{\eta}_{xx},P,{\chi},{\chi}_{x},{\chi}_{xx},{\psi}_{xx})(t)\|\notag\\ \leq&C\big(A_1(t)+\|{\chi}_{xx}(t)\|+\|{\psi}_{xx}(t)\|+\|{\varepsilon}{\partial_{x}^{3}}{\psi}(t)\|\big)\notag\\
=&C\big(A_1(t)+\|{\psi}_{xx}(t)\|+\|{\varepsilon}{\partial_{x}^{3}}{\psi}(t)\|\big)\notag\\
&+C\bigg\|-\frac{3}{2}\bigg[-({\psi}+{\tilde{w}})^2{\chi}_t-\frac{2{\tilde{\theta}}}{3}{\eta}_x+\frac{4{\tilde{j}}{\tilde{\theta}}}{3{\tilde{w}}}{\psi}_x+\frac{{\varepsilon}^2}{3}{\partial_{x}^{3}}{\eta}-\frac{2{\varepsilon}^2{\tilde{j}}}{3{\tilde{w}}}{\partial_{x}^{3}}{\psi}+\mathcal{K}_1+H\bigg](t)\bigg\|\notag\\
\leq&C\big(A_1(t)+\|{\psi}_{xx}(t)\|+\|{\varepsilon}{\partial_{x}^{3}}{\psi}(t)\|\big)+C\big(A_1(t)+\|{\chi}_t(t)\|+{\varepsilon}\|{\varepsilon}{\partial_{x}^{3}}{\psi}(t)\|\big)\notag\\ \leq&C\big(A_1(t)+\|{\chi}_t(t)\|+\|{\psi}_{xx}(t)\|+\|{\varepsilon}{\partial_{x}^{3}}{\psi}(t)\|\big).\label{42.2}\end{aligned}$$
Moreover, multiplying the equation with $k=0$ by $-{\psi}_{xx}$ and integrating by parts using the boundary condition , we obtain $$\begin{aligned}
&\theta_{L}\|{\psi}_{xx}(t)\|^2+\|{\varepsilon}{\partial_{x}^{3}}{\psi}(t)\|^2\notag\\
\leq&-\int_0^1\Big[\eqref{12.3}_r|_{k=0}-(2{\psi}_{tt}+2{\psi}_t-{\tilde{w}}{\chi}_{xx}-2{\tilde{w}}_{xx}{\chi})\Big]{\psi}_{xx}dx\notag\\
\leq&\big[\mu+C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{3/2}\big)\big]\|{\psi}_{xx}(t)\|^2+C_\mu\|({\psi}_t,{\psi}_{tt},{\psi}_{tx},{\chi},{\chi}_x,{\chi}_{xx},P)(t)\|^2\notag\\
\leq&\big[\mu+C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{3/2}\big)\big]\|{\psi}_{xx}(t)\|^2+C_\mu\big(A_1^2(t)+\|{\chi}_{xx}(t)\|^2\big)\notag\\
\leq&\big[\mu+C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{3/2}\big)\big]\|{\psi}_{xx}(t)\|^2+C_\mu\big(A_1^2(t)+\|{\chi}_t(t)\|^2+{\varepsilon}^2\|{\varepsilon}{\partial_{x}^{3}}{\psi}(t)\|^2\big).\label{42.0}\end{aligned}$$ Taking $\mu$ and ${N_{\varepsilon}(T)}+\delta+{\varepsilon}$ small enough, the inequality implies $$\label{42.1}
\|{\psi}_{xx}(t)\|+\|{\varepsilon}{\partial_{x}^{3}}{\psi}(t)\|\leq C\big(A_1(t)+\|{\chi}_t(t)\|\big).$$ Substituting into , we obtain ${n_{\varepsilon}(t)}\leq C\big(A_1(t)+\|{\chi}_t(t)\|\big)$, which completes the proof of the right-side inequality.
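Here and in the subsequent estimates, the constants $\mu$ and $C_\mu$ enter through the parameter-dependent Young inequality, which we recall for later reference: $$|ab|\leq\mu a^2+\frac{1}{4\mu}b^2,\qquad\forall\,a,b\in\mathbb{R},\ \forall\,\mu>0.$$ Choosing $\mu$ small allows cross terms to be absorbed into the dissipative quadratic terms, at the cost of a constant $C_\mu$ depending only on $\mu$ in front of the remaining terms.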
For later use, we estimate the $L^2$-norm of ${\partial_{t}^{k}}{\eta}_t$ for $k=0,1$ in the next lemma.
Under the same assumptions as in Proposition \[prop1\], the following estimates hold for $t\in[0,T]$, $$\label{P79-70.3a}
\|{\eta}_t(t)\|\leq C\|({\psi},{\eta},{\chi},{\chi}_x,{\psi}_x)(t)\|+C\big({N_{\varepsilon}(T)}+\delta\big)\|{\psi}_t(t)\|+C{\varepsilon}^{1/2}\big(A_1(t)+\|{\chi}_t(t)\|\big),$$ and
\[P79-72.2+P79-78.1a\] $$\label{P79-72.2}
{\partial_{t}^{}}{\eta}_t={\varepsilon}^2{w}{\psi}_{txxx}+Y_1(t,x),$$ $$\begin{aligned}
\|Y_1(t)\|\leq&C\|({\psi},{\eta},{\chi},{\chi}_x,{\psi}_t,{\psi}_x,{\psi}_{tx},{\chi}_{tx})(t)\|\notag\\
&+C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)\|({\psi}_{tt},{\varepsilon}{\psi}_{xx},{\varepsilon}{\psi}_{txx})(t)\|,\label{P79-78.1a}\end{aligned}$$
where $Y_1(t,x)$ is given by and the positive constant $C$ is independent of $\delta$, ${\varepsilon}$ and $T$.
Solving the equation with respect to ${\eta}_t$, we have $$\label{es68.1}
{\eta}_t={\varepsilon}^2{w}^2\bigg(\frac{{w}_{xx}}{{w}}-\frac{{\tilde{w}}_{xx}}{{\tilde{w}}}\bigg)_x+Y(t,x),$$ where $$\begin{aligned}
Y(t,x):=&\frac{2{j}}{{w}}{\psi}_t-\frac{{w}^2}{2}\Bigg[\bigg(\frac{{j}}{{w}^2}\bigg)^2-\bigg(\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg)^2\Bigg]_x-{w}^2{\chi}\big(\ln{w}^2\big)_x\notag\\
&-{w}^2{\tilde{\theta}}\big(\ln{w}^2-\ln{\tilde{w}}^2\big)_x-{w}^2{\chi}_x+{w}^2{\sigma}_x-{w}^2\bigg(\frac{{j}}{{w}^2}-\frac{{\tilde{j}}}{{\tilde{w}}^2}\bigg).\end{aligned}$$ Taking the $L^2$-norm of directly, and applying the estimates , , and to the resultant equality, we obtain the desired estimate .
Next, differentiating the equation with respect to the time variable $t$, we get the equality , where $Y_1(t,x)$ is defined by $$\begin{aligned}
Y_1(t,x):=&-{\varepsilon}^2{w}_x{\psi}_{txx}-{\varepsilon}^2{w}_{xxx}{\psi}_t+\frac{2{\varepsilon}^2{w}_{xx}{w}_x}{{w}}{\psi}_t\notag\\
&-{\varepsilon}^2{w}_{xx}{\psi}_{tx}+2{\varepsilon}^2{w}{\psi}_t\bigg(\frac{{w}_{xx}}{{w}}-\frac{{\tilde{w}}_{xx}}{{\tilde{w}}}\bigg)_x+{\partial_{t}^{}}Y(t,x).\label{72.3}\end{aligned}$$ Similarly, taking the $L^2$-norm of directly, and applying the estimates , , , , and to the resultant equality, we have the desired estimate .
Now, we begin to derive the higher order estimates to complete the a priori estimate . From the following lemma, we can see that the Bohm potential term in the momentum equation contributes the quantum dissipation rate $\|{\varepsilon}{\partial_{t}^{k}}{\psi}_{xx}(t)\|$.
Suppose the same assumptions as in Proposition \[prop1\] hold. Then there exist positive constants $\delta_0$, $c$ and $C$ such that if ${N_{\varepsilon}(T)}+\delta+{\varepsilon}\leq\delta_0$, it holds that for $t\in[0,T]$, $$\label{115.1}
\frac{d}{dt}\Xi_1^{(k)}(t)+c\Pi_1^{(k)}(t)\leq C\Gamma_1^{(k)}(t),\quad k=0,1,$$ where $$\begin{gathered}
\Xi_1^{(k)}(t):=\int_0^1\Big[\big({\partial_{t}^{k}}{\psi}\big)^2+2{\partial_{t}^{k}}{\psi}_t{\partial_{t}^{k}}{\psi}\Big]dx,\quad \Pi_1^{(k)}(t):=\|({\partial_{t}^{k}}{\psi}_x,{\varepsilon}{\partial_{t}^{k}}{\psi}_{xx})(t)\|^2,\notag\\
\Gamma_1^{(k)}(t):=\|({\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\chi},{\partial_{t}^{k}}{\chi}_x)(t)\|^2+\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)A_k^2(t),\label{115.1a}\end{gathered}$$ and the constants $c$ and $C$ are independent of $\delta$, ${\varepsilon}$ and $T$.
Multiplying the equation by ${\partial_{t}^{k}}{\psi}$ and integrating the resultant equality by parts over the domain $\Omega=(0,1)$ together with the homogeneous boundary conditions , we get $$\label{48.1}
\frac{d}{dt}\Xi_1^{(k)}(t)+\int_0^1\Big[2{\tilde{\theta}}\big({\partial_{t}^{k}}{\psi}_x\big)^2+\big({\varepsilon}{\partial_{t}^{k}}{\psi}_{xx}\big)^2\Big]dx=\mathcal{I}_1^{(k)}(t),\quad k=0,1,$$ where the integral term $\mathcal{I}_1^{(k)}(t)$ is defined by $$\begin{aligned}
\mathcal{I}_1^{(k)}(t):=\int_0^1\bigg\{&2\big({\partial_{t}^{k}}{\psi}_t\big)^2-2{\tilde{\theta}}_x{\partial_{t}^{k}}{\psi}_x{\partial_{t}^{k}}{\psi}-\Big({\tilde{w}}_x{\partial_{t}^{k}}{\chi}_x{\partial_{t}^{k}}{\psi}+{\tilde{w}}{\partial_{t}^{k}}{\chi}_x{\partial_{t}^{k}}{\psi}_x\Big)\notag\\
&+2{\tilde{w}}_{xx}{\partial_{t}^{k}}{\chi}{\partial_{t}^{k}}{\psi}+\bigg(\frac{6{w}_x{j}}{{w}^4}{\partial_{t}^{k}}{\eta}_x{\partial_{t}^{k}}{\psi}-\frac{2{\eta}_x}{{w}^3}{\partial_{t}^{k}}{\eta}_x{\partial_{t}^{k}}{\psi}-\frac{2{j}}{{w}^3}{\partial_{t}^{k}}{\eta}_x{\partial_{t}^{k}}{\psi}_x\bigg)\notag\\
&-\bigg[\frac{8{w}_x{j}^2}{{w}^5}{\partial_{t}^{k}}{\psi}_x{\partial_{t}^{k}}{\psi}-\frac{4{j}{\eta}_x}{{w}^4}{\partial_{t}^{k}}{\psi}_x{\partial_{t}^{k}}{\psi}-\frac{2{j}^2}{{w}^4}\big({\partial_{t}^{k}}{\psi}_x\big)^2\bigg]\notag\\
&-\Big[2{\chi}_x{\partial_{t}^{k}}{\psi}_x{\partial_{t}^{k}}{\psi}+2{\chi}\big({\partial_{t}^{k}}{\psi}_x\big)^2\Big]-\Big({\psi}_x{\partial_{t}^{k}}{\chi}_x{\partial_{t}^{k}}{\psi}+{\psi}{\partial_{t}^{k}}{\chi}_x{\partial_{t}^{k}}{\psi}_x\Big)\notag\\
&\qquad\quad+{\varepsilon}^2\frac{(k+1){\psi}_{xx}+2{\tilde{w}}_{xx}}{{w}}{\partial_{t}^{k}}{\psi}_{xx}{\partial_{t}^{k}}{\psi}+\Big({\partial_{t}^{k}}P+O_k\Big){\partial_{t}^{k}}{\psi}\bigg\}dx.\label{48.0}\end{aligned}$$ According to the estimates , , , and , and then using Cauchy-Schwarz, Young and Hölder inequalities, we have the estimates $$\label{52.1}
\int_0^1\Big[2{\tilde{\theta}}\big({\partial_{t}^{k}}{\psi}_x\big)^2+\big({\varepsilon}{\partial_{t}^{k}}{\psi}_{xx}\big)^2\Big]dx\geq c\Pi_1^{(k)}(t),$$ and $$\begin{aligned}
\mathcal{I}_1^{(k)}(t)\leq&2\|{\partial_{t}^{k}}{\psi}_t(t)\|^2+C\delta\|({\partial_{t}^{k}}{\psi}_x,{\partial_{t}^{k}}{\psi})(t)\|^2+\mu\|{\partial_{t}^{k}}{\psi}_x(t)\|^2+C_\mu\|({\partial_{t}^{k}}{\chi}_x,{\partial_{t}^{k}}{\psi})(t)\|^2\notag\\
&+C\|({\partial_{t}^{k}}{\chi},{\partial_{t}^{k}}{\chi}_x,{\partial_{t}^{k}}{\psi})(t)\|^2+C\big({N_{\varepsilon}(T)}+\delta\big)\|({\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\psi}_x,{\partial_{t}^{k}}{\eta}_x)(t)\|^2\notag\\
&+C{\varepsilon}^{1/2}\|({\partial_{t}^{k}}{\psi},{\varepsilon}{\partial_{t}^{k}}{\psi}_{xx})(t)\|^2+C\|({\partial_{t}^{k}}P,O_k,{\partial_{t}^{k}}{\psi})(t)\|^2\notag\\
\leq&\mu\|{\partial_{t}^{k}}{\psi}_x(t)\|^2+C_\mu\|({\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\chi},{\partial_{t}^{k}}{\chi}_x)(t)\|^2\notag\\
&+C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)A_k^2(t).\label{52.2}\end{aligned}$$ Substituting and into and taking $\mu$ small enough, we obtain the desired estimate .
Next, the following lemma is the most difficult step in establishing the higher order estimates. From this lemma, we see that the dispersive velocity term in the energy equation contributes the extra quantum dissipation rate $\|{\varepsilon}{\partial_{t}^{k}}{\psi}_{tx}(t)\|$, see . It plays a role similar to that of the additional dissipation rate $\|{\chi}_{tx}(t)\|$ contributed by the diffusion term in the energy equation, see .
Suppose the same assumptions as in Proposition \[prop1\] hold. Then there exist positive constants $\delta_0$, $c$ and $C$ such that if ${N_{\varepsilon}(T)}+\delta+{\varepsilon}\leq\delta_0$, it holds that for $t\in[0,T]$, $$\label{115.2}
\frac{d}{dt}\Xi_2^{(k)}(t)+c\Pi_2^{(k)}(t)\leq C\Gamma_2^{(k)}(t),\quad k=0,1,$$ where $$\begin{gathered}
\Xi_2^{(k)}(t):=\int_0^1\Bigg\{\big({\partial_{t}^{k}}{\psi}_t\big)^2+\bigg({\theta}-\frac{{j}^2}{{w}^4}\bigg)\big({\partial_{t}^{k}}{\psi}_x\big)^2+\frac{1}{2}\big({\varepsilon}{\partial_{t}^{k}}{\psi}_{xx}\big)^2\\
-\frac{3{w}^3}{2}{\partial_{t}^{k}}{\chi}{\partial_{t}^{k}}{\psi}_t-k\bigg[\frac{9{w}^5{\varepsilon}}{8}{\chi}_t\big({\varepsilon}{\psi}_{txx}\big)+\frac{3{w}^4{\varepsilon}^2}{8}\big({\varepsilon}{\psi}_{txx}\big)^2\bigg]\Bigg\}dx,\end{gathered}$$ $$\Pi_2^{(k)}(t):=\|({\partial_{t}^{k}}{\psi}_t,{\varepsilon}{\partial_{t}^{k}}{\psi}_{tx})(t)\|^2,\quad \Gamma_2^{(k)}(t):=\big(\mu+{N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)\|{\partial_{t}^{k}}{\psi}_x(t)\|^2+\Upsilon^{(k)}(t),$$ $$\Upsilon^{(0)}(t):=C_\mu\|({\psi},{\eta},{\chi},{\chi}_x)(t)\|^2+\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)\|({\psi}_{tt},{\psi}_{tx},{\varepsilon}{\psi}_{xx},{\varepsilon}{\psi}_{txx},{\chi}_t)(t)\|^2,$$ $$\begin{aligned}
\Upsilon^{(1)}(t):=C_\mu&\|({\chi}_t,{\chi}_{tx})(t)\|^2+\|({\psi},{\eta},{\chi},{\chi}_x,{\psi}_t,{\psi}_x)(t)\|^2\notag\\
&+\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)\|({\psi}_{tt},{\varepsilon}{\psi}_{xx},{\varepsilon}{\psi}_{txx})(t)\|^2,\label{115.2a}\end{aligned}$$ and the constants $c$ and $C$ are independent of $\delta$, ${\varepsilon}$ and $T$. Here $\mu$ is an arbitrary positive constant to be determined and $C_\mu$ is a generic constant which only depends on $\mu$.
Multiplying the equation by ${\partial_{t}^{k}}{\psi}_t$ and integrating the resultant equality by parts over the domain $\Omega$ together with the homogeneous boundary conditions , we get $$\begin{gathered}
\label{60.1}
\frac{d}{dt}\int_0^1\bigg[\big({\partial_{t}^{k}}{\psi}_t\big)^2+\bigg({\theta}-\frac{{j}^2}{{w}^4}\bigg)\big({\partial_{t}^{k}}{\psi}_x\big)^2+\frac{1}{2}\big({\varepsilon}{\partial_{t}^{k}}{\psi}_{xx}\big)^2\bigg]dx\\
+2\|{\partial_{t}^{k}}{\psi}_t(t)\|^2-\int_0^1{w}{\partial_{t}^{k}}{\chi}_{xx}{\partial_{t}^{k}}{\psi}_tdx=\mathcal{I}_2^{(k)}(t),\quad k=0,1,\end{gathered}$$ where the integral term $\mathcal{I}_2^{(k)}(t)$ is given by $$\begin{aligned}
\mathcal{I}_2^{(k)}(t):=\int_0^1\bigg\{&\bigg[-2\bigg({\theta}-\frac{{j}^2}{{w}^4}\bigg)_x{\partial_{t}^{k}}{\psi}_x{\partial_{t}^{k}}{\psi}_t+\bigg({\theta}-\frac{{j}^2}{{w}^4}\bigg)_t\big({\partial_{t}^{k}}{\psi}_x\big)^2\bigg]\notag\\
&+2{\tilde{w}}_{xx}{\partial_{t}^{k}}{\chi}{\partial_{t}^{k}}{\psi}_t+\frac{2{j}}{{w}^3}{\partial_{t}^{k}}{\eta}_{xx}{\partial_{t}^{k}}{\psi}_t+{\varepsilon}^2\frac{(k+1){\psi}_{xx}+2{\tilde{w}}_{xx}}{{w}}{\partial_{t}^{k}}{\psi}_{xx}{\partial_{t}^{k}}{\psi}_t\notag\\
&+\Big[{\partial_{t}^{k}}P(t,x)+O_k(t,x)\Big]{\partial_{t}^{k}}{\psi}_t\bigg\}dx\end{aligned}$$ and can be estimated by the standard method as follows $$\begin{aligned}
\mathcal{I}_2^{(k)}(t)\leq&C({N_{\varepsilon}(T)}+\delta)\big(\|({\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\psi}_x)(t)\|^2+\|{\chi}_t(t)\|_k^2\big)\notag\\ &+\mu\|{\partial_{t}^{k}}{\psi}_t(t)\|^2+C_\mu\|{\partial_{t}^{k}}{\chi}(t)\|_1^2\notag\\ &+C({N_{\varepsilon}(T)}+\delta)\|({\partial_{t}^{k}}{\psi}_t,k{\psi}_{tx})(t)\|^2\notag\\ &+C{\varepsilon}^{1/2}\|({\partial_{t}^{k}}{\psi}_t,{\varepsilon}{\partial_{t}^{k}}{\psi}_{xx})(t)\|^2\notag\\ &+\mu\|{\partial_{t}^{k}}{\psi}_t(t)\|^2+C_\mu\|({\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\chi},{\partial_{t}^{k}}{\chi}_x)(t)\|^2\notag\\
&\qquad+C_\mu\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)\|({\partial_{t}^{k}}{\eta},{\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\psi}_x)(t)\|^2\notag\\ \leq&2\mu\|{\partial_{t}^{k}}{\psi}_t(t)\|^2+C_\mu\|({\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\chi},{\partial_{t}^{k}}{\chi}_x)(t)\|^2\notag\\
&+C_\mu\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)\Big(\|({\partial_{t}^{k}}{\eta},{\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\psi}_x,{\varepsilon}{\partial_{t}^{k}}{\psi}_{xx})(t)\|^2+\|{\chi}_t(t)\|_k^2\Big),\label{107.2r}\end{aligned}$$ with the aid of the estimates , , and , and the Hölder, Young and Sobolev inequalities.
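Here and below, the Sobolev inequality refers to the one-dimensional embedding $H^1(0,1)\hookrightarrow L^\infty(0,1)$, $$\|u\|_{L^\infty(0,1)}\leq C\|u\|_{H^1(0,1)},$$ which allows lower-order factors to be taken out of the integrals in $L^\infty$ and bounded by quantities controlled by ${N_{\varepsilon}(T)}$ and $\delta$; this is the origin of the small prefactors $C({N_{\varepsilon}(T)}+\delta)$ appearing above.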
Now, we have to deal with the last integral term on the left-side of the equality . This is the most difficult part of the proof, due to the dispersive velocity term in the energy equation and the Bohm potential term in the momentum equation. More precisely, we solve the equation with respect to ${\partial_{t}^{k}}{\chi}_{xx}$ and substitute the result into the last integral term on the left-side of , which gives $$\begin{aligned}
-\int_0^1{w}{\partial_{t}^{k}}{\chi}_{xx}{\partial_{t}^{k}}{\psi}_tdx=&\int_0^1{w}\frac{3}{2}\bigg[-{w}^2{\partial_{t}^{k}}{\chi}_t-\frac{2}{3}{\tilde{\theta}}{\partial_{t}^{k}}{\eta}_x+\frac{4{\tilde{j}}{\tilde{\theta}}}{3{\tilde{w}}}{\partial_{t}^{k}}{\psi}_x\notag\\
&\qquad\qquad\qquad+{\partial_{x}^{}}\mathcal{V}_k(t,x)+{\partial_{t}^{k}}H(t,x)+L_k(t,x)\bigg]{\partial_{t}^{k}}{\psi}_tdx\notag\\
=&-\int_0^1\frac{3}{2}{w}^3{\partial_{t}^{k}}{\chi}_t{\partial_{t}^{k}}{\psi}_tdx-\int_0^1{w}{\tilde{\theta}}{\partial_{t}^{k}}{\eta}_x{\partial_{t}^{k}}{\psi}_tdx\notag\\
&+\int_0^1\frac{2{w}{\tilde{j}}{\tilde{\theta}}}{{\tilde{w}}}{\partial_{t}^{k}}{\psi}_x{\partial_{t}^{k}}{\psi}_tdx+\int_0^1\frac{3}{2}{w}{\partial_{x}^{}}\mathcal{V}_k(t,x){\partial_{t}^{k}}{\psi}_tdx\notag\\
&+\int_0^1\frac{3}{2}{w}\Big[{\partial_{t}^{k}}H(t,x)+L_k(t,x)\Big]{\partial_{t}^{k}}{\psi}_tdx\notag\\
=&\mathfrak{T}^{(k)}_1(t)+\mathfrak{T}^{(k)}_2(t)+\mathfrak{T}^{(k)}_3(t)+\mathfrak{T}^{(k)}_4(t)+\mathfrak{T}^{(k)}_5(t). \label{64.2}\end{aligned}$$ The integrals $\mathfrak{T}^{(k)}_2(t)$, $\mathfrak{T}^{(k)}_3(t)$ and $\mathfrak{T}^{(k)}_5(t)$ are considerably easier to estimate than the integrals $\mathfrak{T}^{(k)}_1(t)$ and $\mathfrak{T}^{(k)}_4(t)$. Before treating them one by one, we first derive the following equality, which follows from the equation , $$\label{65.3}
{\partial_{t}^{k}}{\psi}_{tt}=-\frac{1}{2{w}}{\partial_{t}^{k}}{\eta}_{tx}+\mathcal{B}_k(t,x),\quad k=0,1,$$ where $$\label{65.2}
\mathcal{B}_0(t,x):=\frac{1}{2{w}^2}{\psi}_t{\eta}_x,\quad \mathcal{B}_1(t,x):=\frac{1}{{w}^2}{\psi}_t{\eta}_{tx}-\frac{1}{{w}^3}{\psi}_t^2{\eta}_x+\frac{1}{2{w}^2}{\psi}_{tt}{\eta}_x,$$ satisfying the estimate $$\label{66.2}
\|\mathcal{B}_k(t)\|\leq C{N_{\varepsilon}(T)}\|({\partial_{t}^{k}}{\psi}_t,{\psi}_t)(t)\|.$$
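As an illustration, consider the case $k=0$ of the above bound on $\mathcal{B}_k$: assuming, as in the a priori setting, that ${w}$ is bounded away from zero and that $\|{\eta}(t)\|_2\leq C{N_{\varepsilon}(T)}$, so that $\|{\eta}_x(t)\|_{L^\infty}\leq C{N_{\varepsilon}(T)}$ by the one-dimensional Sobolev embedding, we may estimate $$\|\mathcal{B}_0(t)\|\leq\frac{1}{2}\bigg\|\frac{1}{{w}^2}\bigg\|_{L^\infty}\|{\eta}_x(t)\|_{L^\infty}\|{\psi}_t(t)\|\leq C{N_{\varepsilon}(T)}\|{\psi}_t(t)\|,$$ which is the bound stated above for $k=0$.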
Now, we begin to estimate $\mathfrak{T}^{(k)}_l(t)$, $l=1,\cdots,5$. Firstly, using the estimates , , and , the equation and the Young inequality, by standard computations we have $$\begin{aligned}
\mathfrak{T}^{(k)}_2(t)+\mathfrak{T}^{(k)}_3(t)+\mathfrak{T}^{(k)}_5(t)\geq&\int_0^1{w}{\tilde{\theta}}\big(2{w}{\partial_{t}^{k}}{\psi}_t+k2{\psi}_t^2\big){\partial_{t}^{k}}{\psi}_tdx-C\delta\|({\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\psi}_x)(t)\|^2\notag\\
&-\mu\|{\partial_{t}^{k}}{\psi}_t(t)\|^2-C_\mu\|({\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\eta},{\partial_{t}^{k}}{\chi},{\partial_{t}^{k}}{\chi}_x)(t)\|^2\notag\\
&-kC_\mu({N_{\varepsilon}(T)}+\delta)\|({\psi}_{tt},{\psi}_{tx})(t)\|^2\notag\\
\geq&c\|{\partial_{t}^{k}}{\psi}_t(t)\|^2-C\|({\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\eta},{\partial_{t}^{k}}{\chi},{\partial_{t}^{k}}{\chi}_x)(t)\|^2\notag\\
&-C({N_{\varepsilon}(T)}+\delta)\|{\partial_{t}^{k}}{\psi}_x(t)\|^2.\label{92.2+93.1+101.1} $$
In addition, we continue to estimate $\mathfrak{T}^{(k)}_1(t)$ by using , and the integration by parts. $$\begin{aligned}
\mathfrak{T}^{(k)}_1(t)=&-\frac{d}{dt}\int_0^1\frac{3{w}^3}{2}{\partial_{t}^{k}}{\chi}{\partial_{t}^{k}}{\psi}_tdx+\int_0^1\frac{9{w}^2{\psi}_t}{2}{\partial_{t}^{k}}{\chi}{\partial_{t}^{k}}{\psi}_tdx+\int_0^1\frac{3{w}^3}{2}{\partial_{t}^{k}}{\chi}{\partial_{t}^{k}}{\psi}_{tt}dx\notag\\
\geq&-\frac{d}{dt}\int_0^1\frac{3{w}^3}{2}{\partial_{t}^{k}}{\chi}{\partial_{t}^{k}}{\psi}_tdx-C{N_{\varepsilon}(T)}\|({\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\chi})(t)\|^2\notag\\
&+\int_0^1\frac{3{w}^3}{2}{\partial_{t}^{k}}{\chi}\bigg[-\frac{1}{2{w}}{\partial_{t}^{k}}{\eta}_{tx}+\mathcal{B}_k(t,x)\bigg]dx\notag\\
\geq&-\frac{d}{dt}\int_0^1\frac{3{w}^3}{2}{\partial_{t}^{k}}{\chi}{\partial_{t}^{k}}{\psi}_tdx-C{N_{\varepsilon}(T)}\|({\partial_{t}^{k}}{\psi}_t,{\psi}_t,{\partial_{t}^{k}}{\chi})(t)\|^2\notag\\
&+\int_0^1\frac{3{w}^2}{4}{\partial_{t}^{k}}{\chi}_x{\partial_{t}^{k}}{\eta}_tdx+\int_0^1\frac{3{w}{w}_x}{2}{\partial_{t}^{k}}{\chi}{\partial_{t}^{k}}{\eta}_tdx,\quad k=0,1.\label{ge67.1}\end{aligned}$$ Moreover, we have to separately deal with the last two terms on the right-side of for $k=0$ and $k=1$. In fact,
\[ge80.1+90.3\] $$\label{ge80.1}
\int_0^1\frac{3{w}^2}{4}{\chi}_x{\eta}_tdx\geq-\mu\|{\eta}_t(t)\|^2-C_\mu\|{\chi}_x(t)\|^2,\qquad\text{for}\ k=0,$$ $$\begin{aligned}
\int_0^1\frac{3{w}^2}{4}{\chi}_{tx}{\partial_{t}^{}}{\eta}_tdx=&\int_0^1\frac{3{w}^2}{4}{\chi}_{tx}\Big[{\varepsilon}^2{w}{\psi}_{txxx}+Y_1(t,x)\Big]dx\notag\\
=&-\int_0^1\frac{3{w}^3{\varepsilon}^2}{4}{\chi}_{txx}{\psi}_{txx}dx-\int_0^1\frac{9{w}^2{w}_x{\varepsilon}}{4}{\chi}_{tx}({\varepsilon}{\psi}_{txx})dx+\int_0^1\frac{3{w}^2}{4}{\chi}_{tx}Y_1dx\notag\\
\geq&\int_0^1\frac{9{w}^3{\varepsilon}^2}{8}\bigg[-{w}^2{\chi}_{tt}-\frac{2}{3}{\tilde{\theta}}{\eta}_{tx}+\frac{4{\tilde{j}}{\tilde{\theta}}}{3{\tilde{w}}}{\psi}_{tx}+{\partial_{x}^{}}\mathcal{V}_1+{\partial_{t}^{}}H+L_1\bigg]{\psi}_{txx}dx\notag\\
&-C{\varepsilon}\|({\varepsilon}{\psi}_{txx},{\chi}_{tx})(t)\|^2-\mu\|Y_1(t)\|^2-C_\mu\|{\chi}_{tx}(t)\|^2\notag\\
\geq&-\int_0^1\frac{9{w}^5{\varepsilon}^2}{8}{\chi}_{tt}{\psi}_{txx}dx+\int_0^1\frac{9{w}^3{\varepsilon}^2}{8}{\partial_{x}^{}}\mathcal{V}_1{\psi}_{txx}dx\notag\\
&-C{\varepsilon}\|({\eta}_t,{\psi}_t,{\psi}_{tt},{\psi}_{tx},{\varepsilon}{\psi}_{txx})(t)\|^2-\mu\|Y_1(t)\|^2-C_\mu\|{\chi}_{tx}(t)\|^2\notag\\
\geq&-\frac{d}{dt}\int_0^1\frac{9{w}^5{\varepsilon}}{8}{\chi}_t({\varepsilon}{\psi}_{txx})dx-\frac{d}{dt}\int_0^1\frac{3{w}^4{\varepsilon}^2}{8}({\varepsilon}{\psi}_{txx})^2dx\notag\\
&-C{\varepsilon}\|({\eta}_t,{\psi}_t,{\psi}_{tt},{\psi}_{tx},{\varepsilon}{\psi}_{ttx},{\varepsilon}{\psi}_{txx})(t)\|^2\notag\\
&-\mu\|Y_1(t)\|^2-C_\mu\|{\chi}_{tx}(t)\|^2,\quad\qquad\qquad\qquad\text{for}\ k=1,\label{90.3}\end{aligned}$$
where we have used the equality , the equation with $k=1$, the estimates and , the Cauchy-Schwarz inequality, and the following computations, $$\begin{aligned}
-\int_0^1\frac{9{w}^5{\varepsilon}^2}{8}{\chi}_{tt}{\psi}_{txx}dx=&-\frac{d}{dt}\int_0^1\frac{9{w}^5{\varepsilon}}{8}{\chi}_t({\varepsilon}{\psi}_{txx})dx+\int_0^1\frac{45{w}^4{\psi}_t{\varepsilon}}{8}{\chi}_t({\varepsilon}{\psi}_{txx})dx\notag\\
&+\int_0^1\frac{9{w}^5{\varepsilon}^2}{8}{\chi}_t{\psi}_{ttxx}dx\notag\\
=&-\frac{d}{dt}\int_0^1\frac{9{w}^5{\varepsilon}}{8}{\chi}_t({\varepsilon}{\psi}_{txx})dx+\int_0^1\frac{45{w}^4{\psi}_t{\varepsilon}}{8}{\chi}_t({\varepsilon}{\psi}_{txx})dx\notag\\
&-\int_0^1\frac{45{w}^4{w}_x{\varepsilon}}{8}{\chi}_t({\varepsilon}{\psi}_{ttx})dx-\int_0^1\frac{9{w}^5{\varepsilon}}{8}{\chi}_{tx}({\varepsilon}{\psi}_{ttx})dx\notag\\
\geq&-\frac{d}{dt}\int_0^1\frac{9{w}^5{\varepsilon}}{8}{\chi}_t({\varepsilon}{\psi}_{txx})dx-C{\varepsilon}\|({\varepsilon}{\psi}_{ttx},{\varepsilon}{\psi}_{txx},{\chi}_{tx})(t)\|^2,\label{a1-82.1}\end{aligned}$$ and $$\begin{aligned}
&\int_0^1\frac{9{w}^3{\varepsilon}^2}{8}{\partial_{x}^{}}\mathcal{V}_1{\psi}_{txx}dx\notag\\
=&\int_0^1\frac{9{w}^3{\varepsilon}^2}{8}\bigg(\frac{{\varepsilon}^2}{3}{\eta}_{txxx}-\frac{2{\varepsilon}^2{j}}{3{w}}{\psi}_{txxx}+\mathcal{K}_2\bigg){\psi}_{txx}dx\notag\\
\geq&-\int_0^1\frac{3{w}^3{\varepsilon}^2}{8}\Big(2{w}{\psi}_{ttxx}+4{w}_x{\psi}_{ttx}+4{\psi}_{tx}^2+4{\psi}_t{\psi}_{txx}+2{w}_{xx}{\psi}_{tt}\Big){\psi}_{txx}dx\notag\\
&+\int_0^1\bigg(\frac{3{\varepsilon}^4}{8}{w}^2{j}\bigg)_x{\psi}_{txx}^2dx-C{\varepsilon}\|(\mathcal{K}_2,{\varepsilon}{\psi}_{txx})(t)\|^2\notag\\
\geq&-\frac{d}{dt}\int_0^1\frac{3{w}^4{\varepsilon}^2}{8}({\varepsilon}{\psi}_{txx})^2dx-C{\varepsilon}\|({\eta}_t,{\psi}_t,{\psi}_{tt},{\psi}_{tx},{\varepsilon}{\psi}_{ttx},{\varepsilon}{\psi}_{txx})(t)\|^2,\label{a4-88.2}\end{aligned}$$ with the aid of the equality , the equation , the integration by parts, the estimates , and , and the Sobolev, Hölder and Cauchy-Schwarz inequalities. Similarly, we continue to estimate the last term on the right-side of as follows
\[91.1+91.2\] $$\label{91.1}
\int_0^1\frac{3{w}{w}_x}{2}{\chi}{\eta}_tdx\geq-\mu\|{\eta}_t(t)\|^2-C_\mu\|{\chi}(t)\|^2, \quad\text{for}\ k=0,$$ $$\begin{aligned}
\int_0^1\frac{3{w}{w}_x}{2}{\chi}_t{\partial_{t}^{}}{\eta}_tdx=&\int_0^1\frac{3{w}{w}_x}{2}{\chi}_t\Big({\varepsilon}^2{w}{\psi}_{txxx}+Y_1\Big)dx\notag\\
\geq&-\int_0^1\bigg(\frac{3{w}^2{w}_x{\varepsilon}^2}{2}{\chi}_t\bigg)_x{\psi}_{txx}dx-\mu\|Y_1(t)\|^2-C_\mu\|{\chi}_t(t)\|^2\notag\\
\geq&-C{\varepsilon}^{1/2}\|({\varepsilon}{\psi}_{txx},{\chi}_{tx})(t)\|^2-\mu\|Y_1(t)\|^2-C_\mu\|{\chi}_t(t)\|^2, \quad\text{for}\ k=1.\label{91.2}\end{aligned}$$
Next, we estimate the integral $\mathfrak{T}^{(k)}_4(t)$ by using the integration by parts and the equality , $$\begin{aligned}
\mathfrak{T}^{(k)}_4(t)=&-\int_0^1\bigg(\frac{3}{2}{w}{\partial_{t}^{k}}{\psi}_t\bigg)_x\mathcal{V}_kdx\notag\\
=&-\int_0^1\frac{3}{2}{w}{\partial_{t}^{k}}{\psi}_{tx}\mathcal{V}_kdx-\int_0^1\frac{3}{2}{w}_x{\partial_{t}^{k}}{\psi}_t\mathcal{V}_kdx\notag\\
\geq&-\int_0^1\frac{3}{2}{w}{\partial_{t}^{k}}{\psi}_{tx}\bigg(\frac{{\varepsilon}^2}{3}{\partial_{t}^{k}}{\eta}_{xx}-\frac{2{\varepsilon}^2{\tilde{j}}}{3{\tilde{w}}}{\partial_{t}^{k}}{\psi}_{xx}+{\partial_{t}^{k}}\mathcal{K}\bigg)dx\notag\\
&-C{\varepsilon}\|({\partial_{t}^{k}}{\eta},{\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\psi}_x,{\varepsilon}{\partial_{t}^{k}}{\psi}_{tx},{\varepsilon}{\partial_{t}^{k}}{\psi}_{xx})(t)\|^2\notag\\ \geq&-\int_0^1\frac{{w}{\varepsilon}^2}{2}{\partial_{t}^{k}}{\psi}_{tx}{\partial_{t}^{k}}{\eta}_{xx}dx\notag\\
&-C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)\|({\partial_{t}^{k}}{\eta},{\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\psi}_x,{\varepsilon}{\partial_{t}^{k}}{\psi}_{tx},{\varepsilon}{\partial_{t}^{k}}{\psi}_{xx})(t)\|^2\notag\\
=&\int_0^1\frac{{w}{\varepsilon}^2}{2}{\partial_{t}^{k}}{\psi}_{tx}\Big(2{w}{\partial_{t}^{k}}{\psi}_{tx}+2{w}_x{\partial_{t}^{k}}{\psi}_t+k4{\psi}_t{\psi}_{tx}\Big)dx\notag\\
&-C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)\|({\partial_{t}^{k}}{\eta},{\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\psi}_x,{\varepsilon}{\partial_{t}^{k}}{\psi}_{tx},{\varepsilon}{\partial_{t}^{k}}{\psi}_{xx})(t)\|^2\notag\\
\geq&\int_0^1{w}^2\big({\varepsilon}{\partial_{t}^{k}}{\psi}_{tx}\big)^2dx\notag\\
&-C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)\|({\partial_{t}^{k}}{\eta},{\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\psi}_x,{\varepsilon}{\partial_{t}^{k}}{\psi}_{tx},{\varepsilon}{\partial_{t}^{k}}{\psi}_{xx})(t)\|^2\notag\\
\geq&c\|{\varepsilon}{\partial_{t}^{k}}{\psi}_{tx}(t)\|^2-C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}^{1/2}\big)\|({\partial_{t}^{k}}{\eta},{\partial_{t}^{k}}{\psi},{\partial_{t}^{k}}{\psi}_t,{\partial_{t}^{k}}{\psi}_x,{\varepsilon}{\partial_{t}^{k}}{\psi}_{xx})(t)\|^2,\label{100.2}\end{aligned}$$ where we have used the equation and the estimates $\sim$.
Finally, substituting , , , , , and into , applying the estimates and to the resultant inequality, taking $\mu$ and ${N_{\varepsilon}(T)}+\delta+{\varepsilon}$ small enough and rewriting the result in a unified form for $k=0,1$, we obtain the desired estimate .
In order to close the uniform a priori estimate, we continue to derive the higher order estimates of the perturbed temperature ${\chi}$. The dispersive velocity term in the energy equation makes the corresponding computations more complex.
Suppose the same assumptions as in Proposition \[prop1\] hold. Then there exist positive constants $\delta_0$, $c$ and $C$ such that if ${N_{\varepsilon}(T)}+\delta+{\varepsilon}\leq\delta_0$, it holds that for $t\in[0,T]$, $$\label{116.2}
\frac{d}{dt}\Xi_3(t)+c\Pi_3(t)\leq C\Gamma_3(t),$$ where $$\begin{gathered}
\Xi_3(t):=\int_0^1\bigg(\frac{1}{3}{\chi}_x^2+\frac{2{\tilde{\theta}}}{3}{\eta}_x{\chi}\bigg)dx,\quad\Pi_3(t):=\|{\chi}_t(t)\|^2,\notag\\
\Gamma_3(t):=\mu\|{\psi}_x(t)\|^2+C_\mu A_{-1}^2(t)+\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}\big)\Big(A_1^2(t)+\|{\chi}_{tx}(t)\|^2\Big),\label{116.2a}\end{gathered}$$ and $$\label{117.2}
\frac{d}{dt}\Xi_4(t)+c\Pi_4(t)\leq C\Gamma_4(t),$$ where $$\begin{gathered}
\Xi_4(t):=\int_0^1\frac{{w}^2}{2}{\chi}_t^2dx,\quad\Pi_4(t):=\|{\chi}_{tx}(t)\|^2,\notag\\
\Gamma_4(t):=\|({\psi},{\eta},{\chi},{\chi}_x,{\psi}_x,{\chi}_t)(t)\|^2+\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}\big)\Big(A_1^2(t)+\|{\varepsilon}{\psi}_{ttx}(t)\|^2\Big),\label{117.2a}\end{gathered}$$ and the constants $c$ and $C$ are independent of $\delta$, ${\varepsilon}$ and $T$. Here $\mu$ is an arbitrary positive constant to be determined and $C_\mu$ is a generic constant which only depends on $\mu$.
Multiplying the equation with $k=0$ by ${\chi}_t$ and integrating the resultant equality by parts over $\Omega$ together with the boundary conditions , we get $$\label{53.1}
\frac{d}{dt}\Xi_3(t)+\int_0^1{w}^2{\chi}_t^2dx=\mathcal{I}_3(t),$$ where the integral term $\mathcal{I}_3(t)$ is defined by $$\mathcal{I}_3(t):=\int_0^1\bigg[-\bigg(\frac{2{\tilde{\theta}}_x}{3}{\eta}_t{\chi}+\frac{2{\tilde{\theta}}}{3}{\eta}_t{\chi}_x\bigg)+\frac{4{\tilde{j}}{\tilde{\theta}}}{3{\tilde{w}}}{\psi}_x{\chi}_t+{\partial_{x}^{}}\mathcal{V}_0(t,x){\chi}_t+H(t,x){\chi}_t\bigg]dx,$$ and can be estimated as $$\begin{aligned}
\mathcal{I}_3(t)\leq&\int_0^1{\partial_{x}^{}}\mathcal{V}_0(t,x){\chi}_tdx+(\mu+C\delta)\|{\eta}_t(t)\|^2+C_\mu\|({\chi},{\chi}_x)(t)\|^2\notag\\
&+C\delta\|({\psi}_x,{\chi}_t)(t)\|^2+\mu\|{\chi}_t(t)\|^2+C_\mu\|H(t)\|^2\notag\\
\leq&\int_0^1{\partial_{x}^{}}\mathcal{V}_0(t,x){\chi}_tdx+(\mu+\delta+{\varepsilon})\|{\chi}_t(t)\|^2\notag\\
&+\mu\|{\psi}_x(t)\|^2+C_\mu A_{-1}^2(t)+C\big({N_{\varepsilon}(T)}+\delta+{\varepsilon}\big)A_1^2(t),\label{55.2}\end{aligned}$$ with the aid of the estimates , and , the Hölder and Young inequalities. Furthermore, applying the integration by parts and the equality with $k=0$ to the first integral term on the right-side of , we have $$\begin{aligned}
\int_0^1{\partial_{x}^{}}\mathcal{V}_0(t,x){\chi}_tdx=&-\int_0^1\mathcal{V}_0(t,x){\chi}_{tx}dx\notag\\
=&-\int_0^1\bigg(\frac{{\varepsilon}^2}{3}{\eta}_{xx}-\frac{2{\varepsilon}^2{\tilde{j}}}{3{\tilde{w}}}{\psi}_{xx}+\mathcal{K}(t,x)\bigg){\chi}_{tx}dx\notag\\
\leq&C{\varepsilon}\|({\eta}_{xx},{\varepsilon}{\psi}_{xx},{\chi}_{tx})(t)\|^2+\|\mathcal{K}(t)\|\|{\chi}_{tx}(t)\|\notag\\
\leq&C{\varepsilon}\|({\psi},{\eta},{\psi}_t,{\psi}_x,{\psi}_{tx},{\varepsilon}{\psi}_{xx},{\chi}_{tx})(t)\|^2\notag\\
\leq&C{\varepsilon}\big(A_1^2(t)+\|{\chi}_{tx}(t)\|^2\big),\label{54.2}\end{aligned}$$ with the aid of the estimates and , the Hölder and Cauchy-Schwarz inequalities. Substituting into , we obtain $$\label{55.3a}
\mathcal{I}_3(t)\leq(\mu+\delta+{\varepsilon})\|{\chi}_t(t)\|^2+C\Gamma_3(t).$$ On the other hand, it is easy to see that $$\label{55.3b}
\int_0^1{w}^2{\chi}_t^2dx\geq c\Pi_3(t).$$ Substituting and into , and taking $\mu$ and ${N_{\varepsilon}(T)}+\delta+{\varepsilon}$ sufficiently small, we obtain the desired estimate .
Next, multiplying the equation with $k=1$ by ${\chi}_t$ and integrating the resultant equality by parts over $\Omega$ together with the boundary conditions , we get $$\label{ge56.1}
\frac{d}{dt}\Xi_4(t)+\frac{2}{3}\Pi_4(t)=\mathcal{I}_4(t),$$ where the integral term $\mathcal{I}_4(t)$ is given by $$\begin{aligned}
\mathcal{I}_4(t):=\int_0^1\bigg[&{w}{\psi}_t{\chi}_t^2+\bigg(\frac{2{\tilde{\theta}}_x}{3}{\eta}_t{\chi}_t+\frac{2{\tilde{\theta}}}{3}{\eta}_t{\chi}_{tx}\bigg)\notag\\
&+\frac{4{\tilde{j}}{\tilde{\theta}}}{3{\tilde{w}}}{\psi}_{tx}{\chi}_t+{\partial_{x}^{}}\mathcal{V}_1(t,x){\chi}_t+\big({\partial_{t}^{}}H+L_1\big)(t,x){\chi}_t\bigg]dx,\label{56.0}\end{aligned}$$ and can be estimated as $$\begin{aligned}
\mathcal{I}_4(t)\leq&\int_0^1{\partial_{x}^{}}\mathcal{V}_1(t,x){\chi}_tdx+C\big({N_{\varepsilon}(T)}+\delta\big)\|({\psi}_{tx},{\chi}_t)(t)\|^2\notag\\
&+\mu\|{\chi}_{tx}(t)\|^2+C_\mu\|{\eta}_t(t)\|^2+\big(\|{\partial_{t}^{}}H(t)\|+\|L_1(t)\|\big)\|{\chi}_t(t)\|\notag\\
\leq&\int_0^1{\partial_{x}^{}}\mathcal{V}_1(t,x){\chi}_tdx+\big[\mu+C({N_{\varepsilon}(T)}+\delta)\big]\|{\chi}_{tx}(t)\|^2\notag\\
&+C_\mu\|({\eta}_t,{\chi}_t)(t)\|^2+C({N_{\varepsilon}(T)}+\delta)\|({\psi}_t,{\psi}_{tt},{\psi}_{tx})(t)\|^2,\label{59.2}\end{aligned}$$ with the aid of the estimates , and , the Hölder and Young inequalities. Furthermore, applying the integration by parts and the equality with $k=1$ to the first integral term on the right-side of , we have $$\begin{aligned}
\int_0^1{\partial_{x}^{}}\mathcal{V}_1(t,x){\chi}_tdx=&-\int_0^1\mathcal{V}_1(t,x){\chi}_{tx}dx\notag\\
=&-\int_0^1\bigg(\frac{{\varepsilon}^2}{3}{\eta}_{txx}-\frac{2{\varepsilon}^2{\tilde{j}}}{3{\tilde{w}}}{\psi}_{txx}+{\partial_{t}^{}}\mathcal{K}(t,x)\bigg){\chi}_{tx}dx\notag\\
\leq&C{\varepsilon}\|({\varepsilon}{\eta}_{txx},{\varepsilon}{\psi}_{txx},{\chi}_{tx})(t)\|^2+\|{\partial_{t}^{}}\mathcal{K}(t)\|\|{\chi}_{tx}(t)\|\notag\\
\leq&C{\varepsilon}\big(\|{\chi}_{tx}(t)\|^2+\|{\eta}_t(t)\|^2+A_1^2(t)+\|{\varepsilon}{\psi}_{ttx}(t)\|^2\big),\label{58.1}\end{aligned}$$ with the aid of the estimates and , the Hölder and Cauchy-Schwarz inequalities. Substituting into together with the estimate , we obtain $$\begin{aligned}
\label{I4}
\mathcal{I}_4(t)\leq&\big[\mu+C({N_{\varepsilon}(T)}+\delta+{\varepsilon})\big]\|{\chi}_{tx}(t)\|^2+C({N_{\varepsilon}(T)}+\delta+{\varepsilon})\big(A_1^2(t)+\|{\varepsilon}{\psi}_{ttx}(t)\|^2\big)\notag\\
&+C_\mu\|({\eta}_t,{\chi}_t)(t)\|^2\notag\\
\leq&\big[\mu+C({N_{\varepsilon}(T)}+\delta+{\varepsilon})\big]\|{\chi}_{tx}(t)\|^2+C_\mu\Gamma_4(t).\end{aligned}$$ Substituting into , and taking $\mu$ and ${N_{\varepsilon}(T)}+\delta+{\varepsilon}$ sufficiently small, we obtain the desired estimate .
Decay estimate {#Subsect.3.5}
--------------
Based on the basic estimate and the higher order estimates , , and , we have captured a dissipation mechanism strong enough to prove the decay estimate in Proposition \[prop1\].
From the procedure $$\label{118.1}
\eqref{25.1}+\beta\Big[\alpha\eqref{115.1}+\eqref{115.2}\Big]\Big|_{k=0}+\beta\Big[\eqref{116.2}+\beta\eqref{117.2}\Big]+\beta^3\Big[\alpha\eqref{115.1}+\eqref{115.2}\Big]\Big|_{k=1},$$ where $\alpha$ is the positive constant in and $\beta$ is another positive constant, both of them will be determined later, we have the energy inequality $$\label{118.2}
\frac{d}{dt}\mathbb{E}(t)+\mathbb{D}(t)\leq0,\quad\forall t\in[0,T],$$ where the total energy $\mathbb{E}(t)$ is defined by $$\label{119.2}
\mathbb{E}(t):=\Xi(t)+\beta\Big[\alpha\Xi_1^{(0)}(t)+\Xi_2^{(0)}(t)\Big]+\beta\Big[\Xi_3(t)+\beta\Xi_4(t)\Big]+\beta^3\Big[\alpha\Xi_1^{(1)}(t)+\Xi_2^{(1)}(t)\Big],$$ and the total dissipation rate $\mathbb{D}(t)$ is given by $$\begin{aligned}
\mathbb{D}(t):=&\Big[c\Pi(t)-C\Gamma(t)\Big]+\beta\Big\{\alpha\Big[c\Pi_1^{(0)}(t)-C\Gamma_1^{(0)}(t)\Big]+\Big[c\Pi_2^{(0)}(t)-C\Gamma_2^{(0)}(t)\Big]\Big\}\notag\\
&+\beta\Big\{\Big[c\Pi_3(t)-C\Gamma_3(t)\Big]+\beta\Big[c\Pi_4(t)-C\Gamma_4(t)\Big]\Big\}\notag\\
&+\beta^3\Big\{\alpha\Big[c\Pi_1^{(1)}(t)-C\Gamma_1^{(1)}(t)\Big]+\Big[c\Pi_2^{(1)}(t)-C\Gamma_2^{(1)}(t)\Big]\Big\}.\label{122.1}\end{aligned}$$ Substituting the specific definitions $\sim$, , , and into and , and then taking $\alpha$, $\mu$, $\beta$ and ${N_{\varepsilon}(T)}+\delta+{\varepsilon}$ sufficiently small in the following order $0<{N_{\varepsilon}(T)}+\delta+{\varepsilon}\ll\beta^3\ll\beta^2\ll\beta\ll\mu\ll\alpha\ll1$, we obtain, after elaborate calculations, the estimates $$\label{121.2}
c\big(A_1^2(t)+\|{\chi}_t(t)\|^2\big)\leq\mathbb{E}(t)\leq C\big(A_1^2(t)+\|{\chi}_t(t)\|^2\big),$$ and $$\begin{aligned}
\mathbb{D}(t)\geq&c\big(A_1^2(t)+\|{\chi}_t(t)\|^2+\|({\chi}_{tx},{\varepsilon}{\psi}_{ttx})(t)\|^2\big)\notag\\
\geq&c\big(A_1^2(t)+\|{\chi}_t(t)\|^2\big),\label{126.2}\\
\mathbb{D}(t)\leq&C\big(A_1^2(t)+\|{\chi}_t(t)\|^2+\|({\chi}_{tx},{\varepsilon}{\psi}_{ttx})(t)\|^2\big),\notag\end{aligned}$$ where the positive constants $c$ and $C$ are independent of $\delta$, ${\varepsilon}$ and $T$.
Applying and to the inequality , we see that there exists a positive constant $\gamma$ which is independent of $\delta$, ${\varepsilon}$ and $T$ such that the following inequality holds, $$\label{127.1}
\frac{d}{dt}\mathbb{E}(t)+2\gamma\mathbb{E}(t)\leq0,\quad\forall t\in[0,T].$$ Finally, applying the Gronwall inequality to and using the elliptic estimate and the equivalent relations and , we obtain the desired decay estimate .
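More explicitly, the constant $\gamma$ and the Gronwall step can be summarized as follows: by and , $$\mathbb{D}(t)\geq c\big(A_1^2(t)+\|{\chi}_t(t)\|^2\big)\geq\frac{c}{C}\mathbb{E}(t)=:2\gamma\mathbb{E}(t),$$ and multiplying by $e^{2\gamma t}$ gives $\frac{d}{dt}\big(e^{2\gamma t}\mathbb{E}(t)\big)\leq0$, so that integrating over $[0,t]$ yields $$\mathbb{E}(t)\leq e^{-2\gamma t}\mathbb{E}(0),\quad t\in[0,T],$$ which, combined with the equivalence , gives the exponential decay of $A_1^2(t)+\|{\chi}_t(t)\|^2$.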
Once Proposition \[prop1\] is proved, Theorem \[thm2\] immediately follows.
The existence of the global-in-time solution to the initial-boundary value problem $\sim$ follows from the continuation argument together with Corollary \[cor1\] and Proposition \[prop1\]. The decay estimate then follows from the transformations ${n}={w}^2$ and ${\tilde{n}}={\tilde{w}}^2$ combined with the estimate .
Semi-classical limit {#Sect.4}
====================
In this section, we prove Theorem \[thm3\] in Subsection \[Subsect.4.1\] and Theorem \[thm4\] in Subsection \[Subsect.4.2\], respectively.
Stationary case {#Subsect.4.1}
---------------
In this subsection, we discuss the semi-classical limit of the stationary solutions based on the existence and uniqueness results in Lemma \[lem1\] and Theorem \[thm1\]. Since both the quantum stationary density ${\tilde{n}^{{\varepsilon}}}$ and its limit ${\tilde{n}^{0}}$ are non-flat, it is convenient to introduce the logarithmic transformations ${\tilde{z}^{{\varepsilon}}}:=\ln {\tilde{n}^{{\varepsilon}}}$ and ${\tilde{z}^{0}}:=\ln {\tilde{n}^{0}}$ in the following discussion. We also introduce the error variables as follows $$\label{ss7.1}
{\tilde{\mathcal{Z}}^{{\varepsilon}}}:={\tilde{z}^{{\varepsilon}}}-{\tilde{z}^{0}},\quad {\tilde{\mathcal{J}}^{{\varepsilon}}}:={\tilde{j}^{{\varepsilon}}}-{\tilde{j}^{0}},\quad{\tilde{\varTheta}^{{\varepsilon}}}:={\tilde{\theta}^{{\varepsilon}}}-{\tilde{\theta}^{0}},\quad{\tilde{\varPhi}^{{\varepsilon}}}:={\tilde{\phi}^{{\varepsilon}}}-{\tilde{\phi}^{0}}.$$ These quantities $({\tilde{z}^{{\varepsilon}}},{\tilde{j}^{{\varepsilon}}},{\tilde{\theta}^{{\varepsilon}}},{\tilde{\phi}^{{\varepsilon}}})$, $({\tilde{z}^{0}},{\tilde{j}^{0}},{\tilde{\theta}^{0}},{\tilde{\phi}^{0}})$ and $({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\mathcal{J}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}},{\tilde{\varPhi}^{{\varepsilon}}})$ satisfy the following properties $$\label{psbc}
{\tilde{\mathcal{Z}}^{{\varepsilon}}}\in H_0^1(\Omega)\cap C^2(\overline{\Omega}),\quad {\tilde{\varTheta}^{{\varepsilon}}}\in H_0^1(\Omega)\cap H^3(\Omega),\quad {\tilde{\varPhi}^{{\varepsilon}}}\in H_0^1(\Omega)\cap C^2(\overline{\Omega}),$$ $$\label{psbcxx}
\bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_x)^2}{2}\bigg](0)=\bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_x)^2}{2}\bigg](1)=0,$$ the estimates $$\label{ss9.3}
\ln c\leq{\tilde{z}^{0}}(x)\leq\ln C,\quad 0<c\leq{\tilde{\theta}^{0}}(x)\leq C,\quad|{\tilde{j}^{0}}|+\|{\tilde{\theta}^{0}}-\theta_{L}\|_3\leq C\delta,\quad |({\tilde{n}^{0}},{\tilde{\phi}^{0}})|_2\leq C,$$ $$\begin{gathered}
2\ln b\leq{\tilde{z}^{{\varepsilon}}}(x)\leq2\ln B,\quad 0<\frac{\theta_L}{2}\leq{\tilde{\theta}^{{\varepsilon}}}(x)\leq \frac{3\theta_L}{2},\quad |{\tilde{j}^{{\varepsilon}}}|+\|{\tilde{\theta}^{{\varepsilon}}}-\theta_{L}\|_3\leq C\delta,\notag\\
\|{\tilde{z}^{{\varepsilon}}}\|_2+\|({\varepsilon}{\partial_{x}^{3}}{\tilde{z}^{{\varepsilon}}},{\varepsilon}^2{\partial_{x}^{4}}{\tilde{z}^{{\varepsilon}}})\|+|{\tilde{\phi}^{{\varepsilon}}}|_2\leq C,\qquad\forall{\varepsilon}\in(0,{\varepsilon}_1], \label{ss9.2}\end{gathered}$$ and the equations $$\label{ss8.1a}
S[e^{{\tilde{z}^{0}}},{\tilde{j}^{0}},{\tilde{\theta}^{0}}]{\tilde{z}^{0}}_x+{\tilde{\theta}^{0}}_x={\tilde{\phi}^{0}}_x-{\tilde{j}^{0}}e^{-{\tilde{z}^{0}}},$$ $$\label{ss8.1b}
S[e^{{\tilde{z}^{{\varepsilon}}}},{\tilde{j}^{{\varepsilon}}},{\tilde{\theta}^{{\varepsilon}}}]{\tilde{z}^{{\varepsilon}}}_x+{\tilde{\theta}^{{\varepsilon}}}_x-\frac{{\varepsilon}^2}{2}\Bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\Bigg]_x={\tilde{\phi}^{{\varepsilon}}}_x-{\tilde{j}^{{\varepsilon}}}e^{-{\tilde{z}^{{\varepsilon}}}},$$ $$\label{ss8.1c}
S[e^{{\tilde{z}^{{\varepsilon}}}},{\tilde{j}^{{\varepsilon}}},{\tilde{\theta}^{{\varepsilon}}}]{\tilde{z}^{{\varepsilon}}}_x-S[e^{{\tilde{z}^{0}}},{\tilde{j}^{0}},{\tilde{\theta}^{0}}]{\tilde{z}^{0}}_x+{\tilde{\varTheta}^{{\varepsilon}}}_{x}-{\tilde{\varPhi}^{{\varepsilon}}}_x-\frac{{\varepsilon}^2}{2}\Bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\Bigg]_x=-\Big({\tilde{j}^{{\varepsilon}}}e^{-{\tilde{z}^{{\varepsilon}}}}-{\tilde{j}^{0}}e^{-{\tilde{z}^{0}}}\Big),$$ $$\begin{gathered}
\label{ss8.1d}
{\tilde{j}^{{\varepsilon}}}{\tilde{\theta}^{{\varepsilon}}}_x-{\tilde{j}^{0}}{\tilde{\theta}^{0}}_x-\frac{2}{3}\Big({\tilde{j}^{{\varepsilon}}}{\tilde{\theta}^{{\varepsilon}}}{\tilde{z}^{{\varepsilon}}}_x-{\tilde{j}^{0}}{\tilde{\theta}^{0}}{\tilde{z}^{0}}_x\Big)-\frac{2}{3}{\tilde{\varTheta}^{{\varepsilon}}}_{xx}+\frac{{\varepsilon}^2}{3}{\tilde{j}^{{\varepsilon}}}\Big({\tilde{z}^{{\varepsilon}}}_{xxx}-2{\tilde{z}^{{\varepsilon}}}_{xx}{\tilde{z}^{{\varepsilon}}}_{x}\Big)\\
=\frac{1}{3}\Big[({\tilde{j}^{{\varepsilon}}})^2e^{-{\tilde{z}^{{\varepsilon}}}}-({\tilde{j}^{0}})^2e^{-{\tilde{z}^{0}}}\Big]-\Big[e^{{\tilde{z}^{{\varepsilon}}}}({\tilde{\theta}^{{\varepsilon}}}-\theta_L)-e^{{\tilde{z}^{0}}}({\tilde{\theta}^{0}}-\theta_L)\Big],\end{gathered}$$ $$\label{ss8.1e}
{\tilde{\varPhi}^{{\varepsilon}}}_{xx}=e^{{\tilde{z}^{{\varepsilon}}}}-e^{{\tilde{z}^{0}}},\qquad\forall{\varepsilon}\in (0,{\varepsilon}_1]$$ due to the boundary conditions and , the estimates and , and the equations and .
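In particular, by the mean value theorem and the uniform bounds and , the densities themselves satisfy $$|{\tilde{n}^{{\varepsilon}}}(x)-{\tilde{n}^{0}}(x)|=\big|e^{{\tilde{z}^{{\varepsilon}}}(x)}-e^{{\tilde{z}^{0}}(x)}\big|\leq C|{\tilde{\mathcal{Z}}^{{\varepsilon}}}(x)|,\qquad\forall x\in\overline{\Omega},$$ so any convergence rate established for ${\tilde{\mathcal{Z}}^{{\varepsilon}}}$ transfers directly to the density error ${\tilde{n}^{{\varepsilon}}}-{\tilde{n}^{0}}$.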
Firstly, we prove the convergence rate for ${\varepsilon}\in (0,{\varepsilon}_1]$. Note that if $\delta$ is small enough, we know that the quantum stationary current density ${\tilde{j}^{{\varepsilon}}}=J[e^{{\tilde{z}^{{\varepsilon}}}},{\tilde{\theta}^{{\varepsilon}}}]$ is given by the explicit formula . Furthermore, the limit stationary current density ${\tilde{j}^{0}}$ can also be written via the same formula, namely ${\tilde{j}^{0}}=J[e^{{\tilde{z}^{0}}},{\tilde{\theta}^{0}}]$. Therefore, the following estimate $$\begin{aligned}
|{\tilde{\mathcal{J}}^{{\varepsilon}}}|=|{\tilde{j}^{{\varepsilon}}}-{\tilde{j}^{0}}|&\leq C\Big(\delta\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|+\|{\tilde{\varTheta}^{{\varepsilon}}}_x\|\Big)\notag\\
&\leq C\Big(\delta\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|+\|{\tilde{\varTheta}^{{\varepsilon}}}_x\|\Big)\label{ss10.2}\end{aligned}$$ follows from straightforward computations using the formula and the estimates and .
Multiplying the equation by ${\tilde{\mathcal{Z}}^{{\varepsilon}}}_x$ and integrating the resultant equality over the domain $\Omega$, we obtain $$\begin{gathered}
\label{ss10.5}
\int_0^1\Big(S[e^{{\tilde{z}^{{\varepsilon}}}},{\tilde{j}^{{\varepsilon}}},{\tilde{\theta}^{{\varepsilon}}}]{\tilde{z}^{{\varepsilon}}}_x-S[e^{{\tilde{z}^{0}}},{\tilde{j}^{0}},{\tilde{\theta}^{0}}]{\tilde{z}^{0}}_x\Big){\tilde{\mathcal{Z}}^{{\varepsilon}}}_xdx-\int_0^1{\tilde{\varPhi}^{{\varepsilon}}}_x{\tilde{\mathcal{Z}}^{{\varepsilon}}}_xdx\\
=\frac{{\varepsilon}^2}{2}\int_0^1\Bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\Bigg]_x{\tilde{\mathcal{Z}}^{{\varepsilon}}}_xdx-\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_{x}{\tilde{\mathcal{Z}}^{{\varepsilon}}}_xdx-\int_0^1\Big({\tilde{j}^{{\varepsilon}}}e^{-{\tilde{z}^{{\varepsilon}}}}-{\tilde{j}^{0}}e^{-{\tilde{z}^{0}}}\Big){\tilde{\mathcal{Z}}^{{\varepsilon}}}_xdx.\end{gathered}$$ By virtue of integration by parts, the boundary conditions and , the equation , the Young and Hölder inequalities, the mean value theorem and the estimates , and , the left-side of can be estimated as follows $$\begin{aligned}
\eqref{ss10.5}_l&=\int_0^1\Big({\tilde{S}^{{\varepsilon}}}{\tilde{z}^{{\varepsilon}}}_x-{\tilde{S}^{0}}{\tilde{z}^{0}}_x\Big){\tilde{\mathcal{Z}}^{{\varepsilon}}}_xdx+\int_0^1{\tilde{\varPhi}^{{\varepsilon}}}_{xx}{\tilde{\mathcal{Z}}^{{\varepsilon}}}dx\notag\\
&=\int_0^1\Big[\Big({\tilde{S}^{{\varepsilon}}}-{\tilde{S}^{0}}\Big){\tilde{z}^{{\varepsilon}}}_x+{\tilde{S}^{0}}{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\Big]{\tilde{\mathcal{Z}}^{{\varepsilon}}}_xdx+\int_0^1\underbrace{\Big(e^{{\tilde{z}^{{\varepsilon}}}}-e^{{\tilde{z}^{0}}}\Big)({\tilde{z}^{{\varepsilon}}}-{\tilde{z}^{0}})}_{\geq0} dx\notag\\
&\geq\frac{\theta_L}{4}\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|^2+\int_0^1\Big({\tilde{S}^{{\varepsilon}}}-{\tilde{S}^{0}}\Big){\tilde{z}^{{\varepsilon}}}_x{\tilde{\mathcal{Z}}^{{\varepsilon}}}_xdx\notag\\
&\geq\frac{\theta_L}{4}\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|^2-\mu\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|^2-C_\mu\|{\tilde{\varTheta}^{{\varepsilon}}}\|^2-C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}}_x,{\tilde{\varTheta}^{{\varepsilon}}}_x)\|^2\notag\\
&\geq\frac{\theta_L}{8}\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|^2-C\|{\tilde{\varTheta}^{{\varepsilon}}}\|_1^2,\label{ss11.1}\end{aligned}$$ where we have used the notation ${\tilde{S}^{{\varepsilon}}}:=S[e^{{\tilde{z}^{{\varepsilon}}}},{\tilde{j}^{{\varepsilon}}},{\tilde{\theta}^{{\varepsilon}}}]$ for any ${\varepsilon}\in[0,{\varepsilon}_1]$ and the following estimate $$\label{ss12.1-2-3}
|({\tilde{S}^{{\varepsilon}}}-{\tilde{S}^{0}})(x)|\leq |{\tilde{\varTheta}^{{\varepsilon}}}(x)|+C\delta|{\tilde{\mathcal{J}}^{{\varepsilon}}}|+C\delta^2|{\tilde{\mathcal{Z}}^{{\varepsilon}}}(x)|,\quad\forall x\in\Omega.$$ Similarly, we further estimate the right-side of as follows $$\begin{aligned}
\eqref{ss10.5}_r=&\frac{{\varepsilon}^2}{2}\int_0^1\Bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\Bigg]_x{\tilde{\mathcal{Z}}^{{\varepsilon}}}_xdx-\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_{x}{\tilde{\mathcal{Z}}^{{\varepsilon}}}_xdx-\int_0^1\Big({\tilde{j}^{{\varepsilon}}}e^{-{\tilde{z}^{{\varepsilon}}}}-{\tilde{j}^{0}}e^{-{\tilde{z}^{0}}}\Big){\tilde{\mathcal{Z}}^{{\varepsilon}}}_xdx\notag\\
\leq&-\frac{{\varepsilon}^2}{2}\int_0^1\Bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\Bigg]{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}dx+\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|\|{\tilde{\varTheta}^{{\varepsilon}}}_{x}\|+C|{\tilde{\mathcal{J}}^{{\varepsilon}}}|\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|+C\delta\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|\notag\\
\leq&C{\varepsilon}^2\int_0^1\Big(|{\tilde{z}^{{\varepsilon}}}_{xx}|+|{\tilde{z}^{{\varepsilon}}}_{x}|^2\Big)|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}|dx+\big(C\delta+\mu\big)\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|^2+C_\mu\|{\tilde{\varTheta}^{{\varepsilon}}}_{x}\|^2\notag\\
\leq&C{\varepsilon}^2\underbrace{\Big(\|{\tilde{z}^{{\varepsilon}}}_{xx}\|+1\Big)\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\|}_{\leq C}+\big(C\delta+\mu\big)\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|^2+C_\mu\|{\tilde{\varTheta}^{{\varepsilon}}}_{x}\|^2\notag\\
\leq&C{\varepsilon}^2+\big(C\delta+\mu\big)\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|^2+C_\mu\|{\tilde{\varTheta}^{{\varepsilon}}}_{x}\|^2.\label{ss12.4-5-ss13.1}\end{aligned}$$ Substituting and into , and taking $\delta$ and $\mu$ small enough, we have $$\label{ss13.3}
\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1^2\leq C\|{\tilde{\varTheta}^{{\varepsilon}}}\|_1^2+C{\varepsilon}^2,$$ where we have used the Poincaré inequality $\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|\leq C\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|$.
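The Poincaré inequality invoked here is elementary in the present setting: since ${\tilde{\mathcal{Z}}^{{\varepsilon}}}\in H_0^1(\Omega)$, the Cauchy--Schwarz inequality gives $$|{\tilde{\mathcal{Z}}^{{\varepsilon}}}(x)|=\bigg|\int_0^x{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x(y)\,dy\bigg|\leq\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|,\qquad\forall x\in\overline{\Omega},$$ whence $\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|\leq\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|$.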
Next, multiplying the equation by ${\tilde{\varTheta}^{{\varepsilon}}}$ and integrating the resultant equality over the domain $\Omega$, we obtain $$\begin{aligned}
\label{ss14.1}
-\frac{2}{3}\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_{xx}&{\tilde{\varTheta}^{{\varepsilon}}}dx+\int_0^1\Big[e^{{\tilde{z}^{{\varepsilon}}}}({\tilde{\theta}^{{\varepsilon}}}-\theta_L)-e^{{\tilde{z}^{0}}}({\tilde{\theta}^{0}}-\theta_L)\Big]{\tilde{\varTheta}^{{\varepsilon}}}dx\notag\\
&=-\frac{{\varepsilon}^2}{3}{\tilde{j}^{{\varepsilon}}}\int_0^1\Big({\tilde{z}^{{\varepsilon}}}_{xxx}-2{\tilde{z}^{{\varepsilon}}}_{xx}{\tilde{z}^{{\varepsilon}}}_{x}\Big){\tilde{\varTheta}^{{\varepsilon}}}dx+\frac{2}{3}\int_0^1\Big({\tilde{j}^{{\varepsilon}}}{\tilde{\theta}^{{\varepsilon}}}{\tilde{z}^{{\varepsilon}}}_x-{\tilde{j}^{0}}{\tilde{\theta}^{0}}{\tilde{z}^{0}}_x\Big){\tilde{\varTheta}^{{\varepsilon}}}dx\notag\\
&\qquad\qquad-\int_0^1\Big({\tilde{j}^{{\varepsilon}}}{\tilde{\theta}^{{\varepsilon}}}_x-{\tilde{j}^{0}}{\tilde{\theta}^{0}}_x\Big){\tilde{\varTheta}^{{\varepsilon}}}dx+\frac{1}{3}\int_0^1\Big[({\tilde{j}^{{\varepsilon}}})^2e^{-{\tilde{z}^{{\varepsilon}}}}-({\tilde{j}^{0}})^2e^{-{\tilde{z}^{0}}}\Big]{\tilde{\varTheta}^{{\varepsilon}}}dx\\
&=I_1+I_2+I_3+I_4.\notag\end{aligned}$$ By the same fashion used to derive the estimate , the left-side of can be estimated as $$\begin{aligned}
\eqref{ss14.1}_l&=\frac{2}{3}\|{\tilde{\varTheta}^{{\varepsilon}}}_{x}\|^2+\int_0^1\Big[\Big(e^{{\tilde{z}^{{\varepsilon}}}}-e^{{\tilde{z}^{0}}}\Big)({\tilde{\theta}^{{\varepsilon}}}-\theta_L)+e^{{\tilde{z}^{0}}}{\tilde{\varTheta}^{{\varepsilon}}}\Big]{\tilde{\varTheta}^{{\varepsilon}}}dx\notag\\
&\geq\frac{2}{3}\|{\tilde{\varTheta}^{{\varepsilon}}}_{x}\|^2+c\|{\tilde{\varTheta}^{{\varepsilon}}}\|^2-C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}})\|^2\notag\\
&\geq c\|{\tilde{\varTheta}^{{\varepsilon}}}\|_1^2-C\delta\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|^2.\label{ss14.2-ss18.3}\end{aligned}$$ We further estimate the integrals $I_i$ $(i=1,2,3,4)$ on the right-side of one by one, $$\begin{aligned}
I_1&=\frac{{\varepsilon}^2}{3}{\tilde{j}^{{\varepsilon}}}\int_0^1{\tilde{z}^{{\varepsilon}}}_{xx}{\tilde{\varTheta}^{{\varepsilon}}}_xdx+\frac{2{\varepsilon}^2}{3}{\tilde{j}^{{\varepsilon}}}\int_0^1{\tilde{z}^{{\varepsilon}}}_{x}{\tilde{z}^{{\varepsilon}}}_{xx}{\tilde{\varTheta}^{{\varepsilon}}}dx\notag\\
&\leq \frac{1}{3}{\varepsilon}^2|{\tilde{j}^{{\varepsilon}}}|\|{\tilde{z}^{{\varepsilon}}}_{xx}\|\|{\tilde{\varTheta}^{{\varepsilon}}}_x\|+\frac{2}{3}{\varepsilon}^2|{\tilde{j}^{{\varepsilon}}}||{\tilde{z}^{{\varepsilon}}}_{x}|_0\|{\tilde{z}^{{\varepsilon}}}_{xx}\|\|{\tilde{\varTheta}^{{\varepsilon}}}\|\notag\\
&\leq C{\varepsilon}^2\|{\tilde{\varTheta}^{{\varepsilon}}}\|_1\leq C{\varepsilon}^2\|{\tilde{\theta}^{{\varepsilon}}}-\theta_L+\theta_L-{\tilde{\theta}^{0}}\|_1\notag\\
&\leq C\Big(\|{\tilde{\theta}^{{\varepsilon}}}-\theta_L\|_1+\|{\tilde{\theta}^{0}}-\theta_L\|_1\Big){\varepsilon}^2\notag\\
&\leq C{\varepsilon}^2.\label{ss18.1}\end{aligned}$$ It is easy to estimate $I_3+I_4$ by standard computations, that is, $$\label{ss15.1-ss18.2}
I_3+I_4\leq C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}})\|_1^2.$$ However, we need to pay more attention to the integral $I_2$ due to the non-flatness of ${\tilde{z}^{{\varepsilon}}}$, $$\begin{aligned}
I_2&=\frac{2}{3}\int_0^1\Big({\tilde{j}^{{\varepsilon}}}{\tilde{\theta}^{{\varepsilon}}}{\tilde{z}^{{\varepsilon}}}_x-{\tilde{j}^{0}}{\tilde{\theta}^{0}}{\tilde{z}^{0}}_x\Big){\tilde{\varTheta}^{{\varepsilon}}}dx\notag\\
&=\frac{2}{3}\int_0^1\Big({\tilde{\mathcal{J}}^{{\varepsilon}}}{\tilde{\theta}^{{\varepsilon}}}{\tilde{z}^{{\varepsilon}}}_x+{\tilde{j}^{0}}{\tilde{\varTheta}^{{\varepsilon}}}{\tilde{z}^{{\varepsilon}}}_x+{\tilde{j}^{0}}{\tilde{\theta}^{0}}{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\Big){\tilde{\varTheta}^{{\varepsilon}}}dx\notag\\
&=\frac{2}{3}\int_0^1\Big[{\tilde{\mathcal{J}}^{{\varepsilon}}}\big({\tilde{\theta}^{{\varepsilon}}}-\theta_L+\theta_L\big){\tilde{z}^{{\varepsilon}}}_x+{\tilde{j}^{0}}{\tilde{z}^{{\varepsilon}}}_x{\tilde{\varTheta}^{{\varepsilon}}}+{\tilde{j}^{0}}{\tilde{\theta}^{0}}{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\Big]{\tilde{\varTheta}^{{\varepsilon}}}dx\notag\\
&=\frac{2\theta_L}{3}{\tilde{\mathcal{J}}^{{\varepsilon}}}\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}{\tilde{z}^{{\varepsilon}}}_xdx+\frac{2}{3}\int_0^1\Big[\big({\tilde{\theta}^{{\varepsilon}}}-\theta_L\big){\tilde{z}^{{\varepsilon}}}_x{\tilde{\mathcal{J}}^{{\varepsilon}}}+{\tilde{j}^{0}}{\tilde{z}^{{\varepsilon}}}_x{\tilde{\varTheta}^{{\varepsilon}}}+{\tilde{j}^{0}}{\tilde{\theta}^{0}}{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\Big]{\tilde{\varTheta}^{{\varepsilon}}}dx\notag\\
&=-\frac{2\theta_L}{3}{\tilde{\mathcal{J}}^{{\varepsilon}}}\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_x{\tilde{z}^{{\varepsilon}}}dx+\frac{2}{3}\int_0^1\Big[\big({\tilde{\theta}^{{\varepsilon}}}-\theta_L\big){\tilde{z}^{{\varepsilon}}}_x{\tilde{\mathcal{J}}^{{\varepsilon}}}+{\tilde{j}^{0}}{\tilde{z}^{{\varepsilon}}}_x{\tilde{\varTheta}^{{\varepsilon}}}+{\tilde{j}^{0}}{\tilde{\theta}^{0}}{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\Big]{\tilde{\varTheta}^{{\varepsilon}}}dx\notag\\
&\leq-\frac{2\theta_L}{3}{\tilde{\mathcal{J}}^{{\varepsilon}}}\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_x{\tilde{z}^{{\varepsilon}}}dx\notag\\
&\qquad+\frac{2}{3}\Big(|{\tilde{\theta}^{{\varepsilon}}}-\theta_L|_0|{\tilde{z}^{{\varepsilon}}}_x|_0|{\tilde{\mathcal{J}}^{{\varepsilon}}}|\|{\tilde{\varTheta}^{{\varepsilon}}}\|+|{\tilde{j}^{0}}||{\tilde{z}^{{\varepsilon}}}_x|_0\|{\tilde{\varTheta}^{{\varepsilon}}}\|^2+|{\tilde{j}^{0}}||{\tilde{\theta}^{0}}|_0\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x\|\|{\tilde{\varTheta}^{{\varepsilon}}}\|\Big)\notag\\
&\leq-\frac{2\theta_L}{3}{\tilde{\mathcal{J}}^{{\varepsilon}}}\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_x{\tilde{z}^{{\varepsilon}}}dx+C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}})\|_1^2\notag\\
&=-\frac{2\theta_L}{3}\big({\tilde{j}^{{\varepsilon}}}-{\tilde{j}^{0}}\big)\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_x{\tilde{z}^{{\varepsilon}}}dx+C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}})\|_1^2\notag\\
&=-\frac{2\theta_L}{3}\Bigg[\frac{2\big(\bar{b}+\int_0^1{\tilde{\theta}^{{\varepsilon}}}_{x}{\tilde{z}^{{\varepsilon}}}dx\big)}{{\tilde{K}^{{\varepsilon}}}}-\frac{2\big(\bar{b}+\int_0^1{\tilde{\theta}^{0}}_{x}{\tilde{z}^{0}}dx\big)}{{\tilde{K}^{0}}}\Bigg]\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_x{\tilde{z}^{{\varepsilon}}}dx+C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}})\|_1^2\notag\\
&=-\frac{4\theta_L}{3}\Bigg[\frac{1}{{\tilde{K}^{{\varepsilon}}}}\int_0^1\Big({\tilde{\theta}^{{\varepsilon}}}_{x}{\tilde{z}^{{\varepsilon}}}-{\tilde{\theta}^{0}}_{x}{\tilde{z}^{0}}\Big)dx+\bigg(\bar{b}+\int_0^1{\tilde{\theta}^{0}}_{x}{\tilde{z}^{0}}dx\bigg)\bigg(\frac{1}{{\tilde{K}^{{\varepsilon}}}}-\frac{1}{{\tilde{K}^{0}}}\bigg)\Bigg]\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_x{\tilde{z}^{{\varepsilon}}}dx\notag\\
&\qquad+C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}})\|_1^2\notag\\
&=-\frac{4\theta_L}{3}\Bigg[\frac{1}{{\tilde{K}^{{\varepsilon}}}}\int_0^1\Big({\tilde{\varTheta}^{{\varepsilon}}}_{x}{\tilde{z}^{{\varepsilon}}}+{\tilde{\theta}^{0}}_{x}{\tilde{\mathcal{Z}}^{{\varepsilon}}}\Big)dx-\bigg(\bar{b}+\int_0^1{\tilde{\theta}^{0}}_{x}{\tilde{z}^{0}}dx\bigg)\frac{{\tilde{K}^{{\varepsilon}}}-{\tilde{K}^{0}}}{{\tilde{K}^{{\varepsilon}}}{\tilde{K}^{0}}}\Bigg]\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_x{\tilde{z}^{{\varepsilon}}}dx\notag\\
&\qquad+C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}})\|_1^2\notag\\
&=\underbrace{-\frac{4\theta_L}{3{\tilde{K}^{{\varepsilon}}}}\bigg(\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_{x}{\tilde{z}^{{\varepsilon}}}dx\bigg)^2}_{\leq0}\notag\\
&\qquad-\frac{4\theta_L}{3{\tilde{K}^{{\varepsilon}}}}\Bigg[\int_0^1{\tilde{\theta}^{0}}_{x}{\tilde{\mathcal{Z}}^{{\varepsilon}}}dx-\frac{\big(\bar{b}+\int_0^1{\tilde{\theta}^{0}}_{x}{\tilde{z}^{0}}dx\big)}{{\tilde{K}^{0}}}\Big({\tilde{K}^{{\varepsilon}}}-{\tilde{K}^{0}}\Big)\Bigg]\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_x{\tilde{z}^{{\varepsilon}}}dx\notag\\
&\qquad+C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}})\|_1^2\notag\\
&\leq-\frac{4\theta_L}{3{\tilde{K}^{{\varepsilon}}}}\Bigg[\int_0^1{\tilde{\theta}^{0}}_{x}{\tilde{\mathcal{Z}}^{{\varepsilon}}}dx-\frac{{\tilde{j}^{0}}}{2}\Big({\tilde{K}^{{\varepsilon}}}-{\tilde{K}^{0}}\Big)\Bigg]\int_0^1{\tilde{\varTheta}^{{\varepsilon}}}_x{\tilde{z}^{{\varepsilon}}}dx+C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}})\|_1^2\notag\\
&\leq C\delta\Big(\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|+|{\tilde{K}^{{\varepsilon}}}-{\tilde{K}^{0}}|\Big)\|{\tilde{\varTheta}^{{\varepsilon}}}_x\|+C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}})\|_1^2\notag\\
&\leq C\delta\|({\tilde{\mathcal{Z}}^{{\varepsilon}}},{\tilde{\varTheta}^{{\varepsilon}}})\|_1^2,\label{ss17.2}\end{aligned}$$ where we have adopted the notation ${\tilde{K}^{{\varepsilon}}}:=K[e^{{\tilde{z}^{{\varepsilon}}}},{\tilde{\theta}^{{\varepsilon}}}]$ (see formula ) for any ${\varepsilon}\in[0,{\varepsilon}_1]$ and the following estimates $$\label{Pss4B-ss10.2bc}
0<c\leq\frac{1}{{\tilde{K}^{{\varepsilon}}}}\leq C,\qquad |{\tilde{K}^{{\varepsilon}}}-{\tilde{K}^{0}}|\leq C\Big(\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|+\|{\tilde{\varTheta}^{{\varepsilon}}}_x\|\Big),\qquad\forall{\varepsilon}\in[0,{\varepsilon}_1]$$ which follow from straightforward but tedious computations. Inserting the estimates $\sim$ into , and letting $\delta\ll1$, we have $$\label{ss19.2}
\|{\tilde{\varTheta}^{{\varepsilon}}}\|_1^2\leq C\delta\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1^2+C{\varepsilon}^2.$$ Moreover, substituting into , and taking $\delta$ small enough, we get $$\label{ss19.4}
\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1^2\leq C{\varepsilon}^2,\quad\forall{\varepsilon}\in(0,{\varepsilon}_1].$$ Combining with , and the elliptic estimate $\|{\tilde{\varPhi}^{{\varepsilon}}}\|_3\leq C\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1$, we obtain $$\label{ss20.1}
\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1+|{\tilde{\mathcal{J}}^{{\varepsilon}}}|+\|{\tilde{\varTheta}^{{\varepsilon}}}\|_1+\|{\tilde{\varPhi}^{{\varepsilon}}}\|_3\leq C{\varepsilon}.$$
Next, we solve ${\tilde{\varTheta}^{{\varepsilon}}}_{xx}$ from the equation and directly take the $L^2$-norm of the resultant equality; standard but tedious computations then yield the following estimate $$\label{ss21.1}
\|{\tilde{\varTheta}^{{\varepsilon}}}_{xx}\|\leq C{\varepsilon}\|{\varepsilon}{\tilde{z}^{{\varepsilon}}}_{xxx}\|+C{\varepsilon}^2\|{\tilde{z}^{{\varepsilon}}}_{xx}\|+C\Big(\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1+|{\tilde{\mathcal{J}}^{{\varepsilon}}}|+\|{\tilde{\varTheta}^{{\varepsilon}}}\|_1\Big)\leq C{\varepsilon}.$$ Adding and up, we have $$\label{ss22.2}
\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1+|{\tilde{\mathcal{J}}^{{\varepsilon}}}|+\|{\tilde{\varTheta}^{{\varepsilon}}}\|_2+\|{\tilde{\varPhi}^{{\varepsilon}}}\|_3\leq C{\varepsilon},\quad\forall{\varepsilon}\in(0,{\varepsilon}_1].$$ By using the exponential transformations ${\tilde{n}^{{\varepsilon}}}=e^{{\tilde{z}^{{\varepsilon}}}}$, ${\tilde{n}^{0}}=e^{{\tilde{z}^{0}}}$ and the above estimate , we have shown the algebraic convergence rate .
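For completeness, we record how the rate is transferred to the original variables; this is only a sketch of the standard computation, using the uniform $H^2$-bounds on ${\tilde{z}^{{\varepsilon}}}$ and ${\tilde{z}^{0}}$ together with the embedding $H^1(\Omega)\hookrightarrow L^\infty(\Omega)$. The mean value theorem gives $$\begin{aligned}
\|{\tilde{n}^{{\varepsilon}}}-{\tilde{n}^{0}}\|&=\|e^{{\tilde{z}^{{\varepsilon}}}}-e^{{\tilde{z}^{0}}}\|\leq C\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|,\notag\\
\|({\tilde{n}^{{\varepsilon}}}-{\tilde{n}^{0}})_x\|&=\|e^{{\tilde{z}^{{\varepsilon}}}}{\tilde{\mathcal{Z}}^{{\varepsilon}}}_x+\big(e^{{\tilde{z}^{{\varepsilon}}}}-e^{{\tilde{z}^{0}}}\big){\tilde{z}^{0}}_x\|\leq C\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1,\notag\end{aligned}$$ so that $\|{\tilde{n}^{{\varepsilon}}}-{\tilde{n}^{0}}\|_1\leq C\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1\leq C{\varepsilon}$.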
Now, we begin to show the convergence . Firstly, differentiating the equation once, solving ${\tilde{\varTheta}^{{\varepsilon}}}_{xxx}$ from the resultant equation, and taking the $L^2$-norm of the resulting expression for ${\tilde{\varTheta}^{{\varepsilon}}}_{xxx}$, we obtain the following estimate $$\begin{aligned}
\|{\tilde{\varTheta}^{{\varepsilon}}}_{xxx}\|\leq&C\Big(\|{\varepsilon}^2{\tilde{z}^{{\varepsilon}}}_{xxxx}\|+\underbrace{{\varepsilon}\|{\varepsilon}{\tilde{z}^{{\varepsilon}}}_{xxx}\|+{\varepsilon}^2\|{\tilde{z}^{{\varepsilon}}}_{xx}\|+\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1+|{\tilde{\mathcal{J}}^{{\varepsilon}}}|+\|{\tilde{\varTheta}^{{\varepsilon}}}\|_2}_{\leq C{\varepsilon}}+\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\|\Big)\notag\\
\leq&C\Big(\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\|+\|{\varepsilon}^2{\tilde{z}^{{\varepsilon}}}_{xxxx}\|\Big) +C{\varepsilon}.\label{ss22.1c}\end{aligned}$$ Adding the elliptic estimate $\|{\tilde{\varPhi}^{{\varepsilon}}}_{xxxx}\|\leq C\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\|$ to the above estimate , we have $$\label{wnd3+dws4}
\|({\tilde{\varTheta}^{{\varepsilon}}}_{xxx},{\tilde{\varPhi}^{{\varepsilon}}}_{xxxx})\|\leq C\Big(\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\|+\|{\varepsilon}^2{\tilde{z}^{{\varepsilon}}}_{xxxx}\|\Big) +C{\varepsilon},\quad\forall{\varepsilon}\in(0,{\varepsilon}_1].$$
In order to complete the proof, we need to establish the convergence results $\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\|\rightarrow0$, $\|{\varepsilon}{\tilde{z}^{{\varepsilon}}}_{xxx}\|\rightarrow0$ and $\|{\varepsilon}^2{\tilde{z}^{{\varepsilon}}}_{xxxx}\|\rightarrow0$ as ${\varepsilon}\rightarrow0$. To this end, we first show $\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\|$ converges to zero as ${\varepsilon}$ tends to zero. From the boundedness of $\|{\tilde{z}^{{\varepsilon}}}\|_2$ and the strong convergence , we have $$\label{ss23.1}
{\tilde{z}^{{\varepsilon}}}_{xx}\rightharpoonup{\tilde{z}^{0}}_{xx}\quad\text{in}\ L^2(\Omega)\ \text{weakly as}\ {\varepsilon}\rightarrow0.$$ However, we need to improve the above weak convergence into strong convergence. For this purpose, we differentiate the equation once, multiply the resultant equality by ${\tilde{z}^{{\varepsilon}}}_{xx}+({\tilde{z}^{{\varepsilon}}}_x)^2/2$ and integrate the result over the domain $\Omega$; integration by parts then yields $$\label{ss26.1}
\int_0^1{\tilde{S}^{{\varepsilon}}}({\tilde{z}^{{\varepsilon}}}_{xx})^2dx+\frac{{\varepsilon}^2}{2}\int_0^1\bigg\{\bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\bigg]_x\bigg\}^2dx={\tilde{\mathcal{R}}^{{\varepsilon}}},\quad\forall{\varepsilon}\in(0,{\varepsilon}_1],$$ where $$\begin{gathered}
\label{ss26.2}
{\tilde{\mathcal{R}}^{{\varepsilon}}}:=-\int_0^1{\tilde{S}^{{\varepsilon}}}{\tilde{z}^{{\varepsilon}}}_{xx}\frac{({\tilde{z}^{{\varepsilon}}}_x)^2}{2}dx-\int_0^1{\tilde{S}^{{\varepsilon}}}_x{\tilde{z}^{{\varepsilon}}}_x\bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\bigg]dx\\
+\int_0^1\bigg[{\tilde{\phi}^{{\varepsilon}}}_{xx}-\Big({\tilde{j}^{{\varepsilon}}}e^{-{\tilde{z}^{{\varepsilon}}}}\Big)_x-{\tilde{\theta}^{{\varepsilon}}}_{xx}\bigg]\bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\bigg]dx,\qquad\forall{\varepsilon}\in[0,{\varepsilon}_1].\end{gathered}$$ Similarly, differentiating the equation once, multiplying the resultant equality by ${\tilde{z}^{0}}_{xx}+({\tilde{z}^{0}}_{x})^2/2$ and integrating the result over $\Omega$, we have $$\label{ss26.4}
\int_0^1{\tilde{S}^{0}}({\tilde{z}^{0}}_{xx})^2dx={\tilde{\mathcal{R}}^{0}},\quad\text{where}\ {\tilde{\mathcal{R}}^{0}}\ \text{is given by}\ \eqref{ss26.2}\ \text{with}\ {\varepsilon}=0.$$ By using the estimates , , , the weak convergence result and the standard computations, we obtain $$\begin{aligned}
0\leq|{\tilde{\mathcal{R}}^{{\varepsilon}}}-{\tilde{\mathcal{R}}^{0}}|\leq&C\Big(\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1+|{\tilde{\mathcal{J}}^{{\varepsilon}}}|+\|{\tilde{\varTheta}^{{\varepsilon}}}\|_2+\|{\tilde{\varPhi}^{{\varepsilon}}}_{xx}\|\Big)\notag\\
&\quad+\bigg|\int_0^1\underbrace{\bigg[{\tilde{S}^{0}}\frac{({\tilde{z}^{0}}_x)^2}{2}+{\tilde{S}^{0}}{\tilde{z}^{0}}_x+{\tilde{\theta}^{0}}_{xx}+{\tilde{\phi}^{0}}_{xx}+\Big({\tilde{j}^{0}}e^{-{\tilde{z}^{0}}}\Big)_x\bigg]}_{=:f^0\in L^2(\Omega)}\big({\tilde{z}^{{\varepsilon}}}_{xx}-{\tilde{z}^{0}}_{xx}\big)dx\bigg|\notag\\
\leq&C{\varepsilon}+\Big|\big<f^0, {\tilde{z}^{{\varepsilon}}}_{xx}-{\tilde{z}^{0}}_{xx}\big>_{L^2(\Omega)}\Big|\rightarrow0\quad\text{as}\ {\varepsilon}\rightarrow0.\label{ss31.2}\end{aligned}$$ Combining with , we have $$\label{ss31.3}
\lim_{{\varepsilon}\rightarrow0}{\tilde{\mathcal{R}}^{{\varepsilon}}}=\int_0^1{\tilde{S}^{0}}({\tilde{z}^{0}}_{xx})^2dx.$$ On the other hand, owing to the estimates , , the Sobolev inequality, we obtain $$\begin{aligned}
0\leq\bigg|\int_0^1\big({\tilde{S}^{{\varepsilon}}}-{\tilde{S}^{0}}\big)({\tilde{z}^{{\varepsilon}}}_{xx})^2dx\bigg|\leq&\big|\big({\tilde{S}^{{\varepsilon}}}-{\tilde{S}^{0}}\big)\big|_0\|{\tilde{z}^{{\varepsilon}}}_{xx}\|^2\notag\\
\leq&C\Big(|{\tilde{\mathcal{Z}}^{{\varepsilon}}}|_0+|{\tilde{\mathcal{J}}^{{\varepsilon}}}|+|{\tilde{\varTheta}^{{\varepsilon}}}|_0\Big)\notag\\
\leq&C\Big(\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1+|{\tilde{\mathcal{J}}^{{\varepsilon}}}|+\|{\tilde{\varTheta}^{{\varepsilon}}}\|_1\Big)\notag\\
\leq&C{\varepsilon}\rightarrow0\quad\text{as}\ {\varepsilon}\rightarrow0.\label{ss32.2}\end{aligned}$$ Combining the limits , with the equality , we have $$\begin{aligned}
\Bigg\{\limsup_{{\varepsilon}\rightarrow0}\bigg[\int_0^1{\tilde{S}^{0}}({\tilde{z}^{{\varepsilon}}}_{xx})^2dx\bigg]^{1/2}\Bigg\}^2&\leq\limsup_{{\varepsilon}\rightarrow0}\int_0^1{\tilde{S}^{0}}({\tilde{z}^{{\varepsilon}}}_{xx})^2dx\notag\\
&=\lim_{{\varepsilon}\rightarrow0}\int_0^1\big({\tilde{S}^{{\varepsilon}}}-{\tilde{S}^{0}}\big)({\tilde{z}^{{\varepsilon}}}_{xx})^2dx+\limsup_{{\varepsilon}\rightarrow0}\int_0^1{\tilde{S}^{0}}({\tilde{z}^{{\varepsilon}}}_{xx})^2dx\notag\\
&=\limsup_{{\varepsilon}\rightarrow0}\int_0^1\big({\tilde{S}^{{\varepsilon}}}-{\tilde{S}^{0}}+{\tilde{S}^{0}}\big)({\tilde{z}^{{\varepsilon}}}_{xx})^2dx\notag\\
&=\limsup_{{\varepsilon}\rightarrow0}\int_0^1{\tilde{S}^{{\varepsilon}}}({\tilde{z}^{{\varepsilon}}}_{xx})^2dx\notag\\
&\leq\limsup_{{\varepsilon}\rightarrow0}{\tilde{\mathcal{R}}^{{\varepsilon}}}=\lim_{{\varepsilon}\rightarrow0}{\tilde{\mathcal{R}}^{{\varepsilon}}}=\int_0^1{\tilde{S}^{0}}({\tilde{z}^{0}}_{xx})^2dx,\label{ss33.3}\end{aligned}$$ where we have used the non-negativity of the second term on the left-side of the equality . Motivated by , we choose ${\tilde{S}^{0}}$ as the weight to define a weighted-$L^2$ space as follows $$\label{ss34.1a}
L^2_{{\tilde{S}^{0}}}(\Omega):=\Bigg\{f:\Omega\rightarrow\mathbb{R} \text{ is measurable}\ \Bigg|\ \int_0^1{\tilde{S}^{0}}|f|^2dx<+\infty\Bigg\}$$ with the inner product $$\label{ss34.1b}
\big<f,g\big>_{L^2_{{\tilde{S}^{0}}}(\Omega)}:=\int_0^1{\tilde{S}^{0}}fgdx$$ and the associated norm $$\label{ss34.1c}
\|f\|_{L^2_{{\tilde{S}^{0}}}(\Omega)}:=\Bigg(\int_0^1{\tilde{S}^{0}}|f|^2\;dx\Bigg)^{1/2}.$$ Since the weight function ${\tilde{S}^{0}}$ is strictly positive and continuous, the weighted-$L^2$ space $L^2_{{\tilde{S}^{0}}}(\Omega)$ is a Hilbert space, and the weak convergence implies $$\label{ss34.2}
{\tilde{z}^{{\varepsilon}}}_{xx}\rightharpoonup{\tilde{z}^{0}}_{xx}\quad\text{in}\ L^2_{{\tilde{S}^{0}}}(\Omega)\ \text{weakly as}\ {\varepsilon}\rightarrow0.$$ Furthermore, we can rewrite the inequality in terms of the norm defined in as $$\label{ss33.3wn}
\limsup_{{\varepsilon}\rightarrow0}\|{\tilde{z}^{{\varepsilon}}}_{xx}\|_{L^2_{{\tilde{S}^{0}}}(\Omega)}\leq\|{\tilde{z}^{0}}_{xx}\|_{L^2_{{\tilde{S}^{0}}}(\Omega)}.$$ The weak convergence together with implies the strong convergence $$\label{ss34.3}
{\tilde{z}^{{\varepsilon}}}_{xx}\rightarrow{\tilde{z}^{0}}_{xx}\quad\text{in}\ L^2_{{\tilde{S}^{0}}}(\Omega)\ \text{strongly as}\ {\varepsilon}\rightarrow0,$$ which immediately implies $$\label{ss34.5}
\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\|\rightarrow0,\quad\text{as}\ {\varepsilon}\rightarrow0.$$
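For completeness, we record the standard Hilbert space argument behind this step. By the weak lower semicontinuity of the norm, the weak convergence gives $\liminf_{{\varepsilon}\rightarrow0}\|{\tilde{z}^{{\varepsilon}}}_{xx}\|_{L^2_{{\tilde{S}^{0}}}(\Omega)}\geq\|{\tilde{z}^{0}}_{xx}\|_{L^2_{{\tilde{S}^{0}}}(\Omega)}$, which combined with the inequality above shows that the norms converge; hence $$\|{\tilde{z}^{{\varepsilon}}}_{xx}-{\tilde{z}^{0}}_{xx}\|_{L^2_{{\tilde{S}^{0}}}(\Omega)}^2=\|{\tilde{z}^{{\varepsilon}}}_{xx}\|_{L^2_{{\tilde{S}^{0}}}(\Omega)}^2-2\big<{\tilde{z}^{{\varepsilon}}}_{xx},{\tilde{z}^{0}}_{xx}\big>_{L^2_{{\tilde{S}^{0}}}(\Omega)}+\|{\tilde{z}^{0}}_{xx}\|_{L^2_{{\tilde{S}^{0}}}(\Omega)}^2\rightarrow0,\quad\text{as}\ {\varepsilon}\rightarrow0,$$ while the strict positivity of ${\tilde{S}^{0}}$ on the compact set $\bar{\Omega}$ yields $$\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\|^2\leq\frac{1}{\min_{\bar{\Omega}}{\tilde{S}^{0}}}\int_0^1{\tilde{S}^{0}}\big({\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\big)^2dx=\frac{1}{\min_{\bar{\Omega}}{\tilde{S}^{0}}}\|{\tilde{z}^{{\varepsilon}}}_{xx}-{\tilde{z}^{0}}_{xx}\|_{L^2_{{\tilde{S}^{0}}}(\Omega)}^2\rightarrow0.$$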
In addition, we prove $\|{\varepsilon}{\tilde{z}^{{\varepsilon}}}_{xxx}\|$ converges to zero as ${\varepsilon}$ tends to zero. From the strong convergence , we can directly deduce that $$\label{ss34.6}
\lim_{{\varepsilon}\rightarrow0}\int_0^1{\tilde{S}^{0}}({\tilde{z}^{{\varepsilon}}}_{xx})^2dx=\int_0^1{\tilde{S}^{0}}({\tilde{z}^{0}}_{xx})^2dx.$$ Combining with , we have $$\label{ss35.1}
\lim_{{\varepsilon}\rightarrow0}\int_0^1{\tilde{S}^{{\varepsilon}}}({\tilde{z}^{{\varepsilon}}}_{xx})^2dx=\int_0^1{\tilde{S}^{0}}({\tilde{z}^{0}}_{xx})^2dx.$$ Letting ${\varepsilon}\rightarrow0$ in the equality , and using the limit results and , we can easily see that $$\label{ss35.4}
{\varepsilon}\bigg\|\bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\bigg]_x\bigg\|\rightarrow0,\quad\text{as}\ {\varepsilon}\rightarrow0.$$ Therefore, $$\begin{aligned}
0\leq\|{\varepsilon}{\tilde{z}^{{\varepsilon}}}_{xxx}\|=&\bigg\|{\varepsilon}\bigg\{\bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\bigg]_x-{\tilde{z}^{{\varepsilon}}}_{x}{\tilde{z}^{{\varepsilon}}}_{xx}\bigg\}\bigg\|\notag\\
\leq&{\varepsilon}\bigg\|\bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\bigg]_x\bigg\|+C{\varepsilon}\rightarrow0,\quad\text{as}\ {\varepsilon}\rightarrow0.\label{ss36.1}\end{aligned}$$
Finally, we show $\|{\varepsilon}^2{\tilde{z}^{{\varepsilon}}}_{xxxx}\|$ converges to zero as ${\varepsilon}$ tends to zero. Differentiating the equation once, solving the quantum term from the resultant equality and taking the $L^2$-norm of this quantum term, we obtain $$\label{ss36.2}
\frac{{\varepsilon}^2}{2}\bigg\|\bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\bigg]_{xx}\bigg\|={\tilde{\mathcal{Q}}^{{\varepsilon}}},\quad\forall{\varepsilon}\in(0,{\varepsilon}_1],$$ where $$\label{ss36.3}
{\tilde{\mathcal{Q}}^{{\varepsilon}}}:=\Big\|\Big({\tilde{S}^{{\varepsilon}}}{\tilde{z}^{{\varepsilon}}}_x\Big)_x+{\tilde{\theta}^{{\varepsilon}}}_{xx}-{\tilde{\phi}^{{\varepsilon}}}_{xx}+\Big({\tilde{j}^{{\varepsilon}}}e^{-{\tilde{z}^{{\varepsilon}}}}\Big)_x\Big\|,\quad\forall{\varepsilon}\in[0,{\varepsilon}_1].$$ The standard computations yield $$\begin{aligned}
0\leq{\tilde{\mathcal{Q}}^{{\varepsilon}}}={\tilde{\mathcal{Q}}^{{\varepsilon}}}-{\tilde{\mathcal{Q}}^{0}}\leq&\Big\|\Big({\tilde{S}^{{\varepsilon}}}{\tilde{z}^{{\varepsilon}}}_x\Big)_x-\Big({\tilde{S}^{0}}{\tilde{z}^{0}}_x\Big)_x+{\tilde{\varTheta}^{{\varepsilon}}}_{xx}-{\tilde{\varPhi}^{{\varepsilon}}}_{xx}+\Big({\tilde{j}^{{\varepsilon}}}e^{-{\tilde{z}^{{\varepsilon}}}}\Big)_x-\Big({\tilde{j}^{0}}e^{-{\tilde{z}^{0}}}\Big)_x\Big\|\notag\\
\leq&C\Big(\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}\|_1+|{\tilde{\mathcal{J}}^{{\varepsilon}}}|+\|{\tilde{\varTheta}^{{\varepsilon}}}_{xx}\|+\|{\tilde{\varPhi}^{{\varepsilon}}}_{xx}\|\Big)+C\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\|\notag\\
\leq&C{\varepsilon}+C\|{\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx}\|\rightarrow0,\quad\text{as}\ {\varepsilon}\rightarrow0,\end{aligned}$$ where we have used ${\tilde{\mathcal{Q}}^{0}}=0$ which follows from the differentiation of the equation . Consequently, $$\begin{aligned}
0\leq\|{\varepsilon}^2{\tilde{z}^{{\varepsilon}}}_{xxxx}\|=&\bigg\|{\varepsilon}^2\bigg\{\bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\bigg]_{xx}-\bigg[\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\bigg]_{xx}\bigg\}\bigg\|\notag\\
\leq&{\varepsilon}^2\bigg\|\bigg[{\tilde{z}^{{\varepsilon}}}_{xx}+\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\bigg]_{xx}\bigg\|+{\varepsilon}^2\bigg\|\bigg[\frac{({\tilde{z}^{{\varepsilon}}}_{x})^2}{2}\bigg]_{xx}\bigg\|\notag\\
=&2{\tilde{\mathcal{Q}}^{{\varepsilon}}}+{\varepsilon}^2\big\|\big({\tilde{z}^{{\varepsilon}}}_{xx}\big)^2+{\tilde{z}^{{\varepsilon}}}_x{\tilde{z}^{{\varepsilon}}}_{xxx}\big\|\notag\\
\leq&2{\tilde{\mathcal{Q}}^{{\varepsilon}}}+{\varepsilon}^2\Big(|{\tilde{z}^{{\varepsilon}}}_{xx}|_0\|{\tilde{z}^{{\varepsilon}}}_{xx}\|+|{\tilde{z}^{{\varepsilon}}}_{x}|_0\|{\tilde{z}^{{\varepsilon}}}_{xxx}\|\Big)\notag\\
\leq&2{\tilde{\mathcal{Q}}^{{\varepsilon}}}+C{\varepsilon}^2\Big(\|{\tilde{z}^{{\varepsilon}}}_{xx}\|_1\|{\tilde{z}^{{\varepsilon}}}_{xx}\|+\|{\tilde{z}^{{\varepsilon}}}_{x}\|_1\|{\tilde{z}^{{\varepsilon}}}_{xxx}\|\Big)\notag\\
\leq&2{\tilde{\mathcal{Q}}^{{\varepsilon}}}+C{\varepsilon}\rightarrow0,\quad\text{as}\ {\varepsilon}\rightarrow0.\label{ss37.3}\end{aligned}$$ From , , and , we know that $$\label{ss38.1b}
\big\|\big({\tilde{\mathcal{Z}}^{{\varepsilon}}}_{xx},{\varepsilon}{\tilde{z}^{{\varepsilon}}}_{xxx},{\varepsilon}^2{\tilde{z}^{{\varepsilon}}}_{xxxx},{\tilde{\varTheta}^{{\varepsilon}}}_{xxx},{\tilde{\varPhi}^{{\varepsilon}}}_{xxxx}\big)\big\|\rightarrow0,\quad\text{as}\ {\varepsilon}\rightarrow0.$$ By using the exponential transformations ${\tilde{n}^{{\varepsilon}}}=e^{{\tilde{z}^{{\varepsilon}}}}$ and ${\tilde{n}^{0}}=e^{{\tilde{z}^{0}}}$ again, the strong convergence implies the convergence .
Non-stationary case {#Subsect.4.2}
-------------------
In this subsection, we continue to discuss the semi-classical limit of the global solutions based on the existence and uniqueness results in Lemma \[lem2\] and Theorem \[thm2\]. We introduce the error variables as follows $$\label{131.1}
{\mathcal{N}^{{\varepsilon}}}:={n^{{\varepsilon}}}-{n^{0}},\quad {\mathcal{J}^{{\varepsilon}}}:={j^{{\varepsilon}}}-{j^{0}},\quad{\varTheta^{{\varepsilon}}}:={\theta^{{\varepsilon}}}-{\theta^{0}},\quad{\varPhi^{{\varepsilon}}}:={\phi^{{\varepsilon}}}-{\phi^{0}}.$$
Since the global solutions $({n^{{\varepsilon}}},{j^{{\varepsilon}}},{\theta^{{\varepsilon}}},{\phi^{{\varepsilon}}})$ and $({n^{0}},{j^{0}},{\theta^{0}},{\phi^{0}})$ satisfy the same initial and boundary conditions, the error variables $({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}},{\varPhi^{{\varepsilon}}})$ satisfy the following initial and boundary conditions $$\begin{gathered}
({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}},{\varPhi^{{\varepsilon}}})(0,x)=(0,0,0,0),\label{134.1}\\
({\partial_{t}^{k}}{\mathcal{N}^{{\varepsilon}}},{\partial_{t}^{k}}{\varTheta^{{\varepsilon}}},{\partial_{t}^{k}}{\varPhi^{{\varepsilon}}})(t,0)=({\partial_{t}^{k}}{\mathcal{N}^{{\varepsilon}}},{\partial_{t}^{k}}{\varTheta^{{\varepsilon}}},{\partial_{t}^{k}}{\varPhi^{{\varepsilon}}})(t,1)=(0,0,0),\quad k=0,1.\label{134.2}\end{gathered}$$ Moreover, subtracting from , the error variables $({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}},{\varPhi^{{\varepsilon}}})$ also satisfy the equations $$\begin{gathered}
{\mathcal{N}^{{\varepsilon}}}_t+{\mathcal{J}^{{\varepsilon}}}_x=0,\label{136.1a}\\
{\mathcal{J}^{{\varepsilon}}}_t+{\mathcal{J}^{{\varepsilon}}}=\mathcal{H}_1(t,x)+{\varepsilon}^2{n^{{\varepsilon}}}\Bigg[\frac{\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}}{\sqrt{{n^{{\varepsilon}}}}}\Bigg]_x,\label{136.1b}\\
{n^{{\varepsilon}}}{\varTheta^{{\varepsilon}}}_t-\frac{2}{3}{\varTheta^{{\varepsilon}}}_{xx}+{n^{0}}{\varTheta^{{\varepsilon}}}=\mathcal{H}_2(t,x;{\varepsilon}),\label{157.2}\\
{\varPhi^{{\varepsilon}}}_{xx}={\mathcal{N}^{{\varepsilon}}},\label{136.1d}\end{gathered}$$ where $$\begin{aligned}
\mathcal{H}_1(t,x):=&-\big({\mathcal{N}^{{\varepsilon}}}_x{\theta^{{\varepsilon}}}+{n^{0}}_x{\varTheta^{{\varepsilon}}}\big)-\big({\mathcal{N}^{{\varepsilon}}}{\theta^{{\varepsilon}}}_x+{n^{0}}{\varTheta^{{\varepsilon}}}_x\big)+\big({\mathcal{N}^{{\varepsilon}}}{\phi^{{\varepsilon}}}_x+{n^{0}}{\varPhi^{{\varepsilon}}}_x\big)\notag\\
&+\Bigg[\bigg(\frac{{j^{{\varepsilon}}}}{{n^{{\varepsilon}}}}\bigg)^2{n^{{\varepsilon}}}_x-\bigg(\frac{{j^{0}}}{{n^{0}}}\bigg)^2{n^{0}}_x\Bigg]-2\bigg(\frac{{j^{{\varepsilon}}}}{{n^{{\varepsilon}}}}{j^{{\varepsilon}}}_x-\frac{{j^{0}}}{{n^{0}}}{j^{0}}_x\bigg),\label{136.1br}\end{aligned}$$ and $$\begin{aligned}
\mathcal{H}_2(t,x;{\varepsilon}):=&-{\theta^{0}}_t{\mathcal{N}^{{\varepsilon}}}-\big({\mathcal{J}^{{\varepsilon}}}{\theta^{{\varepsilon}}}_x+{j^{0}}{\varTheta^{{\varepsilon}}}_x\big)-\frac{2}{3}\bigg[{n^{{\varepsilon}}}{\theta^{{\varepsilon}}}\bigg(\frac{{j^{{\varepsilon}}}}{{n^{{\varepsilon}}}}\bigg)_x-{n^{0}}{\theta^{0}}\bigg(\frac{{j^{0}}}{{n^{0}}}\bigg)_x\bigg]\notag\\
&+\frac{1}{3}\Bigg[\frac{\big({j^{{\varepsilon}}}\big)^2}{{n^{{\varepsilon}}}}-\frac{\big({j^{0}}\big)^2}{{n^{0}}}\Bigg]-{\mathcal{N}^{{\varepsilon}}}\big({\theta^{{\varepsilon}}}-\theta_L\big)+\frac{{\varepsilon}^2}{3}\Bigg[{n^{{\varepsilon}}}\Bigg(\frac{{j^{{\varepsilon}}}}{{n^{{\varepsilon}}}}\Bigg)_{xx}\Bigg]_x.\label{157.3}\end{aligned}$$ Differentiating the equation with respect to $x$ and using the equation , we obtain the equation $$\begin{aligned}
\label{135.1}
{\mathcal{N}^{{\varepsilon}}}_{tt}-{\theta^{{\varepsilon}}}{\mathcal{N}^{{\varepsilon}}}_{xx}+{\mathcal{N}^{{\varepsilon}}}_t=&{\mathcal{N}^{{\varepsilon}}}_x{\theta^{{\varepsilon}}}_x+\big({n^{0}}_x{\varTheta^{{\varepsilon}}}\big)_x+\big({\mathcal{N}^{{\varepsilon}}}{\theta^{{\varepsilon}}}_x+{n^{0}}{\varTheta^{{\varepsilon}}}_x\big)_x\notag\\
&-\big({\mathcal{N}^{{\varepsilon}}}{\phi^{{\varepsilon}}}_x+{n^{0}}{\varPhi^{{\varepsilon}}}_x\big)_x-\Bigg[\bigg(\frac{{j^{{\varepsilon}}}}{{n^{{\varepsilon}}}}\bigg)^2{n^{{\varepsilon}}}_x-\bigg(\frac{{j^{0}}}{{n^{0}}}\bigg)^2{n^{0}}_x\Bigg]_x\notag\\
&+2\bigg(\frac{{j^{{\varepsilon}}}}{{n^{{\varepsilon}}}}{j^{{\varepsilon}}}_x-\frac{{j^{0}}}{{n^{0}}}{j^{0}}_x\bigg)_x-{\varepsilon}^2\Bigg\{{n^{{\varepsilon}}}\Bigg[\frac{\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}}{\sqrt{{n^{{\varepsilon}}}}}\Bigg]_x\Bigg\}_x.\end{aligned}$$
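The computation leading to this wave-type equation is elementary, and we sketch it for the reader's convenience: using the continuity equation (twice) and the momentum equation above, $$\begin{aligned}
{\mathcal{N}^{{\varepsilon}}}_{tt}=-{\mathcal{J}^{{\varepsilon}}}_{xt}&=-\partial_x\Bigg(-{\mathcal{J}^{{\varepsilon}}}+\mathcal{H}_1+{\varepsilon}^2{n^{{\varepsilon}}}\Bigg[\frac{\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}}{\sqrt{{n^{{\varepsilon}}}}}\Bigg]_x\Bigg)\notag\\
&=-{\mathcal{N}^{{\varepsilon}}}_t-\partial_x\mathcal{H}_1-{\varepsilon}^2\Bigg\{{n^{{\varepsilon}}}\Bigg[\frac{\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}}{\sqrt{{n^{{\varepsilon}}}}}\Bigg]_x\Bigg\}_x,\notag\end{aligned}$$ and the contribution $\partial_x\big({\mathcal{N}^{{\varepsilon}}}_x{\theta^{{\varepsilon}}}\big)={\theta^{{\varepsilon}}}{\mathcal{N}^{{\varepsilon}}}_{xx}+{\theta^{{\varepsilon}}}_x{\mathcal{N}^{{\varepsilon}}}_x$ contained in $-\partial_x\mathcal{H}_1$ is split, with ${\theta^{{\varepsilon}}}{\mathcal{N}^{{\varepsilon}}}_{xx}$ moved to the left-hand side; the remaining terms of $-\partial_x\mathcal{H}_1$ are precisely those displayed on the right-hand side.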
From the estimates and , we can deduce the following estimates $$\begin{gathered}
{n^{0}}(t,x),\ {\theta^{0}}(t,x),\ {S^{0}}(t,x):=S[{n^{0}},{j^{0}},{\theta^{0}}]\geq c>0,\notag\\
\|({n^{0}},{j^{0}},{\theta^{0}},{\phi^{0}})(t)\|_2+\|({n^{0}}_t,{j^{0}}_t,{\theta^{0}}_t)(t)\|_1\leq C,\label{138.1}\end{gathered}$$ and $$\begin{gathered}
{n^{{\varepsilon}}}(t,x),\ {\theta^{{\varepsilon}}}(t,x),\ {S^{{\varepsilon}}}(t,x):=S[{n^{{\varepsilon}}},{j^{{\varepsilon}}},{\theta^{{\varepsilon}}}]\geq c>0,\notag\\
\|({n^{{\varepsilon}}},{j^{{\varepsilon}}},{\theta^{{\varepsilon}}},{\phi^{{\varepsilon}}})(t)\|_2+\|({\varepsilon}{\partial_{x}^{3}}{n^{{\varepsilon}}},{\varepsilon}{\partial_{x}^{3}}{j^{{\varepsilon}}},{\varepsilon}^2{\partial_{x}^{4}}{n^{{\varepsilon}}})(t)\|+\|({n^{{\varepsilon}}}_t,{j^{{\varepsilon}}}_t)(t)\|_1+\|{\theta^{{\varepsilon}}}_t(t)\|\leq C,\label{138.2}\end{gathered}$$ where $c$ and $C$ are positive constants independent of ${\varepsilon}$, $\delta$, $x$ and $t$.
Based on Theorem \[thm3\], Lemma \[lem2\] and Theorem \[thm2\], it is easy to see that the assumption guarantees the coexistence of the quantum and limit global solutions with the same initial and boundary data. Thus, the facts $\sim$ above are at our disposal.
Multiplying the equation by ${\mathcal{J}^{{\varepsilon}}}$ and integrating the resultant equality over the domain $\Omega$, we have $$\begin{aligned}
\frac{d}{dt}\int_0^1\frac{1}{2}\big({\mathcal{J}^{{\varepsilon}}}\big)^2dx+\big\|{\mathcal{J}^{{\varepsilon}}}(t)\big\|^2&=\int_0^1\mathcal{H}_1{\mathcal{J}^{{\varepsilon}}}dx+{\varepsilon}^2\int_0^1{n^{{\varepsilon}}}\Bigg[\frac{\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}}{\sqrt{{n^{{\varepsilon}}}}}\Bigg]_x{\mathcal{J}^{{\varepsilon}}}dx \notag\\
&\leq C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1^2+C{\varepsilon}^2.\label{142.1}\end{aligned}$$ In the derivation of the estimate , we have used the Cauchy-Schwarz inequality, the elliptic estimate $\|{\varPhi^{{\varepsilon}}}(t)\|_2\leq C\|{\mathcal{N}^{{\varepsilon}}}(t)\|$ and the following estimate $$\|\mathcal{H}_1(t)\|\leq C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{N}^{{\varepsilon}}}_x,{\mathcal{J}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}}_x,{\varTheta^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}_x,{\varPhi^{{\varepsilon}}}_x\big)(t)\big\|\label{140.1-141.1}$$ to control the first term on the right-side of this equality, and we have also used the integration by parts, the boundary condition and the estimates $\sim$ to bound the last term on the right-side as follows $$\begin{aligned}
&{\varepsilon}^2\int_0^1{n^{{\varepsilon}}}\Bigg[\frac{\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}}{\sqrt{{n^{{\varepsilon}}}}}\Bigg]_x{\mathcal{J}^{{\varepsilon}}}dx\notag\\
=&-{\varepsilon}^2\int_0^1\frac{\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}}{\sqrt{{n^{{\varepsilon}}}}}\big({n^{{\varepsilon}}}{\mathcal{J}^{{\varepsilon}}}\big)_xdx\notag\\
\leq&{\varepsilon}^2\Bigg\|\Bigg[\frac{\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}}{\sqrt{{n^{{\varepsilon}}}}}\Bigg](t)\Bigg\|\Big\|\big({n^{{\varepsilon}}}{\mathcal{J}^{{\varepsilon}}}\big)_x(t)\Big\|\notag\\
\leq&{\varepsilon}^2\Bigg|\frac{1}{\sqrt{{n^{{\varepsilon}}}}}(t)\Bigg|_0\Big\|\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}(t)\Big\|\Big(\big|{n^{{\varepsilon}}}_x(t)\big|_0\big\|{\mathcal{J}^{{\varepsilon}}}(t)\big\|+\big|{n^{{\varepsilon}}}(t)\big|_0\big\|{\mathcal{J}^{{\varepsilon}}}_x(t)\big\|\Big)\notag\\
\leq&C{\varepsilon}^2.\end{aligned}$$
Multiplying the equation by ${\mathcal{N}^{{\varepsilon}}}_t$ and integrating the resultant equality over the domain $\Omega$, we obtain $$\begin{aligned}
&\frac{d}{dt}\int_0^1\frac{1}{2}\big({\mathcal{J}^{{\varepsilon}}}_x\big)^2dx-\int_0^1{\theta^{{\varepsilon}}}{\mathcal{N}^{{\varepsilon}}}_{xx}{\mathcal{N}^{{\varepsilon}}}_tdx+\big\|{\mathcal{J}^{{\varepsilon}}}_x(t)\big\|^2\notag\\
=&\int_0^1\Big[{\mathcal{N}^{{\varepsilon}}}_x{\theta^{{\varepsilon}}}_x+\big({n^{0}}_x{\varTheta^{{\varepsilon}}}\big)_x-\big({\mathcal{N}^{{\varepsilon}}}{\phi^{{\varepsilon}}}_x+{n^{0}}{\varPhi^{{\varepsilon}}}_x\big)_x\Big]{\mathcal{N}^{{\varepsilon}}}_tdx\notag\\
&+\int_0^1\big({\mathcal{N}^{{\varepsilon}}}{\theta^{{\varepsilon}}}_x+{n^{0}}{\varTheta^{{\varepsilon}}}_x\big)_x{\mathcal{N}^{{\varepsilon}}}_tdx-\int_0^1\Bigg[\bigg(\frac{{j^{{\varepsilon}}}}{{n^{{\varepsilon}}}}\bigg)^2{n^{{\varepsilon}}}_x-\bigg(\frac{{j^{0}}}{{n^{0}}}\bigg)^2{n^{0}}_x\Bigg]_x{\mathcal{N}^{{\varepsilon}}}_tdx\notag\\
&+2\int_0^1\bigg(\frac{{j^{{\varepsilon}}}}{{n^{{\varepsilon}}}}{j^{{\varepsilon}}}_x-\frac{{j^{0}}}{{n^{0}}}{j^{0}}_x\bigg)_x{\mathcal{N}^{{\varepsilon}}}_tdx-\int_0^1{\varepsilon}^2\Bigg\{{n^{{\varepsilon}}}\Bigg[\frac{\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}}{\sqrt{{n^{{\varepsilon}}}}}\Bigg]_x\Bigg\}_x{\mathcal{N}^{{\varepsilon}}}_tdx\notag\\
=&\Lambda_1+\Lambda_2+\Lambda_3+\Lambda_4+\Lambda_5,\label{142.2}\end{aligned}$$ where we have used the equation . Next, we respectively estimate the second term on the left-side of the equation and the integrals $\Lambda_l$, $l=1,2,\cdots,5$ on the right-side of by using the integration by parts, the Sobolev inequality, the Hölder inequality, the Cauchy-Schwarz inequality, the equation , the boundary condition , the equation and the estimates $\sim$ as follows $$\begin{aligned}
-\int_0^1{\theta^{{\varepsilon}}}{\mathcal{N}^{{\varepsilon}}}_{xx}{\mathcal{N}^{{\varepsilon}}}_tdx=&\int_0^1\big({\theta^{{\varepsilon}}}{\mathcal{N}^{{\varepsilon}}}_t\big)_x{\mathcal{N}^{{\varepsilon}}}_xdx\notag\\
=&\frac{d}{dt}\int_0^1\frac{1}{2}{\theta^{0}}\big({\mathcal{N}^{{\varepsilon}}}_x\big)^2dx-\int_0^1\frac{1}{2}{\theta^{0}}_t\big({\mathcal{N}^{{\varepsilon}}}_x\big)^2dx+\int_0^1{\varTheta^{{\varepsilon}}}{\mathcal{N}^{{\varepsilon}}}_{tx}{\mathcal{N}^{{\varepsilon}}}_xdx\notag\\
&-\int_0^1{\theta^{{\varepsilon}}}_x{\mathcal{J}^{{\varepsilon}}}_x{\mathcal{N}^{{\varepsilon}}}_xdx\notag\\
\geq&\frac{d}{dt}\int_0^1\frac{1}{2}{\theta^{0}}\big({\mathcal{N}^{{\varepsilon}}}_x\big)^2dx\notag\\
&-C\Big(\big|{\theta^{0}}_t\big|_0\big\|{\mathcal{N}^{{\varepsilon}}}_x\big\|^2+\big|{\varTheta^{{\varepsilon}}}\big|_0\big\|{\mathcal{N}^{{\varepsilon}}}_{tx}\big\|\big\|{\mathcal{N}^{{\varepsilon}}}_x\big\|+\big|{\theta^{{\varepsilon}}}_x\big|_0\big\|{\mathcal{J}^{{\varepsilon}}}_x\big\|\big\|{\mathcal{N}^{{\varepsilon}}}_{x}\big\|\Big)\notag\\
\geq&\frac{d}{dt}\int_0^1\frac{1}{2}{\theta^{0}}\big({\mathcal{N}^{{\varepsilon}}}_x\big)^2dx-C\Big(\big\|{\mathcal{N}^{{\varepsilon}}}_x\big\|^2+\big\|{\varTheta^{{\varepsilon}}}\big\|_1\big\|{\mathcal{N}^{{\varepsilon}}}_x\big\|+\big\|{\mathcal{J}^{{\varepsilon}}}_x\big\|\big\|{\mathcal{N}^{{\varepsilon}}}_{x}\big\|\Big)\notag\\
\geq&\frac{d}{dt}\int_0^1\frac{1}{2}{\theta^{0}}\big({\mathcal{N}^{{\varepsilon}}}_x\big)^2dx-C\big\|\big({\mathcal{N}^{{\varepsilon}}}_x,{\mathcal{J}^{{\varepsilon}}}_x,{\varTheta^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}_x\big)(t)\big\|^2,\label{143.2}\end{aligned}$$ and $$\label{145.1-2+152.2}
\Lambda_1\leq C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{N}^{{\varepsilon}}}_x,{\mathcal{J}^{{\varepsilon}}}_x,{\varTheta^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}_x\big)(t)\big\|^2,$$ and $$\begin{aligned}
\Lambda_2=&-\int_0^1\big({\mathcal{N}^{{\varepsilon}}}_x{\theta^{{\varepsilon}}}_x+{\mathcal{N}^{{\varepsilon}}}{\theta^{{\varepsilon}}}_{xx}+{n^{0}}_x{\varTheta^{{\varepsilon}}}_x+{n^{0}}{\varTheta^{{\varepsilon}}}_{xx}\big){\mathcal{J}^{{\varepsilon}}}_xdx\notag\\
\leq&-\int_0^1{n^{0}}{\varTheta^{{\varepsilon}}}_{xx}{\mathcal{J}^{{\varepsilon}}}_xdx+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{N}^{{\varepsilon}}}_x,{\mathcal{J}^{{\varepsilon}}}_x,{\varTheta^{{\varepsilon}}}_x\big)(t)\big\|^2\notag\\
=&\frac{3}{2}\int_0^1{n^{0}}\big[\mathcal{H}_2(t,x;{\varepsilon})-{n^{{\varepsilon}}}{\varTheta^{{\varepsilon}}}_t-{n^{0}}{\varTheta^{{\varepsilon}}}\big]{\mathcal{J}^{{\varepsilon}}}_xdx+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{N}^{{\varepsilon}}}_x,{\mathcal{J}^{{\varepsilon}}}_x,{\varTheta^{{\varepsilon}}}_x\big)(t)\big\|^2\notag\\
\leq&\mu\big\|{\varTheta^{{\varepsilon}}}_t(t)\big\|^2+C_\mu\big\|{\mathcal{J}^{{\varepsilon}}}_x(t)\big\|^2+C\|\mathcal{H}_2(t;{\varepsilon})\|^2+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{N}^{{\varepsilon}}}_x,{\mathcal{J}^{{\varepsilon}}}_x,{\varTheta^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}_x\big)(t)\big\|^2\notag\\
\leq&\mu\big\|{\varTheta^{{\varepsilon}}}_t(t)\big\|^2+C_\mu\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1^2+C{\varepsilon}^2,\label{152.1}\end{aligned}$$ where we have used the estimate $$\begin{aligned}
&\|\mathcal{H}_2(t;{\varepsilon})\|\notag\\
\leq&C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1+C\Bigg\|{\varepsilon}^2\Bigg[{n^{{\varepsilon}}}\Bigg(\frac{{j^{{\varepsilon}}}}{{n^{{\varepsilon}}}}\Bigg)_{xx}\Bigg]_x\Bigg\|\notag\\
=&C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1\notag\\
&+C\Bigg\|{\varepsilon}^2\Bigg[{j^{{\varepsilon}}}_{xxx}-2\frac{{n^{{\varepsilon}}}_x}{{n^{{\varepsilon}}}}{j^{{\varepsilon}}}_{xx}+4\bigg(\frac{{n^{{\varepsilon}}}_x}{{n^{{\varepsilon}}}}\bigg)^2{j^{{\varepsilon}}}_x-3\frac{{j^{{\varepsilon}}}_x}{{n^{{\varepsilon}}}}{n^{{\varepsilon}}}_{xx}-4\bigg(\frac{{n^{{\varepsilon}}}_x}{{n^{{\varepsilon}}}}\bigg)^3{j^{{\varepsilon}}}+5\frac{{j^{{\varepsilon}}}{n^{{\varepsilon}}}_x}{({n^{{\varepsilon}}})^2}{n^{{\varepsilon}}}_{xx}-\frac{{j^{{\varepsilon}}}}{{n^{{\varepsilon}}}}{n^{{\varepsilon}}}_{xxx}\Bigg]\Bigg\|\notag\\
\leq&C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1+C{\varepsilon}\big\|\big({\varepsilon}{j^{{\varepsilon}}}_{xxx},{j^{{\varepsilon}}}_{xx},{j^{{\varepsilon}}}_x,{n^{{\varepsilon}}}_{xx},{j^{{\varepsilon}}},{n^{{\varepsilon}}}_{xx},{\varepsilon}{n^{{\varepsilon}}}_{xxx}\big)(t)\big\|\notag\\
\leq&C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1+C{\varepsilon}\label{148.3-151.1}\end{aligned}$$ in the derivation of . Next, we continue to estimate $$\begin{aligned}
\Lambda_3\leq&-\int_0^1\bigg(\frac{{j^{0}}}{{n^{0}}}\bigg)^2{\mathcal{N}^{{\varepsilon}}}_{xx}{\mathcal{N}^{{\varepsilon}}}_tdx+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}}\big)(t)\big\|_1^2\notag\\
=&\int_0^1\bigg[\bigg(\frac{{j^{0}}}{{n^{0}}}\bigg)^2{\mathcal{N}^{{\varepsilon}}}_t\bigg]_x{\mathcal{N}^{{\varepsilon}}}_{x}dx+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}}\big)(t)\big\|_1^2\notag\\
=&\frac{d}{dt}\int_0^1\frac{1}{2}\bigg(\frac{{j^{0}}}{{n^{0}}}\bigg)^2\big({\mathcal{N}^{{\varepsilon}}}_x\big)^2dx-\int_0^1\frac{1}{2}\bigg[\bigg(\frac{{j^{0}}}{{n^{0}}}\bigg)^2\bigg]_t\big({\mathcal{N}^{{\varepsilon}}}_x\big)^2dx-\int_0^1\bigg[\bigg(\frac{{j^{0}}}{{n^{0}}}\bigg)^2\bigg]_x{\mathcal{J}^{{\varepsilon}}}_x{\mathcal{N}^{{\varepsilon}}}_xdx\notag\\
&+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}}\big)(t)\big\|_1^2\notag\\
\leq&\frac{d}{dt}\int_0^1\frac{1}{2}\bigg(\frac{{j^{0}}}{{n^{0}}}\bigg)^2\big({\mathcal{N}^{{\varepsilon}}}_x\big)^2dx+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}}\big)(t)\big\|_1^2,\label{153.1}\end{aligned}$$ and $$\begin{aligned}
\Lambda_4\leq&2\int_0^1\frac{{j^{0}}}{{n^{0}}}{\mathcal{J}^{{\varepsilon}}}_{xx}{\mathcal{N}^{{\varepsilon}}}_tdx+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}}\big)(t)\big\|_1^2\notag\\
=&-2\int_0^1\frac{{j^{0}}}{{n^{0}}}{\mathcal{N}^{{\varepsilon}}}_{tx}{\mathcal{N}^{{\varepsilon}}}_tdx+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}}\big)(t)\big\|_1^2\notag\\
=&-\int_0^1\frac{{j^{0}}}{{n^{0}}}\big[\big({\mathcal{N}^{{\varepsilon}}}_t\big)^2\big]_xdx+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}}\big)(t)\big\|_1^2\notag\\
=&\int_0^1\bigg(\frac{{j^{0}}}{{n^{0}}}\bigg)_x\big({\mathcal{J}^{{\varepsilon}}}_x\big)^2dx+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}}\big)(t)\big\|_1^2\notag\\
\leq&C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}}\big)(t)\big\|_1^2,\label{gs154.1}\end{aligned}$$ and $$\begin{aligned}
\Lambda_5=&\int_0^1{\varepsilon}^2{n^{{\varepsilon}}}\Bigg[\frac{\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}}{\sqrt{{n^{{\varepsilon}}}}}\Bigg]_x{\mathcal{N}^{{\varepsilon}}}_{tx}dx\notag\\
=&\int_0^1{\varepsilon}^2\Big[\sqrt{{n^{{\varepsilon}}}}\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xxx}-\big(\sqrt{{n^{{\varepsilon}}}}\big)_{x}\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}\Big]{\mathcal{N}^{{\varepsilon}}}_{tx}dx\notag\\
\leq&C{\varepsilon}^2\Big(\big\|\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xxx}(t)\big\|+\big\|\big(\sqrt{{n^{{\varepsilon}}}}\big)_{xx}(t)\big\|\Big)\big\|{\mathcal{N}^{{\varepsilon}}}_{tx}(t)\big\|\notag\\
\leq&C{\varepsilon}.\label{155.1}\end{aligned}$$ Substituting $\sim$ and $\sim$ into , we have $$\begin{gathered}
\label{156.2}
\frac{d}{dt}\int_0^1\bigg[\frac{1}{2}{S^{0}}\big({\mathcal{N}^{{\varepsilon}}}_x\big)^2+\frac{1}{2}\big({\mathcal{J}^{{\varepsilon}}}_x\big)^2\bigg]dx+\big\|{\mathcal{J}^{{\varepsilon}}}_x(t)\big\|^2\\
\leq\mu\big\|{\varTheta^{{\varepsilon}}}_t(t)\big\|^2+C_\mu\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1^2+C{\varepsilon}.\end{gathered}$$
Multiplying the equation by ${\varTheta^{{\varepsilon}}}_t$ and integrating the resultant equality over the domain $\Omega$, we get $$\label{161.1}
\int_0^1{n^{{\varepsilon}}}\big({\varTheta^{{\varepsilon}}}_t\big)^2dx-\int_0^1\frac{2}{3}{\varTheta^{{\varepsilon}}}_{xx}{\varTheta^{{\varepsilon}}}_tdx+\int_0^1{n^{0}}{\varTheta^{{\varepsilon}}}{\varTheta^{{\varepsilon}}}_tdx=\int_0^1\mathcal{H}_2(t,x;{\varepsilon}){\varTheta^{{\varepsilon}}}_tdx.$$ Similarly, we can use standard computations to deal with each term in as follows $$\label{161.2}
\int_0^1{n^{{\varepsilon}}}\big({\varTheta^{{\varepsilon}}}_t\big)^2dx\geq 2c\big\|{\varTheta^{{\varepsilon}}}_t(t)\big\|^2,$$ and $$\label{161.3}
-\int_0^1\frac{2}{3}{\varTheta^{{\varepsilon}}}_{xx}{\varTheta^{{\varepsilon}}}_tdx=\int_0^1\frac{2}{3}{\varTheta^{{\varepsilon}}}_{x}{\varTheta^{{\varepsilon}}}_{xt}dx=\frac{d}{dt}\int_0^1\frac{1}{3}\big({\varTheta^{{\varepsilon}}}_{x}\big)^2dx,$$ and $$\begin{aligned}
\int_0^1{n^{0}}{\varTheta^{{\varepsilon}}}{\varTheta^{{\varepsilon}}}_tdx=&\frac{d}{dt}\int_0^1\frac{1}{2}{n^{0}}\big({\varTheta^{{\varepsilon}}}\big)^2dx-\int_0^1\frac{1}{2}{n^{0}}_t\big({\varTheta^{{\varepsilon}}}\big)^2dx\notag\\
\geq&\frac{d}{dt}\int_0^1\frac{1}{2}{n^{0}}\big({\varTheta^{{\varepsilon}}}\big)^2dx-C\big\|{\varTheta^{{\varepsilon}}}(t)\big\|^2,\label{161.4}\end{aligned}$$ and $$\begin{aligned}
\int_0^1\mathcal{H}_2(t,x;{\varepsilon}){\varTheta^{{\varepsilon}}}_tdx\leq&c\big\|{\varTheta^{{\varepsilon}}}_t(t)\big\|^2+C\|\mathcal{H}_2(t;{\varepsilon})\|^2\notag\\
\leq&c\big\|{\varTheta^{{\varepsilon}}}_t(t)\big\|^2+C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1^2+C{\varepsilon}^2,\label{161.5}\end{aligned}$$ where we have used the estimate again in the last inequality of . Substituting $\sim$ into , we have $$\label{161.6}
\frac{d}{dt}\int_0^1\bigg[\frac{1}{2}{n^{0}}\big({\varTheta^{{\varepsilon}}}\big)^2+\frac{1}{3}\big({\varTheta^{{\varepsilon}}}_{x}\big)^2\bigg]dx+c\big\|{\varTheta^{{\varepsilon}}}_t(t)\big\|^2\leq C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1^2+C{\varepsilon}^2.$$
Adding , and up, and taking $\mu$ small enough, we obtain $$\label{163.1}
\frac{d}{dt}\mathcal{E}^{\varepsilon}(t)+\underbrace{\big\|{\mathcal{J}^{{\varepsilon}}}(t)\big\|^2+\big\|{\mathcal{J}^{{\varepsilon}}}_x(t)\big\|^2+\frac{c}{2}\big\|{\varTheta^{{\varepsilon}}}_t(t)\big\|^2}_{\geq0}\leq C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1^2+C{\varepsilon},$$ where $$\label{163.2}
\mathcal{E}^{\varepsilon}(t):=\int_0^1\bigg[\frac{1}{2}{S^{0}}\big({\mathcal{N}^{{\varepsilon}}}_x\big)^2+\frac{1}{2}\big({\mathcal{J}^{{\varepsilon}}}\big)^2+\frac{1}{2}\big({\mathcal{J}^{{\varepsilon}}}_x\big)^2+\frac{1}{2}{n^{0}}\big({\varTheta^{{\varepsilon}}}\big)^2+\frac{1}{3}\big({\varTheta^{{\varepsilon}}}_{x}\big)^2\bigg]dx.$$ From the estimate , it is easy to check the following equivalent relation $$\label{163.3}
c\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1^2\leq\mathcal{E}^{\varepsilon}(t)\leq C\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1^2,\quad\forall t\in[0,\infty)$$ by using the Poincaré inequality. Therefore, the inequality implies $$\label{164.1}
\frac{d}{dt}\mathcal{E}^{\varepsilon}(t)\leq 2\gamma_3\mathcal{E}^{\varepsilon}(t)+C{\varepsilon},\quad\forall t\in[0,\infty),$$ where the positive constant $\gamma_3$ is independent of ${\varepsilon}$ and $t$. Applying the Gronwall inequality to , we have $$\label{164.3}
\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1\leq Ce^{\gamma_3 t}{\varepsilon}^{1/2},\quad\forall t\in[0,\infty).$$ Combining with the elliptic estimate $\|{\varPhi^{{\varepsilon}}}(t)\|_3\leq C\|{\mathcal{N}^{{\varepsilon}}}(t)\|_1$, we get the desired estimate .
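We note that the Gronwall step is explicit here: the error variables vanish at $t=0$, so $\mathcal{E}^{\varepsilon}(0)=0$ and the differential inequality integrates to $$\mathcal{E}^{\varepsilon}(t)\leq e^{2\gamma_3 t}\mathcal{E}^{\varepsilon}(0)+C{\varepsilon}\int_0^te^{2\gamma_3(t-s)}ds\leq\frac{C{\varepsilon}}{2\gamma_3}e^{2\gamma_3 t},\quad\forall t\in[0,\infty);$$ combining this with the lower bound in the equivalence relation gives $c\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1^2\leq C{\varepsilon}e^{2\gamma_3 t}$, which is the bound stated above.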
Finally, for fixed ${\varepsilon}\in(0,\delta_6)$, we define a time $$\label{165.1}
T_{\varepsilon}:=-\frac{\ln{\varepsilon}}{4\gamma_3}>0.$$ For $t\leq T_{\varepsilon}$, the estimate yields that $$\label{165.2}
\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1+\big\|{\varPhi^{{\varepsilon}}}(t)\big\|_3\leq Ce^{\gamma_3 T_{\varepsilon}}{\varepsilon}^{1/2}=C{\varepsilon}^{1/4}.$$ For $t\geq T_{\varepsilon}$, using the estimates , and , we obtain $$\begin{aligned}
&\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1+\big\|{\varPhi^{{\varepsilon}}}(t)\big\|_3\notag\\
\leq&\|({n^{{\varepsilon}}}-{\tilde{n}^{{\varepsilon}}},{j^{{\varepsilon}}}-{\tilde{j}^{{\varepsilon}}},{\theta^{{\varepsilon}}}-{\tilde{\theta}^{{\varepsilon}}})(t)\|_1+\|({\phi^{{\varepsilon}}}-{\tilde{\phi}^{{\varepsilon}}})(t)\|_3\notag\\
&+\|({\tilde{n}^{{\varepsilon}}}-{\tilde{n}^{0}},{\tilde{j}^{{\varepsilon}}}-{\tilde{j}^{0}},{\tilde{\theta}^{{\varepsilon}}}-{\tilde{\theta}^{0}})\|_1+\|({\tilde{\phi}^{{\varepsilon}}}-{\tilde{\phi}^{0}})\|_3\notag\\
&+\|({n^{0}}-{\tilde{n}^{0}},{j^{0}}-{\tilde{j}^{0}},{\theta^{0}}-{\tilde{\theta}^{0}})(t)\|_1+\|({\phi^{0}}-{\tilde{\phi}^{0}})(t)\|_3\notag\\
\leq&C\big(e^{-\gamma_2T_{\varepsilon}}+{\varepsilon}+e^{-\gamma_1T_{\varepsilon}}\big)\notag\\
=&C\Big({\varepsilon}^{\frac{\gamma_2}{4\gamma_3}}+{\varepsilon}+{\varepsilon}^{\frac{\gamma_1}{4\gamma_3}}\Big)\notag\\
\leq&C{\varepsilon}^{\gamma_4},\label{165.3}\end{aligned}$$ where $$\gamma_4:=\min\bigg\{\frac{\gamma_1}{4\gamma_3},\frac{\gamma_2}{4\gamma_3},\frac{1}{4}\bigg\}>0.$$ Owing to and , we have $$\label{166.1}
\big\|\big({\mathcal{N}^{{\varepsilon}}},{\mathcal{J}^{{\varepsilon}}},{\varTheta^{{\varepsilon}}}\big)(t)\big\|_1+\big\|{\varPhi^{{\varepsilon}}}(t)\big\|_3\leq C{\varepsilon}^{\gamma_4},\quad\forall t\in[0,\infty).$$ Note that the right-hand side of is independent of $t$; this immediately implies the estimate .
Appendix {#A}
========
In this appendix, we study the unique solvability of the linear IBVP $\sim$.
Firstly, the parabolic equation with the initial condition $\hat{{\theta}}(0,x)={\theta}_0(x)$ and the boundary condition has a unique solution $\hat{{\theta}}\in\mathfrak{Y}_2([0,T])\cap H^1(0,T;H^1(\Omega))$ for any given function $({w},{j})\in\big[\mathfrak{Y}_4([0,T])\cap H^2(0,T;H^1(\Omega))\big]\times\big[\mathfrak{Y}_3([0,T])\cap H^2(0,T;L^2(\Omega))\big]$. This fact can be proved by the Galerkin method (see [@T88; @Z90] for example).
Next, we only need to show the unique solvability of the following linear IBVP for a given function $({w},{j},{\theta},\hat{{\theta}})$, namely,
\[a7.1\] $$\begin{gathered}
2{w}\hat{{w}}_t+\hat{{j}}_x=0,\label{a7.1-1}\\
\hat{{j}}_t+2S[{w}^2,{j},{\theta}]{w}\hat{{w}}_x+\frac{2{j}}{{w}^2}\hat{{j}}_x+{w}^2\hat{{\theta}}_x-{\varepsilon}^2{w}^2\bigg(\frac{\hat{{w}}_{xx}}{{w}}\bigg)_x={w}^2{\phi}_x-{j},\label{a7.1-2}\\
S[{w}^2,{j},{\theta}]:={\theta}-\frac{{j}^2}{{w}^4},\qquad t>0,\quad x\in\Omega:=(0,1),\label{a7.1-3}\end{gathered}$$
with the initial condition $$\label{a7.2}
(\hat{{w}},\hat{{j}})(0,x)=({w}_0,{j}_0)(x),$$ and the boundary conditions
\[a7.3\] $$\begin{gathered}
\hat{{w}}(t,0)={w}_{l},\qquad \hat{{w}}(t,1)={w}_{r},\label{a7.3-1}\\
\hat{{w}}_{xx}(t,0)=\hat{{w}}_{xx}(t,1)=0.\label{a7.3-2}\end{gathered}$$
To this end, performing the procedure ${\partial_{x}^{}}\eqref{a7.1-2}/(-2{w})$ and inserting the transformation $U(t,x):=\hat{{w}}(t,x)-\bar{w}(x)$ into the resultant system, where $\bar{w}(x):=w_l(1-x)+w_rx$, we can equivalently reduce the IBVP $\sim$ to the IBVP of a fourth order wave equation satisfied by $U$, $$\begin{gathered}
U_{tt}+b_0{\partial_{x}^{}}U_t+b_1U_t+b_2U_x+b_3U_{xx}+a{\partial_{x}^{4}}U=f,\label{a8.1}\\
U(0,x)={w}_0(x)-\bar{w}(x),\quad U_t(0,x)=-\frac{{j}_{0x}}{2{w}_0}(x),\label{a8.2}\\
U(t,0)=U(t,1)=U_{xx}(t,0)=U_{xx}(t,1)=0,\label{a8.3}\end{gathered}$$ where $$\begin{aligned}
&b_0:=\frac{2j}{w^2},\qquad b_1:=\frac{1}{w}\bigg[\bigg(\frac{2j}{w}\bigg)_x+w_t\bigg],\qquad b_2:=-\frac{1}{w}\bigg[\bigg({\theta}-\frac{j^2}{w^4}\bigg)w\bigg]_x,\notag\\
&b_3:=-\bigg[\bigg({\theta}-\frac{j^2}{w^4}\bigg)+\frac{{\varepsilon}^2}{2}\frac{w_{xx}}{w}\bigg],\qquad a:=\frac{{\varepsilon}^2}{2},\notag\\
&f:=-\frac{1}{2w}\big(w^2{\phi}_x-j-w^2\hat{{\theta}}_x\big)_x+\frac{1}{w}\bigg[\bigg({\theta}-\frac{j^2}{w^4}\bigg)w\bigg]_x\bar{w}_x.\label{a8.4}\end{aligned}$$ Applying Lemma A.1 of [@NS08] (p. 870) to the linear IBVP $\sim$, we see that this problem has a unique solution $U\in\mathfrak{Y}_4([0,T])$.
We proceed to construct the solution $(\hat{{w}},\hat{{j}})$ to the IBVP $\sim$ from $U$ as follows,
$$\begin{gathered}
\hat{{w}}(t,x):=U(t,x)+\bar{w}(x),\label{a9.0}\\
\hat{j}(t,x):=-\int_0^x2w\hat{w}_t(t,y)dy+\hat{j}(t,0),\label{a9.1}\\
\hat{j}(t,0):=\int_0^t\bigg[-2\bigg({\theta}-\frac{j^2}{w^4}\bigg)w\hat{w}_x+\frac{4j}{w}\hat{w}_t-w^2\hat{{\theta}}_x\notag\\
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+{\varepsilon}^2w^2\bigg(\frac{\hat{w}_{xx}}{w}\bigg)_x+w^2{\phi}_x-j\bigg](\tau,0)d\tau+j_0(0).\label{a9.2}\end{gathered}$$
By the standard argument (see [@NS08; @NS09] for example), we can easily see that the function $(\hat{{w}},\hat{{j}})\in\big[\mathfrak{Y}_4([0,T])\cap H^2(0,T;H^1(\Omega))\big]\times\big[\mathfrak{Y}_3([0,T])\cap H^2(0,T;L^2(\Omega))\big]$ is a desired solution to the linear IBVP $\sim$.
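In particular, the continuity equation is built into the construction: differentiating the definition of $\hat{j}$ with respect to $x$ gives $$\hat{j}_x(t,x)=-2w\hat{w}_t(t,x),\qquad\text{that is,}\qquad 2w\hat{w}_t+\hat{j}_x=0,$$ while the momentum equation is recovered from the equation satisfied by $U$ together with the choice of $\hat{j}(t,0)$, by reversing the reduction performed above; we only sketch this standard verification here and refer to [@NS08; @NS09] for the details.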
Acknowledgements {#acknowledgements .unnumbered}
================
The research of KJZ was supported in part by NSFC (No.11371082) and the Fundamental Research Funds for the Central Universities (No.111065201).
[99]{} M. Ancona and G. Iafrate, Quantum correction to the equation of state of an electron gas in a semiconductor, *Phys. Rev. B*, **39** (1989), 9536–9540.
M. Ancona and H. Tiersten, Macroscopic physics of the silicon inversion layer, *Phys. Rev. B*, **35** (1987), 7959–7965.
P. Antonelli and P. Marcati, On the finite energy weak solutions to a system in quantum fluid dynamics, *Comm. Math. Phys.*, **287** (2009), 657–686.
P. Antonelli and P. Marcati, The quantum hydrodynamics system in two space dimensions, *Arch. Ration. Mech. Anal.*, **203** (2012), 499–527.
D. Ferry and J. R. Zhou, Form of the quantum potential for use in hydrodynamic equations for semiconductor device modeling, *Phys. Rev. B*, **48** (1993), 7944–7950.
Carl L. Gardner, The quantum hydrodynamic model for semiconductor devices, *SIAM J. Appl. Math.*, **54** (1994), 409–427.
Carl L. Gardner, Resonant tunneling in the quantum hydrodynamic model, *VLSI Design*, **3** (1995), 201–210.
D. Gilbarg and Neil S. Trudinger, *Elliptic partial differential equations of second order*, Reprint of the 1998 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001.
H. Hu, M. Mei and K. Zhang, Asymptotic stability and semi-classical limit for bipolar quantum hydrodynamic model, *Commun. Math. Sci.*, **14** (2016), 2331–2371.
F. Huang, H. Li and A. Matsumura, Existence and stability of steady-state of one-dimensional quantum hydrodynamic system for semiconductors, *J. Differential Equations*, **225** (2006), 1–25.
A. Jüngel, D. Matthes and Josipa P. Milišić, Derivation of new quantum hydrodynamic equations using entropy minimization, *SIAM J. Appl. Math.*, **67** (2006), 46–68.
A. Jüngel, *Quasi-Hydrodynamic Semiconductor Equations*, Progress in Nonlinear Differential Equations and their Applications, Birkhäuser Verlag, Besel-Boston-Berlin, 2001.
A. Jüngel, A steady-state quantum Euler-Poisson system for potential flows, *Comm. Math. Phys.*, **194** (1998), 463–479.
A. Jüngel and H. Li, Quantum Euler-Poisson systems: existence of stationary states, *Arch. Math. (Brno)*, **40** (2004), 435–456.
A. Jüngel and H. Li, Quantum Euler-Poisson systems: global existence and exponential decay, *Quart. Appl. Math.*, **62** (2004), 569–600.
S. Kawashima, Y. Nikkuni and S. Nishibata, The initial value problem for hyperbolic–elliptic coupled systems and applications to radiation hydrodynamics, *Analysis of systems of conservation laws, Chapman and Hall/CRC, Monogr. Surv. Pure Appl. Math.*, **99** (1999), 87–127.
S. Kawashima, Y. Nikkuni and S. Nishibata, Large-time behavior of solutions to hyperbolic-elliptic coupled systems, *Arch. Rational Mech. Anal.*, **170** (2003), 297–329.
H. Li and P. Marcati, Existence and asymptotic behavior of multi-dimensional quantum hydrodynamic model for semiconductors, *Comm. Math. Phys.*, **245** (2004), 215–247.
B. Liang and K. Zhang, Steady-state solutions and asymptotic limits on the multi-dimensional semiconductor quantum hydrodynamic model, *Math. Models Methods Appl. Sci.*, **17** (2007), 253–275.
H. Li, G. Zhang and K. Zhang, Algebraic time decay for the bipolar quantum hydrodynamic model, *Math. Models Methods Appl. Sci.*, **18** (2008), 859–881.
X. Li and Y. Yong, Large time behavior of solutions to 1-dimensional bipolar quantum hydrodynamic model for semiconductors, *Acta Math. Sci. Ser. B Engl. Ed.*, **37** (2017), 806–835.
F. Di Michele, M. Mei, B. Rubino and R. Sampalmieri, Thermal equilibrium solution to new model of bipolar hybrid quantum hydrodynamics, *J. Differential Equations*, **263** (2017), 1843–1873.
P. A. Markowich, C. A. Ringhofer and C. Schmeiser, *Semiconductor Equations*, Springer-Verlag, Vienna, 1990.
S. Nishibata and M. Suzuki, Initial boundary value problems for a quantum hydrodynamic model of semiconductors: asymptotic behaviors and classical limits, *J. Differential Equations*, **244** (2008), 836–874.
S. Nishibata and M. Suzuki, Asymptotic stability of a stationary solution to a thermal hydrodynamic model for semiconductors, *Arch. Rational Mech. Anal.*, **192** (2009), 187–215.
R. Pinnau, A note on boundary conditions for quantum hydrodynamic equations, *Appl. Math. Lett.*, **12** (1999), 77–82.
X. Pu and B. Guo, Global existence and semiclassical limit for quantum hydrodynamic equations with viscosity and heat conduction, *Kinet. Relat. Models*, **9** (2016), 165–191.
X. Pu and X. Xu, Asymptotic behaviors of the full quantum hydrodynamic equations, *J. Math. Anal. Appl.*, **454** (2017), 219–245.
R. Temam, *Infinite-dimensional dynamical systems in mechanics and physics*, Applied Mathematical Sciences, 68. Springer-Verlag, New York, 1988.
A. Unterreiter, The thermal equilibrium solution of a generic bipolar quantum hydrodynamic model, *Comm. Math. Phys.*, **188** (1997), 69–88.
E. Wigner, On the quantum correction for thermodynamic equilibrium, *Phys. Rev.*, **40** (1932), 749–759.
E. Zeidler, *Nonlinear functional analysis and its applications. II/A. Linear monotone operators*, Translated from the German by the author and Leo F. Boron. Springer-Verlag, New York, 1990.
K. Zhang and H. Hu, *Introduction to Semiconductor Partial Differential Equations (Chinese Edition)*, Science Press, Beijing, 2016.
G. Zhang, H. Li and K. Zhang, Semiclassical and relaxation limits of bipolar quantum hydrodynamic model for semiconductors, *J. Differential Equations*, **245** (2008), 1433–1453.
**HUSIMI TRANSFORM OF AN OPERATOR PRODUCT**
D M APPLEBY
Department of Physics, Queen Mary and Westfield College, Mile End Rd, London E1 4NS, UK
(E-mail: [email protected])
**Abstract**\
Introduction {#sec: intro}
============
A particularly useful and illuminating way of studying the classical limit is to formulate quantum mechanics in terms of phase space distributions [@Hill; @Lee; @Leon]. The advantage of such a formulation as compared with the standard Hilbert space formulation is that it puts quantum mechanics into a form which is similar to the probabilistic phase space formulation of classical mechanics. At least from a formal, mathematical point of view it thus allows one to regard quantum mechanics as a kind of generalized version of classical mechanics.
There are, of course, many different phase space formulations of quantum mechanics. The one which was discovered first is the formulation based on the Wigner function [@Hill; @Lee; @Leon; @Wigner]. In the case of a system having one degree of freedom with position $\hat{x}$, momentum $\hat{p}$ and density matrix $\hat{\rho}$ the Wigner function is defined by $$W(x,p) = \frac{1}{2 \pi}\int dy \, e^{i p y}
{\bigl \langle x-\tfrac{y}{2} \bigr| \, \hat{\rho}\, \bigl| x+\tfrac{y}{2}
\bigr \rangle}$$ (in units chosen such that $\hbar=1$). The Wigner function continues to find many important applications (in quantum tomography [@Leon], for example). However, if the aim is specifically to represent quantum mechanics in a manner which resembles classical mechanics as closely as possible, then the Wigner function suffers from the serious disadvantage that it is not strictly non-negative (except in special cases [@PosWig])—which makes the analogy with the classical phase space probability distribution somewhat strained.
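A standard illustration is the first excited state of a harmonic oscillator with unit mass and frequency (in the units with $\hbar=1$ used above), for which $$W_1(x,p) = \frac{1}{\pi}\bigl[2(x^2+p^2)-1\bigr]\, e^{-(x^2+p^2)},$$ a function which is negative on the disc $x^2+p^2<\tfrac{1}{2}$ and takes the value $-1/\pi$ at the origin.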
There has accordingly been some interest in the problem of constructing alternative distributions, which are strictly non-negative, and which can be interpreted as probability density functions. There are, in fact, infinitely many such functions [@Davies; @Cart; @Wod1; @Dav1; @Halli; @Wun]. The one which was discovered first, and which is the focus of this paper, is the Husimi, or $Q$-function [@Hill; @Lee; @Leon; @Hus; @Kano; @Glaub; @Miz], which is obtained from the Wigner function by smearing it with a Gaussian convolution: $${Q}(x,p)
= \frac{1}{\pi} \int dx' dp' \,
\exp\left[ -(x-x')^2 - (p-p')^2 \right]
W(x',p')$$ (in units such that $\hbar=1$, and where we assume that $x$, $p$ have been made dimensionless by choosing a suitable length scale $\lambda$, and making the replacements $x\rightarrow x/\lambda$, $p\rightarrow \lambda p$). It should be emphasised that it is not simply that the Husimi function has the mathematical significance of a probability density function. It also has this significance physically. It has been shown that the Husimi function is the probability distribution describing the outcome of a joint measurement of position and momentum in a number of particular cases [@Leon; @Wod1; @Art; @Raymer; @Leon2]. More generally it can be shown [@Ali; @self1] that the Husimi function has a universal significance: namely, it is the probability density function describing the outcome of *any* retrodictively optimal joint measurement process. In Appleby [@self1] it is argued that this means that the Husimi function may be regarded as the canonical quantum mechanical phase space probability distribution, which plays the same role in relation to joint measurements of $x$ and $p$ as does the function $\left| { \left \langle x \vphantom{\psi } \,
\right| \left. \psi \vphantom{x}
\right \rangle }\right|^2$ in relation to single measurements of $x$ only.
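As an elementary example, consider the ground state of a harmonic oscillator with unit mass and frequency, taking the length scale $\lambda$ equal to the oscillator length; then $W(x,p)=\pi^{-1}e^{-(x^2+p^2)}$, and the Gaussian smearing gives $$Q(x,p)
= \frac{1}{\pi^2} \int dx' dp' \,
e^{-(x-x')^2 - (p-p')^2}\, e^{-x'^2-p'^2}
= \frac{1}{2 \pi}\, e^{-\frac{1}{2}(x^2+p^2)},$$ a strictly positive, normalised phase space distribution, in accordance with the general statements above.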
If one wants to construct a systematic procedure for investigating the transition from quantal to classical it is not enough simply to find an analogue for the classical phase space probability distribution. One also needs an analogue of the classical Liouville equation, giving the time evolution of the probability distribution. In the formulation based on the Husimi function this is accomplished by means of Mizrahi’s formula [@Miz], giving the Husimi transform of an operator product (also see Lee [@Lee], Cohen [@Cohen], Prugovečki [@Prugo] and O’Connell and Wigner [@Conn]).
Let ${ A_{\mathrm{W}}}$ denote the Weyl transform of the operator $\hat{A}$, defined by [@Hill; @Lee; @Weyl] $${ A_{\mathrm{W}}}(x,p) = \int dy \, e^{i p y}
{\bigl \langle x-\tfrac{y}{2} \bigr| \, \hat{A}\, \bigl| x+\tfrac{y}{2}
\bigr \rangle}$$ The Husimi transform (or covariant symbol) ${A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}$ is then given by $${A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)
= \frac{1}{\pi} \int dx' dp' \,
\exp\left[ -(x-x')^2 - (p-p')^2 \right]
{ A_{\mathrm{W}}}(x',p')
\label{eq: HusDef}$$ Mizrahi [@Miz] has derived the following formula for the Husimi transform of the product of two operators $\hat{A}$, $\hat{B}$: $${ (\hat{A}\hat{ B})_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}
= {A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}} e^{\overleftarrow{{\partial_{+}}} \overrightarrow{{\partial_{-}}}}
{B_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}
\label{eq: HusProd}$$ where $ {\partial_{\pm}}=2^{-1/2}(\partial_{x} \mp i \partial_{p})$. Using this formula, and the fact that the Husimi function is just the Husimi transform of the density matrix scaled by a factor $1/(2 \pi)$, it is straightforward to derive the following generalization of the Liouville equation: $$\frac{\partial}{\partial t} {Q}= {\left\{{H_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}, {Q}\right\}_{\mathrm{H}}}
\label{eq: HusLiouGen}$$ where ${H_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}$ is the Husimi transform of the Hamiltonian, and ${\left\{{H_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}},
{Q}\right\}_{\mathrm{H}}}$ is the generalized Poisson bracket $${\left\{{H_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}},{Q}\right\}_{\mathrm{H}}}
= \sum_{n=0}^{\infty}
\frac{2}{n!} \operatorname{Im}\left( {\partial_{+}}^{n}{H_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}} \,
{\partial_{-}}^{n}{Q}\right)
\label{eq: HusBracket}$$ The $n=0$ term in the sum on the right-hand side vanishes identically, since $H_{\mathrm{H}}$ and ${Q}$ are both real; the $n=1$ term is just the ordinary Poisson bracket. The remaining terms represent quantum mechanical corrections.
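A simple illustration: for the harmonic oscillator Hamiltonian $\hat{H} = \tfrac12(\hat{x}^2+\hat{p}^2)$ (in the units used here) one finds, from Eq. (\[eq: HusDef\]), $H_{\mathrm{H}}(x,p) = \tfrac12(x^2+p^2)+\tfrac12$. Since ${\partial_{+}}H_{\mathrm{H}} = (x-ip)/\sqrt{2}$ and ${\partial_{+}}^{2}H_{\mathrm{H}} = 0$, the terms with $n\geq 2$ drop out and Eq. (\[eq: HusLiouGen\]) reduces to $$\frac{\partial}{\partial t}{Q} = x\frac{\partial {Q}}{\partial p} - p\frac{\partial {Q}}{\partial x}$$ so that, for this particular Hamiltonian, the Husimi function is rigidly rotated in phase space exactly as a classical probability distribution would be.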
It is not apparent from Mizrahi’s derivation whether these expressions are exact, or whether they are only asymptotic. In Section \[sec: ProdForm\] we will show that there is a large class of operators for which the series in Eq. (\[eq: HusProd\]) \[and consequently the series in Eq. (\[eq: HusBracket\])\] is absolutely convergent. This property is closely connected with the complex analytic properties of the Husimi transform, as discussed by Mehta and Sudarshan [@Mehta] and Appleby [@self2].
The significance of the result proved in Section \[sec: ProdForm\] is best appreciated if one compares Eqs. (\[eq: HusProd\]–\[eq: HusBracket\]) with the corresponding formulae in the Wigner-Weyl formalism [@Hill; @Lee; @Groen; @Moyal; @WeylProd]: $$\begin{aligned}
{ (\hat{A} \hat{B})_{\mathrm{W}}}
& = { A_{\mathrm{W}}} \exp\left[ \tfrac{i}{2}
\left(\overleftarrow{\partial_{x}}
\overrightarrow{\partial_{p}}-
\overleftarrow{\partial_{p}}
\overrightarrow{\partial_{x}}
\right) \right]
{ B_{\mathrm{W}}}
\label{eq: WeylProduct}
\\
\frac{\partial}{\partial t} W
& = {\left\{ { H_{\mathrm{W}}}, W\right\}_{\mathrm{W}}}
\label{eq: LiouGen}
\\
{\left\{ { H_{\mathrm{W}}}, W\right\}_{\mathrm{W}}}
& = \sum_{n=0}^{\infty}
\frac{(-1)^n}{(2n+1)! \, 2^{2 n}}
{ H_{\mathrm{W}}}
\left(\overleftarrow{\partial_{x}}
\overrightarrow{\partial_{p}}-
\overleftarrow{\partial_{p}}
\overrightarrow{\partial_{x}}
\right)^{2 n+1}
W
\label{eq: MoyBrack}\end{aligned}$$ where ${ H_{\mathrm{W}}}$ is the Weyl transform of the Hamiltonian, and where ${\left\{ { H_{\mathrm{W}}}, W\right\}_{\mathrm{W}}}$ denotes the Moyal bracket [@Moyal].
It can be seen that Eqs (\[eq: HusProd\]–\[eq: HusBracket\]) and Eqs. (\[eq: WeylProduct\]–\[eq: MoyBrack\]) are formally very similar. However, this formal resemblance is somewhat deceptive, for it turns out that the two sets of equations have quite different convergence properties.
The formula for the Weyl transform of an operator product, Eq. (\[eq: WeylProduct\]), is exact if either $\hat{A}$ or $\hat{B}$ is a polynomial in $\hat{x}$ and $\hat{p}$ (in which case the series terminates after a finite number of terms). More generally, if ${ A_{\mathrm{W}}}$, ${ B_{\mathrm{W}}}$ are $C^{\infty}$ functions satisfying appropriate conditions on their growth at infinity, then it can be shown that the series is asymptotic [@Omnes]. However, there are many operators of physical interest for which the Weyl transform is only defined in a distributional sense, and for operators such as this the series can be highly singular. Consider, for example, the parity operator $\hat{V}$, whose action in the $x$-representation is given by $${\bigl \langle x \bigr| \, \hat{V}\, \bigl| \psi
\bigr \rangle} = {
\bigl \langle -x \, \bigr| \bigl. \psi \bigr\rangle }$$ We have $${ V_{\mathrm{W}}}(x,p)=\pi \delta(x)\delta(p)$$ Substituting this expression into Eqs. (\[eq: WeylProduct\]) gives $${ (\hat{V}^2)_{\mathrm{W}}}(x,p)
= \pi^2 \delta(x) \delta(p)
\exp\left[ \tfrac{i}{2}
\left(\overleftarrow{\partial_{x}}
\overrightarrow{\partial_{p}}-
\overleftarrow{\partial_{p}}
\overrightarrow{\partial_{x}}
\right) \right]
\delta(x) \delta(p)$$ The left-hand side of this equation is equal to $1$, whereas the expression on the right-hand side is an infinite sum, each individual term of which is ill-defined (being a product of distributions concentrated at the origin).
Mizrahi’s [@Miz] derivation of the formula for the Husimi transform of an operator product, Eq. (\[eq: HusProd\]), depends on the same kind of formal manipulation that is used in Groenewold’s [@Groen] derivation of Eq. (\[eq: WeylProduct\]), and so it might be supposed that the validity of the formula is similarly restricted. However, it turns out that the sum in Eq. (\[eq: HusProd\]) is actually much better behaved. In fact, it will be shown in Section \[sec: ProdForm\] that, subject to certain not very restrictive conditions on the operators $\hat{A}$ and $\hat{B}$, the sum on the right-hand side of Eq. (\[eq: HusProd\]) is not only defined and asymptotic; it is even absolutely convergent for all $x$, $p$. This is essentially because ${A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}$ is typically a much less singular object than ${ A_{\mathrm{W}}}$ \[due to the Gaussian convolution in Eq. (\[eq: HusDef\])\].
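The parity operator itself illustrates the difference. Smearing $V_{\mathrm{W}}$ as in Eq. (\[eq: HusDef\]) gives $V_{\mathrm{H}}(x,p) = e^{-(x^2+p^2)}$, which is real-analytic and bounded, with ${\partial_{+}}^{n}V_{\mathrm{H}} = \bigl(-\sqrt{2}(x-ip)\bigr)^{n}e^{-(x^2+p^2)}$ and ${\partial_{-}}^{n}V_{\mathrm{H}} = \bigl(-\sqrt{2}(x+ip)\bigr)^{n}e^{-(x^2+p^2)}$. The series of Eq. (\[eq: HusProd\]) therefore reads $$(\hat{V}^{2})_{\mathrm{H}}(x,p)
= \sum_{n=0}^{\infty}\frac{1}{n!}\,{\partial_{+}}^{n}V_{\mathrm{H}}\,{\partial_{-}}^{n}V_{\mathrm{H}}
= e^{-2(x^2+p^2)}\sum_{n=0}^{\infty}\frac{\bigl(2(x^2+p^2)\bigr)^{n}}{n!} = 1$$ which converges absolutely for every $x$, $p$ and correctly reproduces the Husimi transform of the identity.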
In Section \[sec: Expect\] we apply the result just described to the problem of expressing expectation values in terms of the Husimi function.
The expectation value of an operator $\hat{A}$ can be obtained from the Wigner function using the formula [@Hill; @Lee] $$\operatorname{Tr}(\hat{\rho} \hat{A} )
= \int dx dp \, { A_{\mathrm{W}}}(x,p) W(x,p)
\label{eq: WigExpect}$$ In certain cases we can also express the expectation value in terms of the Husimi function using [@Hill; @Lee; @Miz] $$\operatorname{Tr}(\hat{\rho} \hat{A})
= \int dx dp \, {A_{\overline{\mathrm{H}}}}(x,p) \, {Q}(x,p)
\label{eq: HusExpect}$$ where ${A_{\overline{\mathrm{H}}}}$ is the anti-Husimi transform (or contravariant symbol) of $\hat{A}$, defined by $${A_{\overline{\mathrm{H}}}}
= e^{-{\partial_{+}}{\partial_{-}}} {A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}
\label{eq: antiDef}$$ \[with $ {\partial_{\pm}}=2^{-1/2}(\partial_{x} \mp i \partial_{p})$, as before\]. Eq. (\[eq: HusExpect\]) is valid (for example) whenever [@Reed] ${A_{\overline{\mathrm{H}}}}$ exists as a tempered distribution and ${Q}$ belongs to the corresponding space of test functions (*i.e.* the $C^{\infty}$ functions of rapid decrease). However, we have the problem that ${A_{\overline{\mathrm{H}}}}$ is often so highly singular that it is not defined as a tempered distribution—which means that the usefulness of Eq. (\[eq: HusExpect\]) is somewhat limited. This is often seen as a serious drawback of the Husimi formalism.
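In favourable cases, of course, everything is perfectly well defined. For the oscillator Hamiltonian $\hat{H} = \tfrac12(\hat{x}^2+\hat{p}^2)$, with $H_{\mathrm{H}} = \tfrac12(x^2+p^2)+\tfrac12$, one has ${\partial_{+}}{\partial_{-}}H_{\mathrm{H}} = 1$ and all higher powers of ${\partial_{+}}{\partial_{-}}$ annihilate it, so the exponential series in Eq. (\[eq: antiDef\]) terminates: $$H_{\overline{\mathrm{H}}}(x,p) = \frac{x^2+p^2}{2} - \frac{1}{2}$$ Inserting this into Eq. (\[eq: HusExpect\]) with the ground-state Husimi function ${Q}(x,p) = (2\pi)^{-1}e^{-(x^2+p^2)/2}$ gives $\operatorname{Tr}(\hat{\rho}\hat{H}) = 1 - \tfrac12 = \tfrac12$, the correct ground-state energy (a simple check, using $\int (x^2+p^2)\,{Q}\,dx\,dp = 2$). The difficulty described above arises for the many operators whose anti-Husimi transform has no such benign form.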
However, it turns out that it is often possible to circumvent this difficulty. Suppose we substitute the series given by Eq. (\[eq: antiDef\]) into the right hand side of Eq. (\[eq: HusExpect\]), and suppose we then reverse the order of sum and integral. This gives $$\operatorname{Tr}(\hat{\rho} \hat{A})
= \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}
\int dx dp \, \left({\partial_{+}}^n {\partial_{-}}^n {A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p) \right)
{Q}(x,p)
\label{eq: HusExpectB}$$ In Section \[sec: Expect\] we show that it often happens that the sum on the right-hand side of this equation is absolutely convergent, even in many of the cases where ${A_{\overline{\mathrm{H}}}}$ fails to exist as a tempered distribution.
Convergence of the Product Formula {#sec: ProdForm}
==================================
We will find it convenient to work in terms of coherent states. Define $$\hat{a} = \frac{1}{\sqrt{2}}(\hat{x}+i \hat{p})
\hspace{0.5 in}
\hat{a}^{\dagger} = \frac{1}{\sqrt{2}}(\hat{x}-i \hat{p})$$ and let $\phi_{n}$ denote the $n^{\rm th}$ (normalised) eigenstate of the number operator $\hat{a}^{\dagger} \hat{a}$: $$\hat{a} \, \phi_{0} = 0 \hspace{0.5 in}
\phi_{n} = \frac{1}{\sqrt{n!}} (\hat{a}^{\dagger})^{ n}
\phi_{0}$$ Let $\hat{D}_{xp}$ be the displacement operator $$\hat{D}_{xp} = e^{i (p \hat{x} - x \hat{p})}$$ and define $$\phi_{n; xp} = \hat{D}_{xp} \phi_{n}
\hspace{0.5 in}
\phi_{xp} = \phi_{0; xp}$$ The $\phi_{xp}$ are the coherent states. Let $\hat{A}$ be any operator (not necessarily bounded) with domain of definition $\mathscr{D}_{\hat{A}}$, and suppose that $\phi_{xp}\in \mathscr{D}_{\hat{A}}$ for all $x$, $p$. It is then straightforward to show that $${A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p) = {\langle \phi_{xp}, \, \hat{A} \phi_{xp}\rangle}$$ (we no longer use the Dirac bra-ket notation, because the existence of ${\langle \psi, \, \hat{A} \chi\rangle}$ does not, in general, imply the existence of ${\langle \hat{A}^{\dagger}\psi, \, \chi\rangle}$).
If $\hat{A}$, $\hat{B}$ are both bounded then the proof of Eq. (\[eq: HusProd\]) is comparatively straightforward. However, we want to make the proof as general as possible. We then have the difficulty that the sum in Eq. (\[eq: HusProd\]) will only be defined if ${A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}$ and ${B_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}$ are both $C^{\infty}$; whereas functions of the form ${\langle \hat{D}_{xp}\psi, \, \hat{A}
\hat{D}_{xp} \psi\rangle}$ are, in general, not even once-differentiable, let alone $C^{\infty}$. We are thus faced with the question: what conditions must we impose on the operator $\hat{A}$ in order to ensure that the function ${A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}$ is $C^{\infty}$? One answer to this question is given by the following theorem.
\[th: a\] Let $\mathscr{D}_{\hat{A}}$, $\mathscr{D}_{\hat{A}^{\dagger}}$ be the domains of definition of $\hat{A}$, $\hat{A}^{\dagger}$ respectively. Suppose that $\phi_{xp} \in \mathscr{D}_{\hat{A}}\cap
\mathscr{D}_{\hat{A}^{\dagger}}$ for all $x$, $p$. Suppose, also, that ${\langle \phi_{x_1 p_1}, \, \hat{A} \phi_{x_2 p_2}\rangle}$, ${\langle \phi_{1; x_1 p_1}, \, \hat{A} \phi_{x_2 p_2}\rangle}$ and ${\langle \phi_{1; x_1 p_1}, \, \hat{A}^{\dagger} \phi_{x_2 p_2}\rangle}$ are continuous functions on $\mathbb{R}^4$. Then ${A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}$ is an analytic function, which uniquely continues to a holomorphic function defined on the whole of $\mathbb{C}^2$.
The continuation is given by $${A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)
= \frac{ {\langle \phi_{x_{-}p_{-}} , \,
\hat{A}\phi_{x_{+}p_{+}}\rangle}
}{{\langle \phi_{x_{-}p_{-}} , \, \phi_{x_{+}p_{+}} \rangle} }
\label{eq: Continue}$$ where $x,p$ are arbitrary complex, and where $x_{\pm}, p_{\pm}$ are the real variables defined by $$\begin{aligned}
x_{\pm} & = \frac{1}{2}(x+x^{*})\pm\frac{i}{2}(p-p^{*})
\label{eq: xPmDef}
\\
p_{\pm} & = \frac{1}{2}(p+p^{*})\mp\frac{i}{2}(x-x^{*})
\label{eq: pPmDef}\end{aligned}$$
This theorem is a strengthened version of results proved by Mehta and Sudarshan [@Mehta] and Appleby [@self2]. The proof is given in Appendix \[app: ProofA\].
It is worth noting that the condition in the statement of this theorem is quite weak. If the three functions listed exist and are continuous then, without making any explicit assumption regarding the differentiability of these functions, it automatically follows that ${A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}$ must be complex analytic.
We also have the following lemma:
\[lem: b\] Suppose that $\hat{A}$ satisfies the conditions of Theorem \[th: a\]. Then $$\begin{aligned}
\frac{ {\langle \phi_{n;x_{-}p_{-}}
, \, \hat{A}\phi_{x_{+}p_{+}}\rangle}
}{ {\langle \phi_{x_{-}p_{-}}
, \, \phi_{x_{+}p_{+}} \rangle}
}
& = \frac{1}{\sqrt{n!}}
\sum_{r=0}^{n} \begin{pmatrix} n \\ r\end{pmatrix}
(z_{+}^{\vphantom{*}}-z_{-}^{*})^{n-r}
\frac{\partial^r}{\partial z_{-}^{r} }
{A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)
\label{eq: dBydZMinus}
\\
\frac{ {\langle \hat{A}^{\dagger}\phi_{x_{-}p_{-}}
, \, \phi_{n;x_{+}p_{+}}\rangle}
}{ {\langle \phi_{x_{-}p_{-}}
, \, \phi_{x_{+}p_{+}} \rangle}
}
& = \frac{1}{\sqrt{n!}}
\sum_{r=0}^{n} \begin{pmatrix} n \\ r\end{pmatrix}
(z_{-}^{\vphantom{*}}-z_{+}^{*})^{n-r}
\frac{\partial^r}{\partial z_{+}^{r} }
{A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)
\label{eq: dBydZPlus}\end{aligned}$$ where $x_{\pm}$, $p_{\pm}$ are the variables defined by Eqs. (\[eq: xPmDef\]) and (\[eq: pPmDef\]), and where $$z_{\pm} = \frac{1}{\sqrt{2}} (x \pm i p)$$
The proof of this lemma is given in Appendix \[app: ProofB\].
If $x$, $p$ are both real (so that $z_{-}=z_{+}^{*}$) Eqs. (\[eq: dBydZMinus\]) and (\[eq: dBydZPlus\]) become $$\begin{aligned}
{\langle \phi_{n;x p} , \, \hat{A}\phi_{x p} \rangle}
& = \frac{1}{\sqrt{n!}} \, {\partial_{-}}^{n} {A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)
\label{eq: dmnA}
\\
{\langle \hat{A}^{\dagger}\phi_{x p} , \, \phi_{n;x p} \rangle}
& = \frac{1}{\sqrt{n!}} \, {\partial_{+}}^{n} {A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)
\label{eq: dplA}\end{aligned}$$ where $${\partial_{\pm}}= \frac{\partial}{\partial z_{\pm}}
= \frac{1}{\sqrt{2}}
\left(\frac{\partial}{\partial x}
\mp i \frac{\partial}{\partial p}
\right)$$
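These identities are easily checked in a simple case. Take $\hat{A} = \hat{a}$, so that $\hat{a}\phi_{xp} = z\phi_{xp}$ with $z = 2^{-1/2}(x+ip)$, and $A_{\mathrm{H}}(x,p) = z$. Then $${\langle \phi_{n;xp}, \, \hat{a}\phi_{xp}\rangle} = z\,\delta_{n0}
\hspace{0.5 in}
{\langle \hat{a}^{\dagger}\phi_{xp}, \, \phi_{n;xp}\rangle} = z\,\delta_{n0} + \delta_{n1}$$ in agreement with Eqs. (\[eq: dmnA\]) and (\[eq: dplA\]), since ${\partial_{-}}z = 0$, while ${\partial_{+}}z = 1$ and ${\partial_{+}}^{n}z = 0$ for $n\geq 2$.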
Using these results the proof of the product formula becomes very straightforward. Let $\hat{A}$, $\hat{B}$ be any pair of operators satisfying the conditions of Theorem \[th: a\]. Suppose, also, that $\phi_{xp}\in \mathscr{D}_{\hat{A}\hat{B}}$ for all $x, p \in \mathbb{R}$. Then, for real $x$, $p$, Eq. (\[eq: HusProd\]) follows, with an absolutely convergent series, from the computation sketched below.
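A sketch of the computation (using only the completeness of the set $\{\phi_{n;xp}\}_{n\geq 0}$ together with Eqs. (\[eq: dmnA\]) and (\[eq: dplA\])): $$\begin{aligned}
(\hat{A}\hat{B})_{\mathrm{H}}(x,p)
& = {\langle \phi_{xp}, \, \hat{A}\hat{B}\phi_{xp}\rangle}
= {\langle \hat{A}^{\dagger}\phi_{xp}, \, \hat{B}\phi_{xp}\rangle}
= \sum_{n=0}^{\infty}
{\langle \hat{A}^{\dagger}\phi_{xp}, \, \phi_{n;xp}\rangle}
{\langle \phi_{n;xp}, \, \hat{B}\phi_{xp}\rangle}
\\
& = \sum_{n=0}^{\infty}\frac{1}{n!}\,
{\partial_{+}}^{n}A_{\mathrm{H}}(x,p)\,
{\partial_{-}}^{n}B_{\mathrm{H}}(x,p)\end{aligned}$$ which is the series of Eq. (\[eq: HusProd\]). Both the expansion in the third step and the absolute convergence of the final series follow from the Cauchy-Schwarz bound $$\sum_{n=0}^{\infty}
\bigl|{\langle \hat{A}^{\dagger}\phi_{xp}, \, \phi_{n;xp}\rangle}\bigr|\,
\bigl|{\langle \phi_{n;xp}, \, \hat{B}\phi_{xp}\rangle}\bigr|
\leq \|\hat{A}^{\dagger}\phi_{xp}\|\;\|\hat{B}\phi_{xp}\| < \infty$$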
Expectation Values {#sec: Expect}
==================
We now discuss the implications that the result just proved has for the convergence of Eq. (\[eq: HusExpectB\]), giving the expectation value of $\hat{A}$ in terms of the Husimi function.
Of course, one does not expect the right hand side of Eq. (\[eq: HusExpectB\]) to converge for arbitrary $\hat{A}$ and $\hat{\rho}$—since, apart from anything else, an unbounded operator does not have a well-defined expectation value for every state $\hat{\rho}$. We therefore need to place some kind of restriction on the class of operators $\hat{A}$ and density matrices $\hat{\rho}$ considered. The result we prove is probably not the most general possible. However, it will serve to illustrate the point, that the sum on the right hand side of Eq. (\[eq: HusExpectB\]) is often absolutely convergent, even in many of the cases where the anti-Husimi transform fails to exist as a tempered distribution.
We accordingly confine ourselves to the case of density matrices for which the Husimi function ${Q}\in \mathscr{I}(\mathbb{R}^2)$, where $\mathscr{I}(\mathbb{R}^2)$ is the space of $C^{\infty}$ functions which are rapidly decreasing at infinity [@Reed] (*i.e.* the space of test functions for the space of tempered distributions). In other words, we assume that $$\sup_{(x,p)\in\mathbb{R}^2}
\left| (1+x^2+p^2)^l \partial_{x}^{m} \partial_{p}^{n} {Q}(x,p)
\right|
< \infty$$ for every triplet of non-negative integers $l$, $m$, $n$.
We assume that $\hat{A}$ has the properties
1. $\phi_{xp} \in \mathscr{D}_{\hat{A}}
\cap \mathscr{D}_{\hat{A}^{\dagger}}$ for all $x$,$p\in\mathbb{R}$.
2. There exist positive constants $K_{\pm}$ and non-negative integers $N_{\pm}$ such that $$\begin{aligned}
\| \hat{A}\phi_{xp}\| & \le K_{-} (1+x^2+p^2)^{N_{-}}
\label{eq: Abound1}
\\
\| \hat{A}^{\dagger}\phi_{xp}\|
& \le K_{+} (1+x^2+p^2)^{N_{+}}
\label{eq: Abound2}
\end{aligned}$$ for all $x$, $p \in \mathbb{R}$.
We will say that an operator satisfying these two conditions is *polynomially bounded*. The following lemma gives two properties of such operators which will be needed in the sequel.
\[lem: c\] Suppose that $\hat{A}$ is polynomially bounded. Then
1. $\hat{A}$ satisfies the conditions of Theorem \[th: a\]. In particular, ${A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}$ is analytic.
2. For every pair of non-negative integers $m$, $n$ there exists a positive constant $K_{mn}$ and a non-negative integer $N_{mn}$ such that $$\left|\partial_{x}^{m} \partial_{p}^{n}{A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)\right|
\le K_{mn} (1+x^2+p^2)^{N_{mn}}
\label{eq: OMbound}$$ for all $x$, $p\in\mathbb{R}$.
The proof is given in Appendix \[app: ProofC\].
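For orientation, the most familiar unbounded operators are polynomially bounded; for instance $$\|\hat{x}\,\phi_{xp}\| = \sqrt{x^{2}+\tfrac{1}{2}} \leq 1+x^{2}+p^{2}
\hspace{0.5 in}
\|\hat{a}^{\dagger}\hat{a}\,\phi_{xp}\| = \sqrt{\tfrac{(x^{2}+p^{2})^{2}}{4}+\tfrac{x^{2}+p^{2}}{2}} \leq 1+x^{2}+p^{2}$$ and both operators are self-adjoint, so the same bounds serve for the adjoints. (These two elementary estimates are recorded only to indicate how weak the requirement is.)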
We are now ready to prove the main result of this section.
Suppose that $\hat{A}$ is polynomially bounded, and suppose that the density matrix $\hat{\rho}$ is such that the corresponding Husimi function is rapidly decreasing at infinity. Suppose, also, that $\hat{A}\hat{\rho}$ is of trace-class. Then $$\operatorname{Tr}(\hat{A}\hat{\rho})
= \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}
\int dx dp \, \left({\partial_{+}}^n {\partial_{-}}^n {A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p) \right)
{Q}(x,p)
\label{eq: HusExpectC}$$ where the sum on the right hand side is absolutely convergent.
We have $$\begin{aligned}
\operatorname{Tr}(\hat{A}\hat{\rho})
& = \frac{1}{2 \pi} \int dx dp \,
{\langle \phi_{xp}, \, \hat{A} \hat{\rho} \phi_{xp}\rangle}
\\
& = \frac{1}{2 \pi} \int dx dp \,
\left( \sum_{n=0}^{\infty}
{\langle \hat{A}^{\dagger} \phi_{xp}, \,
\phi_{n; xp}\rangle}
{\langle \phi_{n; xp}, \, \hat{\rho} \phi_{xp}\rangle}
\right)\end{aligned}$$ We now use Lebesgue’s dominated convergence theorem [@Reed] to show that we may reverse the order of sum and integral. In fact, it follows from the Schwartz inequality that $$\begin{aligned}
& \left| \sum_{n=0}^{m}
{\langle \hat{A}^{\dagger} \phi_{xp}, \,
\phi_{n; xp}\rangle}
{\langle \phi_{n; xp}, \, \hat{\rho} \phi_{xp}\rangle}
\right|
\\
& \hspace{1.0 in}
\le
\left( \biggl(\sum_{n=0}^{m}
\bigl|{\langle \hat{A}^{\dagger} \phi_{xp}, \,
\phi_{n; xp}\rangle}
\bigr|^{2}
\biggr)\biggl(
\sum_{n=0}^{m}
\bigl|{\langle \phi_{n; xp}, \, \hat{\rho} \phi_{xp}\rangle}
\bigr|^{2}
\biggr)
\right)^{\frac{1}{2}}
\\
& \hspace{1.0 in}
\le
\|\hat{A}^{\dagger} \phi_{xp}\| \;
\|\hat{\rho}\phi_{xp}\|\end{aligned}$$ We have $$\| \hat{\rho}\phi_{xp} \|
= \bigl( {\langle \phi_{xp}
, \, \hat{\rho}^2 \phi_{xp}\rangle}
\bigr)^{\frac{1}{2}}
\le \bigl( {\langle \phi_{xp}
, \, \hat{\rho} \phi_{xp}\rangle}
\bigr)^{\frac{1}{2}}
= \sqrt{2 \pi Q(x,p)}$$ which, together with Inequality (\[eq: Abound2\]), implies $$\|\hat{A}^{\dagger} \phi_{xp}\| \;
\|\hat{\rho}\phi_{xp}\|
\le \sqrt{2\pi} K_{+}
(1+x^2+p^2)^{N_{+}} \sqrt{Q(x,p)}$$ By assumption, $Q(x,p)\in\mathscr{I}(\mathbb{R}^2)$. It follows that $\|\hat{A}^{\dagger} \phi_{xp}\| \;
\|\hat{\rho}\phi_{xp}\|$ is integrable. We may therefore use Lebesgue’s dominated convergence theorem [@Reed] to deduce $$\operatorname{Tr}(\hat{A}\hat{\rho})
=
\frac{1}{2 \pi}\sum_{n=0}^{\infty}
\int dx dp \,
{\langle \hat{A}^{\dagger} \phi_{xp}, \,
\phi_{n; xp}\rangle}
{\langle \phi_{n; xp}, \, \hat{\rho} \phi_{xp}\rangle}
\label{eq: traceForm1}$$ where the sum is absolutely convergent, since $$\sum_{n=0}^{\infty}
\left|
\int dx dp \,
{\langle \hat{A}^{\dagger} \phi_{xp}, \,
\phi_{n; xp}\rangle}
{\langle \phi_{n; xp}, \, \hat{\rho} \phi_{xp}\rangle}
\right|
\le
\int dx dp \,
\|\hat{A}^{\dagger} \phi_{xp}\| \;
\|\hat{\rho}\phi_{xp}\|
< \infty$$ We know from Lemma \[lem: c\] that $\hat{A}$ satisfies the conditions of Theorem \[th: a\]. We may therefore use the results proved in the last section to rewrite Eq. (\[eq: traceForm1\]) in the form $$\operatorname{Tr}(\hat{A}\hat{\rho})
=
\sum_{n=0}^{\infty}
\frac{1}{n!}
\int dx dp \,
{\partial_{+}}^{n} {A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)
{\partial_{-}}^{n} {Q}(x,p)$$ Finally, it follows from Inequality (\[eq: OMbound\]), together with the fact that $Q\in \mathscr{I}(\mathbb{R}^2)$, that we may partially integrate term-by-term to obtain $$\operatorname{Tr}(\hat{A}\hat{\rho})
=
\sum_{n=0}^{\infty}
\frac{(-1)^{n}}{n!}
\int dx dp \,
\bigl(
{\partial_{+}}^{n} {\partial_{-}}^{n} {A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)
\bigr)
{Q}(x,p)$$
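As a simple consistency check of Eq. (\[eq: HusExpectC\]), take $\hat{A} = \tfrac12(\hat{x}^{2}+\hat{p}^{2})$ and let $\hat{\rho}$ be the projector onto the $N^{\rm th}$ number state, for which ${Q}(x,p) = (2\pi)^{-1}e^{-(x^{2}+p^{2})/2}\bigl(\tfrac{x^{2}+p^{2}}{2}\bigr)^{N}/N!$. Here $A_{\mathrm{H}} = \tfrac12(x^{2}+p^{2})+\tfrac12$ and ${\partial_{+}}{\partial_{-}}A_{\mathrm{H}} = 1$, with all higher terms vanishing, so the series contains just two non-zero terms: $$\operatorname{Tr}(\hat{A}\hat{\rho})
= \int dx\, dp\, A_{\mathrm{H}}\,{Q} - \int dx\, dp\, {Q}
= \Bigl[(N+1)+\tfrac12\Bigr] - 1 = N+\tfrac12$$ which is the exact eigenvalue.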
The right-hand side of Eq. (\[eq: WigExpect\]) (expressing $\langle\hat{A}\rangle$ in terms of the Wigner function) is defined whenever $W\in \mathscr{I}(\mathbb{R}^2)$ and ${ A_{\mathrm{W}}}$ exists as a tempered distribution. On the other hand, although it is true that ${Q}\in \mathscr{I}(\mathbb{R}^2)$ whenever $W\in \mathscr{I}(\mathbb{R}^2)$ (see Theorem IX.3 of ref. [@Reed]), the fact that ${ A_{\mathrm{W}}}$ exists as a tempered distribution is not evidently sufficient to ensure that $\hat{A}$ is polynomially bounded. So we have not shown that Eq. (\[eq: HusExpectC\]) has the same range of validity as Eq. (\[eq: WigExpect\]). However, it can be shown that $\hat{A}$ is polynomially bounded if $\phi_{xp}\in \mathscr{D}_{\hat{A}}\cap \mathscr{D}_{\hat{A}^{\dagger}}$, and if ${ (\hat{A}^{\dagger} \hat{A})_{\mathrm{W}}}$ and ${ (\hat{A}\hat{A}^{\dagger} )_{\mathrm{W}}}$ exist as tempered distributions (see Theorem IX.4 of ref. [@Reed]). In applications, operators satisfying these conditions are met much more commonly than operators for which ${A_{\overline{\mathrm{H}}}}$ exists as a tempered distribution. For instance, every bounded operator is polynomially bounded, whereas there are many bounded operators of physical interest for which ${A_{\overline{\mathrm{H}}}}$ fails to exist as a tempered distribution. The above result consequently represents a significant improvement on the results that were previously known.
Of course, just from the fact that Eq. (\[eq: HusExpectC\]) is convergent, it does not necessarily follow that the convergence is sufficiently rapid to make the formula useful in practical, numerical work. This question requires further investigation.
Conclusion {#sec: conc}
==========
As has been stressed by Mizrahi [@Miz], Lalović *et al* [@Dav1], Davidović and Lalović [@Dav2] and others, the Husimi formalism provides an especially perspicuous method for studying the relationship between quantum and classical mechanics. It establishes a one-to-one correspondence between the basic equations of the two theories, so that one can start with a classical formula, and then turn it into the corresponding quantum formula by adding successive correction terms. Moreover, the fact that ${Q}(x,p)$ describes the outcome of a retrodictively optimal joint measurement of $x$ and $p$ [@Ali; @self1], means that one could reasonably argue that the Husimi function is the most natural choice for a quantum mechanical analogue of the classical probability distribution.
In this paper we have investigated the convergence properties of two of the key formulae in the Husimi formalism. We have shown that the formula giving the Husimi transform of an operator product has much better convergence properties than the corresponding formula in the Wigner function formalism. In particular, the Husimi formalism leads to a convergent generalization of the Liouville equation for a very large class of Hamiltonians. We have also shown that the convergence properties of the formula expressing the expectation value $\langle\hat{A}\rangle$ in terms of the Husimi function, although seemingly not as good as those of the corresponding formula in the Wigner function formalism, are significantly better than the often highly singular character of ${A_{\overline{\mathrm{H}}}}$ would suggest.
These results lend additional support to the suggestion that, in so far as the aim is specifically to formulate quantum mechanics as a kind of generalized version of classical mechanics, then the formalism based on the Husimi function has some significant advantages.
Proof of Theorem \[th: a\] {#app: ProofA}
==========================
For arbitrary complex $x$, $p$ define $$F(x,p)
= \frac{{\langle \phi_{x_{-}p_{-}}, \, \hat{A}\,\phi_{x_{+}p_{+}}\rangle}
}{{\langle \phi_{x_{-}p_{-}}, \, \phi_{x_{+}p_{+}}\rangle}
}$$ where $x_{\pm}$, $p_{\pm}$ are the (real) variables defined by Eqs. (\[eq: xPmDef\]) and (\[eq: pPmDef\]).
It is easily seen that, if $x$, $p$ are both real, then $$F(x,p) = {\langle \phi_{x p}, \, \hat{A}\, \phi_{x p}\rangle}
= {A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)$$ The problem thus reduces to that of showing that, if $\hat{A}$ has the properties stipulated, then $F$ is holomorphic. We will do this by showing that $F$ satisfies the Cauchy-Riemann equations with respect to the complex variables $$z_{\pm} = \frac{1}{\sqrt{2}} (x_{\pm}\pm i p_{\pm})
= \frac{1}{\sqrt{2}} (x \pm i p)
\label{eq: zPmDef}$$
In fact, it is straightforward to show that $\phi_{xp}$, regarded as a vector-valued function of two real variables, is differentiable in the norm topology; the derivatives being given by $$\begin{aligned}
\frac{\partial}{\partial x} \phi_{xp}
& = \frac{1}{\sqrt{2}} \phi_{1; \, xp} -\frac{i}{2} p\phi_{xp}
\\
\frac{\partial}{\partial p} \phi_{xp}
& = \frac{i}{\sqrt{2}} \phi_{1; \, xp} +\frac{i}{2} x\phi_{xp}\end{aligned}$$ Also, $${\langle \phi_{x_{-}p_{-}}, \, \phi_{x_{+}p_{+}}\rangle}
= \exp\bigl[-\frac{1}{4}(x_{+}-x_{-})^2
-\frac{1}{4}(p_{+}-p_{-})^2
+\frac{i}{2}(p_{+}x_{-}-p_{-}x_{+})\bigr]$$ Consequently, $F$ is differentiable with respect to the variables $x_{-}$, $p_{-}$. Moreover $$\begin{aligned}
\frac{\partial}{\partial x_{-}} F(x,p)
& = \frac{1}{\sqrt{2}}
\frac{{\langle \phi_{1;\,x_{-}p_{-}}
, \, \hat{A}\phi_{x_{+}p_{+}}\rangle}
}{{\langle \phi_{x_{-}p_{-}}, \, \phi_{x_{+}p_{+}}\rangle}}
+\frac{1}{\sqrt{2}} (z_{-}^{*}-z_{+})F(x,p)
\notag
\\
& =
i \frac{\partial}{\partial p_{-}} F(x,p)
\label{eq: CauRieA}\end{aligned}$$ from which it follows that $F$ satisfies the Cauchy-Riemann equations with respect to the complex variable $z_{-}=(x_{-}-ip_{-})/\sqrt{2}$.
We can alternatively write $$F(x,p)
=
\frac{{\langle \hat{A}^{\dagger}\, \phi_{x_{-}p_{-}}
, \, \phi_{x_{+}p_{+}}\rangle}
}{{\langle \phi_{x_{-}p_{-}}, \, \phi_{x_{+}p_{+}}\rangle}
}$$ Consequently, $F$ is also differentiable with respect to the real variables $x_{+}$, $p_{+}$. Moreover $$\begin{aligned}
\frac{\partial}{\partial x_{+}} F(x,p)
& = \frac{1}{\sqrt{2}}
\frac{{\langle \hat{A}^{\dagger}\, \phi_{x_{-}p_{-}}
, \, \phi_{1;\, x_{+}p_{+}}\rangle}
}{{\langle \phi_{x_{-}p_{-}}, \, \phi_{x_{+}p_{+}}\rangle}}
+\frac{1}{\sqrt{2}} (z_{+}^{*}-z_{-})F(x,p)
\notag
\\
& =
-i \frac{\partial}{\partial p_{+}} F(x,p)
\label{eq: CauRieB}\end{aligned}$$ from which it follows that $F$ satisfies the Cauchy-Riemann equations with respect to the complex variable $z_{+}=(x_{+}+ip_{+})/\sqrt{2}$.
If $\hat{A}$ has the properties specified in the statement of the theorem, then we see from Eqs. (\[eq: CauRieA\]) and (\[eq: CauRieB\]) that the partial derivatives $\partial F/\partial x_{\pm}$, $\partial F/\partial p_{\pm}$ are continuous functions on $\mathbb{R}^4$. It follows [@Grau] that $F$ is a holomorphic function of the complex variables $z_{\pm}$. Referring to Eq. (\[eq: zPmDef\]) it can be seen that the variables $z_{\pm}$ are linear combinations of $x$, $p$. We conclude that $F$ is a holomorphic function of $x$, $p$.
Proof of Lemma \[lem: b\] {#app: ProofB}
=========================
It is straightforward to show that $\phi_{n; x_{-} p_{-}}$, regarded as a vector valued function of two real variables, is differentiable in the norm topology. Moreover $$\frac{1}{\sqrt{2}}
\left( \frac{\partial}{\partial x_{-}}
-i \frac{\partial}{\partial p_{-}}
\right) \phi_{n ; x_{-} p_{-}}
= \sqrt{n+1} \phi_{(n+1); x_{-} p_{-}}
+\frac{1}{2} z_{-}\, \phi_{n; x_{-} p_{-}}$$ Hence $$\begin{gathered}
\frac{1}{\sqrt{2}}
\left( \frac{\partial}{\partial x_{-}}
+i \frac{\partial}{\partial p_{-}}
\right)
\frac{ {\langle \phi_{n;x_{-}p_{-}}
, \, \hat{A}\phi_{x_{+}p_{+}}\rangle}
}{ {\langle \phi_{x_{-}p_{-}}
, \, \phi_{x_{+}p_{+}} \rangle}
}
\\
= \sqrt{n+1}
\frac{ {\langle \phi_{(n+1);x_{-}p_{-}}
, \, \hat{A}\phi_{x_{+}p_{+}}\rangle}
}{ {\langle \phi_{x_{-}p_{-}}
, \, \phi_{x_{+}p_{+}} \rangle}
}
+ \left(z_{-}^{*}-z_{+}
\right)
\frac{ {\langle \phi_{n;x_{-}p_{-}}
, \, \hat{A}\phi_{x_{+}p_{+}}\rangle}
}{ {\langle \phi_{x_{-}p_{-}}
, \, \phi_{x_{+}p_{+}} \rangle}
}\end{gathered}$$ Iterating this result, and using $$\frac{1}{2^{r/2}}
\left(\frac{\partial}{\partial x_{-}}+
i\frac{\partial}{\partial p_{-}}
\right)^{r}
{A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)
= \frac{\partial^r}{\partial z_{-}^{r}}
{A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)$$ we obtain Eq. (\[eq: dBydZMinus\]).
The proof of Eq. (\[eq: dBydZPlus\]) is similar.
Proof of Lemma \[lem: c\] {#app: ProofC}
=========================
Proof of (1) {#proof-of-1 .unnumbered}
------------
We need to show that, if $\hat{A}$ is polynomial bounded, then the functions ${\langle \phi_{x_1 p_1}, \, \hat{A} \phi_{x_2 p_2}\rangle}$, ${\langle \phi_{1; x_1 p_1}, \, \hat{A} \phi_{x_2 p_2}\rangle}$ and ${\langle \phi_{1; x_1 p_1}, \, \hat{A}^{\dagger} \phi_{x_2 p_2}\rangle}$ are continuous.
Consider the function ${\langle \phi_{1; x_1 p_1}, \, \hat{A}
\phi_{x_2 p_2}\rangle}$. We have $$\begin{gathered}
\left|{\bigl< \phi_{1; {x'\vphantom{p}}_{\!\! 1} {p'}_{\!\! 1}}
, \, \hat{A} \phi_{{x'\vphantom{p}}_{\!\! 2}
{p'}_{\!\! 2} } \bigr>}
-{\bigl< \phi_{1; x_1 p_1}, \, \hat{A} \phi_{x_2 p_2} \bigr>}
\right|
\\
\le
\left|{\bigl< (\phi_{1; {x'\vphantom{p}}_{\!\! 1}
{p'}_{\!\! 1}}
-\phi_{1; x_1 p_1})
, \, \hat{A} \phi_{{x'\vphantom{p}}_{\!\! 2}
{p'}_{\!\! 2}} \bigr>}
\right| +
\left|{\bigl< \phi_{1; x_1 p_1}
, \, \hat{A}( \phi_{{x'\vphantom{p}}_{\!\! 2}
{p'}_{\!\! 2}}- \phi_{x_2 p_2}) \bigr>}
\right|\end{gathered}$$ In view of Eq. (\[eq: Abound1\]) we have $$\left|{\bigl< (\phi_{1; {x'\vphantom{p}}_{\!\! 1}
{p'}_{\!\! 1}}-\phi_{1;x_1 p_1})
, \, \hat{A} \phi_{{x'\vphantom{p}}_{\!\! 2}
{p'}_{\!\! 2}} \bigr>}
\right|
\le
K_{-} (1+{x'\vphantom{p}}_{\!\! 2}^{2}+
{p'}_{\!\! 2}^{2})^{N_{-}}
\|\phi_{1; {x'\vphantom{p}}_{\!\! 1}
{p'}_{\!\! 1}}-\phi_{1; x_1 p_1} \|$$ Also, using the completeness relation for coherent states, together with Eq. (\[eq: Abound2\]), we find $$\begin{aligned}
& \left|{\bigl< \phi_{1; x_1 p_1}
, \, \hat{A}( \phi_{{x' \vphantom{p}}_{\!\! 2}
{p'}_{\!\! 2}}-
\phi_{x_2 p_2}) \bigr>}
\right|
\\
& \hspace{0.8 in}
=
\left|\frac{1}{2 \pi} \int dx_{3} dp_{3} \,
{\bigl< \phi_{1; x_1 p_1}
, \, \phi_{x_3 p_3} \bigr>}
{\bigl< \hat{A}^{\dagger}\phi_{x_3 p_3}
, \, ( \phi_{{x'\vphantom{p}}_{\!\! 2}
{p'}_{\!\! 2}}-
\phi_{x_2 p_2}) \bigr>}
\right|
\\
& \hspace{0.8 in}
\le f(x_1,p_1) \| \phi_{{x'\vphantom{p}}_{\!\! 2}
{p'}_{\!\! 2}}-
\phi_{x_2 p_2} \|\end{aligned}$$ where $f$ is the polynomial $$\begin{aligned}
f(x_1,p_1)
& = \frac{K_{+}}{2 \pi}
\int dx_3 dp_3 \,
\bigl|{\bigl< \phi_{1; x_1 p_1}
, \, \phi_{x_3 p_3} \bigr>}
\bigr|
(1+x_{3}^{2}+p_{3}^{2})^{N_{+}}
\\
& = \frac{K_{+}}{2^{\frac{3}{2}} \pi}
\int d{x'\vphantom{p}}_{\! \! 3}
d{p'}_{\! \! 3} \,
\sqrt{1+{x'\vphantom{p}}_{\! \! 3}^{2}
+ {p'}_{\! \! 3}^{2}}
\left(1+({x'\vphantom{p}}_{\! \! 3}+
x_{1})^2+({p'}_{\! \! 3}+p_{1})^2
\right)^{N_{+}}
\\
& \hspace{2.5 in} \times
\exp\left[-\frac{1}{4}
\bigl({x'\vphantom{p}}_{\! \! 3}^{2}
+ {p'}_{\! \!3}^{2}\bigr)
\right]\end{aligned}$$ Putting these results together we find $$\begin{gathered}
\left|{\bigl< \phi_{1; {x'\vphantom{p}}_{\!\! 1} {p'}_{\!\! 1}}
, \, \hat{A} \phi_{{x'\vphantom{p}}_{\!\! 2}
{p'}_{\!\! 2} } \bigr>}
-{\bigl< \phi_{1; x_1 p_1}, \, \hat{A} \phi_{x_2 p_2} \bigr>}
\right|
\\
\le
K_{-} (1+{x'\vphantom{p}}_{\!\! 2}^{2}+
{p'}_{\!\! 2}^{2})^{N_{-}}
\|\phi_{1; {x'\vphantom{p}}_{\!\! 1}
{p'}_{\!\! 1}}-\phi_{1; x_1 p_1} \|
+
f(x_1,p_1) \| \phi_{{x'\vphantom{p}}_{\!\! 2}
{p'}_{\!\! 2}}-
\phi_{x_2 p_2} \|\end{gathered}$$ $\phi_{x p}$ and $\phi_{1; x p}$, regarded as vector-valued functions on $\mathbb{R}^2$, are continuous in the norm topology. Consequently $${\bigl< \phi_{1; {x'\vphantom{p}}_{\!\! 1} {p'}_{\!\! 1} }
, \,
\hat{A} \phi_{{x'\vphantom{p}}_{\!\! 2}
{p'}_{\!\! 2}}
\bigr>}
\rightarrow
{\bigl< \phi_{1; x_1 p_1}, \, \hat{A} \phi_{x_2 p_2} \bigr>}$$ as $({x'\vphantom{p}}_{\!\! 1}, {p'}_{\!\! 1},
{x'\vphantom{p}}_{\!\! 2}, {p'}_{\!\! 2})
\rightarrow (x^{\vphantom{j}}_1 , p^{\vphantom{j}}_1,
x^{\vphantom{j}}_2 , p^{\vphantom{j}}_2)$. It follows that ${\bigl< \phi_{1; x_1 p_1}, \, \hat{A} \phi_{x_2 p_2} \bigr>}$ is continuous. Continuity of the functions ${\bigl< \phi_{x_1 p_1}, \, \hat{A} \phi_{x_2 p_2} \bigr>}$ and ${\bigl< \phi_{1; x_1 p_1}, \, \hat{A}^{\dagger}
\phi_{x_2 p_2} \bigr>}$ is proved in the same way.
Proof of (2). {#proof-of-2. .unnumbered}
-------------
The completeness relation for coherent states implies $${A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p) = {\langle \phi_{xp}, \, \hat{A}\phi_{xp}\rangle}
= \frac{1}{2 \pi} \int dx' dp' \,
{\langle \phi_{xp}, \, \phi_{x' p'}\rangle}
{\langle \hat{A}^{\dagger}\phi_{x' p'}, \, \phi_{xp}\rangle}$$ Using $$\begin{aligned}
{\partial_{+}}\phi_{n; xp}
& = \sqrt{n+1}\phi_{(n+1);xp} + \frac{1}{2}z^{*}\phi_{n;xp}
\\
{\partial_{-}}\phi_{n; xp}
& =
\begin{cases}
-\frac{1}{2} z \phi_{xp}
\hspace{0.5 in} & \text{if}\; n=0 \\
-\sqrt{n} \phi_{(n-1); xp}
-\frac{1}{2} z \phi_{n; xp}
\hspace{0.5 in} & \text{if}\; n>0
\end{cases}\end{aligned}$$ \[where ${\partial_{\pm}}= 2^{-1/2}(\partial_{x}\mp i \partial_{p})$ and $z=2^{-1/2}(x+i p)$\], and differentiating under the integral sign, it is not difficult to show that $$\partial_{x}^{m} \partial_{p}^{n} {A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)
=
\sum_{r,s=0}^{n+m}
c_{r s}
\int dx' dp' \,
{\langle \phi_{r; xp}, \, \phi_{x' p'}\rangle}
{\langle \hat{A}^{\dagger}\phi_{x' p'}, \, \phi_{s; xp}\rangle}$$ for suitable constants $c_{r s}$. We have $$\bigl| {\langle \phi_{r; xp}, \, \phi_{x' p'}\rangle}
\bigr|
= \frac{1}{2^{\frac{r}{2}}\sqrt{r!}}
\left( (x'-x)^2 +(p'-p)^2
\right)^{\frac{r}{2}}
\exp\left[ -\frac{1}{4}(x'-x)^2 -\frac{1}{4}(p'-p)^2
\right]$$ In view of Inequality (\[eq: Abound2\]) it follows that $$\begin{gathered}
\left| \int dx' dp' \,
{\langle \phi_{r; xp}, \, \phi_{x' p'}\rangle}
{\langle \hat{A}^{\dagger}\phi_{x' p'}, \, \phi_{s; xp}\rangle}
\right|
\\ \le
\frac{K_{+}}{2^{\frac{r}{2}}\sqrt{r!}}
\int dx'' dp'' \,
\bigl( 1 + (x''+x)^2 +(p''+p)^2
\bigr)^{N_{+}}
\\
\times
\bigl({x''}^{2} +{p''}^{2}\bigr)^{\frac{r}{2}}
\exp \left[ -\frac{1}{4} \bigl({x''}^{2}+{p''}^{2}\bigr)
\right]\end{gathered}$$ It can be seen that the expression on the right hand side of this inequality is a polynomial in $x$ and $p$. Consequently $$\left| \partial_{x}^{m} \partial_{p}^{n} {A_{ \mathrm{H}
\vphantom{ \overline{ \mathrm{H} } }}}(x,p)
\right|
\le
f(x,p)$$ for some polynomial $f(x,p)$. The claim is now immediate.
[99]{} \[sec: bibliography\] Hillery M, O’Connell R F, Scully M O and Wigner E P 1984 *Phys. Rep.* **106** 121 Lee H W 1995 *Phys. Rep.* **259** 147 Leonhardt U 1997 *Measuring the Quantum State of Light* (Cambridge: Cambridge University Press) Wigner E P 1932 *Phys. Rev.* **40** 749 Hudson R L 1974 *Rep. Math. Phys.* **6** 249\
Soto F and Claverie P 1983 *J. Math. Phys.* **24** 97\
Jagannathan R, Simon R, Sudarshan E C G and Vasudevan R 1987 *Phys. Lett. A* **120** 161\
Narcowich F J 1988 *J. Math. Phys.* **29** 2036\
Bröcker T and Werner R F 1995 *J. Math. Phys.* **36** 62 Davies E B 1976 *Quantum Theory of Open Systems* (London: Academic Press) Cartwright N D 1976 *Physica A* **83** 210 Wódkiewicz K 1984 *Phys. Rev. Lett.* **52** 1064\
Wódkiewicz K 1986 *Phys. Lett. A* **115** 304\
Wódkiewicz K 1987 *Phys. Lett. A* **124** 207 Lalović D, Davidović D M and Bijedić N 1992 *Phys. Rev. A* **46** 1206\
Lalović D, Davidović D M and Bijedić N 1992 *Physica A* **184** 231\
Lalović D, Davidović D M and Bijedić N 1992 *Phys. Lett. A* **166** 99 Halliwell J J 1992 *Phys. Rev. D* **46** 1610 Wünsche A 1996 *Quantum Semiclass. Opt.* **8** 343\
Wünsche A and Bužek V 1997 *Quantum Semiclass. Opt.* **9** 631 Husimi K 1940 *Proc. Phys. Math. Soc. Japan* **22** 264 Kano Y 1965 *J. Math. Phys.* **6** 1913 Glauber R J 1965 *Quantum Optics and Electronics* ed C de Witt, A Blandin and C Cohen-Tannoudji (New York: Gordon and Breach) Mizrahi S S 1984 *Physica A* **127** 241\
Mizrahi S S 1986 *Physica A* **135** 237\
Mizrahi S S 1988 *Physica A* **150** 541 Arthurs E and Kelly J L 1965 *Bell Syst. Tech. J.* **44** 725\
Busch P 1985 *Int. J. Theor. Phys.* **24** 63\
Braunstein S L, Caves C M and Milburn G J 1991 *Phys. Rev. A* **43** 1153\
Stenholm S 1992 *Ann. Phys., NY* **218** 233\
Appleby D M 1998 *J. Phys. A* **31** 6419 Raymer M G 1994 *Am. J. Phys.* **62** 986 Leonhardt U and Paul H 1993 *J. Mod. Opt.* **40** 1745\
Leonhardt U and Paul H 1993 *Phys. Rev. A* **48** 4598 Ali S T and Prugovečki E 1977 *J. Math. Phys.* **18** 219 Appleby D M 1999 *Int. J. Theor. Phys.* **38** 807 Cohen L 1966 *J. Math. Phys.* **7** 781 Prugovečki E 1978 *Ann. Phys., NY* **110** 102 O’Connell R F and Wigner E P 1981 *Phys. Lett. A* **85** 121 Weyl H 1927 *Z. Phys.* **46** 1 Mehta C L and Sudarshan E C G 1965 *Phys. Rev.* **138** B274 Appleby D M 1999 *J. Mod. Opt.* **46** 825 Groenewold H J 1946 *Physica* **12** 405 Moyal J E 1949 *Proc. Cambridge Philos. Soc.* **45** 99 Baker G A 1958 *Phys. Rev.* **109** 2198\
Imre K, Ozizmir E, Rosenbaum M and Zweifel P F 1967 *J. Math. Phys.* **8** 1097 Omnès R 1994 *The Interpretation of Quantum Mechanics* (Princeton NJ: Princeton University Press). Reed M and Simon B 1980 *Methods of Modern Mathematical Physics, vols. 1–4* (New York: Academic Press). Grauert H and Fritzsche K 1976 *Several Complex Variables* (New York: Springer-Verlag, Graduate Texts in Mathematics no. 38) Davidović D M and Lalović D 1993 *J. Phys. A* **26** 5099
---
abstract: 'We address the inverse problem of local volatility surface calibration from market-given option prices. We integrate the ever-increasing flow of option price information into the well-accepted local volatility model of Dupire. This leads to considering both the local volatility surfaces and their corresponding prices as indexed by the observed underlying stock price as time goes by in appropriate function spaces. The resulting parameter to data map is defined in appropriate Bochner-Sobolev spaces. Under this framework, we prove key regularity properties. This enables us to build a calibration technique that combines online methods with convex Tikhonov regularization tools. Such a procedure is used to solve the inverse problem of local volatility identification. As a result, we prove convergence rates with respect to noise and a corresponding discrepancy-based choice for the regularization parameter. We conclude by illustrating the theoretical results by means of numerical tests.'
author:
- 'Vinicius V.L. Albani[^1] and Jorge P. Zubelli[^2]'
title: '**Online Local Volatility Calibration by Convex Regularization**'
---
[**Keywords:**]{} Local Volatility Calibration, Convex Regularization, Online Estimation, Morozov’s Principle, Convergence Rates.
Introduction {#sec:intro}
============
A number of interesting problems in nonlinear analysis are motivated by questions from mathematical finance. Among those problems, the robust identification of the variable diffusion coefficient that appears in Dupire’s local volatility model [@dupire; @volguide] presents substantial difficulties for its nonlinearity and ill-posedness. In previous works tools from Convex Analysis and Inverse Problem theory have been used to address this problem. See [@acpaper] and references therein.
In this work, we incorporate the fact that as time evolves more data is available for the identification of Dupire’s volatility surface. Thus we develop an [*online*]{} approach to the ill-posed problem of the local volatility surface calibration. Such surface is characterized by a non-negative two-variable function $\sigma = \sigma(\tau,K)$ of the time to expiration $\tau$ and the strike price $K$.
In what follows, we consider that the local volatility surfaces are indexed by the observed underlying asset price $S_0$. The reason is that, if we try to use price information observed on different dates, there is no financial or economic reason for the volatility surface to stay exactly the same. Thus, in principle we may have different volatility surfaces, although such changes may be small.
Let us quickly review the standard Black-Scholes setting and Dupire’s local volatility model. Recall that an option or derivative is a contract whose value depends on the value of an underlying stock or index. Perhaps the most well-known derivative is a European call option, where the holder has the right (but not the obligation) to buy the underlying at time $t = T$ for a strike value $K$. We shall denote by $S(t) = S(t,\omega)$ the stochastic process defining such an underlying, where as usual we assume that it is an adapted stochastic process on a suitable filtered probability space $(\Omega,\mathscr{U},\mathbb{F},\widetilde{\mathbb{P}})$, where $\mathbb{F} = \{\mathbb{F}_t\}_{t \in {\mathbb{R}}}$ is a filtration [@korn].
It is well known [@dupire; @volguide; @korn] that, by setting the current time as $t=0$, the value $C$ of an European call option with strike $K$ and expiration $T = \tau$ satisfies:
$$\left\{
\begin{array}{rcll}
-\displaystyle\frac{\partial C}{\partial \tau} +
\frac{1}{2}\sigma^2(\tau,K)K^2\frac{\partial^2 C}{\partial K^2} -
bK\frac{\partial C}{\partial K} &=& 0 & \tau > 0, ~K \geq 0\\
C(\tau = 0,K) &=& (S_0 - K)^+, & \text{for}~ K>0,\\
\displaystyle\lim_{K\rightarrow +\infty}C(\tau,K) & = & 0,&\text{for }~ \tau > 0,\\
\displaystyle\lim_{K\rightarrow 0^+}C(\tau,K) & = & S_0,&\text{for }~ \tau > 0
\end{array}
\right.
\label{dup1}$$
where $b$ is the difference between the continuously compounded interest and dividend rates of the underlying asset. In what follows, we assume that such quantities are constant. Defining the diffusion parameter $a(\tau,K) = \sigma(\tau,K)^2/2$, Problem (\[dup1\]) leads to the following parameter to solution map: $$\begin{array}{rcl}
F : D(F) \subset X &\longrightarrow & Y\\
a \in D(F) & \longmapsto & F(a) = C \in Y
\end{array}$$ where $X$ and $Y$ are Hilbert spaces to be properly defined below. $D(F)$ is the domain of the parameter to solution map (not necessarily dense in $X$) and $C = C(a,\tau,K)$ is the solution of Problem (\[dup1\]) with diffusion parameter $a$.
The inverse problem of local volatility calibration, as it was tackled in previous works [@crepey; @acthesis; @acpaper; @eggeng], consists of the following: given option prices $C$, find an element ${{\tilde{a}}}$ of $D(F)$ such that $F({{\tilde{a}}}) = C$ in the least-squares sense below. Indeed, the operator $F$ is compact and weakly closed. Thus, this inverse problem is ill-posed. In [@crepey; @acthesis; @acpaper; @eggeng] different aspects of the Tikhonov regularization were analyzed. In our case, it is characterized by the following: Find an element of $$\argmin \left\{\|F(a) - C\|^2_Y + \alpha f_{a_0}(a) \right\} ~~\text{subject to }~ a \in D(F) \subset X,$$ where $f_{a_0}$ is a weakly lower semi-continuous, convex and coercive functional. The analysis presented in [@crepey; @acthesis; @acpaper; @eggeng] was based on an [*a priori*]{} choice of the regularization parameter with convex regularization tools.
In contrast, in the present work we explore the dependence of the local volatility surface on the observed asset price in order to incorporate different option price surfaces in the same procedure of Tikhonov regularization. More precisely, we consider the map $$\begin{array}{rcl}
{{\mathcal{U}}}: D({{\mathcal{U}}}) \subset \mathcal{X} & \longrightarrow & \mathcal{Y}\\
{{\mathcal{A}}}\in D({{\mathcal{U}}}) & \longmapsto & {{\mathcal{U}}}({{\mathcal{A}}}): S\in [S_{\min},S_{\max}] \mapsto C(S,a(S))
\end{array}$$ where $C(S,a(S))$ is the solution of (\[dup1\]) with $S_0 = S$ and $\sigma^2/2 = a(S)$. Moreover, ${{\mathcal{A}}}$ maps $S \in [S_{\min},S_{\max}]$ to $a(S) \in D(F)$ in a well-behaved way.
In this context the inverse problem becomes the following: Given a family of option prices $\mathcal{C} \in \mathcal{Y}$, find ${{\widetilde{\mathcal{A}}}}\in D({{\mathcal{U}}})$ such that ${{\mathcal{U}}}({{\widetilde{\mathcal{A}}}}) = \mathcal{C}$. We shall see that the operator ${{\mathcal{U}}}$ is also compact and weakly closed. Thus, this problem is also ill-posed. The corresponding regularized problem is defined by the following:
Find an element of $$\argmin\left\{ \displaystyle\int_{S_{\min}}^{S_{\max}}\|F(a(S)) - C(S)\|^2_YdS + \alpha f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}) \right\} ~~\text{ subject to }~ {{\mathcal{A}}}\in D({{\mathcal{U}}}).$$
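In practice the $S$-integral is evaluated from the finitely many observed spot prices; for instance, with spots $s_1 < \dots < s_J$ and a quadrature weight $\Delta S$, one minimizes the discretised functional $$\sum_{j=1}^{J}\Delta S\,\bigl\|F(a(s_j)) - C(s_j)\bigr\|^{2}_{Y} + \alpha\, f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}),$$ the particular quadrature rule being an implementation detail.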
The main contributions of the current work are the following:
Firstly, we extend the local volatility calibration problem to local volatility families. This new setting allows incorporating more data into the calibration problem, leading to an online Tikhonov regularization. We prove that the so-called direct problem is well-posed, i.e., the forward operator satisfies key regularity properties. This framework generalizes in a nontrivial way the structure used in previous works [@crepey; @acthesis; @acpaper; @eggeng] since it requires the introduction of more tools, in particular that of Bochner spaces.
Secondly, in this setting, we develop a convergence analysis in a general context, based on convex regularization tools. See [@schervar].
Thirdly, we establish a relaxed version of Morozov’s discrepancy principle with convergence rates. This allows us to find the regularization parameter appropriately for the present problem. See [@anram; @moro].
The article is divided as follows:
In Section 2, we present the setting of the direct problem. In Section 3, we properly define the forward operator and prove some key regularity properties that are important in the analysis of the inverse problem. This is done in Theorem \[prop22\] and Propositions \[prop4\], \[prop6\], \[prop7\] and \[prf1\]. In Section 4, we tie up the inverse problem with convex Tikhonov regularization under an [*a priori*]{} choice of the regularization parameter. The convergence of the regularized solutions to the true one as the noise level $\delta\rightarrow 0$ is stated in Theorem \[tc1\]. In Section 5 we establish the Morozov discrepancy principle for the present problem with convergence rates. This is done in Theorems \[tma\] and \[mor:cr\]. Illustrative numerical tests are presented in Section 6.
Preliminaries {#sec:preliminar}
=============
\[sec:dupsurv\] We start by setting up the so-called direct problem. It is based on the pricing of European call options by a generalization of the Black-Scholes-Merton model.
Performing the change of variables $y := \text{log}(K/S_0)$ and $\tau : = T$ on the Cauchy problem (\[dup1\]) and defining $u(S_0,\tau,y) : = C(S_0,\tau,S_0\text{e}^y)$ and $a(S_0,\tau,y) :=
\frac{1}{2}\sigma^2(S_0,\tau,S_0\text{e}^y)$, it follows that $u(S_0,\tau,y)$ satisfies $$\left\{
\begin{array}{rcll}
-\displaystyle\frac{\partial u}{\partial \tau} + a(S_0,\tau,y)\left(\frac{\partial^2 u}{\partial y^2}
- \frac{\partial u}{\partial y}\right)
+ b\frac{\partial u}{\partial y} &=& 0 & \tau > 0, ~y \in {\mathbb{R}}\\
u(\tau = 0,y) &=& S_0(1 - \text{e}^y)^+, &\text{for }~ y \in {\mathbb{R}},\\
\displaystyle\lim_{y\rightarrow +\infty}u(\tau,y) & = & 0,&\text{for }~ \tau > 0,\\
\displaystyle\lim_{y\rightarrow -\infty}u(\tau,y) & = & S_0,&\text{for }~ \tau > 0.
\end{array}\right.
\label{dup2}$$ Note that, $\sigma$ and $a$ are assumed strictly positive and are related by a smooth bijection (since $\sigma>0$). Thus, in what follows we shall work only with the local variance $a$ instead of volatility $\sigma$. This simplifies the analysis that follows.
Denote by $D:=(0,T)\times {\mathbb{R}}$ the set where Problem (\[dup2\]) is defined. From [@eggeng] we know that (\[dup2\]) has a unique solution in $W^{1,2}_{2,loc}(D)$, the space of functions $u : (\tau,y) \in D \mapsto u(\tau,y) \in \mathbb{R}$ with locally square-integrable weak derivatives up to order one in $\tau$ and up to order two in $y$.
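For later reference, the following Python sketch shows one possible numerical treatment of (\[dup2\]): an implicit-Euler, central-difference scheme on a truncated log-moneyness grid, with Dirichlet values taken from the boundary conditions in (\[dup2\]). The routine name, grid sizes and discretisation are illustrative assumptions, not choices prescribed by the analysis.

``` python
import numpy as np

def dupire_call_prices(a_func, S0, b, tau_max, y_min=-3.0, y_max=3.0, ny=161, nt=200):
    """Implicit-Euler / central-difference sketch for u_tau = a(u_yy - u_y) + b u_y,
    the log-moneyness form of Dupire's equation.  a_func(tau, y) returns the local
    variance a = sigma^2/2 (scalar or vectorised in y).  Returns the grids and the
    price surface u[m, i] ~ u(tau_m, y_i).  All names here are illustrative."""
    y = np.linspace(y_min, y_max, ny)
    dy = y[1] - y[0]
    tau = np.linspace(0.0, tau_max, nt + 1)
    dt = tau[1] - tau[0]

    u = np.zeros((nt + 1, ny))
    u[0] = S0 * np.maximum(1.0 - np.exp(y), 0.0)   # payoff at tau = 0
    u[:, 0] = S0                                   # y -> -infinity: price -> S0
    u[:, -1] = 0.0                                 # y -> +infinity: price -> 0

    for m in range(nt):
        a = np.asarray(a_func(tau[m + 1], y[1:-1]), dtype=float) * np.ones(ny - 2)
        lo = -dt * (a / dy**2 - (b - a) / (2 * dy))   # coefficient of u_{i-1}
        di = 1.0 + 2.0 * dt * a / dy**2               # coefficient of u_i
        up = -dt * (a / dy**2 + (b - a) / (2 * dy))   # coefficient of u_{i+1}

        M = np.diag(di) + np.diag(up[:-1], 1) + np.diag(lo[1:], -1)
        rhs = u[m, 1:-1].copy()
        rhs[0] -= lo[0] * u[m + 1, 0]                 # move boundary values to the RHS
        rhs[-1] -= up[-1] * u[m + 1, -1]
        u[m + 1, 1:-1] = np.linalg.solve(M, rhs)

    return tau, y, u
```

With a constant local variance, say `a_func = lambda t, y: 0.08`, the output should agree with the corresponding Black-Scholes call prices up to discretisation error, which provides an elementary correctness check.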
We now define the set where the diffusion parameter $a$ lives. For fixed $\varepsilon > 0$, take scalar constants $a_1,a_2 \in \mathbb{R}$ such that $0 < a_1 \leq a_2 < +\infty$ and a fixed function $a_0 \in {H^{1+\varepsilon}(D)}$, with $a_1 \leq a_0 \leq a_2$. Define $$Q:= \{a \in a_0 + {H^{1+\varepsilon}(D)}: a_1\leq a \leq a_2\}
\label{domopdi}$$ Note that $Q$ is weakly closed and has nonempty interior under the standard topology of ${H^{1+\varepsilon}(D)}$. See the first two chapters of [@acthesis; @acpaper] and references therein.
The Forward Operator {#sec:forward}
====================
Since we assume that the local variance surface is dependent on the current price, we have to introduce proper spaces for the analysis of the problem. As it turns out, we have to make use of Bochner integral techniques. See [@evanspde; @reedsimon1; @yosida]. The main reference for this section is [@haschele].
We start with some definitions. Given a time interval, say $[0,\overline{T}]$, the realized prices $S(t)$ vary within $[S_{\min},S_{\max}]$. After reordering $S(t)$ in ascending order, we perform the change of variables $s = S(t)-S_{\min}$ and denote $S = S_{\max}-S_{\min}$. Thus $s \in [0,S]$. Hence, for each $s$, we denote by $a(s) := a(s,\tau,y)$ the local variance surface corresponding to $s$.
Given ${{\mathcal{A}}}\in {{L^2(0,S,H^{1+\varepsilon}(D))}}$, with ${{\mathcal{A}}}: s\mapsto a(s)$ (see [@yosida]), we define its Fourier series $\hat{{{\mathcal{A}}}} = \{\hat{a}(k)\}_{k \in {\mathbb{Z}}}$ by $$\hat{a}(k) := \displaystyle\frac{1}{2S}\int^S_0 a(s)\exp(-iks\pi/S)ds +
\displaystyle\frac{1}{2S}\int^0_{-S} a(-s)\exp(-iks\pi/S)ds.$$
It is well defined, since $\{s \mapsto a(s)\exp(-iks\pi/S)\}$ is weakly measurable and ${{L^2(0,S,H^{1+\varepsilon}(D))}}\subset L^1(0,S,{H^{1+\varepsilon}(D)})$ by the Cauchy-Schwarz inequality.
We now define a class of Bochner-type Sobolev spaces:
Let ${H^l(0,S,H^{1+\varepsilon}(D))}$ be the space of ${{\mathcal{A}}}\in {{L^2(0,S,H^{1+\varepsilon}(D))}}$ such that $$\|{{\mathcal{A}}}\|_l^2 := \displaystyle\sum_{k \in {\mathbb{Z}}} (1+|k|^l)^2\|\hat{a}(k)\|^2_{{H^{1+\varepsilon}(D)}_\mathbb{C}} < \infty,$$ where ${H^{1+\varepsilon}(D)}_\mathbb{C} = {H^{1+\varepsilon}(D)}\oplus i{H^{1+\varepsilon}(D)}$ is the complexification of ${H^{1+\varepsilon}(D)}$. Moreover, ${H^l(0,S,H^{1+\varepsilon}(D))}$ is a Hilbert space with the inner product $$\langle {{\mathcal{A}}},{{\widetilde{\mathcal{A}}}}\rangle_l := \displaystyle\sum_{k\in{\mathbb{Z}}}(1+|k|^l)^2\langle \hat{a}(k),\hat{\tilde{a}}(k)\rangle_{{H^{1+\varepsilon}(D)}_\mathbb{C}}.$$
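For instance, an $s$-independent family ${{\mathcal{A}}}: s \mapsto a$, with a fixed $a \in {H^{1+\varepsilon}(D)}$, has Fourier coefficients $\hat{a}(k) = a\,\delta_{k0}$, so that $\|{{\mathcal{A}}}\|_{l} = \|a\|_{{H^{1+\varepsilon}(D)}}$ for every $l$. Such constant families therefore belong to ${H^l(0,S,H^{1+\varepsilon}(D))}$ for all $l$, and the classical situation of a single local variance surface is recovered as a special case of the present framework.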
[[@haschele Lemma 3.2]]{} For $l > 1/2$, each ${{\mathcal{A}}}\in {H^l(0,S,H^{1+\varepsilon}(D))}$ has a continuous representative and the map $i_l : {H^l(0,S,H^{1+\varepsilon}(D))}\hookrightarrow C(0,S,{H^{1+\varepsilon}(D)})$ is continuous (bounded). Moreover, we have the estimate $$\displaystyle\sup_{s\in[0,S]}\|a(s)\|_{{H^{1+\varepsilon}(D)}} \leq \|{{\mathcal{A}}}\|_{l}\left(2\sum_{k = 0}^{\infty}\frac{1}{(1+k^l)^2}\right)^{1/2}.
\label{estimate1}$$ Defining the application $\langle {{\mathcal{A}}}, x\rangle_{{H^{1+\varepsilon}(D)}} : = \{s\mapsto \langle a(s), x \rangle\}$ for each $x$ in ${H^{1+\varepsilon}(D)}$ and ${{\mathcal{A}}}$ in ${H^l(0,S,H^{1+\varepsilon}(D))}$, it follows that $\langle {{\mathcal{A}}}, x\rangle_{{H^{1+\varepsilon}(D)}}$ is an element of $H^l[0,S]$ and the inequality $\|\langle {{\mathcal{A}}}, x\rangle_{{H^{1+\varepsilon}(D)}}\|_{ H^l[0,S]} \leq \|{{\mathcal{A}}}\|_l\|x\|_{{H^{1+\varepsilon}(D)}}$ holds. Moreover, for every ${{\mathcal{A}}},\mathcal{B} \in {{L^2(0,S,H^{1+\varepsilon}(D))}}$, we have the identity $$\langle {{\mathcal{A}}}, \mathcal{B} \rangle_{{L^2(0,S,H^{1+\varepsilon}(D))}}= \sum_{k \in {\mathbb{Z}}}\langle \hat{a}(k),\hat{b}(k)\rangle_{{H^{1+\varepsilon}(D)}_\mathbb{C}}.$$ \[p1\]
Assume that $l > 1/2$. If the sequence $\{{{\mathcal{A}}}_n\}_{n\in{\mathbb{N}}}$ converges weakly to ${{\widetilde{\mathcal{A}}}}$ in ${H^l(0,S,H^{1+\varepsilon}(D))}$, then the sequence $\{a_n(s)\}_{n\in{\mathbb{N}}}$ weakly converges to ${{\tilde{a}}}(s)$ in ${H^{1+\varepsilon}(D)}$ for every $s \in [0,S]$. \[lemw\]
Take $\{{{\mathcal{A}}}_n\}_{n\in{\mathbb{N}}}$ and ${{\widetilde{\mathcal{A}}}}$ as above. We want to show that, given a weak zero neighborhood $U$ of ${H^{1+\varepsilon}(D)}$, for sufficiently large $n$ we have $a_n(s) - {{\tilde{a}}}(s) \in U$ for every $s \in [0,S]$. A weak zero neighborhood $U$ of ${H^{1+\varepsilon}(D)}$ is defined by a set of $\alpha_1,...,\alpha_K \in {H^{1+\varepsilon}(D)}$ and an $\epsilon > 0$ such that $g \in {H^{1+\varepsilon}(D)}$ is an element of $U$ if $\max_{k = 1,...,K}|\langle g , \alpha_k\rangle| < \epsilon$.
Since the immersion $H^l[0,S] \hookrightarrow C([0,S])$ is compact and $H^l[0,S]$ is reflexive, it follows that each weak zero neighborhood of $H^l[0,S]$ is a zero neighborhood of $C([0,S])$. Furthermore, from Proposition \[p1\] we know that $\langle {{\mathcal{A}}}, \alpha \rangle_{{H^{1+\varepsilon}(D)}} \in H^l[0,S]$ with its norm bounded by $\|{{\mathcal{A}}}\|_l\|\alpha\|_{{H^{1+\varepsilon}(D)}}$, for every $n \in {\mathbb{N}}$ and $\alpha \in {H^{1+\varepsilon}(D)}$. Thus, we take the smallest closed ball centered at zero, $B$, which contains $\langle {{\widetilde{\mathcal{A}}}},\alpha_k\rangle_{{H^{1+\varepsilon}(D)}}$ with $k = 1,...,K$ and every $\langle {{\mathcal{A}}}_n,\alpha_k\rangle_{{H^{1+\varepsilon}(D)}}$ with $n\in {\mathbb{N}}$ and $k = 1,...,K$. Therefore, choosing $\epsilon > 0$ as above, it is true that for each $k = 1,...,K$, there are $f_{k,1}, ...,f_{k,M(k)} \in H^l[0,S]$ and $\eta_k > 0$, such that $\|f\|_{C([0,S])} < \epsilon$ for every $f \in B$ with $\max_{m = 1,...,M(k)}|\langle f,f_{k,m}\rangle|<\eta_k$. Hence, we define $\mathcal{C}_{k,m} := \alpha_k \otimes f_{k,m} \in {H^l(0,S,H^{1+\varepsilon}(D))}^*$ and the weak zero neighborhood $A = \cap^K_{k = 1}A_k$ of ${H^l(0,S,H^{1+\varepsilon}(D))}$ with $$A_k := \{{{\mathcal{A}}}\in {H^l(0,S,H^{1+\varepsilon}(D))}~: ~|\langle {{\mathcal{A}}}, \mathcal{C}_{k,m}\rangle|\leq \eta_k, ~m=1,...,M(k) \}.$$ As $A$ is a weak zero neighborhood of ${H^l(0,S,H^{1+\varepsilon}(D))}$, it is true that for sufficiently large $n$, ${{\mathcal{A}}}_n - {{\widetilde{\mathcal{A}}}}\in A$, which implies that $a_n(s) - {{\tilde{a}}}(s) \in U$ for every $s \in [0,S]$, i.e., $\{a_n(s)\}_{n\in {\mathbb{N}}}$ weakly converges to ${{\tilde{a}}}(s)$ for every $s \in [0,S]$.
Define the set ${{\mathfrak{Q}}}:= \{ {{\mathcal{A}}}\in {H^l(0,S,H^{1+\varepsilon}(D))}: a(s) \in Q ,~\forall s \in [0,S]\}$, i.e., each ${{\mathcal{A}}}$ in ${{\mathfrak{Q}}}$ is the map ${{\mathcal{A}}}: s \in [0,S] \mapsto a(s) \in Q$. Note that ${{\mathfrak{Q}}}$ is the space of $Q$-valued paths, with $Q$ defined in (\[domopdi\]).
For $l > 1/2$, the set ${{\mathfrak{Q}}}$ is weakly closed and its interior is nonempty in ${H^l(0,S,H^{1+\varepsilon}(D))}$. \[p2\]
By Lemma \[lemw\] and the fact that $Q$ is weakly closed it follows that ${{\mathfrak{Q}}}$ is weakly closed. The interior of ${{\mathfrak{Q}}}$ is nonempty since the inclusion ${H^l(0,S,H^{1+\varepsilon}(D))}\hookrightarrow C(0,S,{H^{1+\varepsilon}(D)})$ is continuous and bounded. Note that, given $\epsilon > 0$ small enough, it follows that ${{\widetilde{\mathcal{A}}}}= \{s \mapsto {{\tilde{a}}}(s)\}$ with $a_1 + \epsilon \leq {{\tilde{a}}}(s) \leq a_2 - \epsilon$ for every $s \in [0,S]$ is in the interior of ${{\mathfrak{Q}}}$.
We stress that, in what follows, we always assume that $l>1/2$, since it is enough to state our results concerning regularity aspects of the forward operator.
We define below the forward operator, that associates each family of local variance surfaces to the corresponding family of option price surfaces, determined by the Cauchy problem . Thus, for a given $a_0 \in Q$ we define: $$\begin{array}{rcl}
\mathcal{U}: {{\mathfrak{Q}}}&\longrightarrow& {L^2(0,S,W^{1,2}_2(D))},\\
{{\mathcal{A}}}& \longmapsto & {{\mathcal{U}}}({{\mathcal{A}}}) :s \in [0,S] \mapsto F(s,a(s)) \in {W^{1,2}_2(D)},
\end{array}$$ where $[{{\mathcal{U}}}({{\mathcal{A}}})](s) = F(s,a(s)):=u(s,a(s))-u(s,a_0)$ and $u(s,a)$ is the solution of the Cauchy problem with local variance $a$. The following results state some regularity properties concerning the forward operator. See [@acpaper] and references therein.
The operator $F:[0,S]\times Q\longrightarrow {W^{1,2}_2(D)}$ is continuous and compact. Moreover, it is sequentially weakly continuous and weakly closed.\[prop21\]
We define below the concept of Fréchet equi-differentiability for a family of operators.
We call a family of operators $\{\mathcal{F}_s:Q\longrightarrow{W^{1,2}_2(D)}\left|~ s \in [0,S] \right.\}$ Frechét equi-differentiable, if for all $\tilde{a} \in Q$ and $\epsilon > 0$, there is a $\delta > 0$, such that $$\displaystyle\sup_{s \in [0,S]}\|\mathcal{F}_t(\tilde{a}+h) - \mathcal{F}_s(\tilde{a}) - \mathcal{F}^\prime_s(\tilde{a})h\| \leq \epsilon\|h\|,$$ for $\|h\|_{{H^{1+\varepsilon}(D)}}<\delta$ and $\mathcal{F}^\prime_s(\tilde{a})$ the Frechét derivative of $\mathcal{F}_s(\cdot)$ at $\tilde{a}$.
Using this concept, we have the following proposition.
The family of operators $\{F(s,\cdot) : Q \longrightarrow {W^{1,2}_2(D)}\left|~s \in [0,S] \right.\}$ is Fréchet equi-differentiable. \[prop4\]
Given ${{\tilde{a}}}\in Q$ and $\epsilon > 0$, define $w = F(s,{{\tilde{a}}}+h) - F(s,{{\tilde{a}}}) - \partial_a F(s,{{\tilde{a}}})h$, which equals $w = u(s,{{\tilde{a}}}+h) - u(s,{{\tilde{a}}}) - \partial_a u(s,{{\tilde{a}}})h$. We denote $v := u(s,{{\tilde{a}}}+h) - u(s,{{\tilde{a}}})$. Thus, by linearity $w$ satisfies $$-w_\tau + {{\tilde{a}}}(w_{yy} - w_y) + bw_y = h(v_{yy}-v_y),$$ with homogeneous boundary condition. This problem does not depend on $s$, since ${{\tilde{a}}}$ is independent of $s$. From the proof of Proposition \[prop21\] (see also [@eggeng]), we have $
\|w\|_{{W^{1,2}_2(D)}} \leq C\|h\|_{L^2(D)}\|v\|_{{W^{1,2}_2(D)}}
$. By the continuity of the operator $F$, given $\epsilon > 0 $ we can choose $h \in {H^{1+\varepsilon}(D)}$ with $\|h\|_{{H^{1+\varepsilon}(D)}} \leq \delta$, such that $\|v\|_{{W^{1,2}_2(D)}} \leq \epsilon /C$ and thus the assertion follows.
The following theorem is the principal result of this section, since it states some properties that are at the core of the inverse problems analysis [@ern; @schervar]. For its proof see Appendix \[app:results\].
The forward operator ${{\mathcal{U}}}: {{\mathfrak{Q}}}\longrightarrow {L^2(0,S,W^{1,2}_2(D))}$ is well defined, continuous and compact. Moreover, it is sequentially weakly continuous and weakly closed. \[prop22\]
The next result states conditions that are necessary for the convergence analysis. See [@ern; @schervar]. Its proof is given in Appendix \[app:results\].
The operator ${{\mathcal{U}}}(\cdot)$ admits a one sided derivative at ${{\widetilde{\mathcal{A}}}}\in {{\mathfrak{Q}}}$ in the direction $\mathcal{H}$, such that ${{\widetilde{\mathcal{A}}}}+\mathcal{H} \in {{\mathfrak{Q}}}$. The derivative ${{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}})$ satisfies $$\left\|\mathcal{U}^\prime({{\widetilde{\mathcal{A}}}})\mathcal{H}\right\|_{{L^2(0,S,W^{1,2}_2(D))}} \leq c\|\mathcal{H}\|_{{H^l(0,S,H^{1+\varepsilon}(D))}}.$$ Moreover, ${{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}})$ satisfies the Lipschitz condition $$\left\|{{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}}) - {{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}}+\mathcal{H})\right\|_{\mathcal{L}\left({H^l(0,S,H^{1+\varepsilon}(D))},{L^2(0,S,W^{1,2}_2(D))}\right)} \leq \gamma\|\mathcal{H}\|_{{H^l(0,S,H^{1+\varepsilon}(D))}}$$ for all ${{\widetilde{\mathcal{A}}}},\mathcal{H}\in {{\mathfrak{Q}}}$ such that ${{\widetilde{\mathcal{A}}}},{{\widetilde{\mathcal{A}}}}+\mathcal{H} \in {{\mathfrak{Q}}}$. \[prop6\]
The following result is a consequence of the compactness of ${{\mathcal{U}}}(\cdot)$.
The Fréchet derivative of the operator ${{\mathcal{U}}}(\cdot)$ is injective and compact.\[prop7\]
Take $\mathcal{H} \in \ker\left({{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}})\right)$. Thus, from the proof of Proposition \[prop6\], we have $
h(s)\cdot (u_{yy} - u_y) = 0.
$ However, for each $s$, $G = u_{yy}-u_y$ is the solution of $$\left\{\begin{array}{ll}
\partial_\tau G = \displaystyle\frac{1}{2}\left(\partial^2_{yy} - \partial_y\right)\left(a(s)G + bG\right)\\
G\displaystyle\left|_{\tau=0} = \delta(y)\right.,
\end{array}\right.$$ i.e., $G$ is the Green’s function of the Cauchy problem above. Thus, $G > 0$ for every $y$, $\tau > 0$ and $s \in [0,S]$. Therefore $h(s) = 0$. Since this holds for every $s \in [0,S]$, the result follows.
We now make use of the bounded embedding of the space $
{L^2(0,S,W^{1,2}_2(D))}$ into the space $
L^2(0,S,L^2(D)),
$ since it implies that ${{\mathcal{U}}}$ satisfies the same results presented above with $L^2(0,S,L^2(D))$ in place of ${L^2(0,S,W^{1,2}_2(D))}$. Thus, we characterize the range of ${{\mathcal{U}}}^\prime({{\mathcal{A}}})$ as a subset of $L^2(0,S,L^2(D))$ and the range of ${{\mathcal{U}}}^\prime({{\mathcal{A}}})^*$ as a subset of ${H^l(0,S,H^{1+\varepsilon}(D))}$, in order to carry out the convergence analysis of Section 4.
The operator $\mathcal{U}^\prime({{\mathcal{A}}}^\dagger)^*$ has a trivial kernel. \[prf1\]
For simplicity take $b = 0$. Denote by $
\mathcal{L} := -\partial_\tau+a(\partial^2_{yy} - \partial_y)
$ the parabolic operator of Equation with homogeneous boundary condition and $\mathcal{G}_{u_{yy}-u_y}$ the multiplication operator by $u_{yy}-u_y$. Thus, for each $s \in [0,S]$, we have $\partial_a u(s,{{\tilde{a}}}(s)) = \mathcal{L}^{-1}\mathcal{G}_{u_{yy}-u_y}$, where $\mathcal{L}^{-1}$ is the left inverse of $\mathcal{L}$ with null boundary conditions. By definition of $
{{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}})^*:L^2(0,S,L^2(D)) \rightarrow {H^l(0,S,H^{1+\varepsilon}(D))},
$ we have, $$\left\langle \mathcal{U}^\prime ({{\widetilde{\mathcal{A}}}})\mathcal{H},\mathcal{Z}\right\rangle_{L^2(0,S,L^2(D))} =
\langle \mathcal{H}, \Phi \rangle_{{H^l(0,S,H^{1+\varepsilon}(D))}},$$ $\forall ~\mathcal{H} \in {H^l(0,S,H^{1+\varepsilon}(D))}$ and $\forall ~\mathcal{Z} \in L^2(0,S,L^2(D))$, with $\Phi = {{\mathcal{U}}}^\prime ({{\widetilde{\mathcal{A}}}})^*\mathcal{Z}$. Thus, given any $\mathcal{Z} \in \ker\left({{\mathcal{U}}}^\prime ({{\widetilde{\mathcal{A}}}})^*\right)$, it follows that $$\begin{array}{rcl}
0 &=& \left\langle {{\mathcal{U}}}^\prime ({{\widetilde{\mathcal{A}}}})\mathcal{H},\mathcal{Z}\right\rangle_{L^2(0,S,L^2(D))}
= \displaystyle\int^S_0\left\langle\mathcal{L}^{-1}\mathcal{G}_{u_{yy}-u_y}h(s),z(s) \right\rangle_{L^2(D)}ds \\
&=& \displaystyle\int^S_0\left\langle \mathcal{G}_{u_{yy}-u_y}h(s),[\mathcal{L}^{-1}]^*z(s)\right \rangle_{L^2\left(D\right)} ds =\displaystyle\int^S_0\left\langle \mathcal{G}_{u_{yy}-u_y}h(s),g(s)\right\rangle_{L^2\left(D\right)}ds, \end{array}$$ where $g$ is a solution of the adjoint equation $$g_\tau+(ag)_{yy} + (ag)_y = z$$ for each $s \in [0,S]$, with homogeneous boundary conditions. Since $z(s) \in L^2(D)$, we have that $g(s) \in {H^{1+\varepsilon}(D)}$ (see [@lady]) and $g \in L^2\left(0,S,{H^{1+\varepsilon}(D)}\right)$. Since $u_{yy}-u_y>0$, from the proof of Proposition \[prop7\] and the fact that $h \in {H^l(0,S,H^{1+\varepsilon}(D))}$ is arbitrary, it follows that $g = 0$. Therefore $\mathcal{Z} = 0$ almost everywhere in $s \in [0,S]$. Hence $\ker\left(\mathcal{U}^\prime ({{\widetilde{\mathcal{A}}}})^*\right) = \{0\}$.
From the last proposition it follows that $$\ker\{{{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}})\} = \{0\} \Rightarrow \overline{\mathcal{R}\left\{\left({{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}})\right)^*\right\}} =
{H^l(0,S,H^{1+\varepsilon}(D))}.$$ In other words, the range of the adjoint operator of the Fréchet derivative of the forward operator ${{\mathcal{U}}}$ at ${{\widetilde{\mathcal{A}}}}$ is dense in ${H^l(0,S,H^{1+\varepsilon}(D))}$.
To finish this section, we present below the tangential cone condition for ${{\mathcal{U}}}$. It follows almost directly from the above results and Theorem 1.4.2 of [@acthesis]. See also [@acpaper2].
The map ${{\mathcal{U}}}(\cdot)$ satisfies the local tangential cone condition $$\left\|{{\mathcal{U}}}({{\mathcal{A}}}) - {{\mathcal{U}}}({{\widetilde{\mathcal{A}}}}) - {{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}})({{\mathcal{A}}}- {{\widetilde{\mathcal{A}}}})\right\|_{{L^2(0,S,W^{1,2}_2(D))}} \leq \gamma \left\|{{\mathcal{U}}}({{\mathcal{A}}})- {{\mathcal{U}}}({{\widetilde{\mathcal{A}}}})\right\|_{{L^2(0,S,W^{1,2}_2(D))}}$$ for all ${{\mathcal{A}}},{{\widetilde{\mathcal{A}}}}$ in a ball $B({{\mathcal{A}}}^*,\rho) \subset {{\mathfrak{Q}}}$ with some $\rho >0$ and $\gamma < 1/2$. \[tang\]
As a corollary we have the following result:
The operator ${{\mathcal{U}}}$ is injective.
The Inverse Problem {#sec:tikho}
===================
Following the notation of Section 3, we want to define a precise and robust way of relating each family of European option price surfaces to the corresponding family of local volatility surfaces, both parameterized by the underlying stock price. We first present an analysis of existence and stability of regularized solutions, and then establish some convergence rates. We also show that Morozov’s discrepancy principle for the present problem yields the same convergence rates.
The inverse problem of local volatility calibration can be restated as:
[*Given a family of European call option price surfaces ${{\widetilde{\mathcal{U}}}}= \{s \mapsto \tilde{u}(s)\}$ in the space ${L^2(0,S,L^2(D))}$, find the corresponding family of local variance surfaces ${{\mathcal{A}}}^\dagger = \{s \mapsto a^\dagger(s)\} \in {{\mathfrak{Q}}}$, satisfying $${{\widetilde{\mathcal{U}}}}= {{\mathcal{U}}}({{\mathcal{A}}}^\dagger).
\label{ip1a}$$*]{} In what follows we assume that, for given data ${{\widetilde{\mathcal{U}}}}$, the inverse problem always has a unique solution ${{\mathcal{A}}}^\dagger$ in ${{\mathfrak{Q}}}$. Such uniqueness follows from the forward operator being injective. Note that ${{\widetilde{\mathcal{U}}}}$ is noiseless, i.e., it is known without uncertainties. This is an idealized situation; to be more realistic, we assume that we can only observe corrupted data ${{\mathcal{U}^\delta}}$, satisfying a perturbed version of (\[ip1a\]), $${{\mathcal{U}^\delta}}= {{\widetilde{\mathcal{U}}}}+ \mathcal{E} = {{\mathcal{U}}}({{\mathcal{A}}}^\dagger)+\mathcal{E}
\label{pi1}$$ where $\mathcal{E} = \{s \mapsto E(s)\}$ collects all the uncertainties associated with this problem and ${{\widetilde{\mathcal{U}}}}$ is the unobservable noiseless data. We further assume that the norm of $\mathcal{E}$ is bounded by the noise level $\delta
> 0$. Moreover, for each $s \in [0,S]$, we assume that $\|E(s)\| \leq \delta/S$. These hypotheses imply that $$\|{{\mathcal{U}^\delta}}- {{\widetilde{\mathcal{U}}}}\|_{{L^2(0,S,L^2(D))}} \leq \delta ~\text{ and }~
\|u^\delta(s) - \tilde{u}(s)\|_{L^2(D)} \leq \delta/S \text{ for every } s \in [0,S].
\label{errorb}$$ Proposition \[prop22\] states that ${{\mathcal{U}}}(\cdot)$ is compact, which implies that the associated inverse problem is ill-posed, i.e., it cannot be solved directly in a stable way. Hence, we must apply regularization techniques, which, roughly speaking, recast the original problem in a more robust setting. More specifically, instead of looking for an ${{\mathcal{A}}}^\delta \in {{\mathfrak{Q}}}$ satisfying (\[pi1\]), we shall search for an ${{\mathcal{A}}}^\delta \in {{\mathfrak{Q}}}$ minimizing the Tikhonov functional $$\mathcal{F}^{{{\mathcal{U}}}^\delta}_{{{\mathcal{A}}}_0,\alpha}({{\mathcal{A}}}) = \|\mathcal{U}^\delta -
\mathcal{U}({{\mathcal{A}}})\|^2_{{L^2(0,S,L^2(D))}} + \alpha f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}).
\label{tik1}$$
The functional $f_{{{\mathcal{A}}}_0}$ has the goal of stabilizing the inverse problem and allows us to incorporate [*a priori*]{} information through ${{\mathcal{A}}}_0$.
We shall see later that the minimizers of  are approximations of the solution of (\[ip1a\]).
In order to guarantee the existence of stable minimizers of the functional , we assume that $f_{{{\mathcal{A}}}_0} :{{\mathfrak{Q}}}\rightarrow [0,\infty]$ is convex, coercive and weakly lower semi-continuous. A classical reference on convex analysis is [@ekte]. Note that these assumptions are not too restrictive, since they are fulfilled by a large class of functionals on ${H^l(0,S,H^{1+\varepsilon}(D))}$. A canonical example is $$f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}) = \|{{\mathcal{A}}}- {{\mathcal{A}}}_0\|^2_{{H^l(0,S,H^{1+\varepsilon}(D))}},$$ which leads us to classical Tikhonov regularization.
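For concreteness, the discretized Tikhonov functional with this canonical quadratic penalty could be sketched as follows. The routine below is only an illustration: the forward solver `forward`, the array shapes, and the replacement of the ${H^l(0,S,H^{1+\varepsilon}(D))}$ norm by its plain $L^2$ part are simplifying assumptions made here, not part of the analysis above.

```python
import numpy as np

def tikhonov_functional(a, a0, u_delta, forward, alpha, ds, dtau, dy):
    """Discretized Tikhonov functional: data misfit plus quadratic penalty.

    a, a0   : arrays of shape (n_s, n_tau, n_y), families of local-variance surfaces
    u_delta : array of the same shape with the observed (noisy) option prices
    forward : callable mapping a family of variance surfaces to model prices (stand-in for U)
    """
    residual = forward(a) - u_delta
    # L^2(0,S; L^2(D)) norm of the residual, approximated by a Riemann sum
    data_misfit = np.sum(residual**2) * ds * dtau * dy
    # Quadratic penalty; only the L^2 part of the Sobolev norm is kept in this sketch,
    # the derivative terms of the H^l(0,S; H^{1+eps}(D)) norm are omitted
    penalty = np.sum((a - a0)**2) * ds * dtau * dy
    return data_misfit + alpha * penalty
```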
Recall that ${{\mathcal{U}}}$ is weakly continuous and ${{\mathfrak{Q}}}$ is weakly closed. Combining this with the required properties of $f_{{{\mathcal{A}}}_0}$, we can apply [@schervar Theorem 3.22], which gives, for a fixed ${{\mathcal{U}^\delta}}\in {L^2(0,S,L^2(D))}$, the existence of at least one element of ${{\mathfrak{Q}}}$ minimizing $\mathcal{F}^{{{\mathcal{U}}}^\delta}_{{{\mathcal{A}}}_0,\alpha}(\cdot)$, the functional defined in .
For the sake of completeness, we present the definition of stability of a minimizer:
If ${{\widetilde{\mathcal{A}}}}$ is a minimizer of with data ${{\mathcal{U}}}$, then it is called stable if for every sequence $\{{{\mathcal{U}}}_k\}_{k \in {\mathbb{N}}} \subset {L^2(0,S,W^{1,2}_2(D))}$ converging strongly to ${{\mathcal{U}}}$, the sequence $\{{{\mathcal{A}}}_k\}_{k\in {\mathbb{N}}} \subset {{\mathfrak{Q}}}$ of minimizers of $\mathcal{F}^{{{\mathcal{U}}}^k}_{{{\mathcal{A}}}_0,\alpha}(\cdot)$ has a subsequence converging weakly to ${{\widetilde{\mathcal{A}}}}$. \[dfstab\]
Then, by [@schervar Theorem 3.23], it follows that the minimizers of are stable in the sense of Definition \[dfstab\].
By [@schervar Theorem 3.26], when the noise level $\delta$ and the regularization parameter $\alpha = \alpha(\delta)$ vanish, we can find a sequence of minimizers of  converging weakly to the solution of (\[ip1a\]). In other words, the Tikhonov minimizers are indeed approximations of the family of true local volatility surfaces. In addition, this theorem can be interpreted as follows: the smaller the noise level $\delta$, the less the Tikhonov minimizers depend on the regularization functional and on the [*a priori*]{} information, provided the regularization parameter $\alpha$ is properly chosen.
Making use of convex regularization tools, we provide some convergence rates with respect to the noise level. In order to do that, we need some abstract concepts, such as the Bregman distance associated with $f_{{{\mathcal{A}}}_0}$, $q$-coerciveness, and the source condition related to the operator ${{\mathcal{U}}}$. Such ideas were also used in [@crepey; @acthesis; @acpaper; @eggeng], but here they are extended to the context of online local volatility calibration. For the definitions of Bregman distance and $q$-coerciveness, see Appendix \[app:def\].
In what follows we always assume that (\[ip1a\]) has a (unique) solution which is an element of the Bregman domain $\mathcal{D}_B(f_{{{\mathcal{A}}}_0})$.
Before stating the result about convergence rates, we need the following auxiliary lemma, which introduces the so-called source condition. For a review on Convex Regularization, see [@schervar Chapter 3].
For every $\xi^\dagger \in \partial f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)$, there exists $\omega^\dagger \in {L^2(0,S,L^2(D))}$ and $\mathscr{E} \in {H^l(0,S,H^{1+\varepsilon}(D))}$ such that $\xi^\dagger = \left[{{\mathcal{U}}}^\prime({{\mathcal{A}}}^\dagger)\right]^*\omega^\dagger + \mathscr{E}$ holds. Moreover, $\mathscr{E}$ can be chosen such that $\|\mathscr{E}\|_{{H^l(0,S,H^{1+\varepsilon}(D))}}$ is arbitrarily small. \[lemmax\]
Lemma \[lemmax\] follows from the density of $\mathcal{R}({{\mathcal{U}}}^\prime({{\mathcal{A}}}^\dagger)^*)$ in ${H^l(0,S,H^{1+\varepsilon}(D))}$. See Proposition \[prf1\] in Section 3. Observe also that we identify ${L^2(0,S,L^2(D))}^*$ and ${H^l(0,S,H^{1+\varepsilon}(D))}^*$ with ${L^2(0,S,L^2(D))}$ and ${H^l(0,S,H^{1+\varepsilon}(D))}$, respectively, since they are Hilbert spaces.
Assume that (\[ip1a\]) has a (unique) solution. Let the map $\alpha : (0,\infty) \rightarrow (0,\infty)$ be such that $\alpha(\delta) \approx \delta$ as $\delta\searrow 0$. Furthermore, assume that the convex functional $f_{{{\mathcal{A}}}_0}(\cdot)$ is also $q$-coercive with constant $\zeta$, with respect to the norm of ${H^l(0,S,H^{1+\varepsilon}(D))}$. Then under the source condition of Lemma \[lemmax\] it follows that $$D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) = \mathcal{O}(\delta) ~~~\text{ and }~~~
\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}^\delta}}\| = \mathcal{O}(\delta).$$ \[tc1\]
Let ${{\mathcal{A}}}^\dagger$ and ${{\mathcal{A}}}^\delta_\alpha$ denote the solution of (\[ip1a\]) and the minimizer of , respectively. It follows that, $
\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}^\delta}}\|^2 + \alpha f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\delta_\alpha) \leq \|{{\mathcal{U}}}({{\mathcal{A}}}^\dagger) - {{\mathcal{U}^\delta}}\|^2 + \alpha f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger) \leq \delta^2 + \alpha f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger).
$
Since, $D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) = f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\delta_\alpha) - f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger) - \langle \xi^\dagger , {{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger\rangle$, it follows by Lemma \[lemmax\] and the above estimate that, $$\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}^\delta}}\|^2 + \alpha D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) \leq
\delta^2 - \alpha(\langle \omega^\dagger , {{\mathcal{U}}}^\prime({{\mathcal{A}}}^\dagger)({{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger)\rangle + \langle \mathscr{E} , {{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger\rangle).$$
By Proposition \[tang\], it follows that $
|\langle \omega^\dagger , {{\mathcal{U}}}^\prime({{\mathcal{A}}}^\dagger)({{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger)\rangle|
\leq (1+\gamma)\|\omega^\dagger\|\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}}}({{\mathcal{A}}}^\dagger)\| \leq (1+\gamma)\|\omega^\dagger\|(\delta + \|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}^\delta}}\|).
$ Thus, $
\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}^\delta}}\|^2 + \alpha D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) \leq \delta^2 + \alpha (1+\gamma)\|\omega^\dagger\|(\delta + \|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}^\delta}}\|) + \alpha \|\mathscr{E}\|\cdot\|{{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger\|.
$
Since $\|\mathscr{E}\|$ is arbitrarily small, it follows that $(\zeta - \|\mathscr{E}\|)/\zeta > 0$. Moreover, since $f_{{{\mathcal{A}}}_0}$ is $q$-coercive with constant $\zeta$, we divide the estimates into two cases, $q=1$ and $q>1$. For the case $q = 1$, the above inequalities imply that $$(\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}^\delta}}\| - \alpha(1+\gamma)\|\omega^\dagger\|/2)^2 + \alpha(1 - \|\mathscr{E}\|/\zeta)D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) \leq
(\delta + \alpha(1+\gamma)\|\omega^\dagger\|)^2.$$ Hence, the assertions follow. For the case $q > 1$, we denote $\beta_1 = \|\mathscr{E}\|/\zeta$ and we have that $$\beta_1(D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger))^{1/q} \leq \displaystyle\frac{\beta_1^q}{q} + \frac{1}{q}D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger).$$ Thus, assuming that $\beta_1 = \mathcal{O}(\delta^{1/q})$, we have the estimate: $$\begin{gathered}
\left(\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}^\delta}}\| - \alpha\frac{1+\gamma}{2}\|\omega^\dagger\|\right)^2 + \displaystyle\alpha\frac{q - 1}{q}D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) \leq\\
(\delta + \alpha(1+\gamma)\|\omega^\dagger\|)^2 + \alpha\displaystyle\frac{\beta_1^q}{q},\end{gathered}$$ and the assertions follow.
Note that the rates obtained in Theorem \[tc1\] state that, in some sense, the distance between the true local variance and the Tikhonov solution is of order $\mathcal{O}(\delta)$. This can be seen as a measure of the reliability of Tikhonov minimizers for this specific example.
Morozov’s Principle {#sec:morozov}
===================
We now establish a relaxed version of Morozov’s discrepancy principle for the specific problem under consideration [@moro]. This is one of the most reliable ways of finding the regularization parameter $\alpha$ as a function of the data ${{\mathcal{U}^\delta}}$ and the noise level $\delta$. Intuitively, the regularized solution should not fit the data more accurately than the noise level. We remark that this statement does not follow immediately, because the parameter now has to be chosen as a function of the noise level $\delta$ and the data ${{\mathcal{U}^\delta}}$. Thus, it is necessary to prove that such a choice in fact satisfies the criteria required to achieve the desired convergence rates.
From Equation (\[errorb\]), it follows that any ${{\mathcal{A}}}\in {{\mathfrak{Q}}}$ satisfying $$\|{{\mathcal{U}}}({{\mathcal{A}}}) - {{\mathcal{U}^\delta}}\| \leq \delta$$ could be an approximate solution for (\[ip1a\]). If ${{\mathcal{A}}}^\delta_\alpha$ is a minimizer of , then Morozov’s discrepancy principle says that the regularization parameter $\alpha$ should be chosen through the condition $$\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}^\delta}}\| = \delta
\label{m_init}$$ whenever it is possible. In other words, the regularized solution should not satisfy the data more accurately than up to the noise level.
Since the identity is restrictive, in what follows we combine two strategies. The first one is the relaxed Morozov’s discrepancy principle studied in [@anram]. The second one is the sequential discrepancy principle studied in [@ahm].
Note that, in the analysis that follows, we also require that if $f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}) = 0$ then ${{\mathcal{A}}}= {{\mathcal{A}}}_0$.
[[@anram]]{} Let the noise level $\delta > 0$ and the data ${{\mathcal{U}^\delta}}$ be fixed. Define the functionals $$\begin{aligned}
L:{{\mathcal{A}}}\in {{\mathfrak{Q}}}&\longmapsto & L({{\mathcal{A}}}) = \|{{\mathcal{U}}}({{\mathcal{A}}}) - {{\mathcal{U}^\delta}}\|\in\mathbb{R}_+\cup \{+\infty\},\\
H:{{\mathcal{A}}}\in {{\mathfrak{Q}}}&\longmapsto & H({{\mathcal{A}}}) = f_{{{\mathcal{A}}}_0}({{\mathcal{A}}})\in\mathbb{R}_+\cup \{+\infty\},\\
I: \alpha \in \mathbb{R}_+ &\longmapsto & I(\alpha) = \mathcal{F}^{{{\mathcal{U}}}^\delta}_{{{\mathcal{A}}}_0,\alpha}({{\mathcal{A}}}^\delta_\alpha)\in\mathbb{R}_+\cup \{+\infty\}.
\label{func_moro}\end{aligned}$$ We also define the set containing all minimizers of the functional  for each fixed $\alpha \in (0,\infty)$ as $$M_\alpha := \left\{{{\mathcal{A}}}^\delta_\alpha \in {{\mathfrak{Q}}}: \mathcal{F}^{{{\mathcal{U}}}^\delta}_{{{\mathcal{A}}}_0,\alpha}({{\mathcal{A}}}^\delta_\alpha) \leq \mathcal{F}^{{{\mathcal{U}}}^\delta}_{{{\mathcal{A}}}_0,\alpha}({{\mathcal{A}}}) ,~\forall {{\mathcal{A}}}\in {H^l(0,S,H^{1+\varepsilon}(D))}\right\}.$$ Note that we have extended $L({{\mathcal{A}}})$ to be equal to $\|{{\mathcal{U}}}({{\mathcal{A}}}) - {{\mathcal{U}^\delta}}\|$ when ${{\mathcal{A}}}\in {{\mathfrak{Q}}}$ and to be equal to $+\infty$ otherwise.
The first strategy above mentioned is defined as follows:
For prescribed $1< \tau_1 \leq \tau_2$, choose $\alpha = \alpha(\delta,{{\mathcal{U}^\delta}})$ such that $\alpha>0$ and $$\tau_1\delta \leq \|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}^\delta}}\| \leq \tau_2\delta
\label{morozov}$$ holds for some ${{\mathcal{A}}}^\delta_\alpha$ in $M_\alpha$.
If the first strategy is not applicable, then we consider the following:
For prescribed ${{\tilde{\tau}}}>1$, $\alpha_0 > 0$ and $0<q<1$, choose $\alpha_n = q^n\alpha_0$ such that the discrepancy $$\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_{\alpha_{n}}) - {{\mathcal{U}^\delta}}\| \leq {{\tilde{\tau}}}\delta < \|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_{\alpha_{n-1}}) - {{\mathcal{U}^\delta}}\|
\label{seqmorozov}$$ is satisfied for some $n \in {\mathbb{N}}$ and some ${{\mathcal{A}}}^\delta_{\alpha_{n}} \in M_{\alpha_n}$ and ${{\mathcal{A}}}^\delta_{\alpha_{n-1}} \in M_{\alpha_{n-1}}$.
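Schematically, the combination of the two strategies can be sketched as follows. The routine `discrepancy`, returning $\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}^\delta}}\|$ for a Tikhonov minimizer ${{\mathcal{A}}}^\delta_\alpha \in M_\alpha$ computed elsewhere, the scan over `candidate_alphas`, and the default values of $\tau_1$, $\tau_2$, ${{\tilde{\tau}}}$, $\alpha_0$ and $q$ are assumptions made only for illustration.

```python
def choose_alpha(discrepancy, candidate_alphas, delta,
                 tau1=1.1, tau2=1.5, tau_tilde=1.2, alpha0=1.0, q=0.5, max_iter=50):
    """Sketch of the relaxed and sequential discrepancy principles.

    discrepancy      : callable alpha -> ||U(A_alpha^delta) - U^delta|| for a minimizer in M_alpha
    candidate_alphas : values scanned for the relaxed principle
    """
    # Relaxed principle: accept alpha if tau1*delta <= discrepancy <= tau2*delta
    for alpha in candidate_alphas:
        d = discrepancy(alpha)
        if tau1 * delta <= d <= tau2 * delta:
            return alpha
    # Sequential principle: decrease alpha_n = q^n * alpha0 until the discrepancy
    # first drops below tau_tilde*delta
    d_prev = discrepancy(alpha0)
    for n in range(1, max_iter):
        alpha_n = (q ** n) * alpha0
        d_n = discrepancy(alpha_n)
        if d_n <= tau_tilde * delta < d_prev:
            return alpha_n
        d_prev = d_n
    raise RuntimeError("no admissible regularization parameter found")
```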
It follows from [@tikar Lemma 2.6.1] that the functional $H(\cdot)$ is non-increasing and the functionals $L(\cdot)$ and $I(\cdot)$ are non-decreasing with respect to $\alpha \in (0,\infty)$ in the following sense: if $0 < \alpha < \beta$, then $$\sup_{{{\mathcal{A}}}^\delta_\alpha \in M_\alpha}L({{\mathcal{A}}}^\delta_\alpha) \leq \inf_{{{\mathcal{A}}}^\delta_\beta \in M_\beta}L({{\mathcal{A}}}^\delta_\beta), \inf_{{{\mathcal{A}}}^\delta_\alpha \in M_\alpha}H({{\mathcal{A}}}^\delta_\alpha) \geq \sup_{{{\mathcal{A}}}^\delta_\beta \in M_\beta}H({{\mathcal{A}}}^\delta_\beta) \text{ and } I(\alpha) \leq I(\beta).$$
By [@tikar Lemma 2.6.3], the functional $I(\cdot)$ is continuous and the sets of discontinuities of $L(\cdot)$ and $H(\cdot)$ are at most countable and coincide. If we denote this set by $M$, then $L(\cdot)$ and $H(\cdot)$ are continuous in $(0,\infty) \backslash M$.
Since the set $M_\alpha$ is weakly closed for each $\alpha > 0$, we have the following:
For each $\overline{\alpha}>0$, there exist ${{\mathcal{A}}}_1,{{\mathcal{A}}}_2 \in M_{\overline{\alpha}}$ such that $$L({{\mathcal{A}}}_1) = \displaystyle\inf_{{{\mathcal{A}}}\in M_{\overline{\alpha}}}L({{\mathcal{A}}}) ~~~\text{and} ~~~
L({{\mathcal{A}}}_2) = \displaystyle\sup_{{{\mathcal{A}}}\in M_{\overline{\alpha}}}L({{\mathcal{A}}}).$$
Let $1<\tau_1 \leq \tau_2$ be fixed. Suppose that $\|{{\mathcal{U}}}({{\mathcal{A}}}_0) - {{\mathcal{U}^\delta}}\| > \tau_2\delta$. Then, we can find $\underline{\alpha},\overline{\alpha}>0$, such that $$L({{\mathcal{A}}}_1) < \tau_1\delta \leq \tau_2 \delta < L({{\mathcal{A}}}_2),$$ where ${{\mathcal{A}}}_1 := {{\mathcal{A}}}^\delta_{\underline{\alpha}}$ and ${{\mathcal{A}}}_2 := {{\mathcal{A}}}^\delta_{\overline{\alpha}}$. \[pr7\]
First, let the sequence $\{\alpha_n\}_{n \in {\mathbb{N}}}$ converge to $0$. Then, we can find a sequence $\{{{\mathcal{A}}}_n\}_{n \in {\mathbb{N}}}$ with ${{\mathcal{A}}}_n \in M_{\alpha_n}$ for each $n \in {\mathbb{N}}$. Now, let ${{\mathcal{A}}}^\dagger$ be an $f_{{{\mathcal{A}}}_0}$-minimizing solution of (\[pi1\]). Hence, it follows that $
L({{\mathcal{A}}}_n)^2 \leq I(\alpha_n) \leq \mathcal{F}^{{{\mathcal{U}}}^\delta}_{{{\mathcal{A}}}_0,\alpha_n}({{\mathcal{A}}}^\dagger) \leq \delta^2 + \alpha_n f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger).
$ Thus, for sufficiently large $n\in{\mathbb{N}}$, $L({{\mathcal{A}}}_n)^2 < (\tau_1\delta)^2$, since $\alpha_n f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger) \rightarrow 0$. Hence, we can set $\underline{\alpha} := \alpha_n$ for this same $n$.
We now assume that $\alpha_n \rightarrow \infty$. Taking ${{\mathcal{A}}}_n$ as before, we have the following estimates $
H({{\mathcal{A}}}_n) \leq \displaystyle\frac{1}{\alpha_n}I(\alpha_n) \leq \displaystyle\frac{1}{\alpha_n}\mathcal{F}^{{{\mathcal{U}}}^\delta}_{{{\mathcal{A}}}_0,\alpha_n}({{\mathcal{A}}}_0) =
\displaystyle\frac{1}{\alpha_n}\|{{\mathcal{U}}}({{\mathcal{A}}}_0) - {{\mathcal{U}^\delta}}\|^2 \rightarrow 0$ as $n\rightarrow \infty$. Thus, $\displaystyle\lim_{n\rightarrow \infty}f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}_n) = 0$, which implies that $\{{{\mathcal{A}}}_n\}_{n \in {\mathbb{N}}}$ converges weakly to ${{\mathcal{A}}}_0$. Then, by the weak continuity of ${{\mathcal{U}}}(\cdot)$ and the lower semi-continuity of the norm, it follows that $$\|{{\mathcal{U}}}({{\mathcal{A}}}_0) - {{\mathcal{U}^\delta}}\| \leq \displaystyle\liminf_{n\rightarrow \infty}\|{{\mathcal{U}}}({{\mathcal{A}}}_n) - {{\mathcal{U}^\delta}}\|,$$ which shows the existence of $\overline{\alpha}$, such that $$L({{\mathcal{A}}}^\delta_{\overline{\alpha}}) > \tau_2\delta.$$
For prescribed $1<\tau_1\leq \tau_2$, the discrepancy principle always works if we assume that there is no $\alpha > 0$ such that the minimizers ${{\mathcal{A}}}_1,{{\mathcal{A}}}_2 \in M_{\alpha}$ satisfy $$\|{{\mathcal{U}}}({{\mathcal{A}}}_1) - {{\mathcal{U}^\delta}}\| < \tau_1\delta \leq \tau_2\delta < \|{{\mathcal{U}}}({{\mathcal{A}}}_2) - {{\mathcal{U}^\delta}}\|.
\label{moro_condition}$$ In other words, only one of the inequalities of the discrepancy principle could be violated by the minimizers associated with $\alpha$. A sufficient condition for this assumption is the uniqueness of Tikhonov minimizers, which we are not able to prove in this specific case. Thus, we have to introduce the sequential discrepancy principle  whenever the condition  is violated. Note that the discrepancy principle  is always preferable, since its lower inequality implies that the Tikhonov minimizers satisfying  do not reproduce noise, whereas the same conclusion cannot be drawn for the sequential discrepancy principle . See also [@schu Remark 4.7] for another discussion about the discrepancy principle .
Under the condition  and Proposition \[pr7\], by [@anram Theorem 3.10] we can always find $\alpha := \alpha(\delta)>0$ and a Tikhonov minimizer ${{\mathcal{A}}}^\delta_\alpha \in M_\alpha$, such that both inequalities of the discrepancy principle  are satisfied. Proposition \[pr7\] also implies that the sequential discrepancy principle  is well posed. See [@ahm Lemma 2]. For a convergence analysis under the sequential Morozov principle, see [@hm].
Assume that the inverse problem has a (unique) solution. If condition holds, then the regularizing parameter $\alpha = \alpha(\delta,{{\mathcal{U}^\delta}})$ obtained through Morozov’s discrepancy principle (\[morozov\]) satisfies the limits $$\displaystyle\lim_{\delta \rightarrow 0+}\alpha(\delta,{{\mathcal{U}^\delta}}) = 0
~~~\text{ and }~~~
\displaystyle\lim_{\delta \rightarrow 0+}\frac{\delta^2}{\alpha(\delta,{{\mathcal{U}^\delta}})} = 0.$$ The same limits hold if $\alpha$ is chosen through the sequential discrepancy principle . \[tma\]
Let $\{\delta_n\}_{n\in {\mathbb{N}}}$ be a sequence such that $\delta_n \downarrow 0$ and let ${{\widetilde{\mathcal{U}}}}$ be the noiseless data. Thus, $\|{{\widetilde{\mathcal{U}}}}- {{\mathcal{U}}}^{\delta_n}\|\leq \delta_n$. In addition, recall that the inverse problem  has a unique solution ${{\mathcal{A}}}^\dagger$ and then ${{\mathcal{U}}}({{\mathcal{A}}}^\dagger) = {{\widetilde{\mathcal{U}}}}$. We only prove the case where the choice of the regularization parameter is based on the discrepancy principle ; very similar arguments show the theorem’s claim when the choice is based on the sequential discrepancy principle . See [@ahm Theorem 1]. It is then straightforward to build diagonal convergent subsequences with elements satisfying one of the two strategies, in order to prove the limits asserted above.
Let $\alpha_n := \alpha(\delta_n,{{\mathcal{U}}}^{\delta_n})$ denote the regularizing parameter chosen through . Thus, we denote by ${{\mathcal{A}}}_n:={{\mathcal{A}}}^{\delta_n}_{\alpha_n}$ its associated minimizer of with respect to $\delta_n$, $\alpha_n$ and ${{\mathcal{U}}}^{\delta_n}$. This defines the sequence $\{{{\mathcal{A}}}_n\}_{n\in{\mathbb{N}}}$, which is pre-compact by the coerciveness of $f_{{{\mathcal{A}}}_0}$. Choose a convergent subsequence, denoting it by $\{{{\mathcal{A}}}_k\}_{k\in{\mathbb{N}}}$ and its weak limit by ${{\widetilde{\mathcal{A}}}}$. We shall see that ${{\widetilde{\mathcal{A}}}}= {{\mathcal{A}}}^\dagger$ and thus the original sequence is bounded and has the unique cluster point ${{\mathcal{A}}}^\dagger$.
The weak lower semi-continuity of $\|{{\mathcal{U}}}(\cdot)-{{\widetilde{\mathcal{U}}}}\|$ and $f_{{{\mathcal{A}}}_0}$ implies that $\|{{\mathcal{U}}}({{\widetilde{\mathcal{A}}}}) - {{\widetilde{\mathcal{U}}}}\| \leq \lim_{k\rightarrow\infty}(\tau_2+1)\delta_k = 0$. Thus, ${{\widetilde{\mathcal{A}}}}$ is a solution of the inverse problem , and since such a solution is unique, ${{\widetilde{\mathcal{A}}}}= {{\mathcal{A}}}^\dagger$.
Since, for each $k$, ${{\mathcal{A}}}_k$ is a Tikhonov minimizer satisfying the discrepancy principle , it follows by the weak lower semi-continuity of $f_{{{\mathcal{A}}}_0}$ that $$f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger) \leq \displaystyle\liminf_{k\rightarrow\infty}f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}_k) \leq
\displaystyle\limsup_{k\rightarrow\infty}f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}_k) \leq f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger).
\label{moro4}$$ In other words, $f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}_k)\rightarrow f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)$.
We now prove that $\alpha(\delta,{{\mathcal{U}}}^\delta)\rightarrow 0$. Assume that, with respect to the sequence from the beginning of the proof, there exist $\overline{\alpha}>0$ and a subsequence $\{\alpha_k\}_{k\in{\mathbb{N}}}$ such that $\alpha_k \geq \overline{\alpha}$ for every $k \in {\mathbb{N}}$. Denote also by $\{{{\mathcal{A}}}_k\}_{k\in{\mathbb{N}}}$ a sequence of minimizers of  with respect to $\delta_k$, $\alpha_k$ and ${{\mathcal{U}}}^{\delta_k}$. Define further the sequence $\{{{\overline{\mathcal{A}}}}_k\}_{k\in{\mathbb{N}}}$ of minimizers of  with respect to $\delta_k$, $\overline{\alpha}$ and ${{\mathcal{U}}}^{\delta_k}$. Since $L$ is non-decreasing, by the discrepancy principle , $$\|{{\mathcal{U}}}({{\overline{\mathcal{A}}}}_k) - {{\mathcal{U}}}^{\delta_k}\| \leq \|{{\mathcal{U}}}({{\mathcal{A}}}_k) - {{\mathcal{U}}}^{\delta_k}\| \leq \tau_2\delta_k\rightarrow 0.
\label{moro5}$$ On the other hand, $\displaystyle\limsup_{k\rightarrow \infty} \overline{\alpha}f_{{{\mathcal{A}}}_0}({{\overline{\mathcal{A}}}}_k) \leq \overline{\alpha}f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)$. By the coerciveness of $f_{{{\mathcal{A}}}_0}$, the sequence has a weakly convergent subsequence, denoted also by $\{{{\overline{\mathcal{A}}}}_k\}_{k \in {\mathbb{N}}}$, with weak limit ${{\overline{\mathcal{A}}}}\in {{\mathfrak{Q}}}$. Thus, by the estimates (\[moro4\]) and (\[moro5\]) and the weak lower semi-continuity of $\|{{\mathcal{U}}}(\cdot) - {{\widetilde{\mathcal{U}}}}\|$ and $f_{{{\mathcal{A}}}_0}$, it follows that $\|{{\mathcal{U}}}({{\overline{\mathcal{A}}}}) - {{\widetilde{\mathcal{U}}}}\| =0$ and $f_{{{\mathcal{A}}}_0}({{\overline{\mathcal{A}}}}) \leq f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)$. Since the inverse problem  has a unique solution, ${{\overline{\mathcal{A}}}}= {{\mathcal{A}}}^\dagger$ and thus $ f_{{{\mathcal{A}}}_0}({{\overline{\mathcal{A}}}}_k)\rightarrow f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)$. Moreover, ${{\overline{\mathcal{A}}}}$ is a minimizer of  with regularization parameter $\overline{\alpha}$ and the noiseless data ${{\widetilde{\mathcal{U}}}}$, since for each ${{\mathcal{A}}}\in {{\mathfrak{Q}}}$ the following estimate holds: $$\begin{array}{rcl}
\|{{\mathcal{U}}}({{\overline{\mathcal{A}}}}) - {{\widetilde{\mathcal{U}}}}\|^2 + \overline{\alpha} f_{{{\mathcal{A}}}_0}({{\overline{\mathcal{A}}}})& \leq &
\displaystyle\liminf_{k\rightarrow \infty}\left(\|{{\mathcal{U}}}({{\mathcal{A}}}) - {{\mathcal{U}}}^{\delta_k}\|^2 + \overline{\alpha}f_{{{\mathcal{A}}}_0}({{\mathcal{A}}})\right)\\
& = & \|{{\mathcal{U}}}({{\mathcal{A}}}) - {{\widetilde{\mathcal{U}}}}\|^2 + \overline{\alpha}f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}).
\end{array}$$ Since $f_{{{\mathcal{A}}}_0}$ is convex, it follows that for every $t \in [0,1)$ $$f_{{{\mathcal{A}}}_0}((1-t){{\overline{\mathcal{A}}}}+ t{{\mathcal{A}}}_0) \leq (1-t)f_{{{\mathcal{A}}}_0}({{\overline{\mathcal{A}}}}) + tf_{{{\mathcal{A}}}_0}({{\mathcal{A}}}_0) = (1-t)f_{{{\mathcal{A}}}_0}({{\overline{\mathcal{A}}}}).$$ Thus, $\overline{\alpha}f_{{{\mathcal{A}}}_0}({{\overline{\mathcal{A}}}}) \leq \|{{\mathcal{U}}}((1-t){{\overline{\mathcal{A}}}}+ t{{\mathcal{A}}}_0) - {{\widetilde{\mathcal{U}}}}\|^2 + \overline{\alpha}(1-t)f_{{{\mathcal{A}}}_0}({{\overline{\mathcal{A}}}})$. This implies that $\overline{\alpha}tf_{{{\mathcal{A}}}_0}({{\overline{\mathcal{A}}}}) \leq \|{{\mathcal{U}}}((1-t){{\overline{\mathcal{A}}}}+ t{{\mathcal{A}}}_0) - {{\widetilde{\mathcal{U}}}}\|^2$. Since ${{\widetilde{\mathcal{U}}}}= {{\mathcal{U}}}({{\overline{\mathcal{A}}}})$, by Proposition \[prop6\] with $\mathcal{H} = {{\mathcal{A}}}_0 - {{\overline{\mathcal{A}}}}$, $\overline{\alpha}f_{{{\mathcal{A}}}_0}({{\overline{\mathcal{A}}}}) \leq \displaystyle\lim_{t\rightarrow 0^+}\frac{1}{t}\|{{\mathcal{U}}}((1-t){{\overline{\mathcal{A}}}}+ t{{\mathcal{A}}}_0) - {{\widetilde{\mathcal{U}}}}\|^2 = 0$. Therefore, $f_{{{\mathcal{A}}}_0}({{\overline{\mathcal{A}}}}) = 0$. But, by hypothesis, this can only hold if ${{\overline{\mathcal{A}}}}= {{\mathcal{A}}}_0$, i.e., ${{\mathcal{A}}}^\dagger = {{\mathcal{A}}}_0$. However, $\|{{\mathcal{U}}}({{\mathcal{A}}}_0) - {{\mathcal{U}}}^\delta\| \geq \tau_2\delta$. This is a contradiction. We conclude that $\alpha(\delta,{{\mathcal{U}}}^\delta)\rightarrow 0$ when $\delta \rightarrow 0$.
In order to prove the second limit, consider again the subsequence $\{{{\mathcal{A}}}_k\}_{k\in{\mathbb{N}}}$ converging weakly to ${{\mathcal{A}}}^\dagger$, the solution of the inverse problem , when $\delta_k\downarrow 0$. Thus, since for each $k$ ${{\mathcal{A}}}_k$ satisfies the discrepancy principle (\[morozov\]), it follows that $\tau^2_1\delta^2_k + \alpha_k f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}_k) \leq \delta_k^2 + \alpha_k f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)$. This implies that $(\tau_1^2 - 1)\displaystyle\frac{\delta_k^2}{\alpha_k} \leq f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger) - f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}_k) \rightarrow 0$.
The following theorem states that, if the regularization parameter $\alpha$ is chosen through the discrepancy principle , we achieve the same convergence rates as in Theorem \[tc1\].
Assume that the inverse problem has a (unique) solution. Suppose that ${{\mathcal{A}}}^\delta_\alpha$ is a minimizer of and $\alpha = \alpha(\delta,{{\mathcal{U}^\delta}})$ is chosen through the discrepancy principle (\[morozov\]) or the sequential discrepancy principle . Then, by the source condition of Lemma \[lemmax\], we have the estimates $$\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}}}({{\mathcal{A}}}^\dagger)\| = \mathcal{O}(\delta) ~~~\text{ and }~~~
D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) = \mathcal{O}(\delta),
\label{rates_conv}$$ with $\xi^\dagger \in \partial f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)$. The estimates are achieved whenever is used. \[mor:cr\]
Let ${{\mathcal{A}}}^\dagger$ be the solution of the inverse problem (\[ip1a\]). If ${{\mathcal{A}}}^\delta_\alpha \in M_\alpha$, then, the first estimate is trivial since $
\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}}}({{\mathcal{A}}}^\dagger)\| \leq (\tau_2 + 1)\delta.$
If condition holds, then by the first inequality of the discrepancy principle (\[morozov\]), $\tau^2_1\delta^2 +\alpha f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\delta_\alpha) \leq \delta^2+\alpha f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)$, implying that $f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\delta_\alpha) \leq f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)$, since $\tau^2_1 - 1 > 0$. Hence, for every $\xi^\dagger \in \partial f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)$ satisfying the source condition of Lemma \[lemmax\] and assuming that $f_{{{\mathcal{A}}}_0}$ is $1$-coercive with constant $\zeta$, we have the estimates: $$\begin{array}{rcl}
D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) &\leq&
|\langle \xi^\dagger , {{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger \rangle| =
|\langle {{\mathcal{U}}}^\prime({{\mathcal{A}}}^\dagger)^*\omega^\dagger + \mathscr{E}, {{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger\rangle|\\
&\leq&
\|\omega^\dagger\|\|{{\mathcal{U}}}^\prime({{\mathcal{A}}}^\dagger)({{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger)\| +
\|\mathscr{E}\|\|{{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger\|\\
&\leq&
(1+\gamma)\|\omega^\dagger\|\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}}}({{\mathcal{A}}}^\dagger)\| +
\displaystyle\frac{1}{\zeta}\|\mathscr{E}\|D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger)
\end{array}
\label{mo:eqcr}$$ Since $\xi^\dagger$ can be chosen with $\|\mathcal{E}\|$ arbitrarily small, it follows that $
1 - 1/\zeta\|\mathcal{E}\| > 0
$ and then, by , $$D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) \leq \displaystyle\frac{\zeta}{\zeta - \|\mathcal{E}\|}
(1+\gamma)\|\omega^\dagger\|\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}}}({{\mathcal{A}}}^\dagger)\| \leq
\tau_2 \frac{\zeta}{\zeta - \|\mathscr{E}\|}(1+\gamma)\|\omega^\dagger\|\cdot \delta.$$ On the other hand, let $\alpha$ be given by the sequential discrepancy principle . Since $ \alpha D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) \leq \|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha)-{{\mathcal{U}^\delta}}\|^2 + \alpha D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger)$, it follows that $$D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) \leq \displaystyle\frac{\delta^2}{\alpha} + |\langle \xi^\dagger , {{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger \rangle|.$$ By the previous case, the second term on the right-hand side of the above inequality is of order $\mathcal{O}(\delta)$. By Theorem \[tma\] the first term also vanishes. Since ${{\tilde{\tau}}}\delta \leq \|{{\mathcal{U}}}({{\mathcal{A}}}^{\delta}_{\alpha/q}) - {{\mathcal{U}^\delta}}\|$, it follows that the first term is of order $\mathcal{O}\left(|f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\delta_{\alpha/q}) - f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)|\right)$ and $|f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\delta_{\alpha/q}) - f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}^\dagger)| \leq |\langle \xi^\dagger , {{\mathcal{A}}}^\delta_{\alpha/q} - {{\mathcal{A}}}^\dagger \rangle|$. See [@ahm Proposition 10].
As mentioned above, the rates obtained in terms of the Bregman distance state that, in some sense, the distance between the true local variance and the Tikhonov solution is of order $\mathcal{O}(\delta)$. From a more practical perspective, consider $f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}) = \|{{\mathcal{A}}}- {{\mathcal{A}}}_0\|^2_{{H^l(0,S,H^{1+\varepsilon}(D))}}$. In this case, it follows that $\|{{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger\|_{{H^l(0,S,H^{1+\varepsilon}(D))}} = \mathcal{O}(\delta^{1/2})$. In addition, if $l > 1/2$ in ${H^l(0,S,H^{1+\varepsilon}(D))}$, it follows by the inequality  that $$\sup_{s\in [0,S]}\|a^\delta_\alpha(s) - a^\dagger(s)\|_{{H^{1+\varepsilon}(D)}} \leq C \|{{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger\|_{{H^l(0,S,H^{1+\varepsilon}(D))}}.$$ Thus, the convergence rates also hold uniformly in $s$, and they imply the convergence rates obtained in previous works, such as [@acpaper; @eggeng; @crepey]. This can be understood as saying that the online solution is at least as good as the solution obtained in the standard case, i.e., by Tikhonov minimizers based on a single price surface.
For $f_{{{\mathcal{A}}}_0}$ $q$-coercive with $q > 1$, a reasoning analogous to the one used in Equation  gives that $$\begin{array}{rcl}
D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger)
&\leq& \beta_1 (D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger))^{1/q} + \beta_2\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}}}({{\mathcal{A}}}^\dagger)\|\\
&\leq& \displaystyle\frac{\beta_1^q}{q} + \frac{1}{q}D_{\xi^\dagger}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) + \beta_2\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}}}({{\mathcal{A}}}^\dagger)\|.
\end{array}$$ Assume further that $\beta_1 = \mathcal{O}(\delta^{\frac{1}{q}})$. Since $
\|{{\mathcal{U}}}({{\mathcal{A}}}^\delta_\alpha) - {{\mathcal{U}}}({{\mathcal{A}}}^\dagger)\| = \mathcal{O}(\delta)
$, it follows that $
\|{{\mathcal{A}}}^\delta_\alpha - {{\mathcal{A}}}^\dagger\|^q \leq \displaystyle\frac{1}{\zeta}D_{\xi}({{\mathcal{A}}}^\delta_\alpha,{{\mathcal{A}}}^\dagger) = \mathcal{O}(\delta).
$
Numerical Results {#sec:numerics}
=================
We first perform tests with synthetic data to assess the accuracy and advantages of the method. Then, we present some examples with observed market prices.
We note that Problem  is solved by a Crank-Nicolson scheme [@vvlathesis Chapter 5]. We use a gradient-based method to numerically minimize the Tikhonov functional . Let $J^\delta({{\mathcal{A}}})$ and $\nabla J^\delta({{\mathcal{A}}})$ denote the quadratic residual and its gradient, respectively. More precisely, the residual is given by $J^\delta({{\mathcal{A}}}) : = \|{{\mathcal{U}}}({{\mathcal{A}}}) - {{\mathcal{U}^\delta}}\|^2_{{L^2(0,S,L^2(D))}} = \int^S_0\|F(s,a(s)) - u^\delta(s) \|^2_{L^{2}(D)}ds$ and the gradient is given by $$\begin{gathered}
\langle \nabla J^\delta({{\mathcal{A}}}),\mathcal{H}\rangle_{{H^l(0,S,H^{1+\varepsilon}(D))}} = 2\langle{{\mathcal{U}}}({{\mathcal{A}}}) - {{\mathcal{U}^\delta}},{{\mathcal{U}}}^\prime({{\mathcal{A}}})\mathcal{H}\rangle_{{L^2(0,S,L^2(D))}}\\
= 2\displaystyle\int^S_0\int_D\{[v(u_{yy}-u_y)h(s)](s,a(s))\}(\tau,y)d\tau dyds,
\label{gradj}\end{gathered}$$ where, for each $s \in [0,S]$, $v$ is the solution of the equation $$v_\tau + (av)_{yy} + (av)_y +bv_y= u(s,a) - u^\delta(s)
\label{adj}$$ with homogeneous boundary condition. Note that $V = \{s \mapsto v(s)\}$ is an element of ${L^2(0,S,W^{1,2}_2(D))}$. We also numerically solve Problem (\[adj\]) by a Crank-Nicolson scheme. See [@vvlathesis Chapter 5]. In the following examples we assume that $l=1$ in ${H^l(0,S,H^{1+\varepsilon}(D))}$ and the regularization functional is $
f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}) = \displaystyle\|{{\mathcal{A}}}- {{\mathcal{A}}}_0\|^2_{{H^l(0,S,H^{1+\varepsilon}(D))}}.
$
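A possible sketch of the adjoint-based evaluation of  is given below. The names `solve_dupire` and `solve_adjoint` stand for Crank-Nicolson discretizations of the pricing problem and of (\[adj\]), respectively, and are hypothetical; the sketch returns only the $L^2$ representer of the gradient, leaving the identification with the ${H^l(0,S,H^{1+\varepsilon}(D))}$ inner product (e.g. via a smoothing step) aside.

```python
import numpy as np

def residual_gradient(a_family, u_delta_family, solve_dupire, solve_adjoint, dy):
    """Adjoint-based L^2 representer of the gradient of the quadratic residual J^delta.

    a_family, u_delta_family : arrays of shape (n_s, n_tau, n_y)
    solve_dupire(a)          : assumed Crank-Nicolson solver returning prices u(tau, y)
    solve_adjoint(a, rhs)    : assumed Crank-Nicolson solver for the adjoint problem
    """
    grad = np.zeros_like(a_family)
    for i, (a, u_obs) in enumerate(zip(a_family, u_delta_family)):
        u = solve_dupire(a)                # model prices u(s, a(s)) on the (tau, y) grid
        v = solve_adjoint(a, u - u_obs)    # adjoint state driven by the price residual
        u_y = np.gradient(u, dy, axis=1)   # finite-difference approximations of u_y, u_yy
        u_yy = np.gradient(u_y, dy, axis=1)
        grad[i] = 2.0 * v * (u_yy - u_y)   # pointwise integrand of the gradient formula
    return grad
```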
Examples with Synthetic Data
----------------------------
Consider the following local volatility surface: $$a(s,u,x) = \left\{
\begin{array}{ll}
\label{sig}
\displaystyle\frac{2}{5}\left(1 - \frac{2}{5}\text{e}^{-\frac{1}{2}( u - s)} \right)\cos(1.25\,\pi \,x),&(u,x) \in (0,1]\times \left[-\displaystyle\frac{2}{5},\displaystyle\frac{2}{5}\right],\\
\displaystyle\frac{2}{5}, & \text{otherwise.}
\end{array}
\right.$$
We generate the data, i.e., evaluate the call prices with the above volatility, on a very fine mesh. Then we add zero-mean Gaussian noise with standard deviation $\delta = 0.035,\, 0.01$. We interpolate the resulting prices onto coarser grids. This avoids a so-called inverse crime [@somersalo].
In the present test, we assume that $r = 0.03$ and $(\tau,y) \in [0,1]\times [-5,5]$. We generate the price data with step sizes $\Delta \tau = 0.002$ and $\Delta y = 0.01$. Then, we solve the inverse problem with the step sizes $\Delta \tau = 0.01, \,0.005$ and $\Delta y = 0.1$. We also assume that the asset price is given by $s \in [29.5, 32.5]$ with three different step sizes, $\Delta s = 0.25, 0.1, 0.01$.
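The data-generation procedure just described can be sketched as follows; `solve_dupire` again denotes an assumed fine-mesh forward solver, and the function name and signature are illustrative only.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def make_synthetic_data(a_true, solve_dupire, tau_fine, y_fine, tau_coarse, y_coarse,
                        delta=0.035, seed=0):
    """Generate noisy prices on a fine mesh and restrict them to the coarser
    inversion mesh, so that an inverse crime is avoided."""
    rng = np.random.default_rng(seed)
    u_fine = solve_dupire(a_true, tau_fine, y_fine)          # "exact" prices on the fine mesh
    u_noisy = u_fine + rng.normal(0.0, delta, u_fine.shape)  # zero-mean Gaussian noise
    interp = RegularGridInterpolator((tau_fine, y_fine), u_noisy)
    TT, YY = np.meshgrid(tau_coarse, y_coarse, indexing="ij")
    pts = np.column_stack([TT.ravel(), YY.ravel()])
    return interp(pts).reshape(len(tau_coarse), len(y_coarse))
```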
In what follows, we refer to standard Tikhonov as the case where a single price surface is considered in the Tikhonov regularization, whereas we use the terminology “online” Tikhonov whenever more than one price surface is used.
![Left: Original local volatility. Center: Reconstruction with noise level $\delta = 0.035$. Right: Reconstruction with $\delta = 0.01$. When the noise level decreases, the reconstructions become more accurate.[]{data-label="test1"}](noiseless "fig:"){width="30.00000%"} ![](volsurf_noisel1 "fig:"){width="30.00000%"} ![](volsurf_noisel3 "fig:"){width="30.00000%"}
Figure \[test1\] shows reconstructions of the local volatility surface from price data with different noise levels. In addition, we can see that, as the noise level decreases, i.e., as the accuracy of the data is refined, the resulting reconstructions become closer to the original local volatility surface. This illustrates Theorems \[tc1\], \[tma\] and \[mor:cr\].
![Comparison between standard and online Tikhonov. As the number of price surfaces increases, the reconstructions become more accurate.[]{data-label="test2"}](compvolsol_s305t04_ns05 "fig:"){width="46.00000%"} ![](compvolsol_s305t04_ns01 "fig:"){width="46.00000%"}
In Figure \[test2\], we can see that online Tikhonov yields better solutions than the standard one as we increase the number of price surfaces in the calibration procedure. Here, the regularization parameter was obtained through Morozov’s discrepancy principle.
![$L^2$ distance between the original local variance and its reconstructions, as a function of the number of price surfaces. It is constant for standard Tikhonov and non-increasing for online Tikhonov.[]{data-label="test3"}](error){width="47.00000%"}
Figure \[test3\] shows the evolution of the $L^2(D)$ distance between the reconstructions and the original local variance as a function of the number of surfaces of call prices: it is constant for standard Tikhonov and non-increasing for online Tikhonov.
Examples with Market Data
-------------------------
We now present some reconstructions of the local volatility by online Tikhonov regularization from market prices. We solve the inverse problem with the step sizes $\Delta \tau = 0.01$ and $\Delta y = 0.1$. The regularizing functional is $f_{{{\mathcal{A}}}_0}({{\mathcal{A}}}) = \|{{\mathcal{A}}}- {{\mathcal{A}}}_0\|^2_{{H^l(0,S,H^{1+\varepsilon}(D))}}$ and the regularization parameter is chosen through the discrepancy principle . We estimate the noise level as half of the mean bid-ask spread of the market prices. The market prices are interpolated linearly onto the mesh where the inverse problem is solved. In the present example, we consider seven surfaces of call prices in each experiment. The data correspond to vanilla option prices on futures of Light Sweet Crude Oil (WTI) and Henry Hub natural gas. For a survey on commodity markets, see the book [@geman]. For a study of an application of Dupire’s local volatility model to commodity markets, see [@vvlathesis Chapter 4].
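A minimal sketch of this data-preparation step is given below; taking mid prices as the interpolated quotes, as well as the name `prepare_market_data`, are our own illustrative choices.

```python
import numpy as np
from scipy.interpolate import griddata

def prepare_market_data(tau_quotes, y_quotes, bid, ask, tau_grid, y_grid):
    """Estimate the noise level and interpolate market quotes onto the inversion mesh.

    tau_quotes, y_quotes : coordinates (time to maturity, log-moneyness) of the quotes
    bid, ask             : corresponding bid and ask prices
    """
    bid, ask = np.asarray(bid), np.asarray(ask)
    delta = 0.5 * np.mean(ask - bid)    # noise level: half of the mean bid-ask spread
    mid = 0.5 * (bid + ask)             # quotes represented by their mid prices
    TT, YY = np.meshgrid(tau_grid, y_grid, indexing="ij")
    u_delta = griddata((tau_quotes, y_quotes), mid, (TT, YY), method="linear")
    return u_delta, delta               # NaN entries mark points outside the quoted region
```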
![Local Volatility reconstruction from European vanilla options on futures of WTI oil. We used online Tikhonov regularization with the standard quadratic functional.[]{data-label="test4"}](wti_volsurf_ns1 "fig:"){width="46.00000%"} ![](wti_volsol_ns1 "fig:"){width="46.00000%"}
Note that, in order to use the framework developed in the previous sections, we assumed that the local volatility is indexed by the unobservable spot price instead of the futures price. For more details on such examples, see Chapters 4 and 5 of [@vvlathesis].
![Local Volatility reconstruction from European vanilla options on futures of Henry Hub natural gas. We used online Tikhonov regularization with the standard quadratic functional.[]{data-label="test5"}](hh_volsurf_ns6 "fig:"){width="46.00000%"} ![](hh_volsol_ns6 "fig:"){width="46.00000%"}
Figures \[test4\] and \[test5\] present the best reconstructions of the local volatility for the WTI and HH data, respectively. We collected the price data for Henry Hub natural gas and WTI oil between 2011/11/16 and 2011/11/25, i.e., seven consecutive business days.
Conclusions {#sec:conclusion}
===========
In this paper we have used convex regularization tools to solve the inverse problem associated to Dupire’s local volatility model when there is a steady flow of data. We first established results concerning existence, stability and convergence of the regularized solutions, making use of convex regularization tools and the regularity of the forward operator. We also proved some convergence rates. Furthermore, we established discrepancy-based choices of the regularization parameter, under a general framework, following [@anram; @ahm]. Such analysis allowed us to implement the algorithms and perform numerical tests.
The main contribution, [*vis-à-vis*]{} previous works, and in particular [@acpaper], is that we extended the convex regularization techniques to incorporate the information and data stream that is constantly supplied by the market. Furthermore, we proved that discrepancy-based choices of the regularization parameter are suitable in this context and retain their regularizing properties.
A natural extension of the current work is the application of these techniques to futures markets, where the underlying asset is the future price of some financial instrument or commodity. In such markets, vanilla options are a key instrument in the hedging strategies of companies and are, in general, far more liquid than in equity markets. The caveat is that, in general, we do not have an entire price surface: we only have an option price curve for each future’s maturity. Thus, in order to apply the above techniques in this context, it is necessary to assemble all option prices for futures on the same instrument (financial or commodity) into a single surface in an appropriate way. This was discussed in [@vvlathesis Chapter 4] and will be published elsewhere.
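One simple way of performing such an assembly, not necessarily the construction of [@vvlathesis], is to express each maturity's strikes in moneyness with respect to the corresponding future price, normalize the option prices by that future price, and interpolate onto a common grid. The quotes below are invented for illustration.

```python
# Sketch of assembling per-maturity option quotes on futures into one surface.
# All quotes below are fictitious.
import numpy as np
from scipy.interpolate import griddata

# maturity T (years) -> (future price F, strikes K, call prices C)
quotes = {
    0.25: (80.0, np.array([70.0, 80.0, 90.0]),  np.array([11.2, 4.1, 1.0])),
    0.50: (82.0, np.array([70.0, 82.0, 95.0]),  np.array([13.5, 5.9, 1.4])),
    1.00: (85.0, np.array([75.0, 85.0, 100.0]), np.array([14.0, 7.8, 2.1])),
}

pts, vals = [], []
for T, (F, K, C) in quotes.items():
    for k, c in zip(K / F, C / F):        # moneyness and normalized price
        pts.append((T, k))
        vals.append(c)

T_grid, k_grid = np.meshgrid(np.linspace(0.25, 1.0, 16), np.linspace(0.9, 1.1, 21))
surface = griddata(np.array(pts), np.array(vals), (T_grid, k_grid), method="linear")
print(surface.shape)                      # normalized prices on the common grid
```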
Acknowledgments
===============
V.A. acknowledges and thanks CNPq, Petroleo Brasileiro S.A. and Agência Nacional do Petróleo for the financial support during the preparation of this work. J.P.Z. acknowledges and thanks the financial support from CNPq through grants 302161/2003-1 and 474085/2003-1, and from FAPERJ through the programs [*Cientistas do Nosso Estado*]{} and [*Pensa Rio*]{}.
Proofs, Technical Results and Definitions
=========================================
In this appendix we collect technical results and definitions that were used in the remaining parts of the article. We also present the proofs of some results from Section 3.
Bregman Distance and $q$-Coerciveness {#app:def}
-------------------------------------
[[@schervar Definition 3.15]]{} Let $X$ denote a Banach space and $f: D(f) \subset X \rightarrow {\mathbb{R}}\cup \{\infty\}$ be a convex functional with sub-differential $\partial f(x)$ at $x \in D(f)$. The Bregman distance (or divergence) of $f$ at $x \in D(f)$ and $\xi \in \partial f(x) \subset X^*$ is defined by $ D_{\xi}(\tilde{x},x) = f(\tilde{x}) - f(x) - \langle\xi,\tilde{x} - x\rangle,
$ for every $\tilde{x} \in X$, with $\langle\cdot,\cdot\rangle$ the duality pairing of $X^*$ and $X$. Moreover, the set $
\mathcal{D}_B(f) = \{x \in D(f) ~:~ \partial f(x) \not= \emptyset\}
$ is called the Bregman domain of $f$.
We stress that the Bregman domain $\mathcal{D}_B(f)$ is dense in $D(f)$ and the interior of $D(f)$ is a subset of $\mathcal{D}_B(f)$. The map $\tilde{x}\mapsto D_{\xi}(\tilde{x},x)$ is convex, non-negative and satisfies $D_{\xi}(x,x) = 0$. In addition, if $f$ is strictly convex, then $D_{\xi}(\tilde{x},x) = 0$ if and only if $\tilde{x} = x$. For a survey on Bregman distances, see [@butiusem Chapter I].
For $1\leq q <\infty$ and $x \in D(f)$, the Bregman distance $D_{\xi}(\cdot,x)$ is said to be $q$-coercive with constant $\zeta>0$ if $
D_{\xi}(y,x) \geq \zeta \|y-x\|^q_X
$ for every $y \in D(f)$.
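For concreteness, the short sketch below (toy vectors only) evaluates the Bregman distance for two classical choices of $f$: the quadratic functional, for which $D_{\xi}(y,x)=\tfrac12\|y-x\|^2$, so that it is $2$-coercive with $\zeta=1/2$, and the negative entropy, for which the Bregman distance is the Kullback-Leibler divergence.

```python
# Bregman distances for two classical convex functionals (toy vectors).
import numpy as np

def bregman_quadratic(y, x):
    # f(x) = 0.5 ||x||^2, subgradient xi = x, hence D_xi(y, x) = 0.5 ||y - x||^2.
    return 0.5 * np.sum((y - x) ** 2)

def bregman_neg_entropy(y, x):
    # f(x) = sum_i x_i log x_i (x > 0), subgradient xi = 1 + log x, hence
    # D_xi(y, x) = sum_i y_i log(y_i / x_i) - y_i + x_i (Kullback-Leibler).
    return np.sum(y * np.log(y / x) - y + x)

x = np.array([0.2, 0.5, 0.3])
y = np.array([0.3, 0.4, 0.3])
print(bregman_quadratic(y, x))    # = 0.5 * ||y - x||^2 = 0.01
print(bregman_neg_entropy(y, x))  # nonnegative, zero only for y = x
```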
Equicontinuity
--------------
Let $X$ and $Y$ be locally convex spaces. Fix the sets $B_X \subset X$ and $M \subset C(B_X,Y)$. The set $M$ is called equicontinuous on $B_X$ if, for every $x_0 \in B_X$ and every zero neighborhood $V \subset Y$, there is a zero neighborhood $U \subset X$ such that $G(x_0) - G(x) \in V$ for all $G \in M$ and all $x \in B_X$ with $x-x_0 \in U$. Furthermore, $M$ is called uniformly equicontinuous if for every zero neighborhood $V \subset Y$ there exists a zero neighborhood $U \subset X$ such that $G(x) - G(x^\prime) \in V$ for all $G \in M$ and all $x,x^\prime \in B_X$ with $x-x^\prime \in U$.
From [@haschele] we have the technical result:
Let $F: [0,T]\times B_X \longrightarrow Y$ be a function, and let $B_X$, $X$ and $Y$ be as above. If $M_1:= \{F(t,\cdot) : t \in [0,T]\} \subset C(B_X,Y)$, $M_2:=\{F(\cdot,x) : x \in B_X\} \subset C([0,T],Y)$ and $M_1$ (respectively $M_2$) is equicontinuous, then $F$ is continuous. Conversely, if $F$ is continuous, then $M_1$ is equicontinuous and, if additionally $B_X$ is compact, then $M_2$ is equicontinuous as well.\[prop11\]
Proof of Results from Section 3 {#app:results}
-------------------------------
[**Proof of Theorem \[prop22\]:**]{} [*Well Posedness:*]{} Take an arbitrary ${{\widetilde{\mathcal{A}}}}\in {{\mathfrak{Q}}}$. By the continuity of ${{\widetilde{\mathcal{A}}}}$ (see Proposition \[p1\]) and of $F$, the map $s \mapsto F(s,{{\tilde{a}}}(s))$ is continuous and hence weakly measurable. Moreover, $s \mapsto \|F(s,{{\tilde{a}}}(s))\|_{{W^{1,2}_2(D)}}$ is bounded, so that ${{\mathcal{U}}}({{\widetilde{\mathcal{A}}}}) \in{L^2(0,S,W^{1,2}_2(D))}$, which establishes the well-posedness of ${{\mathcal{U}}}(\cdot)$. [*Continuity:*]{} As $F:[0,S]\times Q \longrightarrow {W^{1,2}_2(D)}$ is continuous, it follows by Proposition \[p1\] that the set $\{F(s,\cdot) \left|~ s \in [0,S]\right.\} \subset C(Q,{W^{1,2}_2(D)})$ is uniformly equicontinuous, i.e., given $\epsilon > 0$, there is a $\delta > 0$ such that, for all $a,{{\tilde{a}}}\in Q$ satisfying $\|a - {{\tilde{a}}}\| < \delta$, we have that $\sup_{s \in [0,S]}\|F(s,a)-F(s,{{\tilde{a}}})\| < \epsilon.$ Thus, given $\epsilon > 0$ and ${{\mathcal{A}}}, {{\widetilde{\mathcal{A}}}}\in {{\mathfrak{Q}}}$ such that $\sup_{s \in [0,S]}\|a(s) - {{\tilde{a}}}(s)\|_{{H^{1+\varepsilon}(D)}}<\delta$, the uniform equicontinuity of $\{F(s,\cdot), s \in [0,S]\}$ yields $$\displaystyle\|{{\mathcal{U}}}({{\mathcal{A}}}) - {{\mathcal{U}}}({{\widetilde{\mathcal{A}}}})\|^2_{{L^2(0,S,W^{1,2}_2(D))}} = \displaystyle\int^S_0\|F(s,a(s)) - F(s,{{\tilde{a}}}(s))\|^2_{{W^{1,2}_2(D)}}ds < \epsilon^2\cdot S,$$ which establishes the continuity of ${{\mathcal{U}}}(\cdot)$. [*Compactness:*]{} It is sufficient to prove that, given $\epsilon > 0$ and a sequence $\{{{\mathcal{A}}}_n\}_{n \in {\mathbb{N}}}$ in ${{\mathfrak{Q}}}$ converging weakly to ${{\widetilde{\mathcal{A}}}}$, there exist an $n_0$ and a weak zero neighborhood $U$ of ${H^l(0,S,H^{1+\varepsilon}(D))}$ such that, for $n > n_0$, ${{\mathcal{A}}}_n-{{\widetilde{\mathcal{A}}}}\in U$ and $\|{{\mathcal{U}}}({{\mathcal{A}}}_n) - {{\mathcal{U}}}({{\widetilde{\mathcal{A}}}})\|_{{L^2(0,S,W^{1,2}_2(D))}}< \epsilon.$
Following the same arguments as in the proof of Lemma \[lemw\], we can find a set of functionals $\mathcal{C}_{n,m} \in {H^l(0,S,H^{1+\varepsilon}(D))}^*$ defining such a zero neighborhood $U$. We first note that, since $F$ is weakly continuous, given $\epsilon >0$, there are $\alpha_1,...,\alpha_N \in {H^{1+\varepsilon}(D)}$ and $\delta > 0$ such that $\sup_{s \in [0,S]}\|F(s,a) - F(s,{{\tilde{a}}})\| < \epsilon/S$ for all $a,{{\tilde{a}}}\in B$ with $$\max\{|\langle a - {{\tilde{a}}}, \alpha_n \rangle_{{H^{1+\varepsilon}(D)}} |\,:\, n = 1,...,N\} < \delta.\label{p5:eq1}$$ By Proposition \[p1\], $\langle {{\mathcal{A}}},\alpha_n\rangle_{{H^{1+\varepsilon}(D)}} \in H^l[0,S]$, with its norm bounded by $\|{{\mathcal{A}}}\|_l\|\alpha_n\|_{{H^{1+\varepsilon}(D)}}$. Then, there is a closed and bounded ball $A \subset H^l[0,S]$ containing $\langle {{\mathcal{A}}},\alpha_n\rangle_{{H^{1+\varepsilon}(D)}}$ for all $n = 1,...,N$ and ${{\mathcal{A}}}\in \mathbb{B}$.
For $n = 1,...,N$ and the same $\delta > 0$ as in (\[p5:eq1\]), there are $f_{n,1},...,f_{n,M(n)}$ in $H^l[0,S]$ and $\xi_n > 0$ such that $\|f\|_{C([0,S])} < \delta$ for every $f \in A$ satisfying $\max_{m = 1,...,M(n)}|\langle f,\alpha_n\rangle_{{H^{1+\varepsilon}(D)}}| < \xi_n.$ Define $\mathcal{C}_{n,m} : = \alpha_n \otimes f_{n,m}$, with $n = 1,...,N$ and $ m=1,...,M(n)$. It is an element of ${H^l(0,S,H^{1+\varepsilon}(D))}^*$, where, for each ${{\mathcal{A}}}\in {H^l(0,S,H^{1+\varepsilon}(D))}$, we have that $\langle{{\mathcal{A}}}, \mathcal{C}_{n,m}\rangle_l = \langle \langle {{\mathcal{A}}},\alpha_n\rangle_{{H^{1+\varepsilon}(D)}}, f_{n,m}\rangle_{H^l[0,S]}$ and thus $$\langle{{\mathcal{A}}}, \mathcal{C}_{n,m}\rangle_l = \displaystyle\sum_{k\in {\mathbb{Z}}}(1 + |k|^l)^2\langle \hat{a}(k),\alpha_n\rangle_{{H^{1+\varepsilon}(D)}}\hat{f}_{n,m}(k).$$ These functionals define a weak zero neighborhood $U := \cap^N_{n=1}U_n$ with $$U_n : = \{ {{\mathcal{A}}}\in {H^l(0,S,H^{1+\varepsilon}(D))}: |\langle {{\mathcal{A}}}, \mathcal{C}_{n,m}\rangle_l| < \xi_n, ~m=1,...,M(n)\}.$$ Therefore, if $\{{{\mathcal{A}}}_k\}_{k\in{\mathbb{N}}}\subset \mathbb{B}$ converges weakly to ${{\widetilde{\mathcal{A}}}}\in \mathbb{B}$, then for sufficiently large $k$, ${{\mathcal{A}}}_k-{{\widetilde{\mathcal{A}}}}\in U$ and, by the definition of $U$, we have that for each $n = 1,...,N$, $\xi_n > |\langle {{\mathcal{A}}}_k-{{\widetilde{\mathcal{A}}}}, \mathcal{C}_{n,m}\rangle_l| = |\langle \langle {{\mathcal{A}}}_k-{{\widetilde{\mathcal{A}}}},\alpha_n\rangle_{{H^{1+\varepsilon}(D)}}, f_{n,m} \rangle_{H^l[0,S]}|$ for all $m = 1,...,M(n)$. By the choice of the $f_{n,m} \in H^l[0,S]$, it follows that $\|\langle {{\mathcal{A}}}_k-{{\widetilde{\mathcal{A}}}},\alpha_n\rangle_{{H^{1+\varepsilon}(D)}}\|_{H^l[0,S]} < \delta$ for all $n = 1,...,N,$ which implies that $\|{{\mathcal{U}}}({{\mathcal{A}}}_k) - {{\mathcal{U}}}({{\widetilde{\mathcal{A}}}})\|_{{L^2(0,S,W^{1,2}_2(D))}} \leq \epsilon\cdot S$. [*Weak Continuity:*]{} The weak continuity follows directly from the proof of compactness, as we use the same framework, only replacing the compactness of $F$ by the weak equicontinuity of $\{F(s,\cdot) : ~s \in [0,S]\}$ on bounded subsets of $Q$. [*Weak Closedness:*]{} Just note that the set ${{\mathfrak{Q}}}$ is weakly closed and the operator ${{\mathcal{U}}}(\cdot)$ is weakly continuous. [**Proof of Proposition \[prop6\]:**]{} By Proposition \[prop4\], the family of operators $\{F(s,\cdot) \,: \,s \in [0,S]\}$ is Fréchet equi-differentiable. Take ${{\widetilde{\mathcal{A}}}},\mathcal{H} \in {H^l(0,S,H^{1+\varepsilon}(D))}$ such that ${{\widetilde{\mathcal{A}}}},{{\widetilde{\mathcal{A}}}}+\mathcal{H} \in {{\mathfrak{Q}}}$. Then, define the one-sided derivative of ${{\mathcal{U}}}(\cdot)$ at ${{\widetilde{\mathcal{A}}}}$ in the direction $\mathcal{H}$ as ${{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}})\mathcal{H} := \{s \mapsto \partial_a F(s,{{\tilde{a}}}(s))h(s)\}$, where for each $s \in [0,S]$, dropping $t$ to ease the notation, $\partial_a F(s,{{\tilde{a}}})h$ is the solution of $$-v_\tau + a(v_{yy}-v_y) + bv_y = h(u_{yy}-u_y)$$ with homogeneous boundary conditions and $u = u(s,a(s))$. From Proposition \[prop21\] we have the estimate $\|\partial_a F(s,{{\tilde{a}}}(s))h(s)\|_{{W^{1,2}_2(D)}} \leq C\|h(s)\|_{L^2(D)}\|u_{yy}(s,{{\tilde{a}}}(s))-u_{y}(s,{{\tilde{a}}}(s))\|_{L^2(D)}$. Note that $\|u_{yy}(s,a)-u_{y}(s,a)\|_{L^2(D)}$ is uniformly bounded in $[0,S]\times Q$. Thus, ${{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}})\mathcal{H}$ is well defined and $$\begin{gathered}
\left\| {{\mathcal{U}}}^\prime({{\widetilde{\mathcal{A}}}})\mathcal{H}\right\|^2_{{L^2(0,S,W^{1,2}_2(D))}} = \displaystyle\int^S_0\|\partial_a F(s,{{\tilde{a}}}(s))h(s)\|^2_{{W^{1,2}_2(D)}}ds \\ \leq C \displaystyle\int^S_0\|h(s)\|_{L^2(D)}\|u_{yy}(s,{{\tilde{a}}}(s))-u_{y}(s,{{\tilde{a}}}(s))\|_{L^2(D)}ds\\
\leq c\displaystyle\int^S_0\|h(s)\|^2_{L^2(D)}ds = c\|\mathcal{H}\|^2_{{H^l(0,S,H^{1+\varepsilon}(D))}}\end{gathered}$$ Therefore, $\mathcal{U}^\prime({{\widetilde{\mathcal{A}}}})$ can be extended to a bounded linear operator from the space ${H^l(0,S,H^{1+\varepsilon}(D))}$ into ${L^2(0,S,W^{1,2}_2(D))}$.
Let ${{\widetilde{\mathcal{A}}}},\mathcal{H},\mathcal{G} \in {H^l(0,S,H^{1+\varepsilon}(D))}$ be such that ${{\widetilde{\mathcal{A}}}},{{\widetilde{\mathcal{A}}}}+\mathcal{H},{{\widetilde{\mathcal{A}}}}+\mathcal{G}, {{\widetilde{\mathcal{A}}}}+\mathcal{H}+\mathcal{G}$ are in $Q$. Define $v:=u(s,a(s)+h(s)) - u(s,a(s))$. Thus, $$w := \partial_a u(s,a(s)+h(s))g(s) - \partial_a u(s,a(s))g(s)$$ satisfies $$-w_\tau + a(w_{yy} - w_y) = -g[v_{yy} - v_{y}] - h[(\partial_a u(s,a+h)g)_{yy} - (\partial_a u(s,a+h)g)_{y}],$$ with homogeneous boundary conditions (dropping the dependence on $s$). As above, we have $$\begin{gathered}
\left\|\mathcal{U}^\prime({{\widetilde{\mathcal{A}}}}+\mathcal{H})\mathcal{G} - \mathcal{U}^\prime({{\widetilde{\mathcal{A}}}}) \mathcal{G}\right\|^2_{{L^2(0,S,W^{1,2}_2(D))}} = \displaystyle\int^S_0\|w\|^2_{{W^{1,2}_2(D)}}ds\\
\leq c_1\displaystyle\int^S_0\|g(s)\|^2_{L^2(D)}\|v_{yy}(s,{{\tilde{a}}}(s)) - v_y(s,{{\tilde{a}}}(s))\|^2_{L^{2}(D)}ds\\
+ c_2 \displaystyle\int^S_0\|h(s)\|^2_{L^2(D)}\|\partial_a u(s,a(s)+h(s))g(s)\|^2_{{W^{1,2}_2(D)}}ds \\
\leq C\|\mathcal{H}\|^2_{{H^l(0,S,H^{1+\varepsilon}(D))}}\|\mathcal{G}\|^2_{{H^l(0,S,H^{1+\varepsilon}(D))}},\end{gathered}$$ which yields the Lipschitz condition.
References
==========
V. Albani. . PhD thesis, IMPA, 2012.
S. Anzengruber, B. Hofmann and P. Mathé. Regularization properties of the sequential discrepancy principle for [T]{}ikhonov regularization in [B]{}anach spaces. , 93(7), 1382–1400, 2013.
S. Anzengruber and R. Ramlau. Morozov’s discrepancy principle for [T]{}ikhonov-type functionals with nonlinear operators. , 26(2), February 2010.
D. Butnariu and A. Iusem. , volume 40 of [ *Applied Optimization*]{}. Kluwer Academic, 2000.
S. Crepey. Calibration of the local volatility in a generalized [B]{}lack-[S]{}choles model using [T]{}ikhonov regularization. , 34:1183–1206, 2003.
A. De Cezaro. . PhD thesis, IMPA, Rio de Janeiro, 2010.
A. De Cezaro, O. Scherzer, and J. P. Zubelli. Convex regularization of local volatility models from option prices: [C]{}onvergence analysis and rates. , 75(4):2398–2415, 2012.
A. De Cezaro and J. P. Zubelli. The tangential cone condition for the iterative calibration of local volatility surfaces. , 80:1–17, 2013.
B. Dupire. Pricing with a smile. , 7:18–20, 1994.
H. Egger and H. Engl. Tikhonov [R]{}egularization [A]{}pplied to the [I]{}nverse [P]{}roblem of [O]{}ption [P]{}ricing: [C]{}onvergence analysis and [R]{}ates. , 21:1027–1045, 2005.
I. Ekeland and R. Temam. . North Holland, Amsterdam, 1976.
H. Engl, M. Hanke, and A. Neubauer. , volume 375 of [ *Mathematics and its Applications*]{}. Kluwer Academic Publishers Group, Dordrecht, 1996.
L. C. Evans. , volume 19 of [*Graduate Studies in Mathematics*]{}. AMS, 1998.
J. Gatheral. . Wiley Finance. John Wiley & Sons, 2006.
H. Geman. John Wiley and Sons, 2005.
M. Haltmeier, O. Scherzer, and A. Leitão. Tikhonov and iterative regularization methods for embedded inverse problems. <http://www.industrial-geometry.at/uploads/emb_preprint.pdf>.

B. Hofmann and P. Mathé. Parameter choice in [B]{}anach space regularization under variational inequalities. , 28(10):104006, 17pp, 2012.
R. Korn and E. Korn. , volume 31 of [*Graduate Studies in Mathematics*]{}. AMS, 2001.
O. Ladyzenskaja, V. Solonnikov, and N. Ural’ceva. . Translations of Mathematical Monographs. AMS, 1968.
V. A. Morozov. On the solution of functional equations by the method of regularization. , 7:414–417, 1966.
M. Reed and B. Simon. . Academic Press, 1980.
O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier, and F. Lenzen. , volume 167 of [*Applied Mathematical Sciences*]{}. Springer, New York, 2008.
E. Somersalo and J. Kaipio. , volume 160 of [*Applied Mathematical Sciences*]{}. Springer, 2004.
T. Schuster, B. Kaltenbacher, B. Hofmann and K. S. Kazimierski. . De Gruyter, 2012.
A. Tikhonov and V. Arsenin. . Chapman and Hall, 1998.
P. Wilmott, S. Howison, and J. Dewynne. . Cambridge University Press, 1995.
K. Yosida. . Springer-Verlag, Heidelberg, 1995.
Instituto Nacional de Matemática Pura e Aplicada\
Estr. D. Castorina 110, 22460-320. Rio de Janeiro,\
Brazil.
E-mail: <[email protected]> (Vinicius Albani) and <[email protected]> (Jorge Zubelli).
[^1]: IMPA, Estr. D. Castorina 110, 22460-320 Rio de Janeiro, Brazil, [ [email protected]](mailto:[email protected])
[^2]: IMPA, Estr. D. Castorina 110, 22460-320 Rio de Janeiro, Brazil, [ [email protected]](mailto:[email protected])
CCNY-HEP-94-9
Ngee-Pong Chang ([email protected])\
Department of Physics\
City College & The Graduate School of City University of New York\
New York, N.Y. 10031\
\
September 28, 1994\
Introduction
============
I am very pleased to have the opportunity to present to this distinguished audience some recent developments concerning chiral symmetry of the early universe. Chiral restoration is so taken for granted that it has not even been raised by others at this astroparticle workshop.
As you will see, there is indeed a chiral symmetry at high $T$, but this ‘restored’ chirality is a morphosis of the old zero temperature chirality. The original NJL vacuum undergoes an interesting [*new phase transformation*]{} such that ${ < \bar{\psi} \, \psi
> }$ vanishes, but the vacuum continues to break our zero temperature chirality. The pion remains a Nambu-Goldstone boson, and actually acquires a halo while propagating through the early universe.
The pion has always played a ubiquitous role in strong interaction physics. In the conventional scenario, however, it has not been given any role at high $T$ but is ignominiously dismissed in the early universe, and condemned to dissociate in the early alphabet soup. The results reported here correctly restore the pion to its rightful place in the early universe.
The pion is a messenger of an underlying broken symmetry of the universe, [*viz.*]{} that of chirality, under the transformation $\psi ({\vec{x},t}) \rightarrow {\rm e}^{i \alpha {\mbox{$\gamma_{\stackrel{}{5}}$}}} \; \psi ({\vec{x},t})$. The chiral charge, ${Q_{_{5}}}$, which generates this transformation $${Q_{_{5}}}= \int d^3 x \;\psi^{\dagger} ({\vec{x},t})
{\mbox{$\gamma_{\stackrel{}{5}}$}}\psi ({\vec{x},t}) \label{eq-old-Q}$$ does not annihilate the vacuum. Instead, acting on the NJL vacuum$\cite{NJL}$, it generates, up to a normalization factor, the state for a [*zero momentum pion*]{}, $ \;{Q_{_{5}}}|vac> \;\propto\; | \vec{\pi} (\vec{p} = 0 ) \rangle $, where ($s= \mp 1$ for $L,R$ helicities) $$| vac > \;=\; \prod_{p,s} \left( {\cos{{\theta_{p}}}}\;-\; s \,
{\sin{{\theta_{p}}}}\, {a^{\dagger}_{p,s} b^{\dagger}_{-p,s}}\right) \; | 0 > \label{eq-NJL-vac}$$ Using the fact that ${Q_{_{5}}}$ is a constant of motion, it is easy to show directly that this zero momentum pion, ${Q_{_{5}}}|vac>$, has zero energy, thus confirming the status of the pion as a QCD Nambu-Goldstone boson$\cite{Goldstone}$.
A signature of this dynamical symmetry breaking is the familiar order parameter, ${ < \bar{\psi} \, \psi
> }$. For $T > T_c$, however, it is well known that ${ < \bar{\psi} \, \psi
> }$ vanishes. Chiral symmetry is said to be restored at $T_c$, but is it the [*same old chiral symmetry we knew at $T=0$ ?*]{}
High Temperature Effective Action
=================================
At high temperatures, lattice work as well as continuum field theory calculations show that the effective action indeed exhibits a manifest chiral symmetry. In thermal field theory, there is the famous BPFTW action$\cite{BP}$ that describes the propagation of a QCD fermion through a hot medium ($T^{'2} \equiv \frac{\textstyle g_r^2 }{\textstyle 3} T^2$, while the angular brackets denote an average over the orientation $\hat{n}$) $${\cal L}_{\rm eff} = - {\bar{\psi}}\gamma_{\mu} \partial^{\mu}
\psi
- \frac{T^{'2}}{2\;\;} \, {\bar{\psi}}\left<
\frac{\gamma_o - \vec{\gamma} \cdot \hat{n} }
{D_o + \hat{n} \cdot \vec{D} }
\right> \psi \label{eq-BP-action}$$ and we see the global chiral symmetry of the action. But the [*nonlocality*]{} of the action implies that the Noether charge for this new chirality is not the same as that in eq.(\[eq-old-Q\]).
The fermion propagator that results from this action shows a pseudo-Lorentz invariant particle pole of mass ${T'}$ (the so-called thermal mass). But, in addition, there is a pair of conjugate [*spacelike*]{} plasmon cuts in the $p_o$-plane that run just above and below the real axis$\cite{Chang-xc}$, from $p_o = -p$ to $p_o = p$. As a result, for $t>0$, say, the propagator function takes the form $$\begin{aligned}
< T( \psi(x) \bar{\psi}(0) ) >_{_{\beta}} &=& < \psi(x)
\bar{\psi} (0) > \\
&=& \;\;\; \int \frac{d^3 p}{ (2\pi)^3 } \;
{\rm e}^{i \vec{p} \cdot \vec{x}} \;\left\{
Z_{p} \frac{-i \vec{\gamma}
\cdot \vec{p} + i \gamma_o \omega }{2 \omega} \;
{\rm e}^{ - i \omega t} \right.\\
& & - \left. \frac{{T^{'2}}}{8\;\;} \;
\int_{-p}^{p} \,\frac{dp_o'}{p^3} \;
\frac{i \vec{\gamma} \cdot \vec{p} p_o'
- i \gamma_o p^2}{p^2 - p_o^2 + {T^{'2}}} \;
{\rm e}^{- i p_o' t} \right\}
+ O (T'^4) \label{eq-spacelike-cut}\end{aligned}$$
In a recent study of the spacetime quantization of the BPFTW action$\cite{Chang-bp-local}$, I have shown that the spacelike cuts dictate a new thermal vacuum of the type $$| vac' > \;=\; \prod_{p,s} \left( {\cos{{\theta_{p}}}}\;-\; i\, s \,
{\sin{{\theta_{p}}}}\, {a^{\dagger}_{p,s} b^{\dagger}_{-p,s}}\right) \; | 0 > \label{eq-new-vac}$$ The $90^{o}$ phase here in the generalized NJL vacuum is the reason why ${ < \bar{\psi} \, \psi
> }$ vanishes for $T \geq
T_c$.
The quantization of a nonlocal action is of course a technical matter. Suffice it here to say that the quantization has been formulated in terms of auxiliary fields so that the resulting action is local. In this context, the pseudo-Lorentz particle pole is described in terms of the massive canonical Dirac field, $\Psi$, and the spacelike cuts are associated with the auxiliary fields, which are functions of $\Psi$. This formulation allows for a systematic expansion of the $\psi$ field in terms of the massive canonical Dirac field, $\Psi$. Let the $t=0$ expansion for the original massless $\psi$ field read $$\psi (\vec{x}, 0) = \frac{1}{\sqrt{V}} \sum_{p}
\; {\rm e}^{i \vec{p} \cdot \vec{x}}
\left( \begin{array}{l}
\chi_{_{p,L}} a^{}_{p,L} \;+\;
\chi_{_{p,R}} b^{\dagger}_{-p,R} \\
\chi_{_{p,R}} a^{}_{p,R} \;-\;
\chi_{_{p,L}} b^{\dagger}_{-p,L}
\end{array} \right)$$ with a corresponding canonical expansion for the massive $\Psi$, then we find $$\begin{aligned}
{a^{}_{p,s}}&=& {A^{}_{p,s}}\;-\; i \;s\; \frac{{T'}}{2 p}
{B^{\dagger}_{-p,s}}+ O({T'}{}^2 ) \label{eq-aps-Aps-1} \\
b^{}_{p,s} &=& B^{}_{p,s} \;+\; i \;s\; \frac{{T'}}{2 p}
A^{\dagger}_{-p,s}
+ O({T'}{}^2) \label{eq-bps-Bps-1}\end{aligned}$$ The $O({T'})$ terms in the Bogoliubov transformation imply the new thermal vacuum of eq.(\[eq-new-vac\]).
The chiral charge at high $T$ is given by $$Q_{5}^{\beta} = - \frac{1}{2} \; \sum_{p,s}\; s \;
\left(
A^{\dagger}_{p,s} A^{}_{p,s} + B^{\dagger}_{-p,s}
B^{}_{p,s}
\right)$$ so that it clearly annihilates the new thermal vacuum, in direct contrast with the $T=0$ Noether charge $$Q_{_{5}}
= - \frac{1}{2}
\sum_{p,s}\, s \; \left( a^{\dagger}_{p,s} a^{}_{p,s} +
b^{\dagger}_{-p,s} b^{}_{-p,s}
\right) \label{Q5}$$ which clearly fails to annihilate the vacuum at high $T$.
${ < \bar{\psi} \, \psi
> }$ is an Incomplete Order Parameter
=================================================
The traditional order parameter ${ < \bar{\psi} \, \psi
> }$ cannot by itself give a full description of the nature of chiral symmetry breaking. The operator $\bar{\psi} \psi$ belongs to a non-Abelian chirality algebra$\cite{Chang-chiralg}$, $SU(2N_f)_{p} \otimes SU(2N_f)_{p}$. The original chirally broken ground state may be written as $ |vac> = \prod_{p} {\rm e}^{i X_{2p} {\theta_{p}}}\; |0>$, where $X_{2p}$ is an element of the algebra, while the new thermal vacuum is generated by a different element, $Y_{2p}$.
Our results here suggest the study of a new class of nonlocal order parameters, $$- \frac{i}{\pi} \int d^3 x \int_{-\infty}^{\infty} dt'
< \bar{\psi} ({\vec{x},t}) \psi( \vec{x}, t') > + c.c.$$ which if nonvanishing would indicate the continued breaking of chiral symmetry. The integration over $t'$ projects away the usual timelike spectrum of the operator $\psi$, and probes directly the properties of the spacelike cut. In our perturbative study here, this order parameter indeed is nonvanishing, being given by $ - 2 \sum_{p} \frac{{T'}}{p^2}$, analogous to the familiar expression for ${ < \bar{\psi} \, \psi
> }$ at $T=0$, given by $ - 2 \sum_{p} \frac{M}{\sqrt{p^2 + M^2}}$, where $M$ is the mass gap parameter.
Pion halo in the Sky
====================
The pion we know at zero temperature is not massless, but has a mass of $135 \;MeV$. This is because of electroweak breakdown, giving rise to a primordial quark mass at the tree level. At very high $T$, when electroweak symmetry is restored, we have the interesting new possibility that the pion will fully manifest its Nambu-Goldstone nature and remain physically massless.$\cite{Chang-QCD}$
The pion is described by an interpolating field operator, $ \sim i \bar{\psi} {\mbox{$\gamma_{\stackrel{}{5}}$}}T^{a} \; \psi$, which does not know about temperature. It is the vacuum that depends on $T$. The state vector for a zero momentum pion at high $T$ may be obtained from the thermal vacuum by the action ${Q_{_{5}}}^{a} | vac' > \;\propto \; | \pi^{a} ( \vec{p} = 0 ) \rangle $. This pion now has the property that even though it is massless, it can acquire a [*screening mass*]{} proportional to $T$. This is the pion mass that has been measured on the lattice at high $T$.
As a result, the pion propagates in the early universe with a halo. The retarded function for the pion shows that the signal propagates along the light cone, with an additional exponentially damped component coming from the past history of the source. $$D_{\rm ret} ({\vec{x},t}) = \theta (-t) \left\{ \delta(t^2 - r^2)
+ \frac{{T'}}{r} \theta(t^2 - r^2)
\left[ {\rm e}^{-{T'}| t-r| }
+ {\rm e}^{-{T'}| t+r| } \right] \right\}$$ The screening mass leads to an accompanying modulator signal that ‘hugs’ the light cone, with a screening length $\propto 1/T$.
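As a rough numerical illustration (arbitrary units, with $T'$ set to unity; not part of the original discussion), the sketch below evaluates the modulator multiplying the light-cone signal and extracts the distance over which it drops by a factor $e$, recovering a screening length of order $1/T'$.

```python
# Halo term of the retarded function: (T'/r)[exp(-T'|t-r|) + exp(-T'|t+r|)]
# inside the light cone. Units are arbitrary; T' is set to 1 for illustration.
import numpy as np

Tp = 1.0
t = 10.0 / Tp                       # observation time, well after the source
r = np.linspace(0.05, 1.2 * t, 600)
halo = (Tp / r) * (np.exp(-Tp * np.abs(t - r)) + np.exp(-Tp * np.abs(t + r)))
halo[r > t] = 0.0                   # theta(t^2 - r^2): support inside the light cone

peak = halo.max()                          # the halo peaks at the light cone r ~ t
r_fold = r[halo >= peak / np.e].min()      # innermost radius within 1/e of the peak
print(t - r_fold)                          # ~ 1/T', the screening length
```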
What are the cosmological consequences of a pion in the alphabet soup of the early universe?
I am not an expert, and part of my purpose in coming to this workshop is to learn from you. But one thing I know. In the usual scenario, the pion after chiral restoration will have acquired mass $\propto T$, and will quickly dissociate into a constituent quark-antiquark pair. According to our new understanding, however, the Nambu-Goldstone theorem forces the pion to remain a strictly massless bound state at high $T$, and so the pion will contribute to the partition function of the early universe.
Fortunately, the pion does not contribute so many degrees of freedom as to upset the usual picture of the cooling of the universe. But I leave it to experts to help figure out the subtle changes there must surely be in the phase transitions of the early universe.
In the beginning there was light, and quarks, and gluons, to which we must now add the pions with halo.
References
==========
Y. Nambu and G. Jona-Lasinio, Phys. Rev. [**122**]{}, 345 (1961); [*ibid*]{} [**124**]{}, 246 (1961).

J. Goldstone, Nuovo Cimento [**19**]{}, 154 (1961).

J.C. Taylor and S.M.H. Wong, ; E. Braaten and R. Pisarski, ; J. Frenkel and J.C. Taylor, .

H.A. Weldon, Phys. Rev. [**D40**]{}, 2410 (1989); N.P. Chang, Phys. Rev. [**D 50**]{}, 5403 (1994).

N.P. Chang, [*Spacetime Quantization of BPFTW Action: Spacelike Plasmon Cut & New Phase of the Thermal Vacuum*]{}, CCNY-HEP-94-8, Sept 21, 1994.

N.P. Chang, [*Chirality Algebra*]{}, CCNY-HEP-94-9, Sept 28, 1994.
L.N. Chang, N.P. Chang, .
---
author:
- 'T. Vaillant'
- 'J. Laskar'
- 'N. Rambaux'
- 'M. Gastineau'
bibliography:
- 'article.bib'
date: 'Received ; accepted '
title: 'Long-term orbital and rotational motions of Ceres and Vesta'
---
[The dwarf planet Ceres and the asteroid Vesta have been studied by the Dawn space mission. They are the two heaviest bodies of the main asteroid belt and have different characteristics. Notably, Vesta appears to be dry and inactive with two large basins at its south pole. Ceres is an ice-rich body with signs of cryovolcanic activity.]{} [The aim of this paper is to determine the obliquity variations of Ceres and Vesta and to study their rotational stability.]{} [The orbital and rotational motions have been integrated by symplectic integration. The rotational stability has been studied by integrating secular equations and by computing the diffusion of the precession frequency.]{} [The obliquity variations of Ceres over $[-20:0]{\,\mathrm{Myr}}$ are between $2$ and $20\degree$ and the obliquity variations of Vesta are between $21$ and $45\degree$. The two giant impacts suffered by Vesta modified the precession constant and could have put Vesta closer to the resonance with the orbital frequency $2s_6-s_V$. Given the uncertainty on the polar moment of inertia, the present Vesta could be in this resonance, where the obliquity variations can vary between $17$ and $48\degree$.]{} [Although Ceres and Vesta have precession frequencies close to the secular orbital frequencies of the inner planets, their long-term rotations are relatively stable. The perturbations of Jupiter and Saturn dominate the secular orbital dynamics of Ceres and Vesta and the perturbations of the inner planets are much weaker. The secular resonances with the inner planets also have smaller widths and do not overlap contrary to the case of the inner planets.]{}
Introduction
============
Ceres and Vesta are the two heaviest bodies of the main asteroid belt. They have been studied by the Dawn space mission, which made it possible to determine notably their shape, gravity field, surface composition, spin rate and orientation [@russell2012; @russell2016]. However, the precession frequency of their spin axes has not been determined and there are still uncertainties about their internal structures [e.g. @park2014; @ermakov2014; @park2016; @ermakov2017b; @konopliv2018]. No satellites have been detected around these bodies from observations with the Hubble Space Telescope and the Dawn space mission [@mcfadden2012; @mcfadden2015; @demario2016].
The long-term rotation of the bodies of the solar system can be studied with secular equations [@kinoshita1977; @laskar1986; @laskarrobutel1993] or with a symplectic integration of the orbital and rotational motions [@toumawisdom1994]. Secular equations are averaged over the mean longitude and over the proper rotation, which is generally fast for the bodies of the solar system, and their integration is much faster. They were used by [@laskarjoutelrobutel1993] and [@laskarrobutel1993] to study the stability of the planets of the solar system.
The method of [@laskarrobutel1993] has been applied by [@skoglov1996] to study the stability of the rotation and the variations of the obliquity for Ceres and nine asteroids including Vesta. However, at that time, the initial conditions for the spin axes were not precisely determined and knowledge of the internal structures was insufficient to constrain the precession frequencies. [@skoglov1996] assumed that the bodies are homogeneous and concluded that their long-term rotations are relatively stable. By using secular equations and a secular model for the orbital motion, [@bills2017] determined the obliquity variations of Ceres. [@ermakov2017a] obtained the obliquity variations of Ceres for different polar moments of inertia by performing a symplectic integration of the rotational and orbital motions.
Asteroid impacts and close encounters can influence the long-term rotation of bodies in the main asteroid belt. Vesta has suffered two giant impacts [@marchi2012; @schenk2012], which have significantly modified its shape and its spin rate [@fu2014; @ermakov2014]. [@laskar2011a] obtained an orbital solution of Ceres and Vesta, called La2010, which takes into account mutual interactions between bodies of the main asteroid belt, and [@laskar2011b] showed that close encounters in the solution La2010 are the cause of the chaotic nature of the orbits of Ceres and Vesta. These close encounters can affect their long-term rotation.
For Ceres, the obliquity drives the ice distribution on and under the surface. Ceres possesses cold trap regions, which do not receive sunlight during a full orbit. This prevents the sublimation of the ice, which can accumulate [@platz2016]. The surface area of these cold traps depends on the value of the obliquity. [@ermakov2017a] determined that the obliquity of Ceres varies between $2$ and $20\degree$ and that the cold trap areas for an obliquity of $20\degree$ correspond to bright crater floor deposits that are likely water ice deposits. [@platz2016] determined that one bright deposit near a shadowed crater is water ice. In addition, the Dawn mission gave evidence of the presence of ice under the surface of Ceres from the nuclear spectroscopy instrument [@prettyman2017] and from the morphology of the terrains [@schmidt2017]. The ice distribution and the burial depth with respect to the latitude depend on the history of the obliquity [@schorghofer2008; @schorghofer2016]. For Vesta, studies of the long-term evolution of the obliquity have not been performed with the initial conditions of the spin axis and the physical characteristics determined by the Dawn space mission.
The main purpose of this article is to investigate the long-term evolution of the rotational motions of Ceres and Vesta. First, we explore the obliquity variations of these bodies for a range of possible precession constants obtained from the data of the Dawn mission. Then, the stability of their spin axes is studied.
In this paper, we consider for the orbital motion the solutions [La2011]{} and La2010 [@laskar2011a], which do not include the rotation of Ceres and Vesta. To compute the obliquity variations, we follow the symplectic method of [@farago2009] by averaging the fast proper rotation. This method avoids integrating the fast rotation and allows a larger time step, which reduces the computation time. We call [Ceres2017]{} the long-term rotational solution obtained. The orbital and rotational equations are integrated simultaneously in a symplectic way and the effects of the rotation on the orbital motions are considered. We consider the close encounters of Ceres and Vesta with the bodies of the main asteroid belt used in [@laskar2011b] and estimate with a statistical approach their effects on the long-term rotation of Ceres and Vesta. In order to determine the secular frequencies and identify the possible secular resonances affecting the orbital and rotational motions, the solutions are studied by the method of frequency map analysis [@laskar1988; @laskar1990; @laskar1992; @laskar1993; @laskar2003]. Moreover, to study the effects of the close secular orbital resonances, we compute a secular Hamiltonian with the method of [@laskarrobutel1995]. We obtain a secular model, which reproduces the secular evolution of the solution [La2011]{} and allows us to investigate the effects of the secular resonances.
The stability of the spin axes is studied by using secular equations with a secular orbital solution obtained from the frequency analysis of the solution [La2011]{}. We verify beforehand that they allow us to reproduce the obliquity variations computed by the symplectic method and have the same stability properties. We study the stability of the rotation in the vicinity of the range of possible precession constants to identify the secular resonances between the orbital and rotational motions. Vesta has suffered two giant impacts which have changed its shape and its spin rate [@fu2014; @ermakov2014], and also its precession constant. We investigate whether this possible evolution of the precession constant changed the stability properties. Following the method of [@laskarrobutel1993], we finally construct a stability map of the spin axes of Ceres and Vesta.
In section \[SEC:methods\], we present the methods used in this paper to obtain the long-term rotation. In section \[SEC:prec\], we estimate the precession constants deduced from Dawn space mission and their possible variations during the history of Ceres and Vesta. In section \[SEC:orbobliquite\], we analyse the long-term solutions obtained for the orbital and rotational motions. In section \[SEC:secularmodels\], we study the effects of the orbital secular resonances with a secular Hamiltonian model. In section \[SEC:stab\], we study the stability of the rotation axes from the secular equations of the rotation.
Methods for the integration of the rotation\[SEC:methods\]
==========================================================
The spin rates of Ceres and Vesta are relatively fast (see section \[SEC:prec\]). We thus average the fast rotation using the method of [@farago2009] in order to integrate in a symplectic way the angular momentum of a rigid body.
When we need many integrations with different initial conditions or parameters, we use the secular equations from [@bouelaskar2006] in order to speed up the computation.
Symplectic integration of the angular momentum\[sec:oblisymp\]
--------------------------------------------------------------
We consider a planetary system of $n+1$ bodies with a central body $0$ and $n$ planetary bodies, where the body of index $1$ is a rigid body and the other planetary bodies are point masses. Vectors are denoted in bold. The Hamiltonian $H$ of the system is [@bouelaskar2006] $$H=H_{N}+H_{I,0}+\sum^{n}_{k=2}H_{I,k}+H_{E},$$ with $H_{N}$ the Hamiltonian of the $n+1$ point masses. The Hamiltonian $H_{E}$ of the free rigid body is $$H_{E}=\frac{\left(\textbf{G}.\textbf{I}\right)^{2}}{2A}+\frac{\left(\textbf{G}.\textbf{J}\right)^{2}}{2B}+\frac{\left(\textbf{G}.\textbf{K}\right)^{2}}{2C}.$$ ($\mathbf{I}$,$\mathbf{J}$,$\mathbf{K}$) is the basis associated with the principal axes of moments of inertia respectively $A$, $B$, $C$, where $A\leq B\leq C$, and $\mathbf{G}$ is the angular momentum of the rigid body. The Hamiltonians $H_{I,0}$ and $H_{I,k}$ are, respectively, the interactions of the central body $0$ and of the planetary body $k$ with the rigid body $1$, excluding the point-mass interactions; they are obtained with a development in Legendre polynomials [@bouelaskar2006] $$\begin{split}
H_{I,0} = & -\frac{\mathcal{G}m_{0}}{2r_{1}^{3}}\left[\left(B+C-2A\right)\left(\frac{\textbf{r}_{1}.\textbf{I}}{r_{1}}\right)^{2}+\left(A+C-2B\right)\left(\frac{\textbf{r}_{1}.\textbf{J}}{r_{1}}\right)^{2} \right. \\
& \left. +\left(A+B-2C\right)\left(\frac{\textbf{r}_{1}.\textbf{K}}{r_{1}}\right)^{2}\right], \label{eq:hamI0}
\end{split}$$ $$\begin{split}
H_{I,k} = & -\frac{\mathcal{G}m_{k}}{2r_{1,k}^{3}}\left[\left(B+C-2A\right)\left(\frac{\textbf{r}_{1,k}.\textbf{I}}{r_{1,k}}\right)^{2}+\left(A+C-2B\right)\left(\frac{\textbf{r}_{1,k}.\textbf{J}}{r_{1,k}}\right)^{2} \right. \\
& \left. +\left(A+B-2C\right)\left(\frac{\textbf{r}_{1,k}.\textbf{K}}{r_{1,k}}\right)^{2}\right],
\end{split}$$ with ($\textbf{r}_k$,$\tilde{\textbf{r}}_k$) the heliocentric position and the conjugate momentum of the body $k$, $m_{k}$ the mass of the body $k$, $\textbf{r}_{1,k}=\textbf{r}_{1}-\textbf{r}_{k}$ and $\mathcal{G}$ the gravitational constant. By averaging over the fast Andoyer angles $g$, the angle of proper rotation, and $l$, the angle of precession of the polar axis $\textbf{K}$ around the angular momentum $\textbf{G}$ [@bouelaskar2006], $H_{E}$ becomes constant and the averaged total Hamiltonian $\mathcal{H}$ is $$\mathcal{H}=\left\langle H\right\rangle_{g,l}=H_{N}+\mathcal{H}_{I,0}+\sum^{n}_{k=2}\mathcal{H}_{I,k}, \label{eq:hamI0mean}$$ where $$\mathcal{H}_{I,0}=\left\langle H_{I,0} \right\rangle_{g,l}=-\frac{\mathcal{C}_{1}m_{0}}{r_{1}^{3}}\left(1-3\left(\frac{\textbf{r}_{1}.\textbf{w}}{r_{1}}\right)^{2}\right),$$ $$\mathcal{H}_{I,k}=\left\langle H_{I,k} \right\rangle_{g,l}=-\frac{\mathcal{C}_{1}m_{k}}{r_{1,k}^{3}}\left(1-3\left(\frac{\textbf{r}_{1,k}.\textbf{w}}{r_{1,k}}\right)^{2}\right),$$ with $$\textbf{w}=\frac{\textbf{G}}{G},$$ $$\mathcal{C}_{1}=\frac{\mathcal{G}}{2}\left(C-\frac{A+B}{2}\right)\left(1-\frac{3}{2}\sin^{2}J\right)\label{eq:C_1}$$ and $J$ the Andoyer angle between $\textbf{w}$ and $\mathbf{K}$. $G$ is the norm of the angular momentum $\mathbf{G}$.
The Hamiltonian $\mathcal{H}=H_{N}+\mathcal{H}_{I,0}+\sum^{n}_{k=2}\mathcal{H}_{I,k}$ can be split into several parts. The Hamiltonian $H_{N}$ of the $n+1$ point masses can be integrated with existing symplectic integrators [e.g. @wisdom1991; @laskar2001; @farres2013].
For a planetary system where a planet is located much closer to the central star than the other planets, [@farago2009] averaged its fast orbital motion to obtain a Hamiltonian of interaction between the orbital angular momentum of the closest planet and the other, more distant planets. Because the Hamiltonians $\mathcal{H}_{I,0}$ and $\mathcal{H}_{I,k}$ are analogous to this Hamiltonian, we can use the symplectic method developed by [@farago2009] in this case. We detail explicitly how this method can be applied here.
The Hamiltonian $\mathcal{H}_{I,0}$ gives the equations of motion [@bouelaskar2006]
$$\begin{aligned}
\dot{\textbf{r}}_{1} &= \textbf{0},\\
\dot{\tilde{\textbf{r}}}_{1} &= -\nabla_{\textbf{r}_{1}} \mathcal{H}_{I,0}
= -\frac{3\mathcal{C}_{1}m_{0}}{r_{1}^{5}}\left(\left(1-5\left(\frac{\textbf{r}_{1}.\textbf{w}}{r_{1}}\right)^{2}\right)\textbf{r}_{1}+2\left(\textbf{r}_{1}.\textbf{w}\right)\textbf{w}\right), \label{eq:eqdiffh0}\\
\dot{\textbf{w}} &= \frac{1}{G}\nabla_{\textbf{w}} \mathcal{H}_{I,0} \wedge \textbf{w}
= \frac{6\mathcal{C}_{1}m_{0}}{G r_{1}^{5}}\left(\textbf{r}_{1}.\textbf{w}\right)\textbf{r}_{1}\wedge\textbf{w}.\end{aligned}$$
$\textbf{r}_{1}$ is conserved and because of $\textbf{r}_{1}.\dot{\textbf{w}}=0$, $\textbf{r}_{1}.\textbf{w}$ is also constant. With the angular frequency $\Omega_{0}=6\mathcal{C}_{1}m_{0}\left(\textbf{r}_{1}.\textbf{w}\right)/(Gr_{1}^{4})$ as in [@farago2009], the solution for $\textbf{w}$ is $$\textbf{w}\left(t\right)=R_{\textbf{r}_{1}}\left(\Omega_{0} t\right)\textbf{w}\left(0\right),$$ where $R_{\textbf{x}}\left(\theta\right)$ is the rotation matrix of angle $\theta$ around the vector $\textbf{x}$. The solution for $\tilde{\textbf{r}}_{1}$ is [@farago2009] $$\begin{split}
\tilde{\textbf{r}}_{1}\left(t\right)= & \tilde{\textbf{r}}_{1}\left(0\right)-\frac{3\mathcal{C}_{1}m_{0}}{r_{1}^{5}}\left(\left(1-3\left(\frac{\textbf{r}_1.\textbf{w}}{r_1}\right)^{2}\right)t\textbf{r}_{1} \right.\\
& \left.+\frac{2\textbf{r}_1.\textbf{w}}{\Omega_{0}r_1}\left(\textbf{w}\left(t\right)-\textbf{w}\left(0\right)\right)\times\textbf{r}_{1}\right).
\end{split}$$ We have then an exact solution for the Hamiltonian $\mathcal{H}_{I,0}$.
The equations of motion for the Hamiltonian $\mathcal{H}_{I,k}$ are similar. However, this Hamiltonian modifies the variables of the body $k$. The equations are then
$$\begin{aligned}
\dot{\textbf{r}}_{1} &= \textbf{0},\\
\dot{\tilde{\textbf{r}}}_{1} &= -\frac{3\mathcal{C}_{1}m_{k}}{r_{1,k}^{5}}\left(\left(1-5\left(\frac{\textbf{r}_{1,k}.\textbf{w}}{r_{1,k}}\right)^{2}\right)\textbf{r}_{1,k}+2\left(\textbf{r}_{1,k}.\textbf{w}\right)\textbf{w}\right),\\
\dot{\textbf{r}}_{k} &= \textbf{0}, \label{eq:eqdiffhI}\\
\dot{\tilde{\textbf{r}}}_{k} &= \frac{3\mathcal{C}_{1}m_{k}}{r_{1,k}^{5}}\left(\left(1-5\left(\frac{\textbf{r}_{1,k}.\textbf{w}}{r_{1,k}}\right)^{2}\right)\textbf{r}_{1,k}+2\left(\textbf{r}_{1,k}.\textbf{w}\right)\textbf{w}\right),\\
\dot{\textbf{w}} &= \frac{6\mathcal{C}_{1}m_{k}}{G r_{1,k}^{5}}\left(\textbf{r}_{1,k}.\textbf{w}\right)\textbf{r}_{1,k}\wedge\textbf{w},\end{aligned}$$
which have the solution
$$\begin{aligned}
\tilde{\textbf{r}}_{1}\left(t\right) &= \tilde{\textbf{r}}_{1}\left(0\right)-\frac{3\mathcal{C}_{1}m_{k}}{r_{1,k}^{5}}\left(\left(1-3\left(\frac{\textbf{r}_{1,k}.\textbf{w}}{r_{1,k}}\right)^{2}\right)t\,\textbf{r}_{1,k}\right.\\
&\quad\left.+\frac{2\,\textbf{r}_{1,k}.\textbf{w}}{\Omega_{k}r_{1,k}}\left(\textbf{w}\left(t\right)-\textbf{w}\left(0\right)\right)\times\textbf{r}_{1,k}\right),\\
\tilde{\textbf{r}}_{k}\left(t\right) &= \tilde{\textbf{r}}_{k}\left(0\right)+\frac{3\mathcal{C}_{1}m_{k}}{r_{1,k}^{5}}\left(\left(1-3\left(\frac{\textbf{r}_{1,k}.\textbf{w}}{r_{1,k}}\right)^{2}\right)t\,\textbf{r}_{1,k}\right.\\
&\quad\left.+\frac{2\,\textbf{r}_{1,k}.\textbf{w}}{\Omega_{k}r_{1,k}}\left(\textbf{w}\left(t\right)-\textbf{w}\left(0\right)\right)\times\textbf{r}_{1,k}\right),\\
\textbf{w}\left(t\right) &= R_{\textbf{r}_{1,k}}\left(\Omega_{k}t\right)\textbf{w}\left(0\right),\end{aligned}$$
with the angular frequency $\Omega_{k}=6\mathcal{C}_{1}m_{k}\left(\textbf{r}_{1,k}.\textbf{w}\right)/(Gr_{1,k}^{4})$.
The symplectic scheme for the total Hamiltonian is [@farago2009] $$S\left(t\right)=e^{\frac{t}{2}L_{\mathcal{H}_{I,0}}}e^{\frac{t}{2} L_{\mathcal{H}_{I,2}}}\ldots e^{\frac{t}{2} L_{\mathcal{H}_{I,n}}} e^{t L_{H_{N}}}e^{\frac{t}{2} L_{\mathcal{H}_{I,n}}}\ldots e^{\frac{t}{2} L_{\mathcal{H}_{I,2}}}e^{\frac{t}{2}L_{\mathcal{H}_{I,0}}},$$ where $L_{X}$ represents the Lie derivative of a Hamiltonian $X$. This scheme gives a symplectic solution for the long-term evolution of the angular momentum of the rigid body.
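As an illustration (toy values, arbitrary units; not the code used for the solution [Ceres2017]{}), the kick $e^{\frac{t}{2}L_{\mathcal{H}_{I,0}}}$ acting on the spin direction $\textbf{w}$ amounts to the exact rotation about $\textbf{r}_1$ given above; a minimal sketch of this step, ignoring the back-reaction on $\tilde{\textbf{r}}_1$ (the possibility of neglecting it is discussed just below), is the following.

```python
# Exact rotation of the spin direction w about r1 by Omega_0 * tau (toy values).
import numpy as np

def rotate(v, axis, angle):
    """Rodrigues rotation of vector v about the (not necessarily unit) axis."""
    u = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(u, v) * np.sin(angle)
            + u * np.dot(u, v) * (1.0 - np.cos(angle)))

def kick_w(w, r1, C1, m0, G, tau):
    """Update of w under the averaged interaction with the Sun over a time tau."""
    r = np.linalg.norm(r1)
    omega0 = 6.0 * C1 * m0 * np.dot(r1, w) / (G * r ** 4)
    return rotate(w, r1, omega0 * tau)

w = np.array([0.0, np.sin(0.1), np.cos(0.1)])   # spin direction, small tilt
r1 = np.array([1.0, 0.0, 0.0])                  # heliocentric position (toy)
w_new = kick_w(w, r1, C1=1e-4, m0=1.0, G=1.0, tau=0.01)
print(np.linalg.norm(w_new), np.dot(r1, w_new) - np.dot(r1, w))  # both conserved
```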
It is possible to neglect the effects of the rotation on the orbital motion by setting $\dot{\tilde{\textbf{r}}}_{1}=0$ and $\dot{\tilde{\textbf{r}}}_{k}=0$ in Eqs. (\[eq:eqdiffh0\]) and (\[eq:eqdiffhI\]). This makes it possible to obtain multiple solutions for the long-term rotation with different initial conditions for the angular momentum while computing only one orbital evolution. In this case, the total energy is still conserved, but the total angular momentum is not.
By averaging over the fast rotation of Ceres and Vesta, this method is used in section \[SEC:orbobliquite\] to obtain the long-term evolution of the angular momenta of Ceres and Vesta where the torques are exerted by the Sun and the planets.
Secular equations for the angular momentum\[sec:oblisec\]
---------------------------------------------------------
In order to speed up the computation, we average the Hamiltonian (Eq. (\[eq:hamI0mean\])) over the mean longitude of the rigid body. By considering only the torque exerted by the Sun, we obtain the secular Hamiltonian for the rotation axis [@bouelaskar2006] $$H=-\frac{G\alpha}{2\left(1-e^2\right)^{3/2}} \left(\bf{w}.\bf{n}\right)^2,$$ with $G=C\omega$ for the spin rate $\omega$. The motion of the angular momentum is forced by a secular orbital solution, from which the normal to the orbit $\bf{n}$ and the eccentricity $e$ are computed. The precession constant $\alpha$ can be written $$\alpha=\frac{3}{2}\frac{\mathcal{G}M_{\odot}}{C \omega a^{3}}\left(C-\frac{A+B}{2}\right)\left(1-\frac{3}{2}\sin^{2}J\right)$$ with $a$ the semi-major axis and ${M_{\odot}}$ the mass of the Sun. The moments of inertia can be normalized by $$\overline{A}=\frac{A}{mR^2},\ \overline{B}=\frac{B}{mR^2},\ {\overline{C}}=\frac{C}{mR^2},\ {\overline{I}}=\frac{I}{mR^2},\label{eq:Cnorm}$$ with $I=(A+B+C)/3$ the mean moment of inertia, $m$ the mass of the solid body and $R$ the reference radius used for the determination of the gravity field. The gravitational flattening $J_2$ depends on the normalized moments of inertia through $$J_2={\overline{C}}-\frac{\overline{A}+\overline{B}}{2}.$$ The precession constant can then be written $$\alpha=\frac{3}{2}\frac{\mathcal{G}M_{\odot}J_2}{{\overline{C}}\omega a^{3}}\left(1-\frac{3}{2}\sin^{2}J\right).\label{eq:alpha_const}$$ The secular equation for the angular momentum $\bf{w}$ is then [e.g. @colombo1966; @bouelaskar2006] $$\dot{\bf{w}} =\frac{\alpha}{\left(1-e^2\right)^{3/2}} \left(\bf{w}.\bf{n}\right) \bf{w}\wedge\bf{n}.\label{eq:integsec}$$ The angle between the normal to the orbit, $\mathbf{n}$, and the angular momentum, $\mathbf{w}$, is the obliquity $\epsilon$.
Eq. (\[eq:integsec\]) will be used in section \[SEC:stab\] to study the stability of the spin axes of Ceres and Vesta.
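As a simple check (for a fixed, circular orbit and a Ceres-like precession constant of about $6.4{\arcsecond\per\mathrm{yr}}$, derived in the next section; this is only an illustration, not the secular integration used in section \[SEC:stab\]), Eq. (\[eq:integsec\]) makes $\textbf{w}$ precess around $\textbf{n}$ at the rate $\alpha\cos\epsilon$ while the obliquity stays constant.

```python
# Integrate dw/dt = alpha/(1-e^2)^(3/2) (w.n) w ^ n for a fixed circular orbit
# with RK4, and compare the precession rate with alpha * cos(eps).
import numpy as np

arcsec = np.pi / (180.0 * 3600.0)
alpha = 6.40 * arcsec                 # rad/yr, Ceres-like value
e = 0.0
n_orb = np.array([0.0, 0.0, 1.0])     # fixed orbit normal
eps0 = np.radians(4.0)                # initial obliquity, Ceres-like
w = np.array([np.sin(eps0), 0.0, np.cos(eps0)])

def dwdt(w):
    return alpha / (1.0 - e ** 2) ** 1.5 * np.dot(w, n_orb) * np.cross(w, n_orb)

dt, T = 50.0, 1.0e5                   # 100 kyr in steps of 50 yr
for _ in range(int(T / dt)):
    k1 = dwdt(w); k2 = dwdt(w + 0.5 * dt * k1)
    k3 = dwdt(w + 0.5 * dt * k2); k4 = dwdt(w + dt * k3)
    w = w + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

print(np.degrees(np.arccos(np.dot(w, n_orb))))            # obliquity stays ~4 deg
print(-np.arctan2(w[1], w[0]) / T, alpha * np.cos(eps0))  # both ~3.1e-5 rad/yr
```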
Precession constants and initial conditions\[SEC:prec\]
=======================================================
To determine the quantity $\mathcal{C}_1$ (Eq. (\[eq:C\_1\])) and the precession constant $\alpha$ (Eq. (\[eq:alpha\_const\])), the polar moment of inertia $C$, the spin rate $\omega$, the gravitational flattening $J_2$ and the Andoyer angle $J$ are necessary.
Estimation of the Andoyer angle $J$
-----------------------------------
The Andoyer angle $J$ is the angle between the angular momentum $\mathbf{G}$ and the polar axis $\mathbf{K}$.
The Dawn space mission determined the principal axes of Ceres and Vesta and measured the gravitational field in these frames. To assess the precision of the determination of the principal axes, we estimate the angle $\gamma$ between the polar axis and its determination by Dawn with the expression $$\gamma\approx\sqrt{C_{21}^2+S_{21}^2}/{\overline{C}}. \label{eq:angleJ}$$ The spherical harmonic gravity coefficients of second degree and first order, $C_{21}$ and $S_{21}$, were determined with their uncertainties by Dawn for Ceres [@park2016] and Vesta [@konopliv2014]. Because $C_{21}$ and $S_{21}$ are smaller than their uncertainties and than the other coefficients of second degree for both bodies, [@park2016] and [@konopliv2014] deduce that this angle is negligible. By replacing $C_{21}$ and $S_{21}$ by their uncertainties in Eq. (\[eq:angleJ\]) and ${\overline{C}}$ by the values of sections \[sec:Cceres\] and \[sec:Cvesta\], the angle $\gamma$ is about $7\times 10^{-5}\,\degree$ and $1\times 10^{-6}\,\degree$ for Ceres and Vesta, respectively.
In the basis ($\mathbf{I}$,$\mathbf{J}$,$\mathbf{K}$) associated with the principal axes of moments of inertia, the rotational vector $\mathbf{\Omega}$ can be expressed as $$\mathbf{\Omega}= \omega\begin{pmatrix}
m_1 \\
m_2 \\
1+m_3
\end{pmatrix},$$ where $m_1$, $m_2$ describe the polar motion and $m_3$ the length-of-day variations, which were estimated by [@rambaux2011] and [@rambaux2013] for Ceres and Vesta, respectively. The amplitude of the polar motion is about $0.4\,{\mathrm{mas}}$ for Ceres and $0.8\,{\mathrm{mas}}$ for Vesta. [@rambaux2011] assumed that Ceres is axisymmetric and obtained an amplitude of about $8\times 10^{-4}\,{\mathrm{mas}}$ for $m_3$. [@rambaux2013] considered a triaxial shape for Vesta and obtained an amplitude for $m_3$ of about $0.1\,{\mathrm{mas}}$. The angle between the rotational vector $\mathbf{\Omega}$ and the polar axis is about $1\times 10^{-7}\,\degree$ for Ceres and $2\times 10^{-7}\,\degree$ for Vesta and the polar motion is then negligible.
The rotational vector can also be approximated by $\mathbf{\Omega}=\omega \mathbf{K}$ and $\mathbf{G}$ satisfies $\mathbf{G}=C\omega\mathbf{K}$. Therefore we can neglect $\sin^2J$ in Eq. (\[eq:alpha\_const\]) and the precession constant becomes $$\alpha=\frac{3}{2}\frac{\mathcal{G}M_{\odot}J_2}{{\overline{C}}\omega a^{3}}\label{eq:prec_const}.$$
Precession constant of Ceres
----------------------------
### Physical parameters
From the Dawn data, [@park2016] determined $J_2$ $$J_2=2.6499\times10^{-2}\pm 8.4\times10^{-7} \label{eq:J2ceres}$$ for the reference radius $$R=470\,{\mathrm{km}}.$$ [@park2016] also refined the spin rate to $$\omega=952.1532\pm0.0001{\mathrm{\degree/day}}.$$
### Polar moment of inertia\[sec:Cceres\]
![\[fig:interceres\]Normalized mean moment of inertia ${\overline{I}}$, assuming a spherical shape, as a function of the density and radius of the mantle. The purple line represents the numerical solutions of Clairaut's equations which reproduce the observed gravitational flattening $J_2$ [@park2016].](figures/figure1.pdf){width="9cm"}
The polar moment of inertia can be estimated from a model of internal structure. [@park2016] proposed a set of internal models with two layers by numerically integrating the Clairaut’s equations of hydrostatic equilibrium. The mantle of density $2460-2900\,{\mathrm{kg\,m^{-3}}}$ has a composition similar to the ones of different types of chondrites and the outer shell of density $1680-1950\,{\mathrm{kg\,m^{-3}}}$ is a blend of volatiles, silicates and salts. [@ermakov2017b] used the gravity field and the shape obtained by the Dawn space mission and took into account the effect of the isostasy to constrain the internal structure of Ceres. Their favoured model has a crust of density $1287^{-87}_{+70}\,{\mathrm{kg\,m^{-3}}}$ and of thickness $41.0^{-4.7}_{+3.2}\,{\mathrm{km}}$ and a mantle of density $2434^{-8}_{+5}\,{\mathrm{kg\,m^{-3}}}$ and of radius $428.7^{+4.7}_{-3.2}\,{\mathrm{km}}$.
In figure \[fig:interceres\], the purple curve represents the numerical solutions of Clairaut's equations, which reproduce the observed gravitational flattening $J_2$ [@park2016]. In figure \[fig:interceres\], the normalized mean moment of inertia ${\overline{I}}$ is computed by assuming a spherical shape as in Eq. (1) of [@rambaux2011]. For a mantle density of $2460-2900\,{\mathrm{kg\,m^{-3}}}$, we have ${\overline{I}}=0.375$. The normalized polar moment of inertia ${\overline{C}}$ can be deduced from $${\overline{C}}=\frac{2J_2}{3}+{\overline{I}}.\label{eq:CIJ}$$ With Eq. (\[eq:J2ceres\]) and ${\overline{I}}=0.375$, we find ${\overline{C}}=0.393$[^1].
The gravitational flattening possesses a non-hydrostatic component $J_{2}^{nh}$, which causes an uncertainty in ${\overline{C}}$. [@park2016] estimated $J_{2}^{nh}$ with $$\frac{J_{2}^{nh}}{J_2}=\frac{\sqrt{{\overline{C}}_{22}^{2}+\overline{S}_{22}^{2}}}{\overline{J}_2}$$ for the normalized spherical harmonic gravity coefficients of second degree and second order ${\overline{C}}_{22}$ and $\overline{S}_{22}$, with $\overline{J}_2$ the normalized value of $J_2$. By differentiating the Radau-Darwin relation [e.g. @rambaux2015; @ermakov2017b], we obtain the uncertainty $\Delta {\overline{I}}$ on ${\overline{I}}$ $$\Delta {\overline{I}}=\frac{2k}{3\sqrt{\left(4-k\right)\left(1+k\right)^3}}\frac{J_{2}^{nh}}{J_2}. \label{eq:DI}$$ The fluid Love number $k$ satisfies $k=3J_2/q$ [@ermakov2017b] with $q=\omega^2 R_{vol}^3/(\mathcal{G}m)$, $R_{vol}$ the volume-equivalent radius and $m$ the mass. For $R_{vol}=469.7\,{\mathrm{km}}$ [@ermakov2017b], Eq. (\[eq:DI\]) gives the uncertainty $\Delta {\overline{I}}=0.0047$. With Eq. (\[eq:CIJ\]), we have $\Delta {\overline{C}}=\Delta {\overline{I}}+2J^{nh}_2/3=0.0053$. We keep $\Delta {\overline{C}}=0.005$ as in [@ermakov2017a].
For the integration of the obliquity, we choose ${\overline{C}}=0.393$ and $0.005$ for its uncertainty. The interval of uncertainty on ${\overline{C}}$ is then $[0.388:0.398]$. [@ermakov2017a] obtained the value ${\overline{C}}=0.392$ for a radius of $R=469.7\,{\mathrm{km}}$, which corresponds to ${\overline{C}}\approx0.3915$ for $R=470\,{\mathrm{km}}$. The value ${\overline{C}}=0.393$ is then consistent with the one of [@ermakov2017a] given the uncertainties.
### Precession constant\[sec:preconstceres\]
We take a constant semi-major axis to compute the precession constant. We use the average of the semi-major axis of the solution La2011 over $\left[-25:5\right]{\,\mathrm{Myr}}$, which is about $a\approx2.767\,{\mathrm{AU}}$. On this interval, the semi-major axis can deviate by up to $\Delta a =0.005\,{\mathrm{AU}}$ from this value. From Eq. (\[eq:prec\_const\]) and the previous values and uncertainties, we deduce the precession constant $$\alpha=6.40\pm0.12{\arcsecond\per\mathrm{yr}}.$$
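As a numerical check of Eq. (\[eq:prec\_const\]) with the values quoted in this section (the solar gravitational parameter and the astronomical unit are standard constants not quoted above), one recovers $\alpha\approx6.4{\arcsecond\per\mathrm{yr}}$:

```python
# Check of the precession constant of Ceres from Eq. (eq:prec_const).
import numpy as np

GM_sun = 1.32712440018e20              # m^3 s^-2 (standard value)
AU = 1.495978707e11                    # m (standard value)
J2 = 2.6499e-2                         # Park et al. (2016)
Cbar = 0.393                           # normalized polar moment of inertia
omega = np.radians(952.1532) / 86400.0 # spin rate in rad/s
a = 2.767 * AU                         # mean semi-major axis

alpha = 1.5 * GM_sun * J2 / (Cbar * omega * a ** 3)     # rad/s
print(np.degrees(alpha) * 3600.0 * 365.25 * 86400.0)    # ~6.4 arcsec/yr
```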
### Early Ceres\[sec:earlyceresalpha\]
[@mao2018] estimated that Ceres should spin about $7\pm4\%$ faster to be in hydrostatic equilibrium with the observed present shape. They supposed that Ceres was in hydrostatic equilibrium in the past and has then slowed down due to some phenomena like significant asteroid impacts. They obtained for this higher spin rate an internal structure of normalized mean moment of inertia $\overline{I}=0.353\pm 0.009$ for a reference radius $R=470\,{\mathrm{km}}$. With Eq. (\[eq:CIJ\]), it corresponds to a normalized polar moment of inertia ${\overline{C}}=0.371\pm0.009$. The corresponding precession constant is $$\alpha=6.34\pm0.43{\arcsecond\per\mathrm{yr}}$$ with the value of the semi-major axis of the section \[sec:preconstceres\]. With the present spin rate and considering a normalized polar moment of inertia of ${\overline{C}}=0.371\pm0.009$, the precession constant of the present Ceres would be $$\alpha=6.78\pm0.20{\arcsecond\per\mathrm{yr}}.$$
Precession constant of Vesta
----------------------------
### Physical parameters
From the Dawn data, [@konopliv2014] gave the normalized value of $J_2$ $$\overline{J}_2=3.1779397\times10^{-2}\pm 1.9\times10^{-8}$$ for the reference radius $$R=265\,{\mathrm{km}}.$$ It corresponds to about $J_2=\sqrt{5}\times\overline{J}_2=7.1060892\times10^{-2}$. The rotation rate has been refined by [@konopliv2014] with $$\omega=1617.3331235\pm0.0000005\,{\mathrm{\degree/day}}.$$
### Polar moment of inertia\[sec:Cvesta\]
$\rho$ $({\mathrm{kg\,m^{-3}}})$ semi-principal axes $\left({\mathrm{km}}\right)$
----------- ---------------------------------- --------------------------------------------------
Crust (a) 2900 $a=b=280.9$ $c=226.2$
Mantle 3200 $a=b=253.3$ $c=198.8$
Core 7800 $a=b=114.1$ $c=102.3$
Crust (b) 2970 $a=284.50$ $b=277.25$ $c=226.43$
Mantle 3160 $a=b=257$ $c=207$
Core 7400 $a=b=117$ $c=105$
Crust (c) 2970 $a=284.50$ $b=277.25$ $c=226.43$
Mantle 3970 $a=b=213$ $c=192$
: \[tab:paramstrucvesta\]Densities and semi-principal axes for different models of internal structure of Vesta. (a) corresponds to the reference ellipsoids of table 3 of [@ermakov2014], (b) and (c) respectively to the three-layer and two-layer models of [@park2014]. For (b) and (c), the dimensions of the crust are given by the best-fit ellipsoid of [@konopliv2014].
The observation of the precession and nutation of the pole of Vesta by Dawn did not allow the polar moment of inertia $C$ to be obtained [@konopliv2014]. Following [@rambaux2013], we determine $C$ from an internal model composed of ellipsoidal layers of semi-axes $a_i$, $b_i$, $c_i$ and uniform densities $\rho_i$. $a_i$, $b_i$ and $c_i$ are respectively the major, intermediate and minor semi-axes. For a three-layer model consisting of a crust (1), a mantle (2) and a core (3), $C$ is $$\begin{aligned}
C &=& \frac{4\pi}{15}\left(a_1b_1c_1\left(a_1^2+b_1^2\right)\rho_1+ a_2b_2c_2\left(a_2^2+b_2^2\right)\left(\rho_2-\rho_1\right)\right. \nonumber \\
& & \left. + a_3b_3c_3\left(a_3^2+b_3^2\right)\left(\rho_3-\rho_2\right)\right). \label{eq:polarC}\end{aligned}$$
[@ermakov2014] and [@park2014] proposed internal models based on the gravity field and the shape model of [@gaskell2012]. [@ermakov2014] determined the interface between the crust and the mantle. The densities and the reference ellipsoids used by [@ermakov2014] to compare their model are given in table \[tab:paramstrucvesta\]. For these parameters, Eq. (\[eq:polarC\]) gives ${\overline{C}}=0.4061$. If, instead of the biaxial crust of table \[tab:paramstrucvesta\], we use the triaxial best-fit ellipsoid of [@ermakov2014] determined from the shape model of [@gaskell2012], with $a=284.895\,{\mathrm{km}}$, $b=277.431\,{\mathrm{km}}$, $c=226.838\,{\mathrm{km}}$, we obtain ${\overline{C}}=0.4086$.
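As an illustration, the short Python sketch below evaluates Eq. (\[eq:polarC\]) for the reference ellipsoids (a) of table \[tab:paramstrucvesta\]. The normalization ${\overline{C}}=C/(MR^2)$ with the observed mass of Vesta (taken here as $GM\approx17.29\,{\mathrm{km^3\,s^{-2}}}$, an assumed value) and the reference radius $R=265\,{\mathrm{km}}$ is our reading of the convention used in this section; with these assumptions the script returns a value close to the ${\overline{C}}=0.4061$ quoted above.

```python
import numpy as np

G = 6.674e-11                 # gravitational constant (m^3 kg^-1 s^-2)
GM_VESTA = 17.29e9            # assumed GM of Vesta (m^3 s^-2)
R_REF = 265e3                 # reference radius (m)

# Layers (a) of table [tab:paramstrucvesta]: (density in kg/m^3, a, b, c in m),
# ordered from the crust (outermost) to the core (innermost).
layers = [(2900.0, 280.9e3, 280.9e3, 226.2e3),
          (3200.0, 253.3e3, 253.3e3, 198.8e3),
          (7800.0, 114.1e3, 114.1e3, 102.3e3)]

def polar_moment(layers):
    """Polar moment of inertia of nested homogeneous ellipsoids, Eq. (polarC)."""
    C, rho_above = 0.0, 0.0
    for rho, a, b, c in layers:
        # each layer contributes with its density minus that of the layer above
        C += (4 * np.pi / 15) * a * b * c * (a**2 + b**2) * (rho - rho_above)
        rho_above = rho
    return C

C_bar = polar_moment(layers) / (GM_VESTA / G * R_REF**2)
print(f"normalized polar moment of inertia: {C_bar:.4f}")   # ~0.406
```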
[@park2014] proposed three-layer and two-layer models (table \[tab:paramstrucvesta\]). For the shape of the crust, we use the best-fit ellipsoid of [@konopliv2014] instead of the shape model of [@gaskell2012]. Eq. (\[eq:polarC\]) gives the approximate values ${\overline{C}}=0.4089$ for the three-layer model and ${\overline{C}}=0.4218$ for the two-layer model.
We keep ${\overline{C}}=0.409$, obtained for the three-layer model of [@park2014], with an uncertainty of $0.013$ deduced from the uncertainty interval $[0.406:0.422]$.
### Precession constant\[sec:preconstvesta\]
As for Ceres, we consider a mean value of the semi-major axis. With $a\approx2.361\,{\mathrm{AU}}$ and $\Delta a=0.002\,{\mathrm{AU}}$, Eq. (\[eq:prec\_const\]) gives $$\alpha=15.6\pm0.6{\arcsecond\per\mathrm{yr}}.$$
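For reference, the two precession constants can be checked with the following sketch, which assumes the expression $\alpha=3\mathcal{G}{M_{\odot}}J_2/(2a^3\omega{\overline{C}})$ (our reading of Eq. (\[eq:prec\_const\])) and the physical parameters of table \[tab:phycha\]; it reproduces the values $6.40{\arcsecond\per\mathrm{yr}}$ and $15.6{\arcsecond\per\mathrm{yr}}$ quoted above.

```python
import numpy as np

GM_SUN = 1.32712440018e20          # GM of the Sun (m^3 s^-2)
AU = 1.495978707e11                # astronomical unit (m)
YEAR = 365.25 * 86400.0            # Julian year (s)
RAD_TO_ARCSEC = 180 * 3600 / np.pi

def precession_constant(J2, C_bar, omega, a):
    """alpha = 3 G M_sun J2 / (2 a^3 omega C_bar), in arcsec/yr (assumed form)."""
    alpha = 3 * GM_SUN * J2 / (2 * a**3 * omega * C_bar)   # rad/s
    return alpha * YEAR * RAD_TO_ARCSEC

# Ceres: J2, rotation rate and C_bar of table [tab:phycha], a = 2.767 AU.
print(precession_constant(2.6499e-2, 0.393, 1.923403741e-4, 2.767 * AU))       # ~6.40
# Vesta: J2 = 7.1060892e-2, C_bar = 0.409, a = 2.361 AU.
print(precession_constant(7.1060892e-2, 0.409, 3.26710510494e-4, 2.361 * AU))  # ~15.6
```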
### Early Vesta\[sec:earlyvestaalpha\]
The southern hemisphere of Vesta has a large depression with two basins, Veneneia and Rheasilvia, created by two giant impacts [@marchi2012; @schenk2012]. [@fu2014] fitted the regions of the northern hemisphere not affected by the giant impacts with an ellipsoid of semi-principal axes $a=280.6\,{\mathrm{km}}$, $b=274.6\,{\mathrm{km}}$ and $c=236.8\,{\mathrm{km}}$. By extrapolating this shape to both hemispheres of an early Vesta assumed to be in hydrostatic equilibrium, [@fu2014] obtained a paleorotation period of $5.02\,{\mathrm{h}}$, and [@ermakov2014] obtained paleorotation periods between $4.83\,{\mathrm{h}}$ and $4.93\,{\mathrm{h}}$ for respectively the most and least differentiated internal structures.
By replacing the shape of the previous models with the supposed shape of the early Vesta determined by [@fu2014], Eq. (\[eq:polarC\]) gives ${\overline{C}}=0.4055$ for the three-layer model of [@ermakov2014], and ${\overline{C}}=0.4081$ and ${\overline{C}}=0.4210$ for respectively the three-layer and two-layer models of [@park2014]. We choose ${\overline{C}}=0.408$ with an uncertainty of $0.013$. The corresponding gravitational flattening is $J_2=0.0559\pm0.0003$, where the uncertainty is deduced from the gravitational flattenings of the three different models of internal structure. We adopt the paleorotation period of $5.02\,{\mathrm{h}}$ of [@fu2014]. With the value of the semi-major axis of section \[sec:preconstvesta\], Eq. (\[eq:prec\_const\]) gives the precession constant of the early Vesta $$\alpha=11.6\pm0.9{\arcsecond\per\mathrm{yr}}.$$
Initial conditions
------------------
The Dawn space mission refined the orientation of the rotation axes of Ceres and Vesta. We use the coordinates, given as right ascension and declination in the ICRF frame at the epoch J2000 by [@park2016] for Ceres and by [@konopliv2014] for Vesta, recalled in table \[tab:CI\]. From these coordinates and their uncertainties, we obtain the obliquities $\epsilon_C$ and $\epsilon_V$ of Ceres and Vesta, respectively, at the epoch J2000 $$\epsilon_C=3.997\pm0.003\degree,$$ $$\epsilon_V=27.46784\pm0.00003\degree.$$
Ceres Vesta
------------------ ------------------- -----------------------
R.A. $(\degree)$ $291.421\pm0.007$ $309.03300\pm0.00003$
D $(\degree)$ $66.758\pm0.002$ $42.22615\pm0.00002$
: \[tab:CI\]Right ascension (R.A.) and declination (D) of Ceres [@park2016] and Vesta [@konopliv2014] at the epoch J2000 in the ICRF frame.
Orbital and rotational solutions obtained with the symplectic integration \[SEC:orbobliquite\]
==============================================================================================
This section is dedicated to the long-term solutions [La2011]{} for the orbital motion and [Ceres2017]{} for the rotational motion, the latter being obtained with the symplectic integration of the angular momentum described in section \[sec:oblisymp\]. The time origin of the solutions is the epoch J2000.
We analyse the solutions with the method of frequency map analysis [@laskar1988; @laskar1990; @laskar1992; @laskar1993; @laskar2003], which decomposes a discrete temporal function into a quasi-periodic approximation. The precision of the obtained frequencies is estimated by performing a frequency analysis of the solution rebuilt from the frequency decomposition with a temporal offset [@laskar1990]. The differences between the frequencies of the two decompositions give an estimate of the accuracy of the frequency determination.
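To illustrate the principle (this is only a sketch, not the implementation of the references above), the following Python example recovers the dominant frequency of a synthetic quasi-periodic signal by maximizing the modulus of a windowed scalar product with $e^{i\nu t}$; a full decomposition would subtract the identified term and iterate. All amplitudes, phases, and frequencies used here are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar

ARCSEC = np.pi / (180 * 3600)                 # arcsec -> rad
t = np.arange(0.0, 20e6, 200.0)               # 20 Myr sampled every 200 yr
# Synthetic quasi-periodic signal (arbitrary amplitudes, phases and frequencies).
z = (0.10 * np.exp(1j * (-59.25 * ARCSEC * t + 0.3))
     + 0.02 * np.exp(1j * (-26.35 * ARCSEC * t + 1.1)))

def amplitude(nu, t, z):
    """Modulus of the scalar product of z with exp(i nu t), with a Hanning window."""
    return abs(np.mean(z * np.hanning(len(t)) * np.exp(-1j * nu * t)))

def dominant_frequency(t, z):
    """Frequency maximizing the windowed amplitude (one step of a frequency analysis)."""
    nu = 2 * np.pi * np.fft.fftfreq(len(t), d=t[1] - t[0])
    spectrum = np.abs(np.fft.fft(z * np.hanning(len(t))))
    nu0 = nu[np.argmax(spectrum)]              # coarse estimate on the FFT grid
    dnu = 2 * np.pi / (t[-1] - t[0])
    res = minimize_scalar(lambda x: -amplitude(x, t, z),
                          bounds=(nu0 - dnu, nu0 + dnu), method="bounded")
    return res.x                               # refined frequency (rad/yr)

print(f"{dominant_frequency(t, z) / ARCSEC:.3f} arcsec/yr")   # close to -59.25
```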
$k_2$ $Q$ $\omega$ $({\mathrm{\degree/day}})$ $R$ $({\mathrm{km}})$ ${\overline{C}}$ $r$ $({\mathrm{AU}})$ $\Gamma/(C\omega)$ $({\mathrm{yr}}^{-1})$
------- ----------- ------- ------------------------------------- ----------------------- ------------------ ----------------------- -------------------------------------------
Mars $0.149$ $92$ $350.89198521$ $3396$ $0.3654$ $1.5237$ $\sim3\times10^{-13}$
Ceres $10^{-3}$ $10$ $952.1532$ $470$ $ 0.393$ $2.7665$ $\sim4\times10^{-16}$
Vesta   $10^{-3}$   $100$   $1617.3331235$   $265$   $ 0.409$   $2.3615$   $\sim3\times10^{-17}$
  : \[tab:tides\]Parameters used to estimate the ratio $\Gamma/(C\omega)$ of the solar tidal torque to the rotational angular momentum for Mars, Ceres, and Vesta.
Perturbations on the rotation axis
----------------------------------
We investigate and estimate some effects which can affect the long-term rotation in addition to the torques exerted by the Sun and the planets.
### Tidal dissipation
The torque exerted on a celestial body by the solar tides is [@mignard1979] $$\mathbf{\Gamma} = 3\frac{k_{2}\mathcal{G}{M_{\odot}}^2R^5}{Cr^8}\Delta t\left[
\left(\mathbf{r}.\mathbf{G}\right)\mathbf{r}-r^2\mathbf{G}+C\mathbf{r}\times\mathbf{v}\right]$$ with $\Delta t$ the time delay between the stress exerted by the Sun and the response of the body, $k_2$ the Love number, $R$ the radius of the body, $\mathbf{r}$ and $\mathbf{v}$ the heliocentric position and velocity of the body, and $r$ the norm of $\mathbf{r}$. For a circular and equatorial orbit, [@mignard1979] obtains $$\Gamma = 3\frac{k_{2}\mathcal{G}{M_{\odot}}^2R^5}{2r^6}|\sin\left(2\delta\right)|$$ with $\delta=\left(\omega-n\right)\Delta t$ the phase lag and $n$ the mean motion. The phase lag $\delta$ is related to the effective specific tidal dissipation function $Q$ by $1/Q=\tan(2\delta)$ [@macdonald1964].
Because of the dependence on $r^{-6}$, the torque decreases strongly with the distance to the Sun. [@laskar2004a] concluded that the tidal dissipation in the long-term rotation of Mars has an effect on the obliquity smaller than $0.002\degree$ in $10{\,\mathrm{Myr}}$. We estimate this torque for Ceres and Vesta and compare it with the one for Mars in table \[tab:tides\]. The values of $k_2$ and $Q$ used for the estimation of the torque are those used by [@rambaux2011] for Ceres and by [@bills2011] for Vesta. The ratio of the torque to the rotational angular momentum is respectively about $1000$ and $10000$ times weaker for Ceres and Vesta than for Mars, for which the effect can already be considered weak [@laskar2004a]. Therefore, solar tidal dissipation is not considered for Ceres and Vesta.
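As an order-of-magnitude check, the sketch below evaluates the circular-orbit torque with $|\sin(2\delta)|\approx1/Q$ and divides it by $C\omega={\overline{C}}MR^2\omega$; the masses are approximate values taken here as assumptions. The results are close to the ratios given in table \[tab:tides\].

```python
import numpy as np

G = 6.674e-11
M_SUN = 1.989e30                    # approximate masses (kg), assumed values
AU = 1.495978707e11
YEAR = 365.25 * 86400.0

def tidal_ratio(k2, Q, R, C_bar, M, r, omega):
    """Ratio of the circular-orbit tidal torque (|sin 2 delta| ~ 1/Q) to C omega, in 1/yr."""
    torque = 3 * k2 * G * M_SUN**2 * R**5 / (2 * Q * r**6)
    return torque / (C_bar * M * R**2 * omega) * YEAR

deg_day = np.pi / 180 / 86400.0     # deg/day -> rad/s
# body: (k2, Q, R in m, C_bar, approximate mass in kg, r in AU, omega in rad/s)
bodies = {"Mars":  (0.149, 92,  3396e3, 0.3654, 6.417e23, 1.5237, 350.89198521 * deg_day),
          "Ceres": (1e-3,  10,  470e3,  0.393,  9.38e20,  2.7665, 952.1532 * deg_day),
          "Vesta": (1e-3,  100, 265e3,  0.409,  2.59e20,  2.3615, 1617.3331235 * deg_day)}
for name, (k2, Q, R, C_bar, M, r, omega) in bodies.items():
    print(name, f"{tidal_ratio(k2, Q, R, C_bar, M, r * AU, omega):.1e}")  # ~3e-13, 4e-16, 3e-17
```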
### Close encounters\[sec:closeencounters\]
perturbed body perturbing body R (${\mathrm{km}}$) $N_c$ ($10^{-3}\times{\mathrm{Gyr}}^{-1}$) $A$ ($10^8\times{\mathrm{AU}}^{-2}.{\mathrm{Gyr}}^{-1}$) $B$ ($10^{-10}\times{\mathrm{AU}}^{3/2}$) $V$ (${\mathrm{Gyr}}^{-1}$)
-------------------------- ----------------- --------------------- -------------------------------------------- ---------------------------------------------------------- ------------------------------------------- -----------------------------
(1) (4) $256$ $2.0$ $1.7$ $6.3$ $1.3\times10^{-5}$
$R_1=476\,{\mathrm{km}}$ (2) $252$ $0.9$ $0.76$ $5.3$ $4.4\times10^{-6}$
(7) $112$ $1.3$ $1.7$ $0.41$ $7.2\times10^{-8}$
(324) $102$ $1.0$ $1.3$ $0.25$ $2.1\times10^{-8}$
(4) (1) $476$ $2.0$ $1.7$ $34$ $3.9\times10^{-4}$
$R_4=256\,{\mathrm{km}}$ (2) $252$ $1.0$ $1.7$ $13$ $8.0\times10^{-5}$
(7) $112$ $1.4$ $4.6$ $1.2$ $2.5\times10^{-6}$
(324) $102$ $0.5$ $1.7$ $0.71$ $3.6\times10^{-7}$
  : \[tab:closeencounters\]Radii of the perturbing bodies, collision probabilities $N_c$ extracted from table 3 of [@laskar2011b], and deduced coefficients $A$ and $B$ and variances $V$ for the close encounters of (1) Ceres and (4) Vesta with the other bodies considered in [@laskar2011b].
In the long-term solution La2010 [@laskar2011a], the five bodies of the asteroid belt (1) Ceres, (2) Pallas, (4) Vesta, (7) Iris and (324) Bamberga are considered as planets and there are mutual gravitational interactions between them. [@laskar2011a] considered these bodies because Ceres, Vesta and Pallas are the three main bodies of the main asteroid belt and because Iris and Bamberga significantly influence the orbital motion of Mars. [@laskar2011b] studied the close encounters between these bodies and showed that these close encounters are responsible for their chaotic behaviour. If a body comes close to Ceres or Vesta, it can exert a significant torque during the encounter. The effects of close encounters on the rotation axes of the giant planets have been studied by [@lee2007]. For a body of mass $m$ with no satellites, the maximal difference $\|\Delta \mathbf{w}\|$ between the angular momentum before an encounter with a perturbing body of mass $m_{pert}$ and the one after is [@lee2007] $$\|\Delta \mathbf{w}\|=\frac{\pi}{2}\alpha\frac{m_{pert}}{{M_{\odot}}}\frac{ a^3}{r_p^2v_p}$$ with $\alpha$ the precession constant (Eq. (\[eq:prec\_const\])), $a$ the semi-major axis, and $r_p$ and $v_p=\sqrt{2\mathcal{G}(m+m_{pert})/r_p}$ respectively the distance and the relative speed between the two bodies at the closest approach. We can write this formula as $$\|\Delta \mathbf{w}\|=B r_p^{-3/2}$$ with $$B=\frac{3\pi}{4}m_{pert}\sqrt{\frac{\mathcal{G}}{2\left(m+m_{pert}\right)}}\frac{J_2}{{\overline{C}}\omega}.$$ The values of the coefficient $B$ have been computed in table \[tab:closeencounters\] for the close encounters considered in [@laskar2011b]. Therefore, a close encounter changes the orientation of the rotation axis by at most the angle $$\theta=\arccos \left(1-\frac{B^2}{2r_p^3}\right).$$
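The coefficient $B$ and the maximal deflection $\theta$ can be estimated with the following sketch for Ceres perturbed by Vesta; the masses are approximate assumed values, and the result is within a few per cent of the value given in table \[tab:closeencounters\], the small difference presumably coming from the adopted masses.

```python
import numpy as np

G = 6.674e-11
AU = 1.495978707e11
M_CERES, M_VESTA = 9.38e20, 2.59e20            # approximate masses (kg), assumed values

def kick_coefficient(m, m_pert, J2, C_bar, omega):
    """Coefficient B (in AU^(3/2)) of the maximal kick ||Delta w|| = B r_p^(-3/2)."""
    B = (3 * np.pi / 4) * m_pert * np.sqrt(G / (2 * (m + m_pert))) * J2 / (C_bar * omega)
    return B / AU**1.5

def max_deflection(B, r_p):
    """Maximal change of orientation (deg) of the spin axis for a closest approach r_p (AU)."""
    return np.degrees(np.arccos(1 - B**2 / (2 * r_p**3)))

# Ceres perturbed by Vesta: J2 and omega of table [tab:phycha], C_bar = 0.393.
B = kick_coefficient(M_CERES, M_VESTA, 2.6499e-2, 0.393, 1.923403741e-4)
print(f"B = {B:.2e} AU^3/2")                         # ~6e-10, close to the tabulated 6.3e-10
print(f"theta = {max_deflection(B, 1e-3):.2e} deg")  # kick for r_p = 1e-3 AU
```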
[@laskar2011b] studied the probability of close encounters between the five bodies of the asteroid belt considered in the solution La2010 [@laskar2011a] and determined that the probability density $\rho(r_p)$ per unit of time of an encounter with a distance $r_p$ at the closest approach can be fitted by a linear function of $r_p$ for $r_p\leq1\times10^{-3}\,{\mathrm{AU}}$ $$\rho\left(r_p\right)=Ar_p,$$ with $$A=\frac{2N_c}{\left(R_1+R_2\right)^{2}}.$$ $N_c$ is the collision probability per unit of time between two bodies of radii $R_1$ and $R_2$. Table \[tab:closeencounters\] gives, for the five bodies considered in [@laskar2011b], the radii, the collision probability $N_c$ extracted from table 3 of [@laskar2011b], and the deduced coefficient $A$ for each considered pair over $1\,{\mathrm{Gyr}}$.
We suppose that each close encounter moves the angular momentum in a random direction. The motion of the rotation axis is then described by a random walk on a sphere with the distribution [@perrin1928; @roberts1960] $$\rho_S\left(\theta\right)=\sum_{k=0}^{\infty}\frac{2k+1}{4\pi}e^{-\frac{k\left(k+1\right)}{4}V}P_k\left(\cos\theta\right)$$ with $V$ the variance, $\int_0^{2\pi}\int_0^\pi\rho_S\left(\theta\right)\sin\theta d\theta d\phi=1$ and $P_k$ the Legendre polynomial of order $k$. For a random walk of $N$ steps with a large value of $N$, where each step causes a small change $\beta$ of the orientation, the variance $V$ is [@roberts1960] $$V=\sum_{k=1}^N\int_0^\pi\beta^2dp_k\left(\beta\right)$$ with $dp_k\left(\beta\right)$ the probability of a change of angle $\beta$ at step $k$. For Ceres, we consider the close encounters with the bodies of [@laskar2011b] for which the probability of close encounters is available, and we can write $$V_1=\sum_{k\in \{2,4,7,324\}}N_k\int_{\beta_{kmin}}^{\beta_{kmax}}\beta^2dp_k\left(\beta\right)$$ with $k$ the number of the body whose close encounters with Ceres are considered, $\beta_{kmin}$ the minimal change of orientation at the distance $1\times10^{-3}\,{\mathrm{AU}}$, $\beta_{kmax}$ the maximal change of orientation for a grazing encounter at the distance $R_1+R_k$, $N_k$ the number of close encounters and $dp_k\left(\beta\right)$ the probability distribution of the change $\beta$ for a close encounter with the body $k$. As $|dp_k\left(\beta\right)|=|A_{1k}r_pdr_p|/N_k$, the variance $V_1$ satisfies $$V_1=\sum_{k\in \{2,4,7,324\}}A_{1k}\int_{R_1+R_k}^{10^{-3}\,{\mathrm{AU}}}\arccos^2 \left(1-\frac{B_{1k}^2}{2r_p^3}\right)r_pdr_p.$$ We compute the standard deviation of the distribution of the rotation axis of Ceres over $1\,{\mathrm{Gyr}}$ under the effects of the close encounters with the bodies (2) Pallas, (4) Vesta, (7) Iris and (324) Bamberga with the formula $$\theta_{1sd}=\sqrt{\sum_{k=0}^{\infty}\frac{2k+1}{4\pi}e^{-\frac{k\left(k+1\right)}{4}V_1}\int_0^{2\pi}\int_0^\pi \theta^2P_k\left(\cos\theta\right)\sin\theta d\theta d\phi}.$$ With the intermediate quantities of table \[tab:closeencounters\], we obtain about $$\theta_{1sd}=0.24\degree.$$ A similar computation for Vesta (table \[tab:closeencounters\]) gives $$\theta_{4sd}=1.3\degree.$$ The time interval $\left[-100:0\right]{\,\mathrm{Myr}}$ is shorter than $1\,{\mathrm{Gyr}}$, and the effects of close encounters are thus weaker on this interval. Moreover, these standard deviations are computed for close encounters, which cause a maximal effect on the rotation axis.
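The variance $V_1$ can be evaluated numerically from the coefficients $A$ and $B$ of table \[tab:closeencounters\], as in the sketch below. For $V\ll1$ the random walk on the sphere is close to a plane random walk, so the standard deviation is approximately $\sqrt{V}$; this shortcut (the full Legendre series is not evaluated here) gives a value consistent with the $0.24\degree$ obtained above.

```python
import numpy as np
from scipy.integrate import quad

AU_KM = 1.495978707e8          # km per AU

def pair_variance(A, B, R1_km, R2_km, r_max=1e-3):
    """Contribution to the variance V of one perturbing body (A in AU^-2 Gyr^-1, B in AU^3/2)."""
    r_min = (R1_km + R2_km) / AU_KM
    integrand = lambda r: np.arccos(1 - B**2 / (2 * r**3))**2 * r
    return A * quad(integrand, r_min, r_max)[0]

# Close encounters of (1) Ceres (R = 476 km) with (4), (2), (7), (324):
# coefficients A, B and radii (km) of table [tab:closeencounters].
pairs = [(1.7e8, 6.3e-10, 256), (0.76e8, 5.3e-10, 252),
         (1.7e8, 0.41e-10, 112), (1.3e8, 0.25e-10, 102)]
V = sum(pair_variance(A, B, 476, R2) for A, B, R2 in pairs)
print(f"V = {V:.2e} over 1 Gyr, theta_sd ~ {np.degrees(np.sqrt(V)):.2f} deg")   # ~0.25 deg
```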
Although their effects are weak, we consider for the long-term integration of the rotation the torques exerted on the angular momenta of Ceres and Vesta by the five bodies of the main asteroid belt considered in [@laskar2011b].
$g_i$ (${\arcsecond\per\mathrm{yr}}$) $s_i$ (${\arcsecond\per\mathrm{yr}}$)
------- --------------------------------- ------- ---------------------------------
$g_1$ $5.59$ $s_1$ $-5.61$
$g_2$ $7.453$ $s_2$ $-7.06$
$g_3$ $17.368$ $s_3$ $-18.848$
$g_4$ $17.916$ $s_4$ $-17.751$
$g_5$ $4.257492$
$g_6$ $28.2452$ $s_6$ $-26.347856$
$g_7$ $3.087927$ $s_7$ $-2.9925254$
$g_8$ $0.673022$ $s_8$ $-0.691742$
$g_9$ $-0.35019$ $s_9$ $-0.35012$
: \[tab:freqref\] Principal secular frequencies of the solution La2011 $g_i$, $s_i$ determined on $\left[-20:0\right]{\,\mathrm{Myr}}$ for the four inner planets and on $\left[-50:0\right]{\,\mathrm{Myr}}$ for the four giant planets and Pluto.
Orbital motion La2011 \[sec:resultorb\]
---------------------------------------
The orbital solution La2011 is computed on $\left[-250:250\right]{\,\mathrm{Myr}}$ in a frame associated with the invariable plane [@laskar2011a]. Two successive rotations allow this frame to be transformed to the ICRF, as explained in appendix \[sec:planinv\]. The variables $z=e \exp(i\varpi)$ and $\zeta=\sin\left(i/2\right)\exp(i\Omega)$ are computed from the non-canonical elliptical elements $(a,\lambda,e,\varpi,i,\Omega)$, where $a$ is the semi-major axis, $\lambda$ the mean longitude, $e$ the eccentricity, $\varpi$ the longitude of the perihelion, $i$ the inclination with respect to the invariable plane and $\Omega$ the longitude of the ascending node. These elements are computed from the heliocentric positions and velocities.
As was done for the solution La2004 of [@laskar2004b], we perform a frequency analysis of the quantities $z_i$ and $\zeta_i$ on $\left[-20:0\right]{\,\mathrm{Myr}}$ for the four inner planets and on $\left[-50:0\right]{\,\mathrm{Myr}}$ for the four giant planets and Pluto to obtain the proper perihelion precession frequencies $g_i$ and ascending node precession frequencies $s_i$ given in table \[tab:freqref\].
The evolutions of the eccentricity and the inclination of Ceres and Vesta are represented on $\left[-1:0\right]{\,\mathrm{Myr}}$ in figures \[fig:paleo11ceres\] and \[fig:paleo11vesta\], respectively. For Ceres, the eccentricity oscillates between $0.0629$ and $0.169$ and the inclination between $8.77$ and $10.6\degree$ on $\left[-20:0\right]{\,\mathrm{Myr}}$. For Vesta, the eccentricity varies between $0.0392$ and $0.160$ and the inclination between $5.21$ and $7.56\degree$ on $\left[-20:0\right]{\,\mathrm{Myr}}$. The amplitudes of the variations have the same order of magnitude for Ceres and Vesta on $\left[-250:250\right]{\,\mathrm{Myr}}$.
For Ceres and Vesta, we perform a frequency analysis of $z$ and $\zeta$ on the time interval $\left[-25:5\right]{\,\mathrm{Myr}}$. We consider the fifty secular terms with the highest amplitudes that have a frequency in the interval $\left[-300:300\right]{\arcsecond\per\mathrm{yr}}$, listed in tables \[tab:freqorbiceres\] and \[tab:freqorbivesta\]. The frequency decompositions of tables \[tab:freqorbiceres\] and \[tab:freqorbivesta\] allow a secular solution to be obtained that reproduces the secular evolution of the solution [La2011]{} on $\left[-20:0\right]{\,\mathrm{Myr}}$ in figures \[fig:compmodsecceres\] and \[fig:compmodsecvesta\], where the differences with the solution [La2011]{} correspond to the short-period terms excluded from the secular solution. This is not the case for decompositions obtained with a frequency analysis on the time interval $\left[-20:0\right]{\,\mathrm{Myr}}$.
These frequency decompositions are used in section \[SEC:stab\] to compute the orbital quantities in Eq. (\[eq:integsec\]) needed for the secular integration of the rotation. The terms of weak amplitude can play a role in the long-term rotation in the case of secular resonances. For instance, the passage through the resonance with the frequency $s_6+g_5-g_6$ is responsible for a decrease of the obliquity of about $0.4\degree$ for the Earth [@laskarjoutelboudin1993; @laskar2004b]. Therefore, we add to the frequency decomposition of the variable $\zeta$ the next 100 terms in the interval $\left[-45:60\right]{\arcsecond\per\mathrm{yr}}$ for Ceres and the next 100 terms in the interval $\left[-34:60\right]{\arcsecond\per\mathrm{yr}}$ for Vesta. These boundaries have been chosen so as to select the frequencies that can play a role in the long-term rotation without including all the terms close to the principal frequencies $s$.
For Ceres, the proper secular frequencies are $g_C=54.2525\pm0.0006{\arcsecond\per\mathrm{yr}}$ and $s_C=-59.254\pm0.002{\arcsecond\per\mathrm{yr}}$, with associated periods of $23.888{\,\mathrm{kyr}}$ and $21.872{\,\mathrm{kyr}}$, respectively. The first fifty secular terms of the frequency decompositions do not include proper frequencies of the inner planets; their perturbations on the orbital motion are thus much weaker than those of the giant planets. We note the proximity of the frequencies $2g_6-g_5\approx52.23{\arcsecond\per\mathrm{yr}}$ and $2g_6-g_7\approx53.40{\arcsecond\per\mathrm{yr}}$ to $g_C$. Resonances with these two frequencies could affect the orbital motion of Ceres.
For Vesta, the proper secular frequencies are $g_V=36.895\pm0.003{\arcsecond\per\mathrm{yr}}$ and $s_V=-39.609\pm0.003{\arcsecond\per\mathrm{yr}}$, with associated periods of $35.13{\,\mathrm{kyr}}$ and $32.72{\,\mathrm{kyr}}$, respectively. The proper frequencies of the inner planets are not present, except perhaps the frequency $-17.74{\arcsecond\per\mathrm{yr}}$, which could correspond to the node frequency of Mars $s_4$. Vesta has a smaller semi-major axis, and the planetary perturbations of Mars are therefore stronger than for Ceres, which could explain the presence of this frequency with a higher amplitude.
As in [@laskar1990], we estimate the size of the chaotic zones by performing a frequency analysis of the solution La2011 on sliding intervals of $30{\,\mathrm{Myr}}$ over $\left[-250:250\right]{\,\mathrm{Myr}}$ with a $5{\,\mathrm{Myr}}$ step size. The evolutions of the proper frequencies of Ceres and Vesta are shown in figures \[fig:gserrceres\] and \[fig:gserrvesta\], respectively. $g_C$ and $s_C$ vary within about $\left[54.225:54.261\right]{\arcsecond\per\mathrm{yr}}$ and $\left[-59.263:-59.209\right]{\arcsecond\per\mathrm{yr}}$, respectively, and $g_V$ and $s_V$ within about $\left[36.809:36.939\right]{\arcsecond\per\mathrm{yr}}$ and $\left[-40.011:-39.514\right]{\arcsecond\per\mathrm{yr}}$. The secular frequencies vary because of chaotic diffusion, which is thus stronger for Vesta than for Ceres. The frequency $s_V$ shows the largest diffusion, with a decrease of about $0.50{\arcsecond\per\mathrm{yr}}$ on $\left[115:220\right]{\,\mathrm{Myr}}$.
Rotational motion Ceres2017\[sec:resultoblisymp\]
-------------------------------------------------
Ceres Vesta
---------------------------------- ---------------------------- ------------------------------
$J_2$ $2.6499\times10^{-2}$ $7.1060892\times10^{-2}$
$R$ (${\mathrm{km}}$) $470$ $265$
$\omega$ ($\mathrm{rad}.s^{-1}$) $1.923403741\times10^{-4}$ $3.26710510494\times10^{-4}$
${\overline{C}}$ $0.393$ $0.409$
: \[tab:phycha\]Physical characteristics of Ceres and Vesta used for the computation of the long-term rotation.
The solution La2011 does not include the integration of the rotation axes of Ceres and Vesta. We therefore compute the solution Ceres2017, in which the spin axes of Ceres and Vesta are integrated with the symplectic method of section \[sec:oblisymp\]. We consider the interactions between the orbital and rotational motions and the torques exerted by the Sun and the planets on Ceres and Vesta. As in La2011, Ceres, Vesta, Pallas, Iris and Bamberga are considered as planets and exert a torque on Ceres and Vesta. We use the same initial conditions for the orbital motion as La2011. To integrate the long-term rotation, we use the parameters of table \[tab:phycha\] and the initial conditions for the rotation axis of table \[tab:CI\]. The integration is performed on $\left[-100:100\right]{\,\mathrm{Myr}}$ in extended precision with a time step of $0.005\,{\mathrm{yr}}$. We use the integrator $\mathcal{SABA}_{C3}$ developed for perturbed Hamiltonians by [@laskar2001]. A symmetric composition of this integrator with the method of [@suzuki1990] allows a higher-order integrator to be obtained, as indicated in [@laskar2001].
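The composition of [@suzuki1990] can be sketched as follows: from any symmetric second-order map $S_2(\tau)$, a fourth-order map is obtained as $S_2(s\tau)S_2(s\tau)S_2((1-4s)\tau)S_2(s\tau)S_2(s\tau)$ with $s=1/(4-4^{1/3})$. The toy example below applies this composition to a leapfrog step of a harmonic oscillator; it only illustrates the composition and is not the $\mathcal{SABA}_{C3}$ integrator itself.

```python
import numpy as np

def leapfrog(state, tau):
    """Symmetric second-order step for a harmonic oscillator (H = p^2/2 + q^2/2)."""
    q, p = state
    p -= 0.5 * tau * q          # half kick
    q += tau * p                # drift
    p -= 0.5 * tau * q          # half kick
    return q, p

def suzuki4(state, tau, base=leapfrog):
    """Fourth-order symmetric composition of Suzuki (1990) applied to a second-order map."""
    s = 1.0 / (4.0 - 4.0**(1.0 / 3.0))
    for w in (s, s, 1 - 4 * s, s, s):
        state = base(state, w * tau)
    return state

# Energy error after ~one period: the composed scheme is far more accurate.
tau, n = 0.1, 63
for scheme in (leapfrog, suzuki4):
    q, p = 1.0, 0.0
    for _ in range(n):
        q, p = scheme((q, p), tau)
    print(scheme.__name__, abs(0.5 * (q**2 + p**2) - 0.5))
```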
[@C[3.65cm]{}R[1.65cm]{}R[1.1cm]{}r@]{} & $\nu_k$ (${\arcsecond\per\mathrm{yr}}$) & $10^6\times A_k$ & $\phi_k$ ($\degree$)\
$f_C$ & -6.15875 & 132796 & 4.534\
$s_C$ & -59.25393 & 19264 & 162.921\
$s_6$ & -26.34785 & 3150 & 33.785\
$s_C+(g_C-g_5)$ & -9.25982 & 3022 & -68.345\
$s_C+\left(g_C-\left(2g_6-g_5\right)\right)$ & -57.23494 & 2952 & 97.518\
$s_C-\left(g_C-\left(2g_6-g_5\right)\right)$ & -61.27289 & 2832 & 48.975\
$s_7$ & -2.99104 & 1915 & 50.007\
$s_C+\left(g_C-\left(2g_6-g_7\right)\right)$ & -58.40439 & 1392 & -172.293\
$s_C-\left(g_C-\left(2g_6-g_7\right)\right)$ & -60.10345 & 1312 & -41.211\
$s_8$ & -0.69160 & 1303 & -69.308\
$s_C+2(g_C-g_6)$ & -7.23883 & 1170 & -127.942\
$f_C-\left(g_C-\left(2g_6-g_5\right)\right)$ & -8.17557 & 668 & -100.922\
$f_C+\left(g_C-\left(2g_6-g_5\right)\right)$ & -4.14024 & 658 & -57.248\
$s_C+2(g_C-g_6)+(g_5-g_7)$ & -6.06275 & 573 & 2.238\
$s_C-2(g_5-g_6)$ & -11.27863 & 420 & 177.727\
$s_C+(g_5-g_7)$ & -58.08387 & 391 & -105.808\
$f_C+\left(g_C-\left(2g_6-g_7\right)\right)$ & -5.30451 & 313 & 48.054\
& -59.15554 & 305 & -28.298\
$f_C-\left(g_C-\left(2g_6-g_7\right)\right)$ & -6.99815 & 290 & -168.326\
& -59.34498 & 280 & -166.997\
$s_C+\left(3g_C-4g_6+g_7\right)$ & -6.39098 & 276 & -113.654\
& -59.45168 & 267 & 113.695\
$s_C+2(g_C-g_6)-(g_5-g_7)$ & -8.41092 & 266 & -45.638\
& -6.27076 & 251 & 121.424\
& -59.04637 & 229 & 60.256\
$s_C+(g_C-g_6)$ & -33.24727 & 227 & -165.136\
& -6.15880 & 225 & -88.261\
$s_C-(g_5-2g_6+g_7)$ & -10.10925 & 219 & 86.998\
$s_C+2\left(g_C-\left(2g_6-g_5\right)\right)$ & -55.21618 & 216 & 30.997\
$s_C-2\left(g_C-\left(2g_6-g_5\right)\right)$ & -63.29223 & 215 & -66.897\
$s_C-(2g_C+g_5-4g_6+g_7)$ & -62.12250 & 204 & -155.831\
$s_C+(2g_C+g_5-4g_6+g_7)$ & -56.38576 & 203 & 121.140\
& -59.25884 & 202 & -106.222\
$2f_C-s_C$ & 46.93652 & 195 & -153.986\
$s_C+(g_C-g_5-2g_6+2g_7)$ & -59.57497 & 183 & -77.254\
$s_C-(g_C-g_6)$ & -85.26054 & 183 & -48.320\
$s_C-(g_C-g_5-2g_6+2g_7)$ & -58.93920 & 173 & -154.076\
$s_1$ & -5.61671 & 159 & -118.819\
$2s_C-f_C$ & -112.34910 & 140 & 141.919\
$s_C+(g_C-g_7)$ & -8.08029 & 127 & -125.830\
$f_C+(g_C-g_6)$ & 19.84789 & 105 & 36.075\
$f_C-(g_C-g_6)$ & -32.16614 & 104 & 150.147\
$s_C-(s_6-s_7-g_C-2g_5+3g_6)$ & -57.86704 & 96 & 15.532\
$s_C+(s_6-s_7-g_C-2g_5+3g_6)$ & -60.64206 & 94 & 127.961\
$2g_C-s_C$ & 167.75745 & 90 & 148.483\
$f_C+(s_C-s_6)$ & -39.06485 & 90 & -46.170\
& -56.11382 & 85 & 12.387\
& -5.91789 & 82 & 112.898\
$s_C+\left(3g_C+g_5-4g_6\right)$ & -5.21225 & 78 & 10.716\
& -59.37412 & 68 & 159.704\
The differences between La2011 and Ceres2017 for the eccentricity and inclination of Ceres and Vesta oscillate around zero. The amplitudes on $\left[-20:0\right]{\,\mathrm{Myr}}$ are about $0.008$ and $0.1\degree$ for the eccentricity and the inclination of Ceres and about $0.02$ and $0.2\degree$ for Vesta. These differences have amplitudes similar to those observed for a small change ($1\times10^{-10}\,\rad$) of the initial mean longitude $\lambda$ of Ceres and Vesta. Therefore, they come from the chaotic behaviour of the orbital motions of Ceres and Vesta [@laskar2011b] and are thus not significant.
The evolution of the obliquity is represented on the time intervals $\left[-100:0\right]{\,\mathrm{kyr}}$, $\left[-1:0\right]{\,\mathrm{Myr}}$ and $\left[-20:0\right]{\,\mathrm{Myr}}$ in figures \[fig:obliceres\] and \[fig:oblivesta\] for respectively Ceres and Vesta. For Ceres, we obtain similar results to [@bills2017] and [@ermakov2017a] with oscillations between about $2.06$ and $19.6\degree$ on $\left[-20:0\right]{\,\mathrm{Myr}}$. For Vesta, we observe oscillations between $21.4$ and $44.1\degree$. The amplitudes of the oscillations of the obliquities of Ceres and Vesta are similar on $\left[-100:100\right]{\,\mathrm{Myr}}$.
We perform the frequency analysis of the solution Ceres2017 on the time interval $\left[-20:0\right]{\,\mathrm{Myr}}$. The frequency decompositions of the quantity $w_x+iw_y$, where $w_x$ and $w_y$ are the coordinates in the invariant frame of the component parallel to the invariable plane of the normalized angular momentum, are given in tables \[tab:freqobliceres\] and \[tab:freqoblivesta\] for Ceres and Vesta, respectively. For Ceres, the precession frequency of the rotation axis is $f_C=-6.1588\pm0.0002{\arcsecond\per\mathrm{yr}}$, which corresponds to a precession period of about $210.43{\,\mathrm{kyr}}$ and is consistent with the precession period of $210{\,\mathrm{kyr}}$ determined by [@ermakov2017a]. For Vesta, the precession frequency of the rotation axis is $f_V=-12.882\pm0.002{\arcsecond\per\mathrm{yr}}$, which corresponds to a precession period of about $100.61{\,\mathrm{kyr}}$.
[@C[3.65cm]{}R[1.65cm]{}R[1.1cm]{}r@]{} & $\nu_k$ (${\arcsecond\per\mathrm{yr}}$) & $10^6\times A_k$ & $\phi_k$ ($\degree$)\
$f_V$ & -12.88235 & 536537 & -32.774\
$2f_V-\left(2s_6-s_V\right)$ & -12.68720 & 53372 & -129.004\
$2s_6-s_V$ & -13.07751 & 49031 & -114.858\
$s_V$ & -39.61376 & 31572 & -172.011\
& -12.77160 & 17140 & 52.745\
& -12.99626 & 14654 & 48.485\
$2f_V-s_V$ & 13.84895 & 13649 & 106.512\
& -12.67225 & 10137 & -159.973\
$s_6$ & -26.34823 & 7832 & 32.968\
& -13.10016 & 7321 & -115.847\
& -12.55303 & 6177 & -19.085\
& -12.92955 & 5260 & 14.327\
$2f_V-s_6$ & 0.58288 & 4646 & -99.913\
& -12.73796 & 4601 & -121.794\
$f_V-\left(s_V-s_6\right)$ & 0.38433 & 4002 & 174.403\
$f_V-\left(g_V-g_6\right)$ & -21.53224 & 3891 & 27.961\
& -12.82979 & 3878 & 130.392\
$f_V+\left(g_V-g_6\right)$ & -4.23214 & 3863 & 87.770\
$f_V+\left(s_V-s_6\right)$ & -26.14817 & 3364 & -58.177\
& -12.83327 & 3345 & 116.714\
& -13.06876 & 3064 & 61.903\
$3f_V-2s_6$ & 14.04715 & 2933 & 13.044\
$f_V-2(s_V-s_6)$ & 13.65255 & 2838 & 24.381\
& -12.44726 & 2791 & -125.619\
& -13.16296 & 2581 & -44.359\
& -39.71053 & 2303 & -17.854\
& -12.99531 & 2220 & 41.091\
& -39.51364 & 2149 & -138.939\
$2s_V-f_V$ & -66.34537 & 2086 & -131.843\
$s_V+(g_V-g_5)$ & -6.97876 & 1971 & -142.636\
& -13.25676 & 1924 & -122.046\
& -12.86902 & 1919 & -78.108\
& -12.61890 & 1819 & 99.975\
& -12.53596 & 1587 & 159.537\
& -13.11606 & 1508 & 156.028\
$s_V+(g_V-g_6)$ & -30.96333 & 1357 & -51.344\
& -12.78299 & 1342 & -14.556\
& -12.93541 & 1319 & -23.383\
$s_7$ & -2.99285 & 1201 & 45.632\
$s_V+s_6-f_V$ & -53.07896 & 1116 & 75.117\
$s_V-(g_V-g_6)$ & -48.26419 & 1101 & -112.768\
$s_8$ & -0.69187 & 1056 & -69.630\
& -12.37138 & 1049 & -141.990\
& -13.18382 & 1020 & -117.994\
& 0.77870 & 1013 & 161.465\
& -39.43894 & 953 & -156.160\
$f_V-\left(g_V-g_5\right)$ & -45.52000 & 900 & 110.360\
$f_V+\left(g_V-g_5\right)$ & 19.75530 & 886 & 4.257\
& -12.46117 & 863 & 169.774\
$f_V+\left(g_V-g_6+s_V-s_6\right)$ & -17.49709 & 595 & 62.754\
[@skoglov1996] noticed that bodies like Ceres and Vesta, which have a high inclination and a precession frequency of the ascending node higher than the precession frequency of the rotation axis, can have strong variations of the obliquity. Indeed, the obliquity is given by $$\cos \epsilon=\mathbf{n}.\mathbf{w}=\cos i \cos l + \sin i \sin l \cos\left(\Omega-L\right)$$ with $(l,L)$ the inclination and the longitude of the ascending node of the equatorial plane and $(i,\Omega)$ those of the orbital plane in the frame of the invariable plane. The precession of the ascending node therefore causes obliquity variations if the inclination of the orbital plane is not zero. The inclination of the orbital plane with respect to the initial equatorial plane is represented by a red curve in figures \[fig:obliceres\] and \[fig:oblivesta\] for Ceres and Vesta, respectively. For Ceres, a large part of the amplitude of the obliquity is caused by the precession of the ascending node, which creates oscillations between $2.1$ and $17.1\degree$ on $\left[-100:0\right]{\,\mathrm{kyr}}$. For Vesta, this contribution is less important.
We integrated the rotation of Ceres and Vesta over the time interval $\left[-100:0\right]{\,\mathrm{Myr}}$ for different normalized polar moments of inertia, respectively in the intervals $\left[0.380:0.406\right]$ and $\left[0.390:0.430\right]$. For Ceres, all the normalized polar moments of inertia give solutions for the obliquity with oscillations of similar amplitude (Fig. \[fig:CCeres\]), as noticed by [@ermakov2017a]. The main differences come from the precession frequency, which depends on the normalized polar moment of inertia. A small difference in the precession frequency causes a phase difference, which grows with time. For Vesta, the obliquity solutions for the different normalized polar moments of inertia all have oscillations of similar amplitude (Fig. \[fig:CVesta\]), except for the solution obtained for ${\overline{C}}=0.406$. Because of a secular resonance with the orbital frequency $2s_6-s_V$ (see section \[sec:closereson\]), the obliquity can decrease to $18.9\degree$ on $\left[-20:0\right]{\,\mathrm{Myr}}$ for ${\overline{C}}=0.406$.
Secular models for the orbital motion\[SEC:secularmodels\]
==========================================================
In section \[sec:resultorb\], we observed the proximity of Ceres to the resonances with the frequencies $2g_6-g_5\approx52.23{\arcsecond\per\mathrm{yr}}$ and $2g_6-g_7\approx53.40{\arcsecond\per\mathrm{yr}}$. If Ceres is close to these two resonances and if they overlap, this could affect its orbital motion and therefore its rotational motion. In particular, we have seen in section \[sec:resultoblisymp\] that the values of the inclination have direct consequences on the variations of the obliquity. Moreover, as noted by [@laskarrobutel1993], the chaotic behaviour of the orbital motion can widen by diffusion the possible chaotic zones of the rotation axis.
A secular model can be obtained from the secular Hamiltonian of Ceres and Vesta to get secular equations, which are integrated much faster than the full equations. From the development of the secular Hamiltonian of [@laskarrobutel1995], we build a secular model of Ceres and Vesta perturbed only by Jupiter and Saturn, which allows us to identify the important terms of the planetary perturbations and to study the nearby secular resonances.
Hamiltonian secular model\[sec:modsecham\]
------------------------------------------
[@laskarrobutel1995] computed the development of the Hamiltonian of the planetary perturbations. We consider the case of a body perturbed only by Jupiter and Saturn. From [@laskarrobutel1995], the Hamiltonian is $$H= \sum_{i=5}^{6}\sum_{k,k'}\sum_{\mathcal{N}}\Gamma_{\mathcal{N}}\left(\Lambda,\Lambda_{i}\right) X^{n}X_{i}^{n'}\overline{X}^{\overline{n}}\overline{X}_{i}^{\overline{n}'}Y^{m}Y_{i}^{m'}\overline{Y}^{\overline{m}}\overline{Y}_{i}^{\overline{m}'}e^{i\left( k\lambda+k'\lambda_{i}\right)}\label{eq:devham}$$ with $\mathcal{N}=(n,n',\overline{n},\overline{n}',m,m',\overline{m},\overline{m}')$ and the coefficients $\Gamma_{\mathcal{N}}\left(\Lambda,\Lambda_{i}\right)$, which depend only on the ratio of the semi-major axes. The Poincaré rectangular canonical coordinates $\left(\Lambda,\lambda,x,-i\overline{x},y,-i\overline{y} \right)$ are defined by $$\Lambda=\beta\sqrt{\mu a},$$ $$x=\sqrt{\Lambda\left( 1-\sqrt{1-e^{2}} \right)}e^{i\varpi},$$ $$y=\sqrt{\Lambda\sqrt{1-e^{2}} \left(1-\cos i \right)}e^{i\Omega},$$ with $\beta=m{M_{\odot}}/(m+{M_{\odot}})$, $\mu=\mathcal{G}(m+{M_{\odot}})$ and $m$ the mass of the perturbed body. The variables $X$ and $Y$ are given by $X=x\sqrt{2/\Lambda}$ and $Y=y/\sqrt{2\Lambda}$ [@laskarrobutel1995]. We select the terms satisfying the secular inequality $\left(0,0\right)$ for $\left(k,k'\right)$ to obtain the secular part of the Hamiltonian (Eq. (\[eq:devham\])), and we consider the case of a massless perturbed body.
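For reference, the rescaled variables $X$ and $Y$ can be computed directly from the elliptical elements: the factors of $\Lambda$ cancel, and for small eccentricity and inclination they reduce to the variables $z$ and $\zeta$ of section \[sec:resultorb\]. The short sketch below only illustrates these definitions, with element values within the ranges quoted for Ceres.

```python
import numpy as np

def poincare_XY(e, inc, varpi, Omega):
    """Rescaled Poincare variables X = x sqrt(2/Lambda), Y = y / sqrt(2 Lambda);
    the factors Lambda cancel, so only e, i, varpi and Omega are needed."""
    X = np.sqrt(2 * (1 - np.sqrt(1 - e**2))) * np.exp(1j * varpi)
    Y = np.sqrt(np.sqrt(1 - e**2) * (1 - np.cos(inc)) / 2) * np.exp(1j * Omega)
    return X, Y

# For small e and i, |X| ~ e and |Y| ~ sin(i/2), i.e. X ~ z and Y ~ zeta.
e, inc = 0.116, np.radians(9.65)        # values within the ranges quoted for Ceres
X, Y = poincare_XY(e, inc, 0.0, 0.0)
print(abs(X), e)                        # close
print(abs(Y), np.sin(inc / 2))          # close
```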
The secular interaction Hamiltonian has been computed to order 1 in the masses and to degree 4 in eccentricity and inclination. We perform a frequency analysis of the solution La2011 on $\left[-20:0\right]{\,\mathrm{Myr}}$ and keep only the main secular terms to create a secular solution of Jupiter and Saturn, which we inject into the Hamiltonian. The Hamiltonian then depends only on time and on $X$ and $Y$. The equations of motion are $$\frac{dX}{dt}=-\frac{2i}{\Lambda}\frac{\partial H}{\partial \overline{X}}\label{eq:secX}$$ $$\frac{dY}{dt}=-\frac{i}{2\Lambda}\frac{\partial H}{\partial \overline{Y}}\label{eq:secY}.$$
Adjustment of the secular model\[sec:modsecadj\]
------------------------------------------------
Eqs. (\[eq:secX\], \[eq:secY\]) are integrated on $\left[-20:0\right]{\,\mathrm{Myr}}$ with a Runge-Kutta 8(7) numerical integrator and a step size of $100$ years. The obtained solution reproduces the amplitudes of the oscillations of the eccentricity and the inclination of the solution La2011. However, the proper frequencies $g$ and $s$ differ from those of the solution La2011. These frequency differences cause phase differences between the perihelion and ascending node longitudes, which grow approximately linearly with time. The secular model therefore does not reproduce the solution La2011 (Figs. \[fig:compLaXsecceres\] and \[fig:compLaXsecvesta\]). Increasing the order of the secular Hamiltonian improves the precision of the secular model, but this only partly reduces the frequency differences.
As was done by [@laskar1990], we adjust the secular frequencies of the model. The differences in the perihelion and ascending node longitudes between the solution La2011 and the secular model are fitted by the affine functions $\mathcal{A} t + d \varpi_0$ and $\mathcal{B} t + d \Omega_0$. The frequencies of the secular model are adjusted by applying the following correction to obtain the Hamiltonian $H'$ $$H'=H-\frac{\mathcal{A}\Lambda}{2} X\overline{X}-2\mathcal{B}\Lambda Y\overline{Y}.$$ The initial conditions for the perihelion and ascending node longitudes are also corrected by the quantities $d \varpi_0$ and $d \Omega_0$, respectively, and the initial conditions for the eccentricity and the inclination are slightly corrected as well. We iterate this procedure until the difference between the solution La2011 and the secular model has a mean close to zero for the four quantities $e$, $i$, $\varpi$, $\Omega$. The adjustment of the frequencies is then about $\mathcal{A}\approx4.1{\arcsecond\per\mathrm{yr}}$ and $\mathcal{B}\approx0.20{\arcsecond\per\mathrm{yr}}$ for Ceres and $\mathcal{A}\approx0.51{\arcsecond\per\mathrm{yr}}$ and $\mathcal{B}\approx-0.41{\arcsecond\per\mathrm{yr}}$ for Vesta. The differences for the eccentricity and the inclination between the two solutions then oscillate around zero and correspond to short-period terms, which are not reproduced by the secular Hamiltonian (Figs. \[fig:compLaXsecceres\] and \[fig:compLaXsecvesta\]). On $\left[-20:0\right]{\,\mathrm{Myr}}$, the maximal differences in absolute value between the solution La2011 and the adjusted secular model are then $0.0082$ and $0.23\degree$ for the eccentricity and the inclination of Ceres and $0.013$ and $0.65\degree$ for the eccentricity and the inclination of Vesta.
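The adjustment amounts to a linear fit of the phase drift. A minimal sketch, with synthetic data standing in for the difference in perihelion longitude between La2011 and the secular model, is:

```python
import numpy as np

ARCSEC = np.pi / (180 * 3600)
t = np.linspace(-20e6, 0.0, 20001)                    # yr
# Synthetic difference of perihelion longitudes: linear drift A*t + dvarpi0
# plus a small quasi-periodic residual (all values arbitrary).
dvarpi = 4.1 * ARCSEC * t + 0.02 + 0.005 * np.sin(2 * np.pi * t / 2.4e4)

# Affine fit A*t + dvarpi0: A corrects the frequency g of the secular model,
# dvarpi0 corrects the initial perihelion longitude.
A, dvarpi0 = np.polyfit(t, dvarpi, 1)
print(f"A = {A / ARCSEC:.3f} arcsec/yr, dvarpi0 = {dvarpi0:.4f} rad")
```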
This Hamiltonian model with the adjustment of the frequencies $g$ and $s$ reproduces the variations of the eccentricity and the inclination of Ceres and Vesta on $\left[-20:0\right]{\,\mathrm{Myr}}$. Therefore, the long-term orbital dynamics of Ceres and Vesta is driven for the most part by the planetary perturbations of Jupiter and Saturn, as noticed by [@skoglov1996] for Ceres and Vesta and by [@ermakov2017a] for Ceres.
Study of the close resonances
-----------------------------
This model allows the resonances close to Ceres and Vesta to be studied. The integration of the secular Hamiltonian is about $10^4$ times faster than the complete integration and makes it possible to carry out many integrations with different parameters $\mathcal{A}$ and $\mathcal{B}$ near the values used for the models, in order to see the effects of the nearby secular resonances. For each value of these parameters, Eqs. (\[eq:secX\], \[eq:secY\]) are integrated on $\left[-20:0\right]{\,\mathrm{Myr}}$ and the secular frequencies $g$ and $s$ are determined with the frequency analysis.
For Ceres, the evolutions of the eccentricity, the inclination, and the frequency $g_C$ are shown in figure \[fig:alphaceres\] for $\mathcal{A}\in \left[0:6\right]{\arcsecond\per\mathrm{yr}}$. The resonance with the frequency $2g_6-g_5\approx52.23{\arcsecond\per\mathrm{yr}}$, present in the secular motion of Jupiter and Saturn, acts for about $g_C\in\left[51.32:53.16\right]{\arcsecond\per\mathrm{yr}}$. The maximal and minimal eccentricities vary respectively from $0.18$ to $0.27$ and from $0.04$ to $0.0002$. The maximal inclination rises from $10.6$ to $11.1\degree$, which would increase the variations of the obliquity, as noticed in section \[sec:resultoblisymp\]. For about $g_C\in\left[53.21:53.60\right]{\arcsecond\per\mathrm{yr}}$, there is a resonance with the frequency $2g_6-g_7\approx53.40{\arcsecond\per\mathrm{yr}}$, present in the secular motion of Jupiter and Saturn, and the maximal eccentricity increases from $0.17$ to $0.19$. The resonance with the frequency $2g_6-g_7$ thus has a weaker chaotic nature than the one with $2g_6-g_5$. In section \[sec:resultorb\], we gave the interval $\left[54.225:54.261\right]{\arcsecond\per\mathrm{yr}}$ as an estimate of the variation of the frequency $g_C$ due to the chaotic diffusion on $\left[-250:250\right]{\,\mathrm{Myr}}$. Therefore, the chaotic diffusion of Ceres is too weak to put Ceres in resonance with the frequencies $2g_6-g_5$ and $2g_6-g_7$ on $\left[-250:250\right]{\,\mathrm{Myr}}$. The frequency $g_7+2g_6-2g_5\approx51.06{\arcsecond\per\mathrm{yr}}$ in the motion of Jupiter and Saturn causes a resonance with weaker but observable effects on the eccentricity for $g_C\in\left[50.97:51.20\right]{\arcsecond\per\mathrm{yr}}$. In the secular model of the motions of Jupiter and Saturn, we find the frequency $3g_6-2g_5+s_6-s_7\approx52.87{\arcsecond\per\mathrm{yr}}$ with a smaller amplitude, and it is therefore difficult to distinguish its effects from those of the resonance with the frequency $2g_6-g_5$. The evolutions of the eccentricity, the inclination, and the frequency $s_C$ are shown in figure \[fig:betaceres\] for $\mathcal{B}\in\left[-3,3\right]{\arcsecond\per\mathrm{yr}}$ and show slight irregularities for $s_C\in\left[-59.95:-59.73\right]{\arcsecond\per\mathrm{yr}}$. This frequency interval does not correspond to a term used for the secular motion of Jupiter and Saturn.
For Vesta, the evolutions of the eccentricity, the inclination, and the frequency $g_V$ are shown in figure \[fig:alphavesta\] for $\mathcal{A}\in\left[-3,3\right]{\arcsecond\per\mathrm{yr}}$. For $g_V\in\left[34.56:35.25\right]{\arcsecond\per\mathrm{yr}}$, there is a resonance with the frequency $2g_5-s_6\approx34.86{\arcsecond\per\mathrm{yr}}$, where the maximal eccentricity increases from $0.17$ to $0.19$ and the maximal inclination from $7.5$ to $8.0\degree$. For $g_V\in\left[38.86:39.12\right]{\arcsecond\per\mathrm{yr}}$, the maximal inclination increases from $7.6$ to $7.7\degree$; this area does not correspond to any term used for the secular motion of Jupiter and Saturn. The evolutions of the eccentricity, the inclination, and the frequency $s_V$ are shown in figure \[fig:betavesta\] for $\mathcal{B}\in\left[-3,3\right]{\arcsecond\per\mathrm{yr}}$. For $s_V\in\left[-41.71:-41.49\right]{\arcsecond\per\mathrm{yr}}$, the inclination can increase from $7.3$ to $7.4\degree$. This resonance does not match any term used for the secular motion of Jupiter and Saturn.
Stability of the rotation axes\[SEC:stab\]
==========================================
In this section, we study the long-term stability of the rotation axis. As in [@laskarjoutelrobutel1993] and [@laskarrobutel1993], the stability of the rotation axis can be estimated using frequency analysis. We determine the precession frequency $f_1$ on the interval $\left[-20:0\right]{\,\mathrm{Myr}}$ and the precession frequency $f_2$ on the interval $\left[-40:-20\right]{\,\mathrm{Myr}}$. The quantity $\sigma=|(f_1-f_2)/f_1|$ gives an estimate of the diffusion of the precession frequency [@laskar1993; @dumaslaskar1993]. For an integrable system, this quantity must remain zero; for a weakly perturbed system, it is small but increases if the system becomes chaotic.
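A minimal illustration of this estimator, using a toy precession signal whose frequency drifts by an arbitrary amount and a simple FFT peak search in place of the full frequency analysis, is:

```python
import numpy as np

ARCSEC = np.pi / (180 * 3600)
t = np.arange(0.0, 40e6, 200.0)                    # 40 Myr, 200 yr sampling
# Toy precession signal with a 5% frequency drift (arbitrary values).
freq = -6.16 * ARCSEC * (1 + 0.05 * t / t[-1])     # rad/yr
w = np.exp(1j * np.cumsum(freq) * 200.0)

def peak_frequency(t, z):
    """Frequency of the dominant FFT peak (coarse step of a frequency analysis)."""
    nu = 2 * np.pi * np.fft.fftfreq(len(t), d=t[1] - t[0])
    return nu[np.argmax(np.abs(np.fft.fft(z * np.hanning(len(z)))))]

half = len(t) // 2
f1 = peak_frequency(t[:half], w[:half])            # first 20 Myr
f2 = peak_frequency(t[half:], w[half:])            # last 20 Myr
print(f"sigma = {abs((f1 - f2) / f1):.2e}")        # non-zero: the frequency diffuses
```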
Secular solution for the obliquity
----------------------------------
We integrate the secular equation (\[eq:integsec\]) with an Adams integrator and a step size of $100$ years. The normal to the orbit $\bf{n}$ and the eccentricity $e$ are computed from the secular orbital solution obtained from the secular frequency decompositions of section \[sec:resultorb\]. We use the initial conditions for the rotation axis of table \[tab:CI\]. The secular solutions for the obliquities are compared to the non-secular ones for Ceres and Vesta in figure \[fig:dobliLaXmodsec\] for the time interval $\left[-20:0\right]{\,\mathrm{Myr}}$. The secular computation of the obliquity, which is about one million times faster, thus correctly reproduces the evolution of the obliquity.
The secular orbital solution has initial conditions different from those of the solution La2011 because the short-period variations have been removed (figures \[fig:compmodsecceres\] and \[fig:compmodsecvesta\]). This modifies the initial obliquities by about $0.02\degree$ for Ceres and $-0.05\degree$ for Vesta, which could explain the differences observed in figure \[fig:dobliLaXmodsec\].
We integrate the rotation axis on $\left[-40:0\right]{\,\mathrm{Myr}}$ with both the symplectic method of section \[sec:oblisymp\] and the secular equation (\[eq:integsec\]). For both integrations, the initial obliquity varies from $0$ to $100\degree$ with a step of $0.5\degree$. The diffusion of the precession frequency is represented in figure \[fig:epsstab\] with respect to the initial obliquity. For Ceres, the diffusion is quite similar in amplitude and evolution for the two cases, and the areas with a strong increase of $\sigma$ allow resonances with the orbital frequencies to be identified. For Vesta, the diffusion $\sigma$ is higher for the secular solution; however, the areas with high values of the diffusion $\sigma$ coincide. The secular and non-secular solutions of the obliquity thus have similar stability properties.
Study of the close resonances
-----------------------------
We integrate the secular equation (\[eq:integsec\]) on $\left[-40:0\right]{\,\mathrm{Myr}}$ for different precession constants sampled in an interval with a step of $0.01{\arcsecond\per\mathrm{yr}}$ to study the effects of the resonances.
### Ceres
The precession frequency, its diffusion, and the variations of the obliquity are represented for Ceres in figure \[fig:alphastabceres\] with respect to the precession constant in the interval $\left[0.01:12\right]{\arcsecond\per\mathrm{yr}}$. We observe areas with strong variations of the diffusion, specified in table \[tab:areadifceres\], which correspond to resonances with orbital frequencies. Most of these frequencies are already present in the frequency decompositions of the variables $z$ and $\zeta$ used for the construction of the secular solution. The quantities $\mathbf{n}$ and $e$, which appear in the secular equation (\[eq:integsec\]) and are used to obtain the angular momentum $\mathbf{w}$, are computed from the secular solution of the variables $z$ and $\zeta$ and can include additional frequencies. To identify the remaining frequencies in table \[tab:areadifceres\], we therefore perform a frequency analysis of the quantities $n_x+in_y$ and $(n_x+in_y)/(1-e^2)^{3/2}$, where $n_x$ and $n_y$ are the coordinates in the invariant frame of the component parallel to the invariable plane of the normal to the orbit $\mathbf{n}$. We find no trace of the remaining frequencies in the first 4000 terms of the frequency analysis of $n_x+in_y$, but we find the missing frequencies of the areas identified in table \[tab:areadifceres\] in the first 4000 terms of the frequency analysis of $(n_x+in_y)/(1-e^2)^{3/2}$. The variations of the eccentricity are thus responsible for the appearance of additional secular resonances between the orbital and the rotational motions.
We note in particular the appearance of the resonance with the frequency $s_C+2(g_C-g_6)+(g_5-g_7)\approx-6.07{\arcsecond\per\mathrm{yr}}$, which is included in the interval of uncertainty of the precession constant. Therefore, Ceres could be in resonance with this frequency; however, its effect on the obliquity is very limited. In the vicinity of the interval of uncertainty, we observe a narrow area where the minimal obliquity decreases slightly, down to $1\degree$, because of the resonance with the frequency $s_C+(3g_C-4g_6+g_7)\approx-6.39{\arcsecond\per\mathrm{yr}}$. More distant resonances have stronger effects on the obliquity of Ceres. The resonance with the frequency $s_C+(g_C-g_5)\approx-9.26{\arcsecond\per\mathrm{yr}}$ causes variations of the obliquity in the interval between $0$ and almost $40\degree$, and the one with $s_7\approx-2.99{\arcsecond\per\mathrm{yr}}$ variations between $0$ and almost $30\degree$. Ceres is closer to a less important resonance with $s_C+2(g_C-g_6)\approx-7.24{\arcsecond\per\mathrm{yr}}$, where the obliquity varies between $0$ and $26\degree$. However, Ceres would need a precession constant between about $7.15{\arcsecond\per\mathrm{yr}}$ and $7.85{\arcsecond\per\mathrm{yr}}$ to enter this resonance.
$\alpha$ $({\arcsecond\per\mathrm{yr}})$ frequency $({\arcsecond\per\mathrm{yr}})$ identification approximate value
------------------------------------------ ------------------------------------------- ------------------------------- ------------------------------------- ----
$\left[0.52:1.10\right]$ $\left[-1.07:-0.51\right]$ $s_8$ $-0.69{\arcsecond\per\mathrm{yr}}$ \*
$\left[1.77:2.00\right]$ $\left[-1.94:-1.72\right]$ $s_7+(g_5-g_7)$ $-1.83{\arcsecond\per\mathrm{yr}}$ \*
$\left[2.19:2.60\right]$ $\left[-2.52:-2.12\right]$ $s_6-(g_5-g_6)$ $-2.36{\arcsecond\per\mathrm{yr}}$ \*
$\left[2.76:3.49\right]$ $\left[-3.38:-2.67\right]$ $s_7$ $-2.99{\arcsecond\per\mathrm{yr}}$ \*
$\left[4.21:4.54\right]$ $\left[-4.38:-4.07\right]$ $s_7-(g_5-g_7)$ $-4.16{\arcsecond\per\mathrm{yr}}$ \*
$\left[5.14:5.34\right]$ $\left[-5.15:-4.96\right]$ $s_7-(g_C+g_5-2g_6)$ $-5.01{\arcsecond\per\mathrm{yr}}$
$\left[5.36:5.60\right]$ $\left[-5.39:-5.16\right]$ $s_C+(3g_C+g_5-4g_6)$ $-5.22{\arcsecond\per\mathrm{yr}}$ \*
$\left[5.75:6.04\right]$ $\left[-5.81:-5.54\right]$ $s_1$ $-5.61{\arcsecond\per\mathrm{yr}}$ \*
$\left[6.21:6.48\right]$ $\left[-6.23:-5.97\right]$ $s_C+2(g_C-g_6)+(g_5-g_7)$ $-6.07{\arcsecond\per\mathrm{yr}}$
$\left[6.52:6.79\right]$ $\left[-6.52:-6.27\right]$ $s_C+(3g_C-4g_6+g_7)$ $-6.39{\arcsecond\per\mathrm{yr}}$ \*
$\left[7.15:7.85\right]$ $\left[-7.53:-6.86\right]$ $s_C+2(g_C-g_6)$ $-7.24{\arcsecond\per\mathrm{yr}}$ \*
$\left[8.10:8.29\right]$ $\left[-7.94:-7.76\right]$ $s_C-(s_6-s_7-2g_C-g_5+3g_6)$ $-7.88{\arcsecond\per\mathrm{yr}}$
$\left[8.34:8.51\right]$ $\left[-8.16:-7.99\right]$ $s_C+(g_C-g_7)$ $-8.09{\arcsecond\per\mathrm{yr}}$ \*
$\left[8.63:8.95\right]$ $\left[-8.56:-8.27\right]$ $s_C+2(g_C-g_6)-(g_5-g_7)$ $-8.41{\arcsecond\per\mathrm{yr}}$ \*
$\left[9.48:10.19\right]$ $\left[-9.66:-9.04\right]$ $s_C+(g_C-g_5)$ $-9.26{\arcsecond\per\mathrm{yr}}$ \*
$\left[10.48:10.75\right]$ $\left[-10.19:-9.95\right]$ $s_C-(g_5-2g_6+g_7)$ $-10.11{\arcsecond\per\mathrm{yr}}$ \*
$\left[10.93:11.12\right]$ $\left[-10.55:-10.37\right]$ $s_C+(g_C-2g_5+g_7)$ $-10.43{\arcsecond\per\mathrm{yr}}$ \*
$\left[11.15:11.37\right]$ $\left[-10.78:-10.58\right]$ $s_C+(s_6-s_7-3(g_5-g_6))$ $-10.65{\arcsecond\per\mathrm{yr}}$
$\left[11.51:11.66\right]$ $\left[-11.03:-10.90\right]$ $s_C-(g_C+g_5-4g_6+2g_7)$ $-10.96{\arcsecond\per\mathrm{yr}}$
$\left[11.75:11.99\right]$ $\left[-11.40:-11.11\right]$ $s_C-2(g_5-g_6)$ $-11.28{\arcsecond\per\mathrm{yr}}$ \*
$\alpha$ $({\arcsecond\per\mathrm{yr}})$ frequency $({\arcsecond\per\mathrm{yr}})$ identification approximate value
------------------------------------------ ------------------------------------------- ----------------- ------------------------------------- ----
$\left[10.71:11.07\right]$ $\left[-9.24:-8.95\right]$ $-9.09{\arcsecond\per\mathrm{yr}}$
$\left[13.16:13.45\right]$ $\left[-11.16:-10.93\right]$
$\left[13.84:14.18\right]$ $\left[-11.74:-11.47\right]$ $s_7-(g_V-g_6)$ $-11.65{\arcsecond\per\mathrm{yr}}$
$\left[14.89:16.48\right]$ $\left[-13.54:-12.30\right]$ $2s_6-s_V$ $-13.09{\arcsecond\per\mathrm{yr}}$ \*
$\left[17.36:18.18\right]$ $\left[-14.82:-14.22\right]$ $s_V+(g_6-g_7)$ $-14.46{\arcsecond\per\mathrm{yr}}$
$\left[18.77:19.67\right]$ $\left[-16.02:-15.25\right]$ $s_V-(g_5-g_6)$ $-15.62{\arcsecond\per\mathrm{yr}}$ \*
$\left[20.12:21.98\right]$ $\left[-18.12:-16.34\right]$ $-17.74{\arcsecond\per\mathrm{yr}}$ \*
In figure \[fig:alphastabceres\], we also show the diffusion for the precession constants computed with a rotation rate $7\%$ higher [@mao2018], as discussed in section \[sec:earlyceresalpha\]. If the early Ceres was in hydrostatic equilibrium, as assumed by [@mao2018], it could have been in resonance with the frequencies $s_1\approx-5.61{\arcsecond\per\mathrm{yr}}$, $s_C+2(g_C-g_6)+(g_5-g_7)\approx-6.07{\arcsecond\per\mathrm{yr}}$ and $s_C+(3g_C-4g_6+g_7)\approx-6.39{\arcsecond\per\mathrm{yr}}$, which have weak effects on the obliquity, as seen in figure \[fig:alphastabceres\], and the amplitudes of the oscillations of the obliquity would be similar. The events or phenomena that changed its rotation rate would therefore not have significantly changed the interval of variation of the obliquity.
As discussed in section \[sec:earlyceresalpha\], if the early Ceres was in hydrostatic equilibrium and the shape and the internal structure have not changed, as assumed by [@mao2018], the present Ceres would have a precession constant in the interval $\left[6.58:6.98\right]{\arcsecond\per\mathrm{yr}}$ for a normalized polar moment of inertia of ${\overline{C}}=0.371$. With these precession constants, Ceres could be in resonance with the frequency $s_C+(3g_C-4g_6+g_7)\approx-6.39{\arcsecond\per\mathrm{yr}}$ (table \[tab:areadifceres\]) with no significant changes of the obliquity.
### Vesta\[sec:closereson\]
The precession frequency, its diffusion, and the variations of the obliquity are represented for Vesta in figure \[fig:alphastabvesta\] with respect to the precession constant in the interval $\left[10:22\right]{\arcsecond\per\mathrm{yr}}$. The frequencies of the resonances are listed in table \[tab:areadifvesta\]. We identify the frequencies $2s_6-s_V\approx-13.09{\arcsecond\per\mathrm{yr}}$, $s_V-g_5+g_6\approx-15.62{\arcsecond\per\mathrm{yr}}$ and $-17.74{\arcsecond\per\mathrm{yr}}$, which are among the frequencies of the secular model. As for Ceres, we perform a frequency analysis of the quantities $n_x+in_y$ and $(n_x+in_y)/(1-e^2)^{3/2}$ to identify the remaining frequencies. We do not find them in the frequency analysis of $n_x+in_y$, but we find in that of $(n_x+in_y)/(1-e^2)^{3/2}$ the frequencies $-9.09{\arcsecond\per\mathrm{yr}}$, $s_7-(g_V-g_6)\approx-11.65{\arcsecond\per\mathrm{yr}}$, and $s_V+(g_6-g_7)\approx-14.46{\arcsecond\per\mathrm{yr}}$, which can correspond to the areas identified in table \[tab:areadifvesta\]. As for Ceres, the variations of the eccentricity are responsible for the appearance of some resonances. The frequency interval $\left[-11.16:-10.93\right]{\arcsecond\per\mathrm{yr}}$ in table \[tab:areadifvesta\] does not correspond to any term of the frequency analysis.
The resonance that has the most important effect in the vicinity of Vesta is the one with the frequency $-17.74{\arcsecond\per\mathrm{yr}}$: while the maximal obliquity increases by about $3\degree$, the minimal obliquity decreases by about $10\degree$. The domain of the resonance with the frequency $2s_6-s_V\approx-13.09{\arcsecond\per\mathrm{yr}}$ is included in the uncertainty interval of the precession constant. We observed in section \[sec:resultoblisymp\] that, for the value ${\overline{C}}=0.406$ of the normalized polar moment of inertia, the minimal obliquity decreases compared to the evolution of the obliquity for the other normalized polar moments of inertia. We can see here that this is an effect of the resonance with the frequency $2s_6-s_V$. In this resonance, the minimal obliquity can decrease to $17.6\degree$ and the maximal obliquity can increase to $47.7\degree$.
In figure \[fig:alphastabvesta\], we also observe the diffusion for the precession constants computed with the paleorotation rate of the early Vesta determined by [@fu2014] and the physical parameters discussed in section \[sec:earlyvestaalpha\]. The two giant impacts could thus have brought Vesta closer to the resonance with the frequency $2s_6-s_V$, which involves the crossing of two small resonances, and could also have slightly increased the interval of variation of the obliquity.
Global stability of the rotation axis
-------------------------------------
As in [@laskarjoutelrobutel1993] and [@laskarrobutel1993], we investigate the long-term stability of the rotation axis. We integrate the rotation axis on $\left[-40:0\right]{\,\mathrm{Myr}}$ with the secular equation (\[eq:integsec\]) on a grid of $24120$ points, for initial obliquities from $0$ to $100\degree$ with a step of $0.5\degree$ and for precession constants from $0.5$ to $60{\arcsecond\per\mathrm{yr}}$ with a step of $0.5{\arcsecond\per\mathrm{yr}}$.
The precession frequency corresponds in the frequency analysis to the frequency with the largest amplitude. However, in the case of an important resonance, the frequency with the largest amplitude can correspond to the resonance frequency. We therefore take as precession frequency the frequency with the largest amplitude among those whose difference with the frequencies $s$ and $s_6$ is larger than $5\times10^{-3}{\arcsecond\per\mathrm{yr}}$.
### Ceres
In figure \[fig:stab\], we show the quantity $\log_{10}\left(\sigma\right)$, the maximal amplitude of the obliquity on $\left[-40:0\right]{\,\mathrm{Myr}}$, which corresponds to the difference between the minimal and maximal obliquities on $\left[-40:0\right]{\,\mathrm{Myr}}$, and the precession frequency of Ceres obtained by frequency analysis on $\left[-20:0\right]{\,\mathrm{Myr}}$. The position of Ceres at the epoch J2000 is indicated by a white circle.
Ceres is in a quite stable zone and is far from the most chaotic zones, which correspond to the resonances with the frequencies $s_C$, $s_6$ and $s_C+\left(g_C-g_6\right)$. The motion of its rotation axis is relatively stable even though Ceres has a precession frequency $f_C=-6.1588{\arcsecond\per\mathrm{yr}}$ (table \[tab:freqobliceres\]) close to the node precession frequencies of the inner planets, Mercury $s_1=-5.61{\arcsecond\per\mathrm{yr}}$ and Venus $s_2=-7.06{\arcsecond\per\mathrm{yr}}$ (table \[tab:freqref\]). The secular orbital motion of Ceres is almost entirely determined by the planetary perturbations of Jupiter and Saturn (section \[sec:modsecadj\]), and the amplitudes of the frequencies of the inner planets are small in the motion of the ascending node of Ceres. Therefore, the widths of these resonances are small and they do not overlap, contrary to the case of the inner planets [@laskarrobutel1993].
The resonances with the frequencies $s_7$ and $s_8$ have more important effects than the ones with the inner planets. These resonances can increase the amplitude of the obliquity by several degrees. With the present precession constant, we see that the resonance with $s_7$ can affect the rotation axis of Ceres only if Ceres has an initial obliquity of about $70\degree$, and the resonance with $s_8$ can affect Ceres if the initial obliquity is about $90\degree$. The resonance with the frequency $s_6$ produces large amplitudes of the obliquity, but does not correspond to one of the most chaotic zones except when it overlaps with the resonance at the frequency $s_C+\left(g_C-g_6\right)$. As for the planets, as noted by [@laskarrobutel1993], the resonance with the frequency $s_6$ is isolated.
The most important nearby resonance is the one with the frequency $s_C+\left(g_C-g_5\right)\approx-9.24{\arcsecond\per\mathrm{yr}}$, which has a significant effect on the amplitude of the obliquity. For an initial obliquity between $0$ and $10\degree$, the amplitude of the obliquity increases from about $20$ to $40\degree$. However, this resonance has a limited width of about $1{\arcsecond\per\mathrm{yr}}$ and it has no influence on Ceres.
### Vesta
The diffusion $\log_{10}\left(\sigma\right)$, the maximal amplitude of the obliquity and the precession frequency are represented for Vesta in figure \[fig:stab\]. Vesta for the epoch J2000 is indicated with a white circle.
Vesta is at the boundary of a relatively stable region. Vesta is far from the chaotic zone created by the resonances with the frequencies $s_V$ and $s_6$ and is close to the resonance with the orbital frequency $2s_6-s_V$. The most important resonance in the vicinity is the one with the frequency $-17.74{\arcsecond\per\mathrm{yr}}$, which could correspond to the frequency $s_4$. In this resonance, the amplitude of the obliquity increases from about $30$ to $40\degree$. Because of its limited width, this resonance has no influence on Vesta. The effects of the resonances with the frequencies $s_7$ and $s_8$ are less important than for Ceres. The resonance with $s_7$ increases the amplitude of the obliquity by only a few degrees. As for Ceres, the resonance with the frequency $s_V+\left(g_V-g_5\right)\approx-6.97{\arcsecond\per\mathrm{yr}}$ still has an important effect on the amplitude of the obliquity, which increases from about $20$ to $30\degree$ in the resonance.
As for Ceres, Vesta has a precession frequency $f_V=-12.882{\arcsecond\per\mathrm{yr}}$ (table \[tab:freqoblivesta\]) close to the node precession frequencies of the inner planets, here the Earth ($s_3=-18.848{\arcsecond\per\mathrm{yr}}$) and Mars ($s_4=-17.751{\arcsecond\per\mathrm{yr}}$) (table \[tab:freqref\]), but their perturbations on the orbit of Vesta are too weak to have significant consequences on the stability of the rotation axis of the present Vesta.
The early Vesta is represented in figure \[fig:stab\] by a white square for the precession constant computed from the supposed rotational parameters before the two giant impacts. The early Vesta would also be in a more stable region. As seen in section \[sec:closereson\], the two giant impacts could have put Vesta closer to the resonance with the orbital frequency $2s_6-s_V$.
Conclusion
==========
We applied the method of [@farago2009] to carry out a symplectic integration of the rotation axes alone, averaged over the fast proper rotation. The obliquity of Ceres is found to vary between $2$ and $20\degree$ for the last $20{\,\mathrm{Myr}}$, in agreement with the results of [@bills2017] and [@ermakov2017a]. If we use for Ceres the value of the normalized polar moment of inertia ${\overline{C}}=0.395$ (Eq. (\[eq:Cnorm\])), which takes into account the non-spherical form of Ceres, we obtain obliquity variations in the same interval, and the precession frequency decreases in absolute value by about $0.5\%$ with respect to the value obtained for ${\overline{C}}=0.393$. For Vesta, the obliquity variations are between $21$ and $45\degree$ for the last $20{\,\mathrm{Myr}}$. As noted by [@skoglov1996], these large variations of the obliquity are due to the significant inclinations of Ceres and Vesta with respect to the invariable plane.
The secular orbital model in section \[SEC:secularmodels\] has allowed us to show that the chaotic diffusion of the secular frequency $g_C$ of Ceres does not seem sufficiently important to put Ceres in a secular orbital resonance with the frequencies $2g_6-g_5$ and $2g_6-g_7$. For Vesta, the chaotic diffusion of the secular frequencies is more important, especially for $s_V$. This model has also allowed us to show that a secular model of Ceres and Vesta perturbed only by Jupiter and Saturn can entirely reproduce their secular orbital motions. The secular orbital dynamics of Ceres and Vesta is then dominated by the perturbations of Jupiter and Saturn, as noted by [@skoglov1996] for Ceres and Vesta and confirmed by [@ermakov2017a] for Ceres.
Ceres and Vesta have precession frequencies close to the secular orbital frequencies of the terrestrial planets, as is the case for Mars. The precession frequency of Ceres is close to the secular orbital frequencies of Mercury and Venus, and that of Vesta to the secular orbital frequencies of the Earth and Mars. However, their long-term rotations are relatively stable. They are in an orbital region where the perturbations of Jupiter and Saturn dominate the secular orbital dynamics and the perturbations of the inner planets are relatively weak. The secular resonances with the inner planets have smaller widths and do not overlap, contrary to what happens for the inner planets themselves.
This is an illustration that the stability of the long-term rotation depends strongly on the orbital motion. For Ceres and Vesta, there exists a chaotic zone with large oscillations of the obliquity as for the inner planets, but it is caused by the overlapping of resonances due to their proper secular frequencies with other resonances due to the perturbations of Jupiter and Saturn. We can also note for Ceres and Vesta that the evolution of the eccentricity is responsible for the appearance of secular resonances for the spin axis. However, their effects on the obliquity and the stability stay modest.
The two giant impacts suffered by Vesta modified the precession constant and could have put Vesta closer to the resonance with the orbital frequency $2s_6-s_V$. Given the uncertainty on the polar moment of inertia, the present Vesta could be in resonance with the frequency $2s_6-s_V$, where the obliquity can decrease to about $17\degree$ and increase to about $48\degree$.
T. Vaillant thanks Nathan Hara for fruitful discussions and references about the random walk on a sphere. The authors thank Anton Ermakov for useful comments on this work.
Passage from the invariable plane frame to the ICRF\[sec:planinv\]
===================================================================
We consider a vector $\mathbf{x}$ in the frame associated with the invariable plane. Its coordinates in the ICRF become $$\mathbf{x}'=R_z\left(\theta_3\right)R_x\left(\theta_1\right)\mathbf{x}$$ with $R_x$ the rotation about the axis $(1,0,0)$ and $R_z$ the rotation about the axis $(0,0,1)$. The angles $\theta_1$ and $\theta_3$ are given by $$\theta_1=0.4015807829125271\,\rad$$ corresponding to about $\theta_1\approx23.01\degree$, and $$\theta_3=0.06724103544220839\,\rad$$ corresponding to about $\theta_3\approx3.85\degree$.
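A minimal numerical sketch of this change of frame is given below (illustrative only; the rotation matrices use the standard right-handed convention, and the sign convention of the angles is assumed to follow it; the test vector is arbitrary):

```python
import numpy as np

theta1 = 0.4015807829125271    # rad, rotation about the x-axis (~23.01 deg)
theta3 = 0.06724103544220839   # rad, rotation about the z-axis (~3.85 deg)

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# x given in the invariable-plane frame -> x' in the ICRF
x_inv = np.array([0.0, 0.0, 1.0])      # e.g. the normal to the invariable plane
x_icrf = Rz(theta3) @ Rx(theta1) @ x_inv
print(np.degrees([theta1, theta3]))    # ~[23.01, 3.85]
print(x_icrf)
```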
[^1]: If we take into account the non-spherical shape of Ceres to compute the normalized mean moment of inertia, the normalized polar moment of inertia becomes ${\overline{C}}=0.395$.
---
abstract: 'We consider a real massless scalar field in 1+1 dimensions satisfying time-dependent Robin boundary condition at a static mirror. This condition can simulate moving reflecting mirrors whose motions are determined by the time-dependence of the Robin parameter. We show that particles can be created from vacuum, characterizing in this way a dynamical Casimir effect.'
author:
- 'Hector O. Silva'
- 'C. Farina'
title: 'A simple model for the dynamical Casimir effect for a static mirror with time-dependent properties'
---
Introduction {#Intro}
============
The phenomenon of particle creation from quantum vacuum by moving boundaries or due to time-dependent properties of materials, commonly referred to as the dynamical Casimir effect (DCE) [@Yablo-1989; @Schwinger-1992], has been investigated since the pioneering works of Moore [@Moore-JMP-1970] and DeWitt [@DeWitt-1975] (see also the subsequent works carried out in Refs. [@others]) in a wide variety of situations and with the aid of quite different approaches (see Refs. [@Reviews-I] for excellent reviews on the subject). In particular, perturbative and numerical approaches were applied to single mirrors [@Single-mirror; @Jaekel-PRL-1996; @Mintz-JPA-2006; @Mintz-JPA-2006-2] and to cavities [@Cavities]. Initial field states different from vacuum were also considered, both for single mirrors [@temperatura-uma-fronteira; @equivalencia-DN] and for cavities [@temperatura-cavidade]. The first experimental observation of this phenomenon was recently announced in Ref. [@Wilson-arXiv].
Taking into account the difficulties in generating appreciable mechanical oscillation frequencies (of the order of GHz) to obtain a detectable number of photons, recent experimental schemes focus on simulating moving boundaries by considering material bodies with time-dependent electromagnetic properties. These possibilities were first proposed by Yablonovitch [@Yablo-1989] and have been further developed in theoretical works which considered materials with time-dependent permittivities and time-dependent surface conductivities [@time-dependent-prop; @Crocce-PRA-2004; @Naylor-PRA-2009] (see the nice compilation done in Ref. [@Naylor-PRA-2009]). For instance, in Ref.
[@Crocce-PRA-2004] the DCE was studied for a massless scalar field within a cavity containing a thin semiconducting film with time-dependent conductivity, centered at the middle of the cavity. The coupling of such a film to the quantum scalar field was modeled by a delta-potential with time-dependent strength. A generalization to the case of an electromagnetic field was carried out in [@Naylor-PRA-2009]. Very promising and ingenious experimental set-ups to simulate non-stationary boundaries include changing the reflectivity of a semiconductor with a periodic sequence of short laser pulses [@Experimento-MIR], or using a coplanar waveguide terminated by a superconducting quantum interference device (SQUID). By applying a variable magnetic flux on the SQUID, a single moving mirror can be simulated [@Johansson; @Nation]. A first step toward the experimental verification of the DCE was recently made in [@Wilson-PRL-2010] using this approach. Moreover, the same group recently claimed to have observed the DCE [@Wilson-arXiv].
Key ingredients in the predictions of the DCE are the boundary conditions (BC) under consideration and, naturally, the quantum field submitted to those BC. Quite general BC are the so-called Robin ones which, for the case of a scalar field in 1+1 dimensions and a single mirror fixed at $x=a$, are defined by $\phi(t,x=a)=\gamma[\partial_{x}\phi(t,x)]_{x=a}$, where $\gamma$ is a real parameter (hereafter called the Robin parameter). For the case of a moving boundary, the previous relation is imposed in the comoving frame and the corresponding BC in the laboratory frame is obtained after an appropriate Lorentz transformation.
These BC have the nice feature of interpolating continuously between Dirichlet ($\gamma \rightarrow 0$) and Neumann ($\gamma \rightarrow \infty$) ones, and they occur in several areas of physics and mathematics. For instance, in classical mechanics they appear if one considers a vibrating string coupled, at one of its ends, to a spring satisfying Hooke’s law [@Mintz-JPA-2006; @Mintz-JPA-2006-2; @Robin-CM]. In non-relativistic quantum mechanics, Robin BC occur as the most general BC imposed by a wall ensuring the hermiticity of the hamiltonian as well as a null probability flux through it [@Robin-QM]. Regarding the static Casimir effect [@Casimir-1948], it was shown that the Casimir force between two parallel plates which impose Robin BC on a real scalar field may have its sign changed if appropriate choices are made for the corresponding Robin parameters of each mirror [@Romeo-JPA-2002]. Such a repulsive Casimir force was also predicted, in the case of parallel plates, by Boyer in the 70s, who considered a pair of perfectly conducting and infinitely permeable plates [@Boyer-PRA-1974]. Further investigations on the influence of Robin BC on the static Casimir effect, including thermal corrections and the case of Casimir piston setups, were carried out, for instance, in Refs. [@Robin-SCE]. See also Refs. [@Quantum-Vacuum] for the influence of this BC on the structure of the quantum vacuum.
Only recently were Robin BC considered in the context of the DCE. For a massless scalar field in 1+1 dimensions submitted to a Robin BC at a single moving mirror, the radiation reaction force on the moving mirror and the particle creation rate were computed in Refs. [@Mintz-JPA-2006; @Mintz-JPA-2006-2]. Interestingly, for Robin BC, the radiation reaction force acquires a dispersive component, in sharp contrast with the Dirichlet and Neumann cases where the force is purely dissipative. It was also shown that, for a given Robin parameter, there exists a mechanical frequency of motion that dramatically reduces the particle creation effect [@Mintz-JPA-2006-2]. Finally, and of crucial importance for the present work, Robin BC can also be useful to describe phenomenological models for penetrable surfaces and, under certain conditions, they simulate the plasma model for real metals [@Robin-EM]. In these situations, for frequencies $\omega$ much smaller than the plasma frequency $\omega_{\mbox{\footnotesize{P}}}$, the Robin parameter $\gamma$ can be identified with the plasma wavelength $\lambda_{\mbox{\footnotesize{P}}}$. In other words, the Robin parameter $\gamma$ gives us an estimate of the penetration length of the mirror under consideration [^1].
Since simulating the motion of a reflecting mirror is equivalent to simulating a real metal with a time-dependent plasma wavelength, the above interpretation of $\gamma$ leads naturally to the consideration of time-dependent Robin parameters. Specifically, it is quite natural to simulate the motion of a reflecting mirror by considering the quantum field submitted to a Robin BC at a static mirror, but with a time-dependent Robin parameter $\gamma(t)$. The kind of boundary motion being simulated is determined by the time-dependence of $\gamma(t)$. The purpose of this paper is precisely to analyze this situation for a massless scalar field in 1+1 dimensions. In particular, we shall compute explicitly the particle creation rate for a natural choice of time-dependence of $\gamma(t)$ which is directly related to recent experimental proposals. This paper is organized as follows: in Sec. \[Bogoliubov\] the Bogoliubov transformation between the in and out creation/annihilation operators is obtained, allowing us to find the spectral distribution of the created particles and the particle creation rate in Secs. \[Spectrum\] and \[creation\_rate\], respectively. Finally, in Sec. \[Conc\] we present our conclusions and final remarks. Throughout this work we consider $\hbar=c=1$.
The Bogoliubov transformation {#Bogoliubov}
=============================
We start by considering a real massless scalar field $\phi$ in 1+1 dimensions, which satisfies the Klein-Gordon equation, $\partial^2\phi = 0$, and is submitted to a time-dependent Robin BC at a mirror fixed at the origin, namely, $\gamma(t)\partial\phi/\partial x|_{x=0} - \phi(0,t)
= 0$. For simplicity, we assume that $\gamma(t)$ departs only slightly from a positive constant $\gamma_0$, so that we can write $\mbox{$\gamma(t)=\gamma_{0}+\delta\gamma(t)$}$, where $\delta\gamma(t)$ is a smooth time-dependent function satisfying the condition $\max|\delta\gamma(t)| \ll \gamma_{0}$, for every $t$. Under these assumptions in the limit $\gamma_{0} \rightarrow \infty$ we recover Neumann BC. On the other hand, to reobtain Dirichlet BC ($\gamma_{0} \rightarrow 0$), because of condition $\max |\delta\gamma(t)| \ll \gamma_{0}$, we must also take $\delta\gamma(t) \rightarrow 0$. If we consider only $\delta\gamma(t)=0$ we re-obtain the usual time-independent Robin BC. Moreover, we shall also impose that $\delta\gamma(t) \rightarrow 0$ for $t \rightarrow \pm
\infty$. In terms of $\gamma_0$ and $\delta\gamma(t)$, the BC then reads $$\begin{aligned}
\gamma_0\left[\frac{\partial \phi(x,t)}{\partial
x}\right]_{x=0} -\;\phi(0,t) + \delta\gamma(t)\left[\frac{\partial
\phi(x,t)}{\partial x}\right]_{x=0}=0\, . \label{bc-1} \nonumber \\ \end{aligned}$$ Also for the field, a perturbative approach will be adopted. Following Ford and Vilenkin [@Ford-Vilenkin-1982] we write $$\phi(x,t)=\phi_{0}(x,t)+\delta\phi(x,t)\, , \label{ansatz}$$ where, by assumption, $\phi_0$ satisfies the Klein-Gordon equation, $\partial^2 \phi_0 = 0$, and the time-independent Robin BC, $$\gamma_0\left[\frac{\partial \phi_0(x,t)}{\partial
x}\right]_{x=0} -\;\phi_0(0,t) = 0\, . \label{bc-PhiZero}$$ The small perturbation $\delta \phi$ takes into account the contribution to the total field $\phi$ caused by the time-dependence of the Robin parameter, described by the function $\delta\gamma(t)$. Since both $\phi$ and $\phi_0$ satisfy the Klein-Gordon equation, so does $\delta\phi$, namely, $\partial^2\delta\phi = 0$. The BC satisfied by $\delta\phi$ is obtained, up to first order terms, by substituting (\[ansatz\]) into Eq. (\[bc-1\]), which leads to $$\begin{aligned}
\gamma_0\left[\frac{\partial \delta \phi(x,t)}{\partial
x}\right]_{x=0}- \;\delta\phi(0,t) =
- \delta\gamma(t)\left[\frac{\partial \phi_{0}(x,t)}{\partial
x}\right]_{x=0}\, , \label{cond-delta-phi} \nonumber \\\end{aligned}$$ where Eq. (\[bc-PhiZero\]) was used. Hereafter it will be convenient to work in the Fourier domain, such that
$$\begin{aligned}
\Phi(x,\omega) &=& \int dt \,\, \phi(x,t)\; e^{i\omega t}\; ;
\;\;\;\;\;
\Phi_0(x,\omega) = \int dt \,\, \phi_0(x,t)\; e^{i\omega t}\;
;\cr\cr
\delta\Phi(x,\omega) &=& \int dt \,\, \delta\phi(x,t)\; e^{i\omega
t}\; ;
\;\;\;\;\;\;\;
\delta\Gamma(\omega) \;= \int dt \,\, \delta\gamma(t)\; e^{i\omega
t}\; . \label{fourier-trans}\end{aligned}$$
It is worth emphasizing at this moment that, by assumption, $\delta\gamma$ is a prescribed function of $t$, so that $\delta\Gamma(\omega)$ is known, in principle. Since $\phi_0(x,t)$ is the solution with time-independent Robin BC, this field is already known, and so is its Fourier transform, which is given (for the region $x > 0$) by $$\begin{aligned}
\Phi_{0}(x,\omega) &=&
\sqrt{\frac{4\pi}{|\omega|(1+\gamma_0^2\omega^2)}}\;\Bigl[\sin(\omega
x)
+ \gamma_0\omega\cos(\omega x)\Bigr]\nonumber \\ &\times &\Bigl[\Theta(\omega)a(\omega)
- \Theta(-\omega)a^{\dagger}(-\omega)\Bigr], \label{field-exp}\end{aligned}$$ where $\Theta(\omega)$ is the Heaviside step function. The operators $a(\omega)$ and $a^{\dagger}(\omega)$ satisfy the usual bosonic commutation relation $[a(\omega),a^{\dagger}(\omega^{\prime})]=2\pi\delta(\omega-\omega^{\prime})$.
In order to obtain $\Phi(x,\omega) = \Phi_0(x, \omega) +
\delta\Phi(x,\omega)$ we need to compute $\delta\Phi(x,\omega)$, which satisfies the Helmholtz equation, $$\label{Helmholtz}
\Bigl(\partial^2_x \,
+ \, \omega^2\Bigr)\,\delta\Phi(x,\omega) = 0\; ,$$ and is submitted to the BC below, obtained by Fourier transforming Eq. (\[cond-delta-phi\]), $$\begin{aligned}
\gamma_{0}\left[ \frac{\partial \delta\Phi(x,\omega)}{\partial x}
\right]_{x=0} -\; \delta\Phi(0,\omega) = \nonumber \\ - \int\frac{d\omega^{\prime}}{2\pi}\left[
\frac{\partial\Phi_{0}(x,\omega^{\prime})}{\partial x}
\right]_{x=0}\delta\Gamma(\omega-\omega^{\prime}).
\label{CondCont-deltaPhi}\end{aligned}$$ A further condition that must be imposed on the solution of Eq. (\[Helmholtz\]) for $x>0$ is that it must lead to a solution for $\phi(x,t)$ traveling to the right, since $\delta\phi(x,t)$ must describe a contribution coming from the mirror, not going towards it. The desired solution can be written in terms of Green functions. Following the procedure given in [@Mintz-JPA-2006-2], it can be shown that the in and out fields, denoted respectively as $\Phi_{\mbox{\footnotesize{in}}}$ and $\Phi_{\mbox{\footnotesize{out}}}$, are related to each other according to $$\begin{aligned}
\Phi_{\mbox{\footnotesize{out}}}(x,\omega) =
\Phi_{\mbox{\footnotesize{in}}}(x,\omega) +
\frac{1}{\gamma_{0}}\Bigl[ G_{\mbox{\footnotesize{R}}}^{\mbox{\footnotesize{ret}}}(0,x,\omega) \nonumber \\-
G_{\mbox{\footnotesize{R}}}^{\mbox{\footnotesize{adv}}}(0,x,\omega)\Bigr]
\times \left\{ \gamma_{0}\left[\frac{\partial \delta \Phi(x,\omega)}{\partial x}\right]_{x=0} -\;
\delta\Phi(0,\omega) \right\}\; , \nonumber \\
\label{mintz-2006}\end{aligned}$$ where $G_{\mbox{\footnotesize{R}}}^{\mbox{\footnotesize{ret}}}(0,x,\omega)$ ($G_{\mbox{\footnotesize{R}}}^{\mbox{\footnotesize{adv}}}(0,x,\omega)$) is the retarded (advanced) Robin Green function, satisfying the time-independent Robin BC at $x=0$. These Green functions are given, respectively, by $$G_{\mbox{\footnotesize{R}}}^{\mbox{\footnotesize{ret}}}(0,x,\omega)
= \left(\frac{\gamma_0}{1-i\gamma_0\omega}\right)e^{i\omega x},
\label{green-func-1}$$ and $$G_{\mbox{\footnotesize{R}}}^{\mbox{\footnotesize{adv}}}(0,x,\omega)
= \left(\frac{\gamma_0}{1+i\gamma_0\omega}\right)e^{-i\omega x}.
\label{green-func-2}$$ Inserting Eqs. (\[field-exp\]) (appropriately relabeled as $\Phi_{\mbox{\footnotesize{out}}}$ and $\Phi_{\mbox{\footnotesize{in}}}$), (\[CondCont-deltaPhi\]), (\[green-func-1\]) and (\[green-func-2\]) into Eq. (\[mintz-2006\]), we can readily obtain the Bogoliubov transformation between $a_{\mbox{\footnotesize{out}}}$ and $a_{\mbox{\footnotesize{in}}}$ and its hermitean conjugates: $$\begin{aligned}
a_{\mbox{\footnotesize{out}}}(\omega)=
a_{\mbox{\footnotesize{in}}}(\omega)-2i\sqrt{\frac{\omega}{1+\gamma_0^2\omega^2}}
\int_{-\infty}^{+\infty}\,\frac{d\omega^{\prime}}{2\pi}
\sqrt{\frac{\omega^{\prime}}{1+\gamma_0^2{\omega^{\prime}}^2}} \nonumber \\
\times
\Bigl[\Theta(\omega^{\prime})a_{\mbox{\footnotesize{in}}}(\omega^{\prime})
-\Theta(-\omega^{\prime})a^{\dagger}_{\mbox{\footnotesize{in}}}(-\omega^{\prime})
\Bigr] \,\delta\Gamma(\omega-\omega^{\prime}). \nonumber \\
\label{transform}\end{aligned}$$ Noting that the annihilation operator $a_{\mbox{\footnotesize{out}}}(\omega)$ is given in terms of the annihilation and creation operators $a_{\mbox{\footnotesize{in}}}(\omega)$ and $a^\dagger_{\mbox{\footnotesize{in}}}(\omega)$, respectively, we conclude that the state $|0_{\mbox{\footnotesize{in}}} \rangle$ is not annihilated by the $a_{\mbox{\footnotesize{out}}}(\omega)$ operators. Consequently, we can state that particles were created from an initial vacuum state due only to the time-dependence of $\delta\gamma(t)$ in the BC (\[bc-1\]) imposed on the field by the static mirror. In fact, for $\delta\gamma(t)=0$ for all times, which corresponds to a static mirror imposing the standard time-independent Robin BC on the field, we have $a_{\mbox{\footnotesize{out}}}(\omega)=a_{\mbox{\footnotesize{in}}}(\omega)$ and no particles will be created, as expected. The particle creation effect will be further investigated in the next sections, where we will choose a specific time-dependent expression for $\gamma(t)$ in order to compute explicitly the corresponding spectral distribution of the created particles as well as the respective particle creation rate.
Spectral distribution of the created particles {#Spectrum}
==============================================
We start by writing the spectral distribution of the created particles as $$\frac{dN(\omega)}{d\omega} d\omega= \frac{1}{2\pi}\,\langle
0_{\mbox{\footnotesize{in}}}\vert\,
a^{\dagger}_{\mbox{\footnotesize{out}}}(\omega)
a_{\mbox{\footnotesize{out}}}(\omega) \, \vert 0_{\mbox{\footnotesize{in}}}
\rangle d\omega, \label{spec}$$ where $dN(\omega)/d\omega$ is the number of created particles with frequency between $\omega$ and $\omega + d\omega$ ($\omega \geq 0$) per unit frequency. From the previous definition for $dN(\omega)/d\omega$, it follows immediately that the total number of created particles from $t =-\infty$ to $t=+\infty$ is given by $$\label{Numero-Energia}
N=\int_0^{\infty} \frac{dN(\omega)}{d\omega}\, d\omega\,.$$ From Eq. (\[transform\]) and its hermitian conjugate $a^{\dagger}_{\mbox{\footnotesize{out}}}$, it is straightforward to show that $$\begin{aligned}
\frac{dN(\omega)}{d\omega} = \frac{2}{\pi}\left( \frac{
\omega}{1+\gamma_0^2\omega^2} \right)
\int_{-\infty}^{\infty}\frac{d\omega^{\prime}}{2\pi}\frac{\omega^{\prime}}{1+\gamma_0^2{\omega^{\prime}}^2} \nonumber \\ \times {\left\vert
\delta\Gamma(\omega-\omega^{\prime}) \right\vert}^2
\Theta(\omega^{\prime}). \label{dndo}\end{aligned}$$ In what follows we will obtain the spectral distribution for a particular case of $\delta\Gamma (\omega)$. With this purpose in mind, let us consider the following expression for $\delta\gamma(t)$, $$\delta\gamma(t)=\epsilon_{0} \cos(\omega_{0}t)\,
e^{-|\, t\,|/T},
\label{DeltaGamma(t)1}$$ with $\omega_{0} T \gg 1$. This choice of $\delta\gamma(t)$ may simulate, for instance, the changing magnetic flux through a SQUID fixed at the end of a one-dimensional transmission line, as in Ref. [@Johansson], where a Robin-like BC arises naturally from quantum network theory applied to the system under consideration.
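The Fourier transform of this exponentially damped cosine is a pair of Lorentzians of width $1/T$ centered at $\omega=\pm\omega_0$. The short Python sketch below (illustrative only, in arbitrary units) checks numerically that, for $\omega_0 T \gg 1$, the area under ${\left\vert \delta\Gamma(\omega) \right\vert}^{2}$ reproduces the delta-function normalization used in the next step:

```python
import numpy as np

eps0, w0, T = 1.0, 10.0, 100.0        # arbitrary units; w0*T = 1000 >> 1

def dGamma(w):
    """FT of eps0*cos(w0*t)*exp(-|t|/T): two Lorentzians centered at +/- w0."""
    return eps0 * T * (1.0 / (1.0 + (w - w0)**2 * T**2)
                       + 1.0 / (1.0 + (w + w0)**2 * T**2))

# Integrate |dGamma|^2 on a fine grid around the peak at +w0 and double it.
dw = 2.5e-5
w = np.arange(w0 - 5.0, w0 + 5.0, dw)
area = 2.0 * np.sum(dGamma(w)**2) * dw

# Delta-function approximation: (pi/2)*eps0^2*T per peak, pi*eps0^2*T in total.
print(area, np.pi * eps0**2 * T)      # ~314.2 vs 314.159...
```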
The expression of $\delta\Gamma(\omega)$, obtained by Fourier transforming Eq. (\[DeltaGamma(t)1\]), contains, in the limit $\omega_{0} T \gg 1$, two sharp peaks around $\omega= \pm \,
\omega_{0}$, which can be approximated by Dirac delta functions, leading to the result $${\left\vert \delta\Gamma(\omega) \right\vert}^{2} \approx
\frac{\pi}{2}\epsilon^{2}_{0}T\bigl[ \delta(\omega-\omega_{0}) +
\delta(\omega+\omega_{0})\bigr]\, .$$ Substituting the above result into Eq. (\[dndo\]), we finally obtain the desired spectral distribution, $$\begin{aligned}
\frac{dN(\omega)}{d\omega}
&=& \left(\frac{\epsilon^{2}_{0}T}{2\pi}\right)
\frac{\omega\,(\omega_{0}-\omega)}{(1+\gamma_{0}^2\omega^{2})\left[
1+\gamma_{0}^2(\omega_{0}-\omega)^2
\right]} \nonumber \\ &\times &\Theta(\omega_{0}-\omega),\label{spec-1}\end{aligned}$$ for this particular situation.
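As an illustrative numerical sketch (not part of the original derivation), Eq. (\[spec-1\]) can be evaluated directly; this makes explicit the $\omega \rightarrow \omega_0-\omega$ symmetry discussed below:

```python
import numpy as np

def dN_dw(w, w0=1.0, gamma0=1.0, eps0=1.0, T=1.0):
    """Spectral distribution of Eq. (spec-1), in units where hbar = c = 1."""
    dn = (eps0**2 * T / (2.0 * np.pi)) * w * (w0 - w) \
         / ((1.0 + (gamma0 * w)**2) * (1.0 + (gamma0 * (w0 - w))**2))
    return np.where((w >= 0.0) & (w < w0), dn, 0.0)

w = np.linspace(0.0, 1.0, 501)
spec = dN_dw(w)
print(np.allclose(spec, dN_dw(1.0 - w)))   # True: symmetric under w -> w0 - w
print(w[np.argmax(spec)])                  # 0.5: the spectrum peaks at w0/2 for gamma0*w0 = 1
```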
![ (Color online) The spectral distribution of the created particles $[(2\pi)/(\epsilon_{0}^{2} \, T)]\,dN/d\omega$ as a function of $\omega/\omega_{0}$ for several values of $\gamma_{0}$. Notice the reflexion symmetry around $\omega/\omega_{0}=0.5$: a signature of the fact that particles are created in pairs. The full line corresponds to $\gamma_{0}=1$; the dashed line to $\gamma_{0}=5$ and $20 \times [(2\pi)/(\epsilon_{0}^{2} \, T)]\,dN/d\omega$; and the dotted line to $\gamma_{0}=10$ and $100 \times [(2\pi)/(\epsilon_{0}^{2} \, T)]\,dN/d\omega$. []{data-label="spc"}](spectrum-color.eps)
A few comments are in order. First, observe (see Fig. \[spc\] and Eq. (\[spec-1\])) that ${dN(\omega)}/{d\omega}$ vanishes for $\omega>\omega_{0}$, which means that no particles are created with frequencies larger than $\omega_{0}$, the characteristic frequency of the time-dependent BC. We also notice that the spectrum is left invariant under the replacement $\omega \rightarrow \omega_0 -\omega$. This is a signature of the fact that particles are created in pairs: for each particle created with frequency $\omega$ there is a twin particle created with frequency $\omega_{0}-\omega$. Second, note that for $\epsilon_{0} \rightarrow 0$, where a Robin BC with a time-independent parameter $\gamma_0$ is re-obtained, the spectrum of created particles vanishes, as expected (recall that the mirror which imposes the BC on the field is at rest). Further, for a fixed (finite) value of $\omega_0$, the limit $\gamma_{0} \rightarrow \infty$ (Neumann BC imposed on the field at a static mirror) also leads to a vanishing spectrum of created particles. Finally, since we assumed $\epsilon_0\ll\gamma_0$, the limit $\gamma_{0} \rightarrow 0$ (Dirichlet BC imposed on the field by a static mirror) necessarily leads to a vanishing spectrum as well.
Particle creation rate {#creation_rate}
======================
The total number of created particles is obtained by substituting Eq. (\[spec-1\]) in (\[Numero-Energia\]), namely, $$\begin{aligned}
\label{numerototal}
N &=& \left(\frac{\epsilon^{2}_{0}T}{2\pi}\right)
\int_{0}^{\infty}
\!\!\frac{\;\omega\,(\omega_{0}-\omega) \, \Theta(\omega_{0}-\omega)}{(1+\gamma_{0}^2\omega^{2})\left[
1 + \gamma_{0}^2(\omega_{0} - \omega)^2 \right]} \, \; d\omega \cr\cr
&=&
\left(\frac{\epsilon^2_{0}\omega^{3}_{0}T}{2\pi}\right) F(\xi)\; ,\end{aligned}$$ where $\xi=\gamma_{0}\omega_{0}$ and the function $F(\xi)$ is given by $$F(\xi)=\frac{\left(2+\xi^2\right)\ln\left(1+\xi^2\right)-2\xi\arctan(\xi)}{\xi^{4}\left(4+\xi^2\right)}.$$ As $N$ is proportional to $T$, as expected for an open cavity, the physically meaningful quantity is the particle creation rate defined as $R=N/T$, that is $$R=\left(\frac{\epsilon^2_{0}\omega^{3}_{0}}{2\pi}\right) F(\xi)\;.
\label{rate}$$ In the limits $\gamma_{0}\omega_0 \ll 1$ and $\gamma_{0}\omega_0 \gg 1$, the particle creation rate is approximately given by $$R \approx \left(\frac{\epsilon^2_{0}\omega^{3}_{0}}{12\pi}\right) \;\;\;\; \mbox{for $\;\;\gamma_{0}\omega_0 \ll 1$}
\label{limites-1}$$ $$R \approx \left(\frac{\epsilon^2_{0}\omega^{3}_{0}}{2\pi}\right)\frac{2\ln(\xi)}{\xi^4} \;\;\;\; \mbox{for $\;\;\gamma_{0}\omega_0 \gg 1$}.
\label{limites-2}$$ For the sake of comparison with Eq. (\[rate\]), we recall the total particle creation rates for moving mirrors with Dirichlet [@Jaekel-PRL-1996] (or equivalently for Neumann BC as proved in [@equivalencia-DN]
) $$R_{\mbox{\footnotesize{D/N}}}=\frac{\delta q^{2}_{0}\omega^{3}_0}{12\pi},
\label{lambrecht}$$ and for time-independent Robin BC [@Mintz-JPA-2006-2] $$R_{\mbox{\footnotesize{ti-R}}}=\left( \frac{\delta q^{2}_0 \omega_0^3}{2\pi} \right) G( \gamma_0\omega_0),
\label{mintz}$$ where $$\begin{aligned}
G(\xi)=\frac{\xi\left[ 4\xi +\xi^3 + 12 \arctan(\xi)\right]-6\left( 2+\xi^2 \right)\ln\left( 1 + \xi^2 \right)}{6\xi^2 \left( 4 + \xi^2 \right)}. \nonumber \\\end{aligned}$$ The formulas above were obtained assuming a non-relativistic, small-amplitude oscillatory law of motion for the mirror. In both cases $\delta q_0$ is the amplitude and $\omega_0$ is the frequency of oscillation. We remark that for $\gamma_{0}\omega_0 \ll 1$ the particle creation rate in our model is exactly the same as that of a moving mirror [@Jaekel-PRL-1996] with Dirichlet BC, where $\epsilon_{0}$ plays the role of the amplitude of oscillation of the motion. This reinforces the possibility of simulating moving boundaries through a static mirror with time-dependent Robin BC. The three particle creation rates are compared in Fig. \[creationrate1\].
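The sketch below (not part of the original paper) evaluates $F(\xi)$ and $G(\xi)$ numerically and checks the limiting behaviors quoted in Eqs. (\[limites-1\]) and (\[limites-2\]):

```python
import numpy as np

def F(xi):
    """F(xi) of Eq. (rate): static mirror with time-dependent gamma(t)."""
    return ((2 + xi**2) * np.log(1 + xi**2) - 2 * xi * np.arctan(xi)) \
           / (xi**4 * (4 + xi**2))

def G(xi):
    """G(xi) of Eq. (mintz): moving mirror with time-independent Robin BC."""
    return (xi * (4 * xi + xi**3 + 12 * np.arctan(xi))
            - 6 * (2 + xi**2) * np.log(1 + xi**2)) / (6 * xi**2 * (4 + xi**2))

print(F(1e-2), 1.0 / 6)                # gamma0*w0 << 1: F -> 1/6, so R -> eps0^2 w0^3/(12 pi)
print(G(1e-2), 1.0 / 6)                # same limit for the moving-mirror rate (Dirichlet value)
xi = 1e3
print(F(xi), 2 * np.log(xi) / xi**4)   # gamma0*w0 >> 1: F ~ 2 ln(xi)/xi^4
```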
![(Color online) Comparison between the total particle creation rates given by Eqs. (\[rate\]), (\[lambrecht\]) and (\[mintz\]). The full line corresponds to scaled creation rate $10\times \left[ \left( 2\pi\gamma_0^3 \right) / \left( \epsilon_0^2 \right) \right]\,R$. The dashed line corresponds to $10\times \left[ \left( 2\pi\gamma_0^3 \right) / \left( \delta q_0^2 \right) \right] \, R_{\mbox{\footnotesize{ti-R}}}$. Finally, the dotted line corresponds to $\left[\left( 2 \pi \right) / \left( \delta q_0^2 \right)\right]\,R_{\mbox{\footnotesize{D/N}}}$. In both curves involving Robin BC we considered $\gamma_0 =1$.[]{data-label="creationrate1"}](creationrate1-color.eps)
It is worth noting that the particle creation rate shown in Fig. \[creationrate2\] starts growing with $\omega_0$ until it reaches a maximum at a certain value of $\omega_0$, and then approaches zero monotonically as $\omega_0$ goes to infinity. This behaviour should be compared with that obtained for a moving mirror which imposes on the field a Robin BC with a time-independent parameter, where the particle creation rate, after passing through one maximum and one minimum, grows indefinitely as $\omega_0$ goes to infinity (see Ref. [@Mintz-JPA-2006-2]). Naively, we could expect similar behaviors for these two problems; after all, a time-dependent Robin parameter should simulate, in principle, a moving mirror, so that a high-frequency oscillating $\gamma(t)$ should mean a high-frequency oscillating mirror. However, the interpretation of the Robin parameter $\gamma$ as an estimate of the penetration depth of the material boundary is rigorously proved only for static mirrors. Even in this case, this identification is valid only for the field modes whose frequencies are much smaller than the plasma frequency (but this condition is easily achieved since the plasma frequency is much higher than the mechanical frequencies we want to simulate). It is plausible that such an interpretation remains valid for *slowly* time-varying $\gamma (t)$, but not for high frequency oscillating $\gamma(t)$. In fact, our results show that this interpretation for $\gamma(t)$ fails for high values of $\omega_0$.
![(Color online) Behavior of Eq. (\[rate\]) for a larger range of $\omega_0$, assuming $\gamma_0=1$. The creation rate approaches zero smoothly as $\omega_0 \rightarrow \infty$. This does not happen for the other cases shown in Fig. \[creationrate1\]: for *moving* mirrors the particle creation rate increases without bound for larger values of $\omega_0$.[]{data-label="creationrate2"}](creationrate2-color.eps)
Conclusions and final remarks {#Conc}
=============================
Exploring the peculiar properties of Robin BC, in particular the interpretation of the Robin parameter, we presented a simple and yet instructive theoretical model where a single static mirror with time-dependent properties, described by a time-dependent Robin parameter, simulates a moving boundary. We used this model to study analytically the dynamical Casimir effect of a system that may be of some value for the further understanding of an ongoing experiment based on a one-dimensional transmission line terminated by a SQUID. In this setup a time-dependent magnetic flux through the SQUID gives rise to the particle creation phenomenon. Employing a perturbative approach, we showed that particles can be created due to the time-dependence of the Robin parameter $\gamma$. We obtained explicitly the spectrum of the created particles as well as the total particle creation rate for a particular choice of $\gamma(t)$ which is of practical interest for the experiment just described. Our model can also be used to investigate other experimental setups suggested for measuring the dynamical Casimir effect, such as, for example, the promising experimental proposal of the Padua group [@Experimento-MIR]. All we have to do is choose the time-dependence of $\gamma(t)$ appropriately, so as to simulate correctly the physical situation under consideration.
We emphasize that the particle creation phenomenon due to a time-dependent Robin BC imposed on the field at a static mirror has similarities and differences with the case where a time-independent Robin BC is imposed on the field at a moving mirror, as discussed by Mintz *et al* [@Mintz-JPA-2006-2]. The main difference is the behavior of the total particle creation rate for high values of $\omega_0$: for the moving mirror, where $\omega_0$ is the mechanical frequency of the motion, this rate grows indefinitely as $\omega_0\rightarrow\infty$, while for the static mirror with time-dependent properties, where $\omega_0$ measures how quickly the Robin parameter $\gamma(t)$ varies, this rate goes to zero as $\omega_0\rightarrow\infty$. In the appropriate limits of the usual time-independent Dirichlet ($\gamma_{0} \rightarrow 0$), Neumann ($\gamma_{0} \rightarrow \infty$) and Robin ($\delta\gamma(t)=0$) BCs no particles are created, as expected. The generalizations of the present work to 3+1 dimensions and to cavities are also expected to exhibit induced photon creation. These issues are under investigation and will be discussed elsewhere.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors would like to thank the Brazilian agencies CNPq and Capes for partial financial support. H. O. S. would also like to thank the Theoretical Physics Department of the Federal University of Rio de Janeiro, where part of this work was done, for its hospitality. The authors are also grateful to A. L. C. Rego, D. T. Alves and T. Hartz for valuable discussions.
[10]{} E. Yablonovitch, Phys. Rev. Lett. **62**, 1742 (1989). J. Schwinger, Proc. Nat. Acad. Sci. USA **89**, 4091 (1992). G. T. Moore, J. Math. Phys. **11**, 2679 (1970). B. S. DeWitt, Phys. Rep. **19**, 295 (1975). S. A. Fulling and P. C. W. Davies, Proc. R. Soc. London A **348**, 393 (1976); P. C. W. Davies and S. A. Fulling, Proc. R. Soc. London A **356**, 237 (1977); P. C. W. Davies and S. A. Fulling, Proc. R. Soc. London A **354**, 59 (1977); P. Candelas and D. J. Raine, J. Math. Phys. **17**, 2101 (1976); P. Candelas and D. Deutsch, Proc. R. Soc. London A **354**, 79 (1977). V. V. Dodonov, Adv. Chem. Phys. **119**, 309 (2001) \[[ arXiv:quant-ph/0106081v1]( arXiv:quant-ph/0106081v1)\]. V. V. Dodonov, J. Phys.: Conf. Ser. **161**, 012027 (2009); D. A. R. Dalvit, P. A. Maia Neto and F.D. Mazzitelli, <arXiv:1006.4790v2> (2010); V. V. Dodonov, Phys. Scr. **82**, 038105 (2010). L. H. Ford and A. Vilenkin, Phys. Rev. D **25**, 2569 (1982); P. A. Maia Neto, J. Phys. A **27**, 2167 (1994); P. A. Maia Neto and L. A. S. Machado, Phys. Rev. A **54**, 3420 (1996); P. A. Maia Neto and L. A. S. Machado, Braz. J. Phys. **25**, 324 (1996). A. Lambrecht, M.-T. Jaekel and S. Reynaud, Phys. Rev. Lett. **77**, 615 (1996). B. Mintz, C. Farina, P. A. Maia Neto and R. B. Rodrigues, J. Phys. A: Math. Gen. **39**, 6559 (2006). B. Mintz, C. Farina, P. A. Maia Neto and R. B. Rodrigues, J. Phys. A: Math. Gen. **39**, 11325 (2006). C. K. Law, Phys. Rev. Lett. **73**, 1931 (1994); Y. Wu, K. W. Chan, M. C. Chu and P. T. Leung, Phys. Rev. A **59**, 1662 (1999); P. Wegrzyn, J. Phys. B **40**, 2621 (2007); V. V. Dodonov, A. B. Klimov and D. E. Nikonov, J. Math. Phys. **34**, 2742 (1993); D. A. R. Dalvit and F. D. Mazzitelli, Phys. Rev. A **57**, 2113 (1998); C. K. Cole and W. C. Schieve, Phys. Rev. A **52**, 4405 (1995); C. K. Cole and W. C. Schieve, Phys. Rev. A **64**, 023813 (2001); M. Razavy and J. Terning, Phys. Rev. D **31**, 307 (1985); G. Calucci, J. Phys. A **25**, 3873 (1992); C. K. Law, Phys. Rev. A **49**, 433 (1994); V. V. Dodonov and A. B. Klimov, Phys. Rev. A **53**, 2664 (1996); D. F. Mundarain and P.A. Maia Neto, Phys. Rev. A **57**, 1379 (1998); D. T. Alves, C. Farina and E. R. Granhen, Phys. Rev. A **73**, 063818 (2006); J. Sarabadani and M. F. Miri, Phys. Rev. A **75**, 055802 (2007). M.-T. Jaekel and S. Reynaud, J. Phys. I (France) **3**, 339 (1993); M.-T. Jaekel and S. Reynaud, Phys. Lett. A **172**, 319 (1993); L. A. S. Machado, P. A. Maia Neto and C. Farina, Phys. Rev. D **66**, 105016 (2002). D. T. Alves, C. Farina and P. A. Maia Neto, J. Phys. A **36**, 11333 (2003); D. T. Alves, E. R. Granhen and M. G. Lima, Phys. Rev. D **77**, 125001 (2008). V. V. Dodonov, J. Phys. A: Math. Gen. **31**, 9835 (1998); G. Plunien, R. Schützhold and G. Soff, Phys. Rev. Lett. **84**, 1882 (2000); J. Hui, S. Qing-Yun and W. Jian-Sheng, Phys. Lett. A **268**, 174 (2000); R. Schützhold, G. Plunien and G. Soff, Phys. Rev. A **65**, 043820 (2002); G. Schaller, R. Schützhold, G. Plunien and G. Soff, Phys. Rev. A **66**, 023812 (2002); D. T. Alves, E. R. Granhen, H. O. Silva and M. G. Lima, Phys. Rev. D **81**, 025016 (2010). C. M. Wilson *et al*, <arXiv:1105.4714v1> (2011). E. Yablonovitch, J. P. Heritage, D. E. Aspnes and Y. Yafet, Phys. Rev. Lett. **63**, 976 (1989); Y. E. Lozovik, V. G. Tsvetus and E. A. Vinogradov, JETP Lett. **61**, 723 (1995); Y. E. Lozovik, V. G. Tsvetus and E. A. Vinogradov, Phys. Scr. **52**, 184 (1995); T. Okushima and A. Shimizu, Japan J. Appl. Phys. **34**, 4508 (1995); M. Uhlmann, G. 
Plunien, R. Schützhold and G. Soff, Phys. Rev. Lett. **93**, 193601 (2004). W. Naylor, S. Matsuki, T. Nishimura and Y. Kido, Phys. Rev. A **80**, 043835 (2009). M. Crocce, D. A. R. Dalvit, F. C. Lombardo and F. D. Mazzitelli, Phys. Rev. A **70**, 033811 (2004). C. Braggio *et al*, Europhys. Lett. **70**, 754 (2005); A. Agnesi *et al*, J. Phys. A: Math. Gen. **41**, 164024 (2008); A. Agnesi *et al*, J. Phys.: Conf. Ser. **161**, 012028 (2009). J. R. Johansson, G. Johansson, C. M. Wilson and F. Nori, Phys. Rev. Lett. **103**, 147003 (2009); J. R. Johansson, G. Johansson, C. M. Wilson and F. Nori, Phys. Rev. A **82**, 052509 (2010). P. D. Nation, J. R. Johansson, M. P. Blencowe and F. Nori, <arXiv:1103.0835v1> (2011). C. M. Wilson *et al*, Phys. Rev. Lett. **105**, 233907 (2010). K. Gustafson, *Introduction to Partial Differential Equations and Hilbert Space Methods* (Dover Publications, New York, 1999); G. Chen, J. Zhou, *Vibration and Damping in Distributed Systems vol. 1* (CRC Press, Florida, 1999). T. E. Clark, R. Menikoff and D. H. Sharp, Phys. Rev. D **22**, 3012 (1980); M. Carreau, E. Farhi and S. Gutmann, Phys. Rev. D **42**, 1194 (1990); V. S. Araujo, F. A. B. Countinho and J. Fernando Perez, Am. J. Phys. **72**, 203 (2004); V. S. Araujo, F. A. B. Coutinho and F. M. Toyama, Braz. J. Phys. **38**, 178 (2008); B. Belchev and M. A. Walton, J. Phys. A: Math. Gen. **43**, 085301 (2010). H. B. G. Casimir, Proc. K. Nede. Akad. Wet. **B51**, 793 (1948). A. Romeo and A. A. Saharian, J. Phys. A: Math. Gen. **35**, 1297 (2002). T. H. Boyer, Phys. Rev. A **9**, 2078 (1974). Z. H. Liu and S. A. Fulling, New J. Phys. **8**, 234 (2006); E. Elizalde, S. D. Odintsov and A. A. Saharian, Phys. Rev. D **79** 065023 (2009); L. P. Teo, JHEP **11**, 095 (2009); M. Asorey, D. García Álvares and J. M. Muñoz-Castañeda, J. Phys. A: Math. Gen. **39**, 6127 (2006). M. Asorey and J. M. Muñoz-Castañeda, J. Phys. A: Math. Theor. **41**, 164043 (2008); M. Asorey and J. M. Muñoz-Castañeda, J. Phys. A: Math. Theor. **41**, 304004 (2008). V. M. Mostepanenko and N. N. Trunov, Sov. J. Nucl. Phys. **45**, 818 (1985). See the first reference of [@Single-mirror].
[^1]: The set of references concerning the physical applications of Robin BC provided in the present paper is obviously not intended to be complete. Our objective is just to give the reader a taste of the richness of physical situations involving this BC.
---
abstract: 'Dust growth via accretion of gas species has been proposed as the dominant process to increase the amount of dust in galaxies. We show here that this hypothesis encounters severe difficulties that make it unfit to explain the observed UV and IR properties of such systems, particularly at high redshifts. Dust growth in the diffuse ISM phases is hampered by (a) too slow accretion rates; (b) too high dust temperatures, and (c) the Coulomb barrier that effectively blocks accretion. In molecular clouds these problems are largely alleviated. Grains are cold (but not colder than the CMB temperature, $\Tcmb \approx 20$ K at redshift $z=6$). However, in dense environments accreted materials form icy water mantles, perhaps with impurities. Mantles are immediately ($\simlt 1$ yr) photo-desorbed as grains return to the diffuse ISM at the end of the cloud lifetime, thus erasing any memory of the growth. We conclude that dust attenuating stellar light at high-$z$ must be ready-made stardust largely produced in supernova ejecta.'
author:
- |
A. Ferrara$^{1}$, S. Viti$^{2}$, C. Ceccarelli$^{3,4}$\
$^{1}$ Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa, Italy\
$^{2}$ Department of Physics and Astronomy, University College London, Gower Street, London, WC1E 6BT, United Kingdom\
$^{3}$ Univ. Grenoble Alpes, IPAG, F-38000 Grenoble, France\
$^{4}$ CNRS, IPAG, F-38000 Grenoble, France\
bibliography:
- 'ref.bib'
title: The problematic growth of dust in high redshift galaxies
---
\[firstpage\]
galaxies: high-redshift – (ISM:) dust, extinction
Introduction {#Mot}
============
Dust grains are a fundamental constituent of the interstellar medium (ISM) of galaxies. A large fraction ($\approx $ 50% in the Milky Way) of the heavy elements produced by nucleosynthetic processes in stellar interiors can be locked into these solid particles. They are vital elements of the ISM multiphase gas life-cycle, and key species for star formation, as they absorb interstellar UV photons and heat and cool the gas. They also catalyze the formation of H$_2$ on their surfaces, the first step toward the formation of all other ISM molecules, including CO.
The presence of dust at high ($z\simgt 6$) redshift implies that conventional dust sources (AGB and evolved stars) are not the dominant contributors. This is because their evolutionary timescales are close to, or exceed, the Hubble time at that epoch ($\approx 1$ Gyr). Following the original proposal by [@Todini01], it is now believed that the first cosmic dust grains were formed in the supernova ejecta ending the evolution of fast-evolving massive stars [@Hirashita02; @Nozawa07; @Bianchi07; @Gall11; @Bocchio16]. Thus, although quasar host galaxies show remarkably high dust masses [@Bertoldi03; @Beelen06; @Michalowski10], in general the dust content of early galaxies rapidly decreases with increasing redshift [@Capak15; @Bouwens16]. This does not come as a complete surprise given that the mean metallicity of the Universe[^1] increases with time.
Usually, the presence of dust in high-$z$ galaxies is assessed via a specific observable, the so-called $\beta$ slope. This is defined as the slope of the rest-frame UV galaxy emission spectrum, $F_\lambda^i \propto \lambda^{\beta}$, in the wavelength range $1600-2500$ Å. As dust extinction typically rises towards shorter wavelengths, a flatter slope indicates the presence of larger amounts of dust. Indeed this is what has been recently shown by ALMA observations [@Capak15; @Bouwens16].
It has been claimed that current observations cannot be explained purely by dust production by sources (either SNe or AGB/evolved stars). Instead, the dominant contribution to the dust mass of high-$z$ galaxies should come from grain growth [@Michalowski10; @Hirashita14a; @Mancini15] in the interstellar medium. This can happen only if gas-phase atoms and molecules can stick permanently to grain surfaces (typically consisting of silicate or amorphous carbonaceous materials) and remain bound to the grain solid structure. Simplistic approaches based on a “sticking coefficient” argument predict that the growth time of the grains could be very short ($\simlt 1$ Myr). However, such a conclusion fails to capture some critical points that we examine here.
The first problem is linked to the type of material that silicate or carbonaceous cores tend to accrete. As we will argue in this *Letter*, accretion can only occur in molecular clouds (MCs). There, silicate or carbonaceous refractory cores must predominantly accrete the most abundant elements, namely oxygen and carbon, either in atomic or molecular (CO) form. These species condense on grain surfaces where they are hydrogenated, forming icy mantles mainly made up of water, CO, CO$_2$, ammonia, methane and, possibly, other impurities [@Boogert15]. Mantles do not share the same optical properties as refractory cores. However, this is not a relevant point, given that icy mantles are very volatile and are rapidly photo-desorbed as grains re-emerge from MCs.
A second problem with the growth scenario arises from the increasing CMB temperature, $\Tcmb= T_0 (1+z)$ K with $T_0=2.725$. While dust in Milky Way molecular clouds typically attains temperatures $\approx 10-20$ K [@Stutz10], at high-$z$ dust cannot cool below $\Tcmb$. This might represent a serious problem for dust growth, as the warmer surface hampers the sticking ability of particles.
It then appears that if early galaxy properties require a larger amount of UV-absorbing dust particles, we are forced to conclude that the observed grains must be ready-made products of the most efficient high-$z$ factories, i.e. SNe. In the following we show that this is indeed the case.
Dust accretion: what and where {#Met}
==============================
After solid particles condense out of early supernova ejecta, and are possibly processed by the reverse shock, they are injected into the pervasive diffuse phases of the interstellar medium, i.e. the Cold Neutral Medium (CNM, density $n_C \approx 30\cc$, temperature $T_C\approx 100 $K) and the Warm Neutral Medium (WNM, $n_W \approx 0.4\cc$, $T_W\approx 8000$K). Usually these two phases are considered to be in thermal equilibrium [@Field69; @Wolfire03; @Vallini13]. However, the multiphase regime exists only in a narrow, metallicity-dependent range of pressures. Outside this regime, the gas settles onto a single phase. At high pressures (a more typical situation for denser high-$z$ galaxies), the CNM dominates; at low pressures the WNM takes over. We therefore concentrate in what follows on the CNM. As both destruction and production processes ultimately rely on supernova explosions [@Dayal10], the equilibrium dust abundance is essentially independent of the star formation rate (SFR).
Even if they initially reside in the CNM, grains become periodically embedded into MCs forming out of this phase. This happens on the gas depletion timescale, $\tau_g \approx M_g/\mathrm{SFR}$, where $M_g$ is the total gas mass. In galaxies, $\tau_g$ decreases with redshift, but it remains close to 10% of the Hubble time at any epoch, i.e. $\tau_g \approx \mathrm{few} \times 100$ Myr at $z=6$. This timescale should be compared with the lifetime of MCs, which is much shorter, i.e. $\approx 10$ Myr. Thus, it is reasonable to conclude that grains spend a large fraction of their lifetime in the diffuse phase. An important point is that, while in MCs, dust grains are largely shielded from UV radiation and therefore have lower temperatures.
The grain accretion process
---------------------------
Before analysing the specific cases of the CNM and MCs in high-$z$ galaxies, we recall here the basic theory of grain accretion. Consider grains whose temperature is $T_d$. The grain accretion time scale, $\tau_a$, is set by kinetics and is given by [@Spitzer78; @Umebayashi80]: $$\tau_a^{-1} \simeq S\, n_d\, \pi a^2\, v_s \label{eq1a}$$ where $n_d$ is the grain number density (proportional to the hydrogen number density, $n_H$, and metallicity, $Z$); $a$ is the average (spherical) grain radius. The most probable species velocity (the maximum of the Maxwell distribution function) is $v_s = (2 k_B T/m_s)^{1/2}$, i.e. the square root of the ratio of gas temperature and species mass; $S(T_d, m_s)$ is the accreting species sticking probability. The latter is a poorly known function of the dust temperature and composition, and of the accreting species. Recently, [@He16] reported an experimental determination of the $S$ value for different species and dust temperatures. They propose a general formula to evaluate $S$ from the species binding energy and dust temperature. In the following, we adopt their prescription (namely their Eq. 1). In practice, though, for $T_d \simlt 30$ K, and binding energies larger than about $E_b/k_B =1100$ K, the sticking probability is equal to unity.
Once an atom or molecule has stuck onto the dust grain, its fate depends on the species binding energy, $E_b$, the dust temperature, $T_d$, and the irradiation from FUV and cosmic rays. First, the atom/molecule might be thermally desorbed after a time $$\tau_d^{-1} \simeq \nu_0 \exp\left(-E_b/k_B T_d\right) \label{eq1b}$$ where $\nu_0$ is the vibrational frequency of the sticking species. In general, $\nu_0$ depends on the properties of the grain surface and adsorbed species [@Hasegawa92]. In practice, it is $\approx 10^{12} \rm s^{-1}$ for the cases relevant to the present study, namely H, O and Si atoms and water molecules [@Minissale16]. The binding energy refers to the van der Waals force, and is typically a small fraction of an eV for many species (see below). Therefore, the desorption rate is very sensitive to $T_d$ and is almost a step function [@Collings04]. In addition to thermal desorption, FUV photons and cosmic rays may provide enough energy to the accreted species to evict a fraction of them back into the gas [@Leger85; @Shen04; @Bertin13]. This desorption rate depends on $E_b$ and the specific microphysics of the process. It suffices to note here that Eq. \[eq1a\] provides, therefore, a lower limit to the actual accretion time.
Finally, a third relevant timescale is that needed by a species to hop and scan the grain surface. This is known as the “scanning time”, and it is given by: $$\tau_s^{-1} \simeq N_s^{-1}\, \nu_0 \exp\left(-E_d/k_B T_d\right). \label{eq1c}$$ $N_s$ is the number of sites on the grain; for $a=0.1 \mu$m and a mean distance between sites of 3.5 Å, $N_s\approx 10^5$. The diffusion energy, $E_d$, is a poorly known parameter. Usually, this is taken to be a fraction, $f_d$, of the binding energy, i.e. $E_d = f_dE_b$. Experiments often give rather contradictory results, but tend to justify values in the range $f_d=0.3-0.8$ (see discussion in [@Taquet12]). For highly reactive species, like H and O, laboratory experiments provide diffusion energies rather than $E_b$. For oxygen, [@Minissale16] find $E_d/k_B =750$ K and $E_b/k_B =1320$ K. For H atoms, experiments find $210\, \mathrm{K} < E_d/k_B < 638$ K, depending on the adsorbing surface and site [@Hornekaer05; @Matar08; @Hama12]. However, @Hama12 showed that the majority of sites have $E_d/k_B= 210$ K. In this study, we assume $E_b/k_B=500$ K and $E_d/k_B= 210$ K for H, following the majority of astrochemical models. Neither experimental measurements nor theoretical calculations exist for the binding energy of Si atoms, apart from the 2700 K heuristic estimate by [@Hasegawa93]. Finally, the water binding energy has been measured to be 1870 and 5775 K on bare silicates and on amorphous water, respectively [@Fraser01].
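For convenience in the estimates that follow, the three timescales of Eqs. (\[eq1a\])–(\[eq1c\]) can be written as short Python functions (a sketch only, in cgs units, with $\nu_0=10^{12}\,{\rm s^{-1}}$ and $N_s=10^5$ as defaults, following the values quoted above):

```python
import numpy as np

k_B = 1.38e-16    # erg/K
m_H = 1.67e-24    # g

def tau_acc(n_d, a, T_gas, m_s, S=1.0):
    """Accretion timescale, Eq. (eq1a): 1/(S * n_d * pi * a^2 * v_s)."""
    v_s = np.sqrt(2.0 * k_B * T_gas / m_s)   # most probable speed of the species
    return 1.0 / (S * n_d * np.pi * a**2 * v_s)

def tau_des(Eb_over_kB, T_d, nu0=1e12):
    """Thermal desorption timescale, Eq. (eq1b)."""
    return np.exp(Eb_over_kB / T_d) / nu0

def tau_scan(Ed_over_kB, T_d, N_s=1e5, nu0=1e12):
    """Surface scanning timescale, Eq. (eq1c)."""
    return N_s * np.exp(Ed_over_kB / T_d) / nu0
```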
In the following we investigate whether grain growth is possible and what type of condensables can actually be accreted onto the bare solid surfaces of the grains. We consider separately the CNM and MC environments and study the relevant processes as a function of the galaxy redshift. We will also concentrate on silicates, as [@Weingartner01] showed that for an SMC-like extinction curve, appropriate for high-$z$ galaxies, the silicate/carbonaceous mass ratio is $\approx 11:1$, i.e. C-based grains contribute negligibly to the total dust mass.
Residence in the diffuse phase
------------------------------
Let us consider the case of the CNM, whose properties have been defined above, namely a gas temperature of 100 K and hydrogen nuclei density of 30 cm$^{-3}$. Assume also that the sticking species is a Si atom (of mass 28 amu). Plugging these values into Eq. \[eq1a\] we find
$$\tau_a \simeq 1.2 \left(\frac{0.1\, Z_\odot}{Z}\right) \left(\frac{a}{0.1\, \mu{\rm m}}\right) S^{-1}\; {\rm Gyr}. \label{eq1}$$ The binding energy of Si atoms is estimated to be $E_b/k_B=2700$ K (Hasegawa & Herbst 1993; see above), so that the sticking probability is essentially unity, according to [@He16]. This time scale is very long and even exceeds the Hubble time at $z=6$. Smaller grains, $a\approx 0.01 \mu$m, owing to their larger surface-to-mass ratio, might have proportionally shorter $\tau_a$. However, the time scales remain comparable to, or longer than, the depletion time scale, $\tau_g$, discussed above.
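As a rough numerical cross-check of Eq. (\[eq1\]) (not part of the original text), Eq. (\[eq1a\]) can be evaluated directly for the CNM conditions quoted above. The grain material density and the dust-to-gas mass ratio used below (a Milky Way value of $\sim 1/150$, scaled linearly with metallicity) are assumptions of this sketch, not quantities given in the text:

```python
import numpy as np

k_B, m_H = 1.38e-16, 1.67e-24     # cgs

# CNM conditions and accreting species (Si), as quoted in the text
T_gas, n_H = 100.0, 30.0          # K, cm^-3
m_s = 28.0 * m_H                  # Si atom
a = 0.1e-4                        # grain radius: 0.1 micron, in cm
S = 1.0                           # sticking probability ~1
Z = 0.1                           # metallicity in solar units

# Assumptions of this sketch only:
rho_grain = 3.0                   # g/cm^3, silicate material density
D_MW = 1.0 / 150.0                # Milky Way dust-to-gas mass ratio, scaled with Z

m_grain = (4.0 / 3.0) * np.pi * a**3 * rho_grain
n_d = D_MW * Z * (1.4 * m_H * n_H) / m_grain      # grain number density, cm^-3
v_s = np.sqrt(2.0 * k_B * T_gas / m_s)            # most probable Si speed, cm/s

tau_a = 1.0 / (S * n_d * np.pi * a**2 * v_s)      # Eq. (eq1a)
print(tau_a / 3.15e16, "Gyr")                     # ~1.1 Gyr, consistent with Eq. (eq1)
```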
Thus, grain growth in the diffuse ISM is overwhelmingly difficult for at least three reasons. First, the accretion time scale $\tau_a$ is at best comparable to the residence time in the ISM. This means that by the time at which accretion from the gas phase *might* become important, the grain moves from the diffuse to the dense molecular phase. However, the situation is worse than this due to two additional complications.
Grain temperatures in the CNM are rather high (at least compared to those in MCs). Due to the compactness of high-$z$ galaxies (sizes $\simlt 1$ kpc) and their consequently high star formation rate per unit area ($\dot \Sigma_*\approx 1 M_\odot$ yr$^{-1}$ kpc$^{-2}$, about 100 times that of the Milky Way), the interstellar UV field is correspondingly more intense. Roughly scaling the value of the Habing flux with $\dot \Sigma_*$ gives $G_0 \approx 100$. A typical $a=0.1\, \mu$m grain then reaches temperatures ranging from 30 to 50 K. Smaller grains, which provide most of the surface area for accretion, are even hotter, $T_d=42-72$ K ([@Bouwens16], Ferrara & Hirashita, in prep.). Under these conditions, Si and O atoms will not remain attached to the grain surface. Instead, they almost instantaneously (within a fraction of a second) bounce back into the gas (see Eq. \[eq1b\]).
Further complications arise from the grain charge. Under the action of strong UV irradiation, grains attain a positive charge. For example, [@Bakes94] show that a spherical grain located in the CNM and exposed to a UV flux of intensity $G_0=1$ (in standard Habing units) attains an equilibrium charge $Z_g \approx +10$. This value must be regarded as a lower limit since, as mentioned above, $G_0$ can be up to a factor of 100 higher in early systems. This represents a problem for the accretion of key condensable species such as Si and C. Their ionization potential (11.26 eV for C and 8.15 eV for Si) is lower than 1 Ryd. Thus, even in neutral regions such as the CNM these species are ionized. As a result, the Coulomb repulsion between the charged grains and these ions represents a virtually insurmountable barrier preventing these species from reaching the grain surface.
These processes acting against the accretion of gas-phase material onto grains lead us to conclude that, under the conditions prevailing in high-$z$ galaxies, grain growth in the diffuse ISM phases can be safely excluded. We now turn to the analysis of the MC case.
Residence in molecular clouds
-----------------------------
Accretion conditions apparently become more favorable when grains are incorporated into a newly born MC. There, due to the higher mean gas density ($n=10^{3-4} \cc$), the gas accretion timescale becomes shorter. In spite of the lower temperatures ($\approx 10$ K), the large density found in MCs boosts the accretion rate with respect to the diffuse ISM by a factor $\approx 10-100$. While residing in MCs, dust grains can evolve mostly because of the growth of icy mantles coating the grain refractory cores (e.g. [@Caselli12; @Boogert15]).
Due to the attenuation or total suppression of the UV field, grains are colder in MCs. However, at high redshift heating by the CMB becomes important and sets the minimum temperature of gas and grains in MCs. This value depends on redshift, and at $z\geq 6$ it is $T_d \geq 19$ K. This is about a factor of two larger than MC temperatures in the local universe. At these warmer temperatures, processes like thermal desorption (see Eq. \[eq1b\]) may hamper, or even impede, grain growth.
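For reference, the quoted temperature floor follows directly from the redshift scaling of the CMB temperature,
$$T_d \geq T_{\rm CMB}(z) = T_{\rm CMB}(0)\,(1+z) \simeq 2.725~{\rm K}\times (1+6) \simeq 19~{\rm K},$$
which is the value adopted in the estimates below.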
In the case of MW dust, a grain embedded in MCs is coated with several ($\approx 100$) layers of water ice [@Taquet12]. This is because oxygen, the most abundant element after H and He, is rapidly[^2] hydrogenated after landing on the grain surface and forms water molecules [@Dulieu10]. Water then remains stuck on the grain surface. Besides being more abundant, O also condenses faster than other heavier elements, like Si or SiO (which are particularly relevant for this study), according to Eq. \[eq1a\].
The water formation process on grains is a two-step process[^3]. First, an impinging oxygen atom must become bound to the surface. Second, it must react with H atoms that have landed on the grain, on a time scale shorter than their desorption time scale, $\tau_d$, during which they scan the grain surface. As dust and gas temperatures are likely higher in MCs at high redshift due to the CMB, we need to reconsider the efficiency of this process in terms of the above timescales.
Consider the formation of the first H$_2$O layer. The desorption (or permanence on the grain surface) time (Eq. \[eq1b\]) for O atoms is larger than the Hubble time for $z \simlt 6.5$ (see Fig. \[Fig01\]). The permanence time of H atoms depends on $z$ (Fig. \[Fig01\]) and is $\approx 240$ ms at $z=6$. These timescales have to be compared with the time necessary for H atoms to scan the grain surface, $\tau_s$ (Eq. \[eq1c\]). Taking again $a=0.1\, \mu$m, and $N_s = 10^5$ sites to scan, it takes $\tau_s =6$ ms for H atoms to scan a grain at temperature $T_d=\Tcmb(z=6)$ and form H$_2$O molecules. Hence, the time required to form the first water layer is set by the accretion rate of oxygen atoms. Assuming a MC hydrogen density $n_H=10^4 \cc$, a temperature $\Tcmb(z=6)$, and that a layer contains $N_s \approx 10^5$ molecules, we get that the first layer is formed in $\approx 1500$ yr. Once formed, water (whose $E_b$ is larger than the O atom binding energy) will remain frozen, forming a mantle. The key point here is that, once a first layer of water ice is formed, it will prevent Si atoms or SiO molecules from coming into contact with the silicate surface. On the contrary, given the much smaller Si abundance (for solar ratios Si/O $\approx 1/200$ by number), these species will be – at best – trapped in the ice layer, forming impurities but not silicate-like bonds[^4]. As a final remark, we underline that the above rapid hydrogenation process also applies to the case of carbonaceous grains.
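A rough kinetic-theory check of the quoted layer-formation time can be written in a few lines; the gas-phase oxygen abundance adopted here ($n_{\rm O}/n_{\rm H}=5\times10^{-5}$, roughly a tenth of the solar value) and a sticking coefficient of unity are assumptions of this sketch rather than values stated above.

```python
import numpy as np

# Rough check of the time needed to build the first H2O layer (cgs units).
k_B, m_p, yr = 1.380649e-16, 1.6726e-24, 3.156e7
T   = 2.725 * (1 + 6)    # K, gas temperature at the CMB floor for z = 6
n_H = 1e4                # cm^-3, MC hydrogen density
x_O = 5e-5               # assumed gas-phase O abundance (~0.1 x solar)
a   = 1e-5               # cm, grain radius (0.1 micron)
N_s = 1e5                # molecules per layer

v_O   = np.sqrt(8 * k_B * T / (np.pi * 16 * m_p))   # mean O thermal speed
rate  = (x_O * n_H) * v_O * np.pi * a**2            # O atoms landing per second
t_lay = N_s / rate / yr
print("first H2O layer formed in ~%.0f yr" % t_lay)  # ~1.3e3 yr, cf. ~1500 yr quoted
```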
Mantle photo-desorption in the diffuse ISM {#Met}
------------------------------------------
The results of the previous subsection lead us to conclude that grain growth in molecular clouds is largely in the form of icy mantles. Once the parent molecular cloud is dispersed (typical lifetime 10 Myr) as a result of radiative and mechanical feedback from stars born in its interior, core-mantle grains are returned to the ISM. However, the memory of the growth that occurred in the MC will be promptly erased, as icy mantles are photo-desorbed by the FUV field [@Barlow78; @Fayolle13]. A simple calculation shows that this timescale is very short. Suppose that the ice mantle is made of $\ell = 100$ layers, each containing $N_s=10^5$ sites. The time necessary to completely photo-desorb the mantle, $\tau_p$, is that required to provide each site with a UV photon. Thus, for a given Habing band ($6-13.6$ eV) UV flux intensity, $G_0$ (in cgs units), and mean photon energy $\langle h\nu \rangle \approx 10$ eV, $$y_{\rm H_2O}\, \frac{G_0}{\langle h\nu \rangle}\, \pi a^2\, \tau_p = N_s\, \ell ,$$ or $\tau_p = 10-1000$ yr for $G_0=100-1$, respectively. In the previous calculation we have further assumed a H$_2$O photo-desorption yield, $y_{H_2O}=0.001$, following [@Oberg09]. To all practical purposes, the accreted mantle material (along with the impurities) is almost instantaneously “lifted” from the core. The bare grain then finds itself back in the ISM with (optical) properties essentially identical to those prevailing before the journey into the MC, and set by nucleation processes in sources.
Conclusions {#Con}
===========
We have shown that grain growth by accretion is a problematic process, hampered by a number of difficulties which become even more severe at high redshift as the temperature floor set by the CMB increases (i.e. grains are hotter). In the CNM (for the WNM the situation is even less favorable) grain growth is essentially prevented by (a) low accretion rates related to low densities; (b) higher dust temperatures (particularly in compact, high-$z$ galaxies) causing very short thermal desorption times; (c) Coulomb repulsive forces preventing positively charged ions (Si, C) from reaching the grain surface.
Molecular clouds offer more favorable conditions, due to their high density and low dust temperature. However, we have shown that once the bare grain cores are immersed in the MC environment, they are covered by a water ice mantle on a timescale of a few thousand years at $z=6$. As hydrogenation is very fast ($\approx$ ms), such a timescale is set by the accretion rate of oxygen atoms. This process fully operates in spite of CMB heating up to $z \approx 8.3$. Beyond that epoch, predictions become uncertain due to the lack of a precise knowledge of the diffusion energies of the various species. As the grains are returned to the diffuse phase at the end of the MC lifetime, the mantles are photo-desorbed and the bare core is exposed again. That is, the memory of the ice growth phase in the MC is completely erased.
If grain growth is as problematic as we point out, it is necessary to re-evaluate the arguments, often made, invoking it. These are generically based on a comparison between the dust production rate by sources (planetary nebulae, evolved stars, and SNe) and destruction rate in supernova shocks [@Draine09]. According to such, admittedly uncertain, estimates *made for the Milky Way*, dust production fails by about a factor 10 to account for the observed dust mass once shock destruction is accounted for.
In the light of the present results, reconciliation of this discrepancy must necessarily come from either (a) an upward revision of the production rate, or (b) a downward reappraisal of the dust destruction efficiency by shocks. Interestingly, there appears to be evidence for both solutions, and perhaps even for a combination of the two. Recent studies [@Matsuura11; @Indebetouw14] have determined the dust mass produced by SN1987A. The observations revealed the presence of a population of cold dust grains with $T_d=17-23$ K; the emission implies a dust mass of $\approx 0.4-0.7\, M_\odot$. Such a value is $20-35$ times larger than what is usually assumed in the above argument. Moreover, [@Gall14] found that the $0.1-0.5\, M_\odot$ of dust detected in the luminous SN2010jl are made of $\mu$m-size grains which easily resist destruction in (reverse) shocks. Alternatively, the puzzle can also be solved by decreasing the destruction rate. Indeed, [@Jones11] thoroughly reanalyzed this issue and found that the destruction efficiencies might have been severely overestimated. They additionally conclude that “the current estimates of “global” dust lifetimes could be uncertain by factors large enough to call into question their usefulness”. Given the situation, the results presented here do not seem to create any additional challenge. On the contrary, the hope is that they will stimulate deeper studies on the key problem of dust evolution in a cosmological context.
Acknowledgments {#acknowledgments .unnumbered}
===============
This research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915.
\[lastpage\]
[^1]: Throughout the paper, we assume a flat Universe with the following cosmological parameters: $\Omega_{\rm m} = 0.308$, $\Omega_{\Lambda} = 1- \Omega_{\rm m} = 0.692$, and $\Omega_{\rm b} = 0.048$, where $\Omega_{\rm m}$, $\Omega_{\Lambda}$, $\Omega_{\rm b}$ are the total matter, vacuum, and baryonic densities, in units of the critical density, and $h$ is the Hubble constant in units of 100 km s$^{-1}$ Mpc$^{-1}$ [@Ade15].
[^2]: When the dust temperature is 10 K, the permanence time of H atoms on the grain surface is $\approx 165$ yr, while it takes only $\approx 132$ s for each H atom to scan the full grain surface (Fig. \[Fig01\]).
[^3]: Here we neglect a number of subtleties and complications related to water formation, which does not involve only the simple direct addition of H and O atoms [@Dulieu10].
[^4]: Note that the seminal and important experimental results by [@Krasnokutski14] suggesting the formation of silicate-like bonds from accreting Si and SiO molecules, refer to ultra-low temperatures ($\sim 0.34$ K) where quantum chemical effects might largely enhance the efficiency of the reaction. This is caused by trapping of the very cold molecules in a van der Waals energy potential which is easily overcome at higher temperatures (see, for example, a conceptually similar experiment by [@Shannon13]).
---
abstract: 'With the constant increase in demand for data connectivity, network service providers are faced with the task of reducing their capital and operational expenses while ensuring continual improvements to network performance. Although Network Function Virtualization (NFV) has been identified as a solution, several challenges must be addressed to ensure its feasibility. In this paper, we present a machine learning-based solution to the Virtual Network Function (VNF) placement problem. This paper proposes the Depth-Optimized Delay-Aware Tree (DO-DAT) model by using the particle swarm optimization technique to optimize decision tree hyper-parameters. Using the Evolved Packet Core (EPC) as a use case, we evaluate the performance of the model and compare it to a previously proposed model and a heuristic placement strategy.'
author:
- 'Dimitrios Michael Manias, Hassan Hawilo, Manar Jammal and Abdallah Shami[^1]'
title: 'Depth-Optimized Delay-Aware Tree (DO-DAT) for Virtual Network Function Placement'
---
NFV, Machine Learning, PSO, SFC, MANO.
Introduction
============
With network connectivity demands at an all-time high and continuing to increase, Network Service Providers (NSPs) are tasked with the challenge of accommodating additional bandwidth requests on their networks while concurrently maintaining or improving their Quality of Service (QoS). To adapt their networks to accommodate this demand, NSPs must create a network with increased flexibility, portability, and scalability. The concept of Network Function Virtualization (NFV) has been proposed as a candidate solution for addressing these challenges. NFV architecture isolates network functions and executes them as software-based applications independently from the underlying hardware [@r1]. By abstracting the individual network functions from their underlying hardware and creating Virtual Network Functions (VNFs), NSPs may experience a reduction in capital and operational expenditures, and an increase in operational efficiencies [@r2].
NFV technology, however, is not without its own challenges, including performance, availability, and reliability. NSPs are obliged to adhere to specific standards when delivering a service to a customer. These standards are outlined through QoS guarantees, performance metrics, and thresholds pertaining to jitter, packet loss, delay, and availability. When evaluating the feasibility of an NFV-enabled network, adherence to QoS guarantees is essential and must be considered.
One of the key metrics outlined in QoS guarantees is performance, which can be described by different metrics such as delay or availability and can pertain to an individual VNF instance or a set of interconnected VNF instances known as a Service Function Chain (SFC).
Our previous work presents the Delay Aware Tree (DAT), which uses a decision tree to address the NP-Hard VNF Placement Problem [@r3-1]. The DAT shows promising results when compared to current heuristic solutions. However, the DAT placement strategy, on average, produces 34 ms of additional delay per computational path when compared to current heuristics due to sub-optimal fitting. When considering the upcoming adoption of 5G networks and the new ultra-low latency requirements (<1ms) in industrial internet of things use cases, this added delay hinders the adoption of the DAT. As such, in this work, the maximum depth hyperparameter (related to fitting) of the DAT is optimized in an effort to improve the delay observed across all computational paths and outperform current heuristics. To optimize the maximum depth of the DAT, we propose the optimization of a performance-based objective function, which considers both the delay and QoS guarantees when evaluating the fitness of a set of hyperparameter values.
In order to illustrate the proposed solution, the virtual Evolved Packet Core (vEPC) is selected as a use case; however, the solution presented in this paper is generalizable to any SFC. There are four VNF instances forming the SFC for vEPC being: the Home Subscriber Service (HSS), the Mobility Management Entity (MME), the Serving Gateway (SGW), and the Packet Data Network Gateway (PGW).
The remainder of this paper is structured as follows. Section II discusses the state-of-the-art. Section III outlines the methodology. Section IV presents and analyzes the results obtained. Finally, Section V concludes the paper.
Related Work
============
There has been significant work in the field of VNF placement in recent years. Some methods used to address the VNF placement problem include optimization problem formulations [@mr21], latency-aware placement schemes [@mr22], Monte-Carlo tree-based chaining algorithms [@mr23], and matching theory approaches [@mr24]. The abovementioned works, however, are not capable of learning from historical observations; to address this inadequacy, ML-based solutions are explored. Wahab *et al.* [@r4-1] propose an ML approach for efficient placement and adjustment of VNFs and minimizing operational costs while considering capacity and efficiency constraints. Khezri *et al.* [@r4] propose a deep Q-learning model considering the reliability requirements of a given service function chain. Zhang *et al.* [@r5] propose an intelligent cloud resource manager that uses deep reinforcement learning when mapping services and applications to resource pools. Sun *et al.* [@r6] propose Q-learning as a method of addressing the time-accuracy tradeoff between heuristic and optimization models. Khoshkholghi *et al.* [@key-1] propose a genetic algorithm with the objective of minimizing a resource-based cost function. Compared to these studies, our work advances the state-of-the-art as we capture carrier-grade functionality constraints (*i.e.* availability) as well as the dependency constraints while simultaneously generating placements that produce multiple computational paths (CPs) (*i.e.* multiple components serving the same SFC), which enables the minimization of end-to-end SFC delay as well as enhanced availability. Our work also considers HyperParameter Optimization (HPO) and analyzes its effect on the overall performance of the model.
HPO is used to improve the performance of ML algorithms. The tuning and optimization of tree-based machine learning models has been explored using searches, heuristics and metaheuristics [@r7], [@r8], visual methods [@r9], and Bayesian optimization [@r10]. The main metric for assessing performance in these works has been predictive accuracy.
This work extends our previous work by introducing a method of optimizing the performance of the DAT through HPO. Due to the nature of this multi-class, multi-output classification problem, predictive accuracy is not sufficient as a metric for evaluating our model. The main contributions of this paper include: a domain-based HPO model, which optimizes the maximum depth parameter of the DAT using the meta-heuristic Particle Swarm Optimization (PSO) technique, the introduction of a regularization term, which severely penalizes invalid placement predictions, and the creation of the Depth-Optimized Delay-Aware Tree (DO-DAT), which exhibits improved performance compared to placement strategies published in literature and facilitates automation in NFV management and orchestration.
Methodology
===========
The following section outlines the various stages leading to the development of the DO-DAT.
Problem Formulation
-------------------
The problem formulation for this work is conducted in a two-fold manner, the first dealing with the problem formulation of the DAT and the second dealing with the problem formulation of the PSO depth optimization.
### DAT
The methodology behind the construction of the DAT, as defined by our previous work [@r3-1], takes the previous placements made by the near-optimal heuristic BACON algorithm [@r22]. Inherently, the problem formulation for the DAT follows the MILP problem formulation for the BACON algorithm outlined in the work of Hawilo *et al.* [@r22] and constructs a dataset that is used to train the DAT. The BACON problem formulation has the objective of minimizing the delay experienced by two dependent VNFs forming an SFC. To capture the carrier-grade requirements associated with this technology, several constraints were included in the problem formulation, including capacity constraints (placement cannot exceed computational resource capacity), network-delay constraint (placement cannot violate latency requirement), availability constraint (placement cannot violate co-location and anti-location requirements), redundancy constraint (placement must improve availability through the placement of redundant components), and dependency constraint (placement must ensure that dependent VNFs forming an SFC are placed in a manner which enables the execution of the SFC).
### PSO depth optimization
The PSO depth optimization is conducted through the development of a unique optimization function related to the domain of the NFV-enabled network. By adopting this process, it is possible to move past the point of matching the performance of the BACON algorithm and instead focus on the continual development of the predictive placement model as a whole. The PSO optimization takes place once during the initial construction of the DO-DAT.
When considering the construction of a decision tree, the maximum depth of the tree has been identified as a key hyperparameter in the overall fitting of the model. In an effort to prevent over and underfitting, this work presents a joint optimization objective that considers both the average delay across all CPs of a predicted placement as well as a penalty factor related to fitting. In the previous construction of the DAT, improper model fitting manifested itself through invalid predicted placements, which were instances where the constraints imposed on the initial BACON problem formulation were not captured by the DAT and therefore resulted in predicted placements which were considered invalid. The penalty factor term operates like a regularization term in the objective function penalizing invalid predicted placements during the training phase of the model.
The formulation of the multi-objective optimization problem consolidated into a single objective function is defined below.
The hyperparameter set is defined in (1)
$$\{h\}=\left\{ maxDepth\right\}$$
The objective of the optimization is to minimize the delay across all CPs as well as the number of invalid predicted placements. Let *i* represent the trial number and *j* represent the CP; the average delay across all CPs can be defined as:
$$avg_{delay,CP}=\frac{\sum_{i=1}^{n}\left[\frac{\sum_{j=1}^{k}delay_{CP_{j}}}{k}\right]}{n}$$
Where *n* is the total number of trials and *k* is the total number of CPs.
The regularization term used is expressed through (3), where *ip* represents the number of invalid placement predictions. In order to be effective, the regularization term must have an order of magnitude comparable to that of the quantity being regularized; since the average delay per CP is on the order of thousands of microseconds, 1000 was selected as a coefficient to ensure that the regularization term is of comparable magnitude and imposes a severe penalty on the overall objective.
$$regTerm=1000*\log_{2}(ip+1)$$
It is evident that as the number of invalid predictions approaches zero, so does this regularization term suggesting that in the ideal case where there are zero invalid predictions, the effect of this regularization term is zero, as seen in (4).
$$\lim_{ip\rightarrow0}1000*\log_{2}(ip+1)=0$$
By combining (2) and (3) into a single, equally-weighted objective function, the following is obtained:
$$O_{PSO}=avg_{delay,CP}+regTerm$$
The evaluation criterion considers the objective (5) evaluated on a model with a given hyperparameter value $(model(h))$ across the training and validation sets $T_{s}$ and $V_{s}$.
$$E(O_{PSO},model(h),T_{s},V_{s})$$
Considering cross-validation, where *b* represents the number of folds, the following is the function to be optimized.
$$P(h)_{PSO}=\frac{1}{b}\sum_{i=1}^{b}E(O_{PSO},model(h),T_{s}^{(i)},V_{s}^{(i)})$$
Finally, since this is a cost function, the optimization problem objective is expressed in (6) as a minimization:
$$minimize\:P(h)_{PSO}$$
When considering the above optimization problem, the possible range of hyperparameter values is constrained to the functional range of the system. In this problem, the functional range is defined as the range where the number of invalid placement predictions falls below a specified error threshold and when it achieves steady-state across ten depth iterations. The constraint on the hyperparameter values is defined by:
$$a_{1}\leq h\leq a_{2}$$ where the functional range is defined on the interval $[a_{1},a_{2}]$.
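A minimal sketch of how the cost $P(h)$ in (2)–(7) could be evaluated in code is given below. The helper `placement_delays`, returning the per-CP delays of a predicted placement or `None` when the placement violates one of the constraints discussed above, is hypothetical and stands in for the domain logic of the BACON formulation; it is not part of the paper.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

def pso_cost(max_depth, X, Y, placement_delays, b=5):
    """Cross-validated cost P(h) for a candidate max_depth (Eqs. 2-7).

    placement_delays(x, y_pred) is a hypothetical helper returning the list of
    per-CP delays of a predicted placement, or None if the placement is invalid
    (capacity/delay/anti-location/dependency constraint violated).
    """
    scores = []
    for train_idx, val_idx in KFold(n_splits=b, shuffle=True).split(X):
        model = DecisionTreeClassifier(max_depth=int(max_depth))
        model.fit(X[train_idx], Y[train_idx])          # multi-output fit
        preds = model.predict(X[val_idx])
        cp_delays, invalid = [], 0
        for x, y_pred in zip(X[val_idx], preds):
            delays = placement_delays(x, y_pred)
            if delays is None:
                invalid += 1                           # invalid placement (ip)
            else:
                cp_delays.append(np.mean(delays))      # average delay over CPs
        avg_delay = np.mean(cp_delays) if cp_delays else 0.0
        reg_term = 1000.0 * np.log2(invalid + 1)       # Eq. (3)
        scores.append(avg_delay + reg_term)            # Eq. (5)
    return np.mean(scores)                             # Eq. (7)
```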
Data Generation
---------------
In order to generate the training and testing datasets, initial network topologies are constructed. These topologies are structured as 3-tier data centers to simulate the NFV-enabled network environments operated by NSPs, as defined in our previous work [@r3-1]. In order to generate the topology, an initial number of network servers and vEPC VNF instances are selected. For each server-instance permutation, 10,000 topologies are generated with differing network conditions and the respective VNF instances are placed using the BACON algorithm. All the network parameters (*i.e.* delays, resources, tolerances, etc.) were simulated by sampling published distributions from NSP datacenters such as Microsoft [@key-8] and Intel [@key-9]. By sampling these distributions, we can ensure our work is generalizable and applicable to real-world environments.
In this work, two different server-instance permutations are considered, and 20,000 topologies are generated. The first contains 6 VNF instances to be placed on 15 network servers. Taking into consideration the distribution of VNF instances, there are a total of 4 different CPs available for this topology. The second considers the placement of 10 VNF instances on 30 network servers and 36 CPs.
Data Analysis
-------------
Upon the creation of the various topologies through the data generation and initial placements using the BACON algorithm, the next stage in the methodology relates to the feature extraction. In order to predict the placement of VNF instances on network servers, a snapshot of the network conditions is taken and used as input features to the model. The output labels are the placements of the components on the network servers; as previously stated, this is a multi-class, multi-output problem; therefore, there is a set of outputs predictions, each with their respective set of possible labels. Given that, *s* represents a network server, and *v* represents an instance to be placed, the following holds true:
$$\begin{aligned}
outputs & =\left\{ v_{1},v_{2},...,v_{n}\right\} \\
labels & =\left\{ s_{1},s_{2},...,s_{n}\right\} \end{aligned}$$
From the network snapshot, several features are extracted, including instance resource requirements, server resource capacity, delay tolerance between interdependent instances, delay between server, and instance dependency levels.
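The following sketch illustrates how such a multi-class, multi-output training set might be assembled and fitted with a decision tree; the snapshot keys are illustrative placeholders for the features listed above, not names taken from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def build_dataset(snapshots):
    """Assemble (X, Y) from the generated topologies (keys are illustrative)."""
    X = np.array([np.concatenate([
            s["instance_requirements"].ravel(),   # VNF resource demands
            s["server_capacities"].ravel(),       # server resource capacities
            s["server_delays"].ravel(),           # inter-server delay matrix
            s["delay_tolerances"].ravel(),        # tolerances between dependent VNFs
            s["dependency_levels"].ravel(),       # VNF dependency levels
        ]) for s in snapshots])
    # One output column (server index) per VNF instance, as placed by BACON
    Y = np.array([s["bacon_placement"] for s in snapshots])
    return X, Y

def train_do_dat(snapshots, optimal_depth):
    """Fit the multi-output tree; scikit-learn decision trees natively accept a
    label matrix Y with one multi-class column per output."""
    X, Y = build_dataset(snapshots)
    return DecisionTreeClassifier(max_depth=optimal_depth).fit(X, Y)
```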
Model Construction
------------------
The construction of the DO-DAT follows a 3 step process. The first stage involves the determination of the range of under/overfitting with respect to the tree depth. PSO optimization is initially run given a range of $[2,100]$ for the maximum depth hyperparameter. The value of 100 is selected as the upper bound for the range of values that the maximum depth hyperparameter can assume as a benchmark to limit the initial search space; if an optimal solution is not achieved in the defined search space, the upper bound is increased by a factor of 2 until an optimal solution is found. Since the evidence of under/overfitting in the DAT was manifested through the number of invalid placement predictions, the goal of this PSO optimization stage is to determine the depth which minimizes their occurrence.
The result from the first stage is then used to determine the range of values to be further considered. The range of values is determined by considering when the number of invalid placement predictions falls below an initial error threshold set at 7.5% (10% is the pre-defined maximum tolerable placement error; setting the initial threshold to 7.5% makes the placements more conservative and reduces the search space for the subsequent steps) and when steady-state is reached, meaning that there is no further improvement observed for several iterations. The optimization performed in the second stage considers the entire objective function (6), evaluated across the previously determined range. The result of this optimization shows the effect of the range of depths on the joint consideration of invalid predictions and delay.
The final stage of the construction of the DO-DAT is to identify the optimal tree depth obtained from the previous stage and construct the model using this hyperparameter value.
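A hand-rolled PSO over the functional depth range might look as follows; the swarm size, inertia weight, and acceleration coefficients are illustrative choices for this sketch and are not values reported in the paper.

```python
import numpy as np

def pso_search(cost, lo, hi, n_particles=10, iters=30, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm search for an integer hyperparameter in [lo, hi].

    `cost` is the objective of Eq. (8), e.g. a closure around pso_cost() above.
    """
    rng = np.random.default_rng(0)
    pos = rng.uniform(lo, hi, n_particles)            # particle positions (depths)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([cost(round(p)) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([cost(round(p)) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return int(round(gbest))

# Usage sketch over the functional range identified in the first stage:
# optimal_depth = pso_search(lambda d: pso_cost(d, X, Y, placement_delays), 20, 35)
```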
Results and Analysis
====================
The following is a presentation of the results obtained as well as an analysis of their implications for the DO-DAT. The generation of the dataset, data processing and ML models are implemented using Python on a PC with an Intel Core i7-8700 CPU @ 3.20 GHz, 32 GB RAM, and an NVIDIA GeForce GTX 1050 Ti GPU.
Functional Range
----------------
The first set of results pertains to the PSO optimization and its effectiveness in presenting the optimal value of the tree depth. Fig. 1 displays the effect of varying the depth of the tree on the number of invalid placement predictions.
As seen, the number of invalid placement predictions decreases as the tree depth increases up to 25 and stabilizes at the minimum value of 0 for tree depths greater than 25. At a max depth of 20, the number of invalid placement predictions (error rate) is 7.5%. Since steady-state is observed beyond 25, 10 additional tree depths were considered to evaluate the effect of the overfitting on the optimization. Therefore, the range of tree depths spanning $[20,35]$ is selected as the functional range of the first stage and is further evaluated by taking into consideration the full objective function (6).
Optimal Depth
-------------
Results from the optimization of the functional range of interest are presented in Fig. 2. From this figure, we can see the objective function *P(h)* is decreasing on the interval $[20,28]$ and plateaus on the interval $[29,35]$. The interval $[28,30]$ represents the interface between under and overfitting of the DAT, and therefore, since there is no further significant improvement on the interval $[29,35]$, the optimal depth is 29.
Performance Comparison
----------------------
The following figures compare the placement of BACON, DAT, and DO-DAT. Fig. 3 illustrates the delay across the various CPs in the small scale network. As observed in Fig. 3, DO-DAT exhibits improved performance when compared to both the BACON and DAT placement strategies.
Fig. 4 displays the delay experienced between interconnected dependent instances, forming a vEPC SFC.
As seen in Fig. 4, the DO-DAT is the best performing placement strategy as it has successfully placed the VNF instances with less delay exhibited between dependent VNF instances. These results can be further extended to the second network topology, as expressed in Fig. 5, where it can be seen that DO-DAT, when considered across all CPs, produces more paths with less delay compared to the other placement strategies. This is further illustrated in Table 1, which lists, for each of the 36 CPs seen in Fig. 5, the ratio of cases in which each strategy produces the placement with the least delay. In all cases, the DO-DAT outperforms the other two strategies and establishes itself as the clear winner when considering the reduction of end-to-end delay across all CPs.
  Ratio Description              Ratio
  ------------------------------ -------------
  BACON vs. DAT vs. DO-DAT       13:9:14
  BACON vs. DO-DAT               13:23
  DAT vs. DO-DAT                 12:24

  : Placement strategy producing the lowest delay per CP
This is reinforced when considering Fig. 6, which shows a Probability Density Function (PDF) of the difference between the DO-DAT and BACON algorithms in terms of placement delay; a similar comparison between BACON and DAT was conducted in our previous work [@r3-1]. By calculating the difference in delay across every CP placement, we can determine the probability of the DO-DAT outperforming BACON. The mean in Fig. 6 is $-10$ ms; therefore, on average, DO-DAT provides a CP with 10 ms less delay than BACON. This is an improvement on our previous work on the DAT, which on average had 34 ms more delay; the work presented in this paper therefore effectively improves the placement of VNF instances by 44 ms on average.
Runtime Complexity
------------------
One of the benefits of the use of ML in networks is the reduction of system complexity. This is evident through the runtime complexity analysis of our proposed model. The BACON algorithm has a runtime complexity of $O(\frac{s^{3}-s^{2}}{2})$, where *s* denotes the number of available servers in the network [@r22]. Our previous work outlines the time complexity of constructing a decision tree as $O(n_{features}*n_{samples}*\log n_{samples})$ when creating the tree and $O(\log n_{samples})$ when executing a query [@r25]. Additionally, the DO-DAT has an offline optimization component with complexity $O(n^{2}t)$, where *n* denotes the population size and *t* the number of iterations [@r26]. Since the building of the tree and the optimization are completed entirely offline, only the querying phase is considered in the runtime analysis.
Conclusions and Future Work
===========================
The work presented in this paper described a key step towards an implementable, intelligent, and delay-aware VNF placement strategy. Through the optimization of the max tree depth, we addressed the under/overfitting phenomenon, which plagues large decision trees and negatively impacts performance. The DO-DAT uses ML and PSO to provide an effective, real-time placement solution, which outperforms existing placement strategies and improves QoS through the reduction of the delay between VNF instances, forming an SFC. Future work will consider the use of ML to address additional services offered by the VNF orchestrator.
[10]{} H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, NFV: state of the art, challenges, and implementation in next generation mobile networks (vEPC), *IEEE Network*, vol. 28, no. 6, pp. 18-26, 2014.
ETSI, Network Functions Virtualisation: An Introduction, Benefits, Enablers, Challenges & Call for Action, 2012. Available: https://portal.etsi.org/NFV/NFV\_White\_Paper.pdf
D. M. Manias, *et al.*, Machine Learning for Performance-Aware Virtual Network Function Placement, *IEEE GlobeCom*, Waikoloa, USA, 2019, pp. 1-6.
Y. Xie, S. Wang, and Y. Dai, “Revenue-maximizing virtualized network function chain placement in dynamic environment,” *Future Gener. Comput. Syst.*, vol. 108, pp.650-661, 2020.
D. M. Manias, H. Hawilo, and A. Shami, “A Machine Learning - Based Migration Strategy for Virtual Network Function Instances,” SAI FTC, Vancouver, Canada, pp. 1-15, 2020.
O. Soualah, M. Mechtri, C. Ghribi, and D. Zeghlache, “Energy efficient algorithm for VNF placement and chaining,” *IEEE/ACM CCGRID*, Madrid, Spain, 2017, pp. 579-588.
F. Chiti, R. Fantacci, F. Paganelli, and B. Picano, “Virtual Functions Placement With Time Constraints in Fog Computing: A Matching Theory Perspective,” *IEEE Trans. Netw. Service Manag.*, vol. 16, pp. 980-989, 2019.
O. A. Wahab, N. Kara, C. Edstrom, and Y. Lemieux, “MAPLE: A Machine Learning Approach for Efficient Placement and Adjustment of Virtual Network Functions,” *J. Netw. Comput. Appl.*, vol. 142, pp 37-50, 2019.
H. R. Khezri, *et al.*, Deep Q-Learning for Dynamic Reliability Aware NFV-Based Service Provisioning, *IEEE GlobeCom*, Waikoloa, USA, 2019, pp. 1-6.
Y. Zhang, J. Yao, and H. Guan, Intelligent Cloud Resource Management with Deep Reinforcement Learning, *IEEE Cloud Comput.*, vol. 4, pp. 60-69, 2017.
J. Sun, *et al.*, A Q-learning-based approach for deploying dynamic service function chains, *Symmetry* , vol. 10, no. 11, 2018.
M. Khoshkholghi, J. Taheri, D. Bhamare, and A. Kassler, “Optimized Service Chain Placement Using Genetic Algorithm,” *IEEE NetSoft*, Paris, France, 2019, pp. 472-479.
R. G. Mantovani, Use of meta-learning for hyperparameter tuning of classification problems, Ph.D. dissertation, University of Sao Carlos, Brazil, 2018.
A. Sureka and K. V. Indukuri, Using Genetic Algorithms for Parameter Optimization in Building Predictive Data Mining Models, *ADMA*, Chengdu, China, 2008, pp. 260-271.
G. Stiglic, S. Kocbek, I. Pernek, and P. Kokol, Comprehensive decision tree models in bioinformatics, *PLoS One*, vol. 7, no. 3, 2012.
C. Thornton, F. Hutter, H. Hoos, and K. Leyton-Brown, Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms, *ACM SIGKDD*, Chicago, USA, 2013, pp. 847-855.
H. Hawilo, M. Jammal, and A. Shami, Network Function Virtualization-Aware Orchestrator for Service Function Chaining Placement in the Cloud, *IEEE J. Sel. Areas Commun.*, vol. 37, no. 3, pp. 643-655, 2019.
T. Benson, A. Akella, D. Maltz, “Network traffic characteristics of data centers in the wild,” *ACM SIGCOMM*, New York, USA, 2010, pp. 267-280.
K. Jang, J. Sherry, H. Ballani, and T. Moncaster, “Silo: Predictable message latency in the cloud,” *ACM SIGCOMM*, London, United Kingdom, 2015, pp. 435-448.
F. Pedregosa, *et al.*, Scikit-learn: Machine Learning in Python, *J. Mach. Learn. Res.*, vol. 12, pp. 2825-2830, 2011.
J. Kennedy, “Particle swarm optimization,” *Encyclopedia of Machine Learning*. Springer US, 2010. pp. 760-766.
[^1]: Dimitrios Michael Manias, Hassan Hawilo and Abdallah Shami are with the Department of Electrical and Computer Engineering at The University of Western Ontario, London, Canada, e-mail: {dmanias3, hhawilo, Abdallah.shami}@uwo.ca. Manar Jammal is with the School of IT at York University, Toronto, Canada, e-mail: [email protected]
---
abstract: 'We study analytically the superfluid flow of a Bose-Einstein condensate in a ring geometry in presence of a rotating barrier. We show that a phase transition breaking a parity symmetry among two topological phases occurs at a critical value of the height of the barrier. Furthermore, a discontinuous (accompanied by hysteresis) phase transition is observed in the ordered phase when changing the angular velocity of the barrier. At the critical point where the hysteresis area vanishes, chemical potential of the ground state develops a cusp (a discontinuity in the first derivative). Along this path, the jump between the two corresponding states having a different winding number shows strict analogies with a topological phase transition. We finally study the current-phase relation of the system and compare some of our calculations with published experimental results.'
author:
- Xiurong Zhang
- Francesco Piazza
- WeiDong Li
- Augusto Smerzi
title: Parity Symmetry Breaking and Topological Phases in a Superfluid Ring
---
[^1]
[*Introduction*]{}. A paradigmatic manifestation of superfluidity is the existence of stationary atomic states in a ring geometry in presence of a barrier rotating with constant angular velocity $\Omega$ [@legget_99]. With Bose-Einstein condensates (BEC), these states have been recently observed experimentally [@Campbell11; @moulder12; @Boshier13prl; @Campbell13; @Campbell133] and extensively studied theoretically [@piazza_2009; @mathey_2014; @piazza_rings; @piazza_crit; @rizzi_2014; @amico_2015; @kavoulakis_2015; @yakim_2015; @kato_2015; @mayol; @susanto_2016]. The stationary current-carrying states are characterized by a topological invariant given by the phase of the superfluid accumulated around the ring $\nu=2\pi \ell$, with the integer winding number $\ell= 0, \pm 1,\pm 2...$ [@leggett_rmp]. The winding number can be dynamically modified by sweeping the angular velocity of the rotating barrier [@Campbell13; @Campbell133]. The change in topology takes place via the creation of topological defects (solitons in one dimension $d=1$ [@carr_winding] and vortices in $d>1$ [@kato_2015; @yakim_2015; @mayol; @piazza_crit] ).
In the limit of a vanishing barrier, the state with topological defects adiabatically connects two rotation-invariant states with different winding number $\ell$. A second-order phase transition takes place twice as a function of $\Omega$ [@carr_winding], first as the system enters the state with topological defects from the first rotational-invariant state $\ell_1$ and then as it leaves the former by entering the second rotational-invariant state $\ell_2$. This scenario changes in the presence of any finite-size obstacle that breaks the rotational symmetry of the ring, wherein the topological defects are always dynamically unstable [@finazzi_2015], so that, in general, two topologically different states cannot be adiabatically connected. This has been recently confirmed experimentally with a barrier moving inside a toroidal BEC [@Campbell14Na], where hysteresis appears in the transition between states with different topological winding number. The unstable branch of the hysteresis loop corresponds to the state with topological defects, and the angular velocity at which the metastable state decays (through phase slippage [@piazza_2009]) into the ground state generalizes the Landau critical velocity to the weak-link case [@finazzi_2015; @mayol].
In this manuscript, we show that with a barrier rotating at the angular velocity $ \Omega_c= \hbar/2 mR^2$, with $R$ and $m$ the radius of the ring and the atomic mass, respectively, the ground state of the system becomes degenerate when the height of the barrier is smaller than a critical value $V < V_c$. The degeneracy arises from a parity symmetry breaking that provides two possible ground states with different topology, i.e., winding number. In the disordered phase, $V > V_c$, the ground state is unique with an undefined winding number. Furthermore, by keeping constant the height of the barrier in the ordered phase, $V < V_c$, a first order phase transition between the two ground states with different topological winding number and hysteresis can be observed by varying $\Omega$. The area enclosed by the hysteresis path shrinks while increasing the height of the barrier till eventually vanishing at the critical point $V = V_c$. Hysteresis has been experimentally observed but the sudden change in the winding number at $\Omega_c$ was smeared out due to shot-to-shot number and finite temperature fluctuations [@Campbell14Na]. As order parameter for both the continuous and the discontinuous phase transitions we choose the difference between the phase accumulated around the ring $\nu$ and the phase drop across the barrier, a quantity which is experimentally accessible [@Campbell14prx]. The phase drop across the barrier, together with the current flowing through the ring, also provides the current-phase relation [@Piazza; @Campbell14prx] – an optimal characterization of the ring-superfluid junction [@barone; @likharev; @packard; @schwartz70; @sols].
We finally emphasize that at the angular velocity $\Omega_c$ and $V=V_c$, the transition between the two topological states is accompanied by a discontinuity in the derivative of the ground state chemical potential as a function of the angular velocity. Furthermore, at this point the transition is not associated with the breaking of any symmetry and it cannot therefore be characterized by a local order parameter. This carries strong similarities with a continuous topological phase transition occurring between two degenerate ground states with different topological winding numbers $\ell$.
[*The model*]{}. We consider a BEC confined in an effective one-dimensional toroidal trap in presence of a barrier rotating with a constant angular velocity $\Omega$. The barrier is a penetrable repulsive potential with radial extension larger than the annulus width. The system can be modeled by the Gross-Pitaevskii equation (GPE) [@pit_str] that governs the dynamics along the azimuthal coordinate $x\in[-L/2,L/2]$, where $L$ is the length of the ring. We remove the time-dependence of the Hamiltonian by moving to a rotating reference frame: $x \Rightarrow x+\Omega R t$ with the torus radius $R=L/2\pi$. This introduces a gauge field $\propto \Omega R$ into the GPE, which reads $$\begin{aligned}
\nonumber
&& i\hbar\frac {\partial}{\partial t} \Psi (x,t)= \left[ \hat{H} +Ng |\Psi (x,t)|^{2} \right] \Psi (x,t), \\
&& \hat{H} = \frac{\hbar^2}{2m}\left(i\frac{\partial}{\partial x}+m \frac{\Omega R}{\hbar}\right)^2+ V(x)-\frac{1}{2}m\Omega^2 R^2 \label{GP}.\end{aligned}$$ The barrier $V(x)$ is a repulsive square well with height $V>0$ and width $d$ centered about $x=0$, $N$ is the number of atoms and $g=4\pi\hbar^2a_s/m$ is the contact interaction with the effective 1D s-wave scattering length $a_s$. With the further transformation $\Psi(x,t)=e^{\imath(m\Omega R x+m\Omega^2 R^2 t/2)/\hbar} \phi(x,t)$, the gauge field can be removed from the Hamiltonian which now reads as the usual nonlinear GPE for the order parameter $\phi(x,t)$ [@Bransden; @Landau]. Following [@yu_66; @langer_67; @schwartz70; @carr_05; @li06; @li04; @Piazza], the stationary solutions of Eq.(\[GP\]) can be written in terms of Jacobi Elliptical $\operatorname{SN}$ functions [@sup]. Two classes of solutions, which we call, for reasons that will become clear below, plane-waves (PW) and solitons (SL), are found for each value of the winding number $\ell$. The circulation is $\nu=\oint \Theta(x) dx=2\pi\ell$, where $\Theta(x)=m\Omega R x/\hbar+\theta(x)$ is the phase in the lab frame while $\theta(x)= (m/\hbar) \int dx ~j/\rho(x)$ is the phase in the rotating frame. The BEC density is $\rho(x)=|\Psi(x)|^2=|\phi(x)|^2$ and in the rotating frame the current $j$ and the chemical potential $\epsilon$ are related to the current and chemical potential in the lab frame by $I(x)=j+\Omega R\rho(x)$ and $\mathcal{E}=\epsilon-m\Omega^2 R^2/2$, respectively. In the absence of a barrier, $V = 0$, the current for the PW solution is simply $I=\ell ~ I_0$, where we choose $I_0=R\Omega_0\rho_0$ and $\Omega_0=\hbar/mR^2$ as units of current and rotation velocity and a density normalized as $\rho_0=1/L$. The SL state has a chemical potential larger than the chemical potential of the PW state $\mu_0= N g \rho_0$, which will be used to define our units of energy, time $\hbar/\mu_0$, and length $\xi_0=\hbar/\sqrt{2m\mu_0}$. The presence of a repulsive barrier breaks the rotational invariance and the two solutions at fixed $\Omega,\nu$ are neither purely a PW nor a SL. As already mentioned, we found two kinds of solutions that will be labeled as PW (SL) since both continuously reduce to an exact PW or a SL as $V\to 0$ [@Piazza].
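As a consistency check (a standard gauge-transformation argument, not spelled out above), writing $\Psi=e^{iS/\hbar}\phi$ with $S=m\Omega R x+m\Omega^2R^2t/2$ one finds $$\left(i\frac{\partial}{\partial x}+\frac{m\Omega R}{\hbar}\right)e^{iS/\hbar}\phi=e^{iS/\hbar}\,i\frac{\partial\phi}{\partial x},\qquad i\hbar\,\frac{\partial}{\partial t}\!\left(e^{iS/\hbar}\phi\right)=e^{iS/\hbar}\left(i\hbar\frac{\partial\phi}{\partial t}-\frac{1}{2}m\Omega^2R^2\phi\right),$$ so that Eq. (\[GP\]) indeed reduces to the standard nonlinear GPE $i\hbar\,\partial_t\phi=\left[-\frac{\hbar^2}{2m}\partial_x^2+V(x)+Ng|\phi|^2\right]\phi$ for the order parameter $\phi(x,t)$.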
![a) Order parameter $\alpha$ as a function of the barrier angular velocity $\Omega$ and height of the barrier $V$. a) At fixed $\Omega=\Omega_c$, the ground state solution of the system becomes degenerate at $V < V_c$. The black and red solid line correspond to the value of $\alpha$ for the PW-branch with winding number $\ell=0$ or $\ell=1$, respectively, plotted as a function of $V$. At $V \ge V_c$, the order parameter vanishes and the winding number of the state is undefined, dashed-dot line. Further solid lines running along the $\Omega$ direction for different values of $V$ give $\alpha$ also for the PW-branch, where the different colours correspond to different winding number $\ell=0$ or $\ell=1$. Hysteresis along the closed trajectories marked by dark-light green and dark-light blue colours exists for $V < V_c$. b-d) Value of $\alpha$ as a function of $\Omega$ for three different values of $V$. Solid(dashed) lines correspond to the PW ( SL)-branch and different colours correspond to different winding number: $\ell=0$ or $\ell=1$. In d), at $V=V_c=1.06\mu_0$, the $\ell=0$ and the $\ell=1$ PW-branches are directly connected at a point where the derivative of $\alpha$ as a function of $\Omega$ diverges. Here the parameters are the same as in Fig. \[fig1\].[]{data-label="fig2"}](FIG2.pdf){width="45.00000%"}
![Upper panel: chemical potential for three different values of the barrier height (in units of $\mu_0$). Lower panel: topological winding number below and over the critical barrier height. Arrows highlight the hysteretic behaviour as a function of the rotation velocity. Here $L=20d$ and $d=20\xi_0$, similarly to the NIST experiment [@Campbell14prx]. []{data-label="fig1"}](FIG1.pdf){width="45.00000%"}
[*Continuous phase transition*]{}. In the following we study the exact ground state solutions as a function of the order parameter $$\alpha=\nu - \gamma, \label{alf}$$ that is, the difference between the circulation $\nu$ and the phase drop across the barrier $\gamma$ [@sup]. In the limit $V=0$, the phase difference is simply equal to the phase accumulated around the ring: $\alpha= \nu=2 \pi \ell $. This quantity has been measured experimentally [@Campbell14prx] from the interference fringes of two overlapping BECs, one expanding from the ring with the barrier, the second from a disk without a barrier, which provides the reference phase.
The phase diagram of the system is depicted in Fig. \[fig2\]a), where the order parameter $\alpha$ is plotted as a function of the angular velocity $\Omega$ and strength $V$. When $V < V_c$, the ground state is a PW with winding number either $\ell=0$ or $\ell=1$ and is characterized by a non-vanishing $\alpha$. This bifurcation is a pitchfork for $\Omega=\Omega_c$, with the unstable branch for $V<V_c$ being the SL solution (not shown in Fig. \[fig2\] a), see dashed lines in Fig. \[fig2\] b),c)). For $\Omega\neq\Omega_c$ the bifurcation becomes a saddle-node (see the discussion of Fig. \[fig4\] below). The behaviour of $\alpha$ as a function of $\Omega$ is shown in Fig. \[fig2\] b-d) for three different values of $V$, where the solid (dashed) lines correspond to the PW (SL)-branch and the different colours correspond to different winding numbers.
It is instructive to analyze how a non-vanishing order parameter $\alpha$ arises by looking at the particular spatial form of the solutions, shown in Fig. \[fig3\]. For a fixed angular velocity $\Omega_c$ the behavior of the density and phase of the PW-solution is shown both inside and outside the hysteretic region. In the absence of hysteresis, $V\geq V_c$, the $\nu=0$ and $\nu=2\pi$ branches share the same density profile, characterized by a zero at the center of the weak link, $x=0$. At this singular point, the phase has a $\pi$-jump, downwards for the $\nu=0$-branch, upwards for the $\nu=2\pi$-branch, leading to the same value of $\alpha$ (see Fig. \[fig2\]). For $x\neq 0$ the phase grows linearly with the same slope for both branches. The presence of a singular point (topological defect) in the PW-branches indicates that the latter acquire a solitonic character in the non-hysteretic regime. The SL and PW branches for a given $\nu$ and $\Omega_c$ are indeed equal for $V\geq V_c$, and the winding number $\ell$ is not defined along this path (dash-dotted line in Fig. \[fig2\] a)).
![Density and phase profiles of the PW-solutions in the hysteretic regime a) and c) and in the non-hysteretic regime b). The shaded area indicates the barrier region. Here the parameters are the same as in Fig. \[fig1\].[]{data-label="fig3"}](FIG3.pdf){width="45.00000%"}
[*Discontinuous phase transition and hysteresis*]{}. With a barrier height $V$ below the critical value $V_c$ the system supports hysteresis, as already experimentally demonstrated in [@Campbell14Na]. In the region $\Omega<\Omega_{c1}$ the PW state with $\ell=0$ has the lowest energy, while in the region $\Omega>\Omega_{c2}$ the lowest energy state is a PW with $\ell=1$. In the region $\Omega_{c1}<\Omega<\Omega_{c2}$ either one of the PW solutions is stable while the other is metastable. The metastable PW-branch is connected with the SL-branch for $\Omega_{c1} \le \Omega \le \Omega_{c2}$, while outside this region only a single PW-branch exists. The values of $\Omega_{c1,c2}$ are determined by the interaction strength $gN$, the height and the width of the barrier. The fact that the SL-branch in this region is unstable explains the hysteretic behavior [@mayol], see the lower panel of Fig. \[fig1\]: as soon as the PW-branch meets the SL-branch a dynamical instability sets in whereby the system decays into the lowest-energy PW-branch having a different winding number. This dynamical instability originates from the underlying saddle-node bifurcation where the PW- and the SL-branch merge [@finazzi_2015] (see also Fig. \[fig4\]). We remark that in this case the change of the topological winding number $\ell$, taking place while going from the metastable to the stable PW-branch, is discontinuous. The situation changes when $V\geq V_c$: in this case hysteresis is absent and the two PW-branches with $\ell=0$ and $\ell=1$ are directly connected, without the intermediate unstable SL-branch. Therefore, as shown in Fig. \[fig1\], at $\Omega=\Omega_c$ the topological winding number jumps between $\ell=0$ and $\ell=1$, while the system remains in the lowest-energy stationary state. Moreover, as evident from the upper panel of Fig. \[fig1\], if we additionally tune the barrier height to $V=V_c$ the chemical potential shows a discontinuous derivative at $\Omega=\Omega_c$. This can be interpreted as a topological phase transition (a transition between two topologically distinct states) without breaking any local symmetry. This behaviour is always present, independently of the particular form of the barrier. The disappearance of hysteresis for high enough barriers has been observed experimentally [@Campbell14Na]. Yet the observed transition between states with a different winding number was not perfectly sharp, probably due to shot-to-shot atom-number fluctuations. In order to verify our scenario involving a “topological" phase transition one would need to observe both i) a sharp jump between $\ell=0,1$ as a function of $\Omega$ and ii) a second-order discontinuity in some observable (like the chemical potential shown in Fig. \[fig1\]). In order to observe i), the temperature has to be low enough to suppress random nucleation of topological defects [@mathey_2014] – as is probably already the case in [@Campbell14Na] – and shot-to-shot number fluctuations need to be reduced. The measurement of a discontinuity in the derivative of the chemical potential as required in ii) seems a more demanding task.
![Current-phase relation with a rotating barrier. Panels a-c) show the order parameter $\alpha$ as a function of the barrier height $V$. Panels d),f) show the current-phase relation, while panel e) reports the chemical potential versus $\Omega$. In a), d), e) the black solid line represents the $\nu=0$ PW-branch, while the dashed and dash-dotted red line corresponds to the $\nu=2\pi$ PW- and SL branch, respectively. In c),f), the red dashed line represents the $\nu=2\pi$ PW-branch while the solid and dash-dotted black line corresponds to the $\nu=0$ PW- and SL branch, respectively. In b) the SL branches for $\nu=0,2\pi$ overlap. The blue circle and green triangles mark the special points (saddle-node bifurcations) where the PW- and SL-branch meet. In the left panel, the current-phase relation is single-valued i.e. $\gamma<\pi$ while in the right panel is multivalued, namely, for some values of the current $j$ we have $\gamma>\pi$. Here the parameters are the same as in Fig. \[fig1\].[]{data-label="fig4"}](FIG4.pdf){width="45.00000%"}
[*Current-phase relation*]{}. The knowledge of the phase drop $\gamma$ across the barrier, combined with the knowledge of the (spatially-constant) current $j$ flowing across the weak link, allows one to construct the current-phase relation of the system. This is a powerful characterization of the weak link, allowing, for instance, to distinguish different regimes ranging from deep tunneling to hydrodynamic flow [@barone; @likharev; @packard]. In the context of BECs, the current-phase relation has been computed so far for infinite systems with open boundary conditions, a static weak link, and a given injected flow [@watanabe; @Piazza]. Stimulated by the experimental results in [@Campbell14prx], we compute here the current-phase relation for our case of a BEC in a ring geometry. The results are shown in Fig. \[fig4\]. For a given barrier, interaction strength, and winding number, the current-phase relation can be constructed by varying the angular velocity $\Omega$. As illustrated above, for each fixed $\Omega$, i.e. fixed current $j$, we obtain two solutions (PW and SL branches) with a different value of $\gamma$. The current-phase relation for both $\ell=0$ and $\ell=1$ is shown in Fig. \[fig4\] d),f) for two different values of the barrier height $V$. The current-phase relation is composed of the PW- and SL-branches, meeting at the special points indicated by blue circles or green triangles. The same points are marked also in the $\mu$ versus $\Omega$ diagram (panel e)), as well as in the $\alpha$ versus $V$ diagram (panels a) and c)). It is apparent that those special points are saddle-node bifurcations, where the PW- and SL-branch merge and disappear, so that there are no stationary solutions for larger (or smaller) values of $V$ or $\Omega$. In b) we also show that at $\Omega=\Omega_c$ the bifurcation becomes a pitchfork, as previously discussed. The latter is characterized by the merging of four branches: the two PW-branches with $\nu=0,2\pi$ (black solid and red dashed lines) and the two SL-branches with $\nu=0,2\pi$ (red dash-dotted line), which have the same $\alpha$.
![Comparison with the experimental measurements of [@Campbell14prx] for the order parameter $\alpha$ as a function of angular velocity $\Omega$, without a), and with b) the fitted nonlinear parameter $\eta$ (see text). c), rate of change of $\alpha$ as a function of barrier height $V$. In (d) the size of the hysteresis loop is compared to the value measured in [@Campbell14Na] and to the full 3D GPE simulations (blue dotted line) employed in [@Campbell14Na]; the black line corresponds to the predictions without fitting parameters, while the red lines to the predictions with a nonlinearity $\eta$ also reduced by $25\%$. In (a) and (b), the barrier height is $V=0.8\mu_0$. In (a-c), the barrier width is chosen according to [@Campbell14prx] to be $d\approx0.04L\approx22\xi_0$, while in (d) it is taken to be $d\approx0.05L\approx17\xi_0$, according to [@Campbell14Na].[]{data-label="fig5"}](FIG5.pdf){width="45.00000%"}
The current-phase relation indicates the maximal current $j$ and the largest phase drop $\gamma$ for a given barrier. In the deep tunneling regime, the current-phase relation is sinusoidal, while in the hydrodynamic regime of flow, achieved for barriers much smaller than the chemical potential, the current is considerably higher and linearly proportional to the phase drop over a broad range of phases [@packard]. Moreover, there is a further regime where the phase drop can be larger than $\pi$, which implies that the current-phase relation becomes multivalued, as shown in the right part of Fig. \[fig4\].
[*Comparison with experiments*]{}. All the predictions presented in this manuscript can be experimentally tested with the current state of the art. In this final section we compare some of our results with experimental results already obtained at NIST and published in [@Campbell14Na; @Campbell14prx]. The comparison is summarized in Fig. \[fig5\]. Apart from the barrier width along the azimuthal coordinate, taken from [@Campbell14Na; @Campbell14prx], the most relevant parameter is the dimensionless effective nonlinearity $$\eta=N m g L/\hbar^2.$$ As apparent from Fig. \[fig5\](a) and (b), the agreement between our predictions and the experimental data strongly depends on the value of $\eta$, determined by the total atom number $N$ and the ring length $L$. In (a) our predictions are calculated by taking $N=8\times10^5$ and $L=140\mu m$ from [@Campbell14prx] without any adjustable parameters, which clearly overestimates the size of the hysteresis loop. However, as shown in (b), very good agreement is obtained after reducing the effective nonlinearity $\eta$ by $25\%$, as confirmed in (c) by also comparing the variation of $\alpha$ with the velocity $\Omega$. Our purely 1D model overestimates the nonlinearity at given $N,L$ because the experiment is not truly in the one-dimensional regime. Still, we can reproduce the experimental results even quantitatively by simply readjusting the effective nonlinearity. This is consistent with the comparison presented in [@Campbell14prx], where an effective one-dimensional model showed good agreement once the proper dimensional reduction was performed.
[*Conclusions*]{}. We have studied the superfluid flow of a Bose-Einstein condensate confined in a ring geometry in the presence of a rotating barrier. The stationary solutions have been found by solving analytically an effective one-dimensional Gross-Pitaevskii equation. We have identified a continuous parity-symmetry-breaking phase transition between two topological phases, together with a discontinuous phase transition accompanied by hysteresis as a function of the angular velocity of the barrier. Hysteresis has been experimentally observed at NIST [@Campbell14Na; @Campbell14prx]. At the critical point where the hysteresis area vanishes, the chemical potential of the ground state develops a cusp (a discontinuity in the first derivative). Along this path, the jump between the two corresponding winding numbers shows strict analogies with a topological phase transition. Good agreement with the published experimental data of [@Campbell14Na; @Campbell14prx] has been found for the order parameter $\alpha$ as a function of the angular velocity, for the rate $d\alpha/d\Omega$ as a function of the barrier height, and for the area of the hysteresis loop, after readjusting the effective nonlinearity to take into account the fact that the experiment is not purely one-dimensional.
[*Acknowledgments*]{}. This work was supported by the National Natural Science Foundation of China (Grant No. 11374197), PCSIRT (Grant No. IRT13076) and the Hundred Talent Program of Shanxi Province (2012).
[99]{} A. J. Leggett, Rev. Mod. Phys. 71, S318 (1999) A. Ramanathan, K. C. Wright, S. R. Muniz, M. Zelan,W. T. Hill, C. J. Lobb, K. Helmerson, W. D. Phillips, and G. K. Campbell, Phys. Rev. Lett. 106, 130401 (2011). S. Moulder, S. Beattie, R.P. Smith, N. Tammuz, and Z. Hadzibabic, Phys. Rev. A 86, 013629 (2012) C. Ryu, P.W. Blackburn, A. A. Blinova, and M. G. Boshier, Phys. Rev. Lett. 111, 205301 (2013). N. Murray, M. Krygier, M. Edwards, K. C. Wright, G. K. Campbell and C. W. Clark, Phys. Rev. A 88, 053615 (2013). K. C. Wright, R. B. Blakestad,C. J. Lobb,W. D. Phillips and G. K. Campbell, Phys. Rev. Lett. 110, 025302 (2013). F. Piazza, L. A. Collins, and A. Smerzi, Phys. Rev. A 80, 021601(R) (2009) A. C. Mathey, C. W. Clark, and L. Mathey Phys. Rev. A 90, 023604 (2014) F. Piazza, L. A. Collins, and A. Smerzi, New J. Phys. 13, 043008 (2011) F. Piazza, L. A. Collins, and A. Smerzi, J. Phys. B: At. Mol. Opt. Phys. 46 095302 (2013) Marco Cominotti, Davide Rossini, Matteo Rizzi, Frank Hekking, and Anna Minguzzi, Phys. Rev. Lett. 113, 025301 (2014) Davit Aghamalyan, Marco Cominotti, Matteo Rizzi, Davide Rossini, Frank Hekking, Anna Minguzzi, Leong-Chuan Kwek and Luigi Amico, New J. Phys. 17, 045023 (2015) A. Roussou, G. D. Tsibidis, J. Smyrnakis, M. Magiropoulos, Nikolaos K. Efremidis, A. D. Jackson, and G. M. Kavoulakis, Phys. Rev. A 91, 023613 (2015) A. I. Yakimenko, Y. M. Bidasyuk, M. Weyrauch, Y. I. Kuriatnikov, and S. I. Vilchinskii, Phys. Rev. A 91, 033607 (2015) M. Kunimi and Y. Kato, Phys. Rev. A 91, 053608 (2015) A. Munoz Mateo, A. Gallemi, M. Guilleumas, and R. Mayol, Phys. Rev. A **91**, 063625 (2015) M. Syafwan, P. Kevrekidis, A. Paris-Mandoki, I. Lesanovsky, P. Kruger, L. Hackermuller and H. Susanto, arXiv:1512.07924 A. J. Leggett, Rev. Mod. Phys. 73, 307 (2001) R. Kanamoto, L. D. Carr, and M. Ueda, Phys. Rev. Lett. 100, 060401 (2008) S. Finazzi, F. Piazza, M. Abad, A. Smerzi, and A. Recati, Phys. Rev. Lett. 114, 245301 (2015) S. Eckel, Jeffrey G. Lee, Noel Murray, Charles W. Clark, Christopher J. Lobb,William D. Phillips, Mark Edwards and G. K. Campbell, Nature 506, 200 (2014). S. Eckel, F. Jendrzejewski, A. Kumar, C. J. Lobb, and G. K. Campbell, Phys. Rev.X 4, 031052 (2014). F. Piazza, L. A. Collins and A. Smerzi, Phys. Rev.A 81, 033613 (2010) A. Barone and G. Paterno, Physics and Applications of the Josephson Effect (John Wiley & Sons, New York, 1982) K. K. Likharev, Dynamics of Josephson Junctions and Circuits (Gordon and Breach, New York, 1986) R. E. Packard, Rev. Mod. Phys. 70, 641 (1998) A. Baratoff, J. A. Blackburn and B. B. Schwartz, Phys. Rev. Lett. **25**, 1096 (1970). F. Sols and J. Ferrer, Phys. Rev. B 49, 15913 (1994) L. P. Pitaevskii and S. Stringari, *Bose-Einstein Condensation*, Clarendon Press (2001) Bransden B. and Joachain C., Quantum Mechanics (2nd Edn), Pearson Education Limited, 2000, pp. 255 L.D. Landau and E. M. Lifshitz, Quantum Mechanics (Non-relativistic Theory) (3rd Edn), Pergamon press Ltd., pp. 52 Yu. G. Mamaladze and O. D. Cheishvili, Zh. Eksp. Teor. Fiz. 50, 169 (1966) \[Sov. Phys. JETP. 23, 112 (1966)\] S. Langer and V. Ambegaokar, Phys. Rev. 164, 498 (1967) B. T. Seaman, L. D. Carr, and M. J. Holland, Phys. Rev. A 71, 033609 (2005) WeiDong Li, Phys. Rev. A 74, 063612, (2006). Li W.D. and Smerzi A., Phys. Rev. E 70, 016605, (2004). See Supplementary Information, where the detail of the analytical solutions of GPE (1) and the proof of Eq. (2) are discussed. G. Watanabe, F. Dalfovo, F Piazza, L. P. Pitaevskii, and S. Stringari, Phys. Rev. 
A 80, 053602 (2009)
[^1]: corresponding author: [email protected]
---
abstract: 'This paper proposes a novel game-theoretical autonomous decision-making framework to address a task allocation problem for a swarm of multiple agents. We consider cooperation of self-interested agents, and show that our proposed decentralized algorithm guarantees convergence of agents with *social inhibition* to a Nash stable partition (i.e., social agreement) within polynomial time. The algorithm is simple and executable based on local interactions with neighbor agents under a strongly-connected communication network and even in asynchronous environments. We analytically present a mathematical formulation for computing the lower bound of suboptimality of the solution, and additionally show that 50% of suboptimality can be at least guaranteed if social utilities are non-decreasing functions with respect to the number of co-working agents. The results of numerical experiments confirm that the proposed framework is scalable, fast adaptable against dynamical environments, and robust even in a realistic situation.'
author:
- 'Inmo Jang, Hyo-Sang Shin, and Antonios Tsourdos[^1]'
bibliography:
- 'library.bib'
title: 'Anonymous Hedonic Game for Task Allocation in a Large-Scale Multiple Agent System'
---
Distributed robot systems, Networked robots, Task allocation, Game theory, Self-organising systems
Introduction {#sec:intro}
============
Cooperation of a large number of possibly small-sized robots, called *robotic [swarm]{}*, will play a significant role in complex missions that existing operational concepts using a few large robots could not deal with [@Shin2014a]. Even if every single robot (or called *agent*) in a swarm is incapable of accomplishing a task alone, their cooperation will lead to successful outcomes [@Khamis2015; @Jevtic2012; @Sahin2005; @Dorigo2014]. The possible applications include environmental monitoring [@Barton2013], ad-hoc network relay [@Bekmezci2013], disaster management [@Erdelj2017], cooperative radar jamming [@Jang2017], to name a few.
Due to the large cardinality of a swarm robot system, however, it is infeasible for human operators to supervise each agent directly; instead, it is necessary to entrust the swarm with certain levels of decision-making (e.g., task allocation, path planning, and individual control). Thereby, all that remains is to provide a high-level mission description, which is manageable for a few or even a single human operator. Nevertheless, there still exist various challenges in the autonomous decision-making of robotic swarms. Among them, this paper addresses a task allocation problem where the number of agents is higher than that of tasks: how to partition a set of agents into subgroups and assign the subgroups to each task. In the problem, it is assumed that each agent can be assigned to at most one task, whereas each task may require multiple agents: this case falls into the ST-MR (single-task robot and multi-robot task) category [@Gerkey2004; @Korsah2013].
According to [@Sahin2005; @Dorigo2014; @Bandyopadhyay2017; @Brambilla2013; @Johnson2011], decision-making frameworks for a robotic swarm should be *decentralized* (i.e., the desired collective behavior can be achieved by individual agents relying on local information), *scalable*, *predictable* (e.g., regarding convergence performance and outcome quality), and *adaptable* to dynamic environments (e.g., unexpected elimination or addition of agents or tasks). Moreover, the frameworks should also be *robust* in asynchronous environments because, due to the large cardinality of the system and its decentralization, it is very challenging for every agent to behave synchronously. For synchronization in practice, “artificial delays and extra communication must be built into the framework” [@Johnson2011], which may cause considerable inefficiency in the system. In addition, a framework should preferably be capable of accommodating different interests of agents (e.g., different swarms operated by different organizations [@Clark2009]).
In this paper, we propose a novel decision-making framework based on hedonic games [@Dreze1980; @Banerjee2001; @Bogomolnaia2002]. The task allocation problem considered is modeled as a coalition-formation game where self-interested agents are willing to form coalitions to improve their own interests. The objective of this game is to find a *Nash stable* partition, which is a social agreement where all the agents agree with the current task assignment. Despite any possible conflicts between the agents, this paper shows that if they have *social inhibition*, then a Nash stable partition can always be determined within polynomial time in the proposed framework and all the desirable characteristics mentioned above can be achieved. Furthermore, we analyze the lower bound of the outcome’s suboptimality and show that 50% is at least guaranteed for a particular case. Various settings of numerical experiments validate that the proposed framework is scalable, adaptable, and robust even in asynchronous environments.
This paper is organized as follows. Section \[sec:GRAPE\_related\_work\] reviews existing literature on decentralized task allocation approaches and introduces a recent finding in hedonic games that inspires this study. Section \[GRAPE\] proposes our decision-making framework, named *GRAPE*, and analytically proves the existence of and the polynomial-time convergence to a Nash stable partition. Section \[Analysis\] discusses the framework’s algorithmic complexity, suboptimality, adaptability, and robustness. Section \[sec:min\_rqmt\] shows that the framework can also address a task allocation problem in which each task may need a certain number of agents for completion. Numerical simulations in Section \[Results\] confirm that the proposed framework holds all the desirable characteristics. Finally, concluding remarks are given in Section \[Conclusion\].
Related Work {#sec:GRAPE_related_work}
============
[Decentralized]{} Coordination of Robotic Swarms {#sec:GRAPE_literature}
------------------------------------------------
Existing approaches for task allocation problems can be [categorize]{}d into two branches, depending on how agents eventually reach a converged outcome: *orchestrated* and *(fully) self-[organize]{}d* approaches [@Brutschy2014]. In the former, additional mechanism such as negotiation and voting model is imposed so that some agents can be worse off if a specific condition is met (e.g., the global utility is better off). Alternatively, in self-[organize]{}d approaches, each agent simply makes a decision without negotiating with [the]{} other agents. The latter generally induce less resource consumption in communication and computation [@Kalra2006], and hence they are preferable in terms of scalability. On the other hand, the former usually provide a better quality of solutions with respect to the global utility, and a certain level of suboptimality could be guaranteed [@Zhang2013; @Choi2009; @Segui-gasco2015]. A comparison result between them [@Kalra2006] presents that as the available information to agents becomes local, the latter becomes to outperform the former. In the following, we particularly review existing literature on self-[organize]{}d approaches because, for large-scale multiple agent systems, scalability is [at least]{} essential and it is realistic to regard that the agents only know their local information but instead the global information.
Self-[organize]{}d approaches can be [categorize]{}d into *top-down approaches* and *bottom-up approaches* according to which level (i.e., between an ensemble and individuals) is mainly focused on. Top-down approaches [emphasize]{} developing a macroscopic model for the whole system. For [instance]{}, population fractions associated with given tasks are represented as states, and the dynamics of the population fractions [is]{} [modeled]{} by Markov chains [@Acikmese2015; @Chattopadhyay2009; @Demir2015; @Bandyopadhyay2017] or differential equations [@Berman2009; @Halasz2007; @Hsieh2008; @Mather2011; @Prorok2016c]. Given a desired fraction distribution over the tasks, agents can converge to the desired status by following local decision policies (e.g., the associated rows or columns of the current Markov matrix). One advantage of using top-down approaches is [predictability of average emergent behavior with regard to]{} convergence speed and the quality of a stable outcome (i.e., how well the agents converge to the desired fraction distribution). [However,]{} such prediction, to the best of our knowledge, can be made mainly numerically. Besides, as top-down generated control policies regulate agents, it may be difficult to accommodate each agent’s individual preference. Also, each agent may have to physically move around according to its local policy during the entire decision-making process, which may cause waste of time and energy costs in the transitioning. Bottom-up approaches focus on designing each agent’s individual rules (i.e., microscopic models) that eventually lead to a desired emergent [behavior]{}. Possible actions of a single agent can be [modeled]{} by a finite state machine [@Labella2006], and [a]{} change of [behavior]{} occurs according to a probabilistic threshold model [@Castello2014]. A threshold value in the model [determines]{} the decision boundary [between two]{} motion[s.]{} This value is adjustable based on an agent’s past experiences such as [the]{} time spent for working a task [@Brutschy2014; @Kurdi2016], the success/failure rates [@Labella2006; @Liu2007a], and direct communication from a central unit [@Castello2014]. This feature can improve system adaptability, and may have a potential to incorporate each agent’s individual interest if required. However, it was shown in [@Liu2010b; @Martinoli2004; @Lerman2005; @Correll2006; @Liu2007a; @Prorok2011; @Kanakia2016] that, to predict or evaluate an emergent performance of a swarm [utiliz]{}ing bottom-up approaches, a macroscopic model for the swarm is eventually required to be developed by abstracting the microscopic models.
Hedonic Games
-------------
*Hedonic games* [@Dreze1980; @Banerjee2001; @Bogomolnaia2002] model a conflict situation where self-interest agents are willing to form coalitions to improve their own interests. *Nash stability* [@Bogomolnaia2002] plays a key role since it yields a social agreement [among]{} the agents even without having any negotiation. Many researchers have investigated conditions under which a Nash stable partition is guaranteed to exist and to be determined [@Bogomolnaia2002; @Dimitrov2006; @Darmann2012; @Darmann2015]. [Among]{} them, the works in [@Darmann2012; @Darmann2015] mainly addressed an *anonymous hedonic game*, in which each agent considers the size of a coalition to which it belongs instead of the identities of the members. Recently, Darmann [@Darmann2015] showed that selfish agents who have *social inhibition* (i.e., preference toward a coalition with a fewer number of members) could converge to a Nash stable partition in an anonymous hedonic game. The author also proposed a [centralized]{} recursive algorithm that can find a Nash stable partition within $O( n_a^2 \cdot n_t)$ of iterations. Here, $n_a$ is the number of agents and $n_t$ is that of tasks.
Main Contributions
------------------
Inspired by the recent breakthrough of [@Darmann2015], we propose a novel [decentralized]{} framework that models the task allocation problem considered as an anonymous hedonic game. The proposed framework is a self-[organize]{}d approach in which agents make decisions according to its local policies (i.e., individual preferences). Unlike top-down or bottom-up approaches reviewed in the previous section, which primarily concentrate on designing agents’ decision-making policies either macroscopically or microscopically, our work instead focuses on investigating and exploiting advantages from socially-inhibitive agents, while simply letting them greedily behave according to their individual preferences. Explicitly, the main contributions of this paper are as follows:
1. This paper shows that selfish agents with social inhibition, which we refer to as *SPAO* preference (Definition \[SPAO\]), can reach a Nash stable partition within less algorithmic complexity compared with [@Darmann2015]: $O(n_a^2)$ of iterations are required[^2].
2. We provide a [decentralized]{} algorithm, which is executable under a strongly-connected communication network of agents and even in asynchronous environments. Depending on the network assumed, the algorithmic complexity may be additionally increased by $O(d_G)$, where $d_G < n_a$ is the graph diameter of the network.
3. This paper [analyze]{}s the suboptimality of a Nash stable partition in term of the global utility. We firstly present a mathematical formulation to compute the suboptimality lower bound by using the information of a Nash stable partition and agents’ individual utilities. Furthermore, we additionally show that 50% of suboptimality can be at least guaranteed if the social utility for each coalition is defined as a non-decreasing function with respect to the number of members in the coalition.
4. Our framework can accommodate different agents with different interests as long as their individual preferences hold SPAO.
5. Through various numerical experiments, it is confirmed that the proposed framework is scalable, fast adaptable to environmental changes, and robust even in a realistic situation where some agents are temporarily unable to proceed a decision-making procedure and communicate with [the]{} other agents during a mission.
Symbol Description
------------------------------------------- ---------------------------------------------------------------------------------
$\mathcal{A}$ a set of $n_a$ agents
$a_i$ the $i$-th agent
$\mathcal{T}^*$ a set of $n_t$ tasks
$t_j$ the $j$-th task
$t_{\phi}$ the void task (i.e., not to work any task)
$\mathcal{T}$ a set of tasks, $\mathcal{T}=\mathcal{T}^* \cup \{t_{\phi}\}$
$(t_j,p)$ a task-coalition pair (i.e. to do task $t_j$ with $p$ participants)
$\mathcal{X}$ the set of task-coalition pairs, $\mathcal{X}=\mathcal{X}^* \cup \{t_{\phi}\}$,
where $\mathcal{X}^* = \mathcal{T}^* \times \{1,2,...,n_a\}$
$\mathcal{P}_i$ agent $a_i$’s preference relation over $\mathcal{X}$
$\succ_i$ the strong preference of agent $a_i$
$\sim_i$ the indifferent preference of agent $a_i$
$\succeq_i$ the weak preference of agent $a_i$
$\Pi$ a *partition*: a disjoint set that partitions the agent set $\mathcal{A}$,
$\Pi = \{S_1,S_2,...,S_{n_t},S_\phi\}$
$S_j$ the (task-specific) coalition for $t_j$
$\Pi(i)$ the index of the task to which agent $a_i$ is assigned given $\Pi$
$d_G$ the graph diameter of the agent communication network
  $\mathcal{N}_i$                             The neighbor agent set of agent $a_i$ given a network
: Nomenclature[]{data-label="nomenclature"}
GRoup Agent Partitioning and placing Event {#GRAPE}
==========================================
Problem Formulation {#sec:MRTA}
-------------------
Let us first introduce the multi-robot task allocation problem considered in this paper and underlying assumptions.
\[prob\_basic\] Suppose that there exist a set of $n_a$ agents $\mathcal{A} =\{a_1, a_2, ... , a_{n_a}\}$ and a set of tasks $\mathcal{T}=\mathcal{T}^* \cup \{t_\phi\}$, where ${ \mathcal{T}^*=\{t_1, t_2, ... , t_{n_t}\} }$ is a set of $n_t$ tasks and $t_\phi$ is *the void task* (i.e., not to perform any task). Each agent $a_i$ has *the individual utility* $u_i: \mathcal{T} \times {|\mathcal{A}|} \rightarrow \mathbb{R}$, which is a function of the task to which the agent is assigned and the number of [its]{} co-working agents (including itself) [$p \in \{1,2,...,n_a\}$]{} (called *participants*). [The individual utility for $t_{\phi}$ is zero regardless of the participants.]{} Since every agent is considered to have limited capabilities to finish a task alone, the agent can be assigned to at most one task. The objective of this task allocation problem is to find an assignment that [maximize]{}s *the global utility*, which is the sum of individual utilities of the entire agents. The problem described above is defined as follows: $$\label{eqn:obj_ftn}
\max_{\{x_{ij}\}} \sum_{\forall a_i \in \mathcal{A}} \sum_{\forall t_j \in \mathcal{T}} u_{i}(t_j, p) x_{ij},$$ subject to $$\sum_{\forall t_j \in \mathcal{T}} x_{ij} \le 1, \quad \forall a_i \in \mathcal{A},$$ $$x_{ij} \in \{0,1\}, \quad \forall a_i \in \mathcal{A}, \forall t_j \in \mathcal{T},$$ where $x_{ij}$ is a binary decision variable that indicates whether or not task $t_j$ is assigned to agent $a_i$.
The term *social utility* is defined as the sum of individual utilities within any agent group.
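For concreteness, the short brute-force sketch below encodes the optimization problem above for a toy instance and enumerates all assignments; the equal-share individual utility (zero for the void task) is purely an illustrative assumption and not part of the problem definition.

```python
import itertools

n_a, n_t = 4, 2
rewards = [0.0, 10.0, 8.0]               # index 0 is the void task t_phi (assumed values)

def u(i, j, p):
    """Assumed individual utility of agent a_i doing task t_j with p participants."""
    return 0.0 if j == 0 else rewards[j] / p

def global_utility(assignment):
    """assignment[i] = index of the task chosen by agent a_i (0 = void task)."""
    counts = [assignment.count(j) for j in range(n_t + 1)]
    return sum(u(i, j, counts[j]) for i, j in enumerate(assignment))

best = max(itertools.product(range(n_t + 1), repeat=n_a), key=global_utility)
print(best, global_utility(best))        # exhaustive search is viable only for tiny instances
```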
\[assum:agents\] This paper considers a large-scale multi-robot system of homogeneous agents since the realisation of a swarm can be in general achieved through mass production [@Sahin2005]. Therefore, each individual utility $u_i$ is concerned with the cardinality of the agents working for the task. Note that agents in this paper may have different preferences with respect to the given tasks, e.g., for an agent, a spatially closer task is more preferred, whereas this may not be the case for another agent. Besides, noting that “mass production [favors]{} robots with fewer and cheaper components, resulting in lower cost but also reduced capabilities[@Rubenstein2014]", it is also assumed that each agent can be only assigned to perform at most a single task. According to [@Gerkey2004], such a robot is called a *single-task* (ST) robot.
\[assum:agents\_comm\] The communication network of the entire agents is at least *strongly-connected*, [i.e., there exists a directed communication path between any two arbitrary agents.]{} Given a network, $\mathcal{N}_i$ denotes a set of [neighbor]{} agents for agent $a_i$.
\[assum:tasks\] Every task is a *multi-robot* (MR) task, meaning that the task may require multiple robots [@Gerkey2004]. For now, we assume that each task can be performed even by a single agent although it may take a long time. However, in Section \[sec:min\_rqmt\], we will also address a particular case in which some tasks need at least a certain number of agents for completion.
\[assum:agents\_util\] Every agent $a_i$ only knows its own individual utility $u_i(t_j,p)$ with regard to every task $t_j$, while not being aware of those of [the]{} other agents. Through communication, however, they can notice which agent currently choses which task, i.e., *partition* (Definition \[def:partition\]). Note that the agents do not necessarily have to know the true partition information at all the time. Each agent owns its locally-known partition information.
Proposed Game-theoretical Approach: GRAPE
-----------------------------------------
Let us transform Problem \[prob\_basic\] into an anonymous hedonic game event where every agent selfishly tends to join a coalition according to its preference.
\[game\] An instance of *GRoup Agent Partitioning and placing Event* (GRAPE) is a tuple $(\mathcal{A}, \mathcal{T}, \mathcal{P})$ that consists of (1) ${ \mathcal{A} =\{a_1, a_2, ... , a_{n_a}\} }$, a set of $n_a$ agents; (2) ${ \mathcal{T}=\mathcal{T}^* \cup \{t_\phi\} }$, a set of tasks; and (3) ${ \mathcal{P}=(\mathcal{P}_1, \mathcal{P}_2, ... , \mathcal{P}_{n_a}) }$, an [$n_a$]{}-tuple of preference relations of the agents. For agent $a_i$, $\mathcal{P}_i$ describes its *preference relation* over the set of task-coalition pairs ${ \mathcal{X}=\mathcal{X}^* \cup \{t_\phi\} }$, where ${ \mathcal{X}^*=\mathcal{T}^*\times \{1,2,...,n_a\} }$; a task-coalition pair $(t_j,p)$ is interpreted as “to do task $t_j$ with $p$ participants”. For any task-coalition pairs ${ x_1, x_2 \in \mathcal{X} }$, ${ x_1 \succ_i x_2 }$ implies that agent $a_i$ strongly prefers $x_1$ to $x_2$, and ${ x_1 \sim_i x_2 }$ means that the preference regarding $x_1$ and $x_2$ is indifferent. Likewise, $\succeq _i$ indicates the weak preference of agent $a_i$.
Note that agent $a_i$’s preference relation can be derived from its individual utility $u_i(t_j, p)$ in Problem \[prob\_basic\]. For [instance]{}, given that $u_i(t_1,p_1) > u_i(t_2,p_2)$, it can be said that $(t_1,p_1) \succ_i (t_2,p_2)$.
\[def:partition\] Given an instance $(\mathcal{A},\mathcal{T},\mathcal{P})$ of GRAPE, a *partition* is defined as a set $\Pi = \{S_1,S_2,...,S_{n_t},S_\phi \}$ that disjointly partitions the agent set $\mathcal{A}$. Here, $S_j \subseteq \mathcal{A}$ is the *(task-specific) coalition* for executing task $t_j$ such that $\cup^{n_t}_{j=0}S_j=\mathcal{A}$ and $S_j \cap S_k = \emptyset$ for $j \neq k$. $S_\phi$ is the set of agents who choose the void task $t_\phi$. Note that this paper interchangeably uses $S_0$ to indicate $S_\phi$. Given a partition $\Pi$, $\Pi(i)$ indicates the index of the task to which agent $a_i$ is assigned. For [example]{}, $S_{\Pi(i)}$ is the coalition that the agent belongs to, i.e., ${S_{\Pi(i)}= \{S_j \in \Pi \mid a_i \in S_j\}}$.
The objective of GRAPE is to determine a stable partition that all the agents agree [with]{}. In this paper, we seek for a *Nash stable* partition, which is defined as follows:
\[Nash\_stable\] A partition $\Pi$ is said to be *Nash stable* if, for every agent ${ a_i \in \mathcal{A} }$, it holds that ${ (t_{\Pi(i)}, |S_{\Pi(i)}|) \succeq_i (t_j, |S_j \cup \{a_i\}|) }$, $\forall {S_j \in \Pi}$.
In other words, in a Nash stable partition, every agent prefers its current coalition to joining any of the other coalitions. Thus, no agent has any conflict with this partition, and no agent will unilaterally deviate from its current decision.
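The following sketch checks Definition \[Nash\_stable\] directly for a small instance; the equal-share utility is an assumed placeholder used only to exercise the check.

```python
def u(i, j, p, rewards=(0.0, 10.0, 8.0)):
    """Assumed individual utility: equal share of a task reward; task 0 is t_phi."""
    return 0.0 if j == 0 else rewards[j] / p

def is_nash_stable(assignment, n_t):
    """assignment[i] = task index chosen by agent a_i (0 = void task)."""
    counts = [assignment.count(j) for j in range(n_t + 1)]
    for i, j_cur in enumerate(assignment):
        u_cur = u(i, j_cur, counts[j_cur])
        for j in range(n_t + 1):
            # would a_i be strictly better off by unilaterally joining S_j?
            if j != j_cur and u(i, j, counts[j] + 1) > u_cur:
                return False
    return True

print(is_nash_stable([1, 1, 2, 2], n_t=2))   # True for the assumed utilities
print(is_nash_stable([1, 1, 1, 1], n_t=2))   # False: some agent prefers the emptier task
```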
The rationale behind the use of Nash stability [among]{} various stable solution concepts in hedonic games [@Sung2007; @Dreze1980; @Karakaya2011; @Aziz2012a] is that it can reduce communication burden between agents required to reach a social agreement. In the process of converging to a Nash stable partition, an agent does not need to get any permission from [the]{} other agents when it is willing to deviate. This property may not be the case for the other solution concepts. Therefore, each agent is only required to notify its altered decision without any negotiation. This fact can reduce [inter-agent]{} communication in the proposed approach.
SPAO Preference: Social Inhibition
----------------------------------
This section introduces the key condition, called *SPAO*, that enables our proposed approach to provide all the desirable properties described in Section \[sec:intro\], and then explains its implications.
\[SPAO\] Given an instance ${ (\mathcal{A},\mathcal{T},\mathcal{P}) }$ of GRAPE, it is said that the preference relation of agent $a_i$ with respect to task $t_j$ is *SPAO (Single-Peaked-At-One)* if it holds that, for every ${ (t_j,p) \in \mathcal{X}^* }$, ${ (t_j,p_1) \succeq _i (t_j,p_2) }$ for any ${p_1,p_2 \in \{1,...,n_a\} }$ such that ${ p_1 < p_2 }$. Besides, we say that an instance ${ (\mathcal{A},\mathcal{T},\mathcal{P})}$ of GRAPE is SPAO if the preference relation of every agent in $\mathcal{A}$ with respect to every task in ${\mathcal{T}^*}$ is SPAO.
For an example, suppose that $\mathcal{P}_i$ is such that $$(t_1,1) \succ_i (t_1,2) \succeq_i (t_1,3) \succ_i (t_2,1) \sim_i (t_1,4) \succ_i (t_2,2).$$ This preference relation indicates that agent $a_i$ has $(t_1,1) \succ_i (t_1,2) \succeq_i (t_1,3) \succ_i (t_1,4)$ for task $t_1$, and $(t_2,1) \succ_i (t_2,2)$ for task $t_2$. According to Definition \[SPAO\], the preference relation for each of the tasks holds SPAO. For another example, given that $$(t_1,1) \succ_i (t_1,2) \succeq_i (t_1,3) \succ_i (t_2,2) \sim_i (t_1,4) \succ_i (t_2,1),$$ the preference relation regarding task $t_1$ holds SPAO, whereas this is not the case for task $t_2$ because of $(t_2,2) \succ_i (t_2,1)$.
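Numerically, an individual utility induces a SPAO preference for a task exactly when it is non-increasing in the number of participants, which suggests the simple check sketched below; the two toy utilities are assumptions used only for illustration.

```python
def is_spao(u_i, n_t, n_a):
    """True if u_i(j, p) is non-increasing in p for every task j = 1..n_t."""
    return all(u_i(j, p) >= u_i(j, p + 1)
               for j in range(1, n_t + 1) for p in range(1, n_a))

u_share = lambda j, p: (0.0, 10.0, 8.0)[j] / p   # equal share of a reward: socially inhibited
u_peaked = lambda j, p: -abs(p - 3)              # prefers exactly three participants

print(is_spao(u_share, n_t=2, n_a=6))    # True  -> SPAO
print(is_spao(u_peaked, n_t=2, n_a=6))   # False -> the peak is not at one
```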
This paper only considers the case in which every agent has SPAO preference relations regarding all the given tasks. [Such a]{}gents prefer to execute a task with smaller number of collaborators, namely, they have *social inhibition*.
SPAO implies that an agent’s individual utility should be a monotonically decreasing function with respect to the size of a coalition. In practice, SPAO can often emerge. For instance, experimental and simulation results in [@Guerrero2012 Figures 3 and 4] show that the total work capacity resulted from cooperation of multiple robots does not proportionally increase due to interferences of the robots. In such a *non-superadditive* environment [@Shehory1999], assuming that an agent’s individual work efficiency is considered as its individual utility, the individual utility monotonically drops as the number of collaborators enlarges even though the social utility is increased. For another example, SPAO also arises when individual utilities are related with shared-resources. As more agents use the same resource simultaneously, their individual productivities become diminished (e.g., traffic affects travel times [@Nam2015] [@Johnson2016 Example 3]). As the authors in [@Shehory1999] pointed out, a non-superadditive case is more realistic than a superadditive case: agents in a superadditive environment always attempt to form the grand coalition whereas those in a non-superadditive case are willing to reduce unnecessary costs. Note that social utility functions are not restricted so that they can be either monotonic or non-monotonic.
The proposed framework can accommodate selfish agents who greedily follow their individual preferences as long as the preferences hold SPAO. This implies that the framework may be utilized for a combination of swarm systems from different organisations under the condition that the multiple systems satisfy SPAO.
Existence of and Convergence to a Nash Stable Partition {#sec:existence_NS}
-------------------------------------------------------
Let us prove that if an instance of GRAPE holds SPAO, there always exists a Nash stable partition and it can be found within polynomial time.
\[def:iteration\] This paper uses the term *iteration* to represent an iterative stage in which an arbitrary agent compares the set of selectable task-coalition pairs given an existing partition, and then determines whether or not to join another coalition including the void task one.
\[assum:mutex\] We assume that, at each iteration, a single agent exclusively makes a decision and updates the [current]{} partition if necessary. This paper refers to this agent as the *deciding agent* at the iteration. Based on the resultant partition, another deciding agent also performs the same process at the next iteration, and this process continues until every agent does not deviate from a specific partition, which is, in fact, a Nash stable partition. To implement this algorithmic process in practice, the agents need a *mutual exclusion* (or called *mutex*) algorithm to choose the deciding agent at each iteration. In this section, for simplicity of description, we assume that all the agents are fully-connected, by which they somehow select and know the deciding agent. However, in Section \[sec:algorithm\], we will present a distributed mutex algorithm that enables the proposed approach to be executed under a strongly-connected communication network even in an asynchronous manner.
\[lemma\_1\] Given an instance ${(\mathcal{A},\mathcal{T},\mathcal{P})}$ of GRAPE that is SPAO, suppose that a new agent $a_r \notin \mathcal{A}$ holding a SPAO preference relation with regard to every task in $\mathcal{T}$ joins ${(\mathcal{A},\mathcal{T},\mathcal{P})}$ in which a Nash stable partition is already established. Then, the new instance ${(\tilde{\mathcal{A}},\mathcal{T},\mathcal{P})}$, where ${\tilde{\mathcal{A}} = \mathcal{A} \cup \{a_r\} }$, also (1) satisfies SPAO; (2) contains a Nash stable partition; and (3) the maximum number of iterations required to re-converge to a Nash stable partition is ${|\tilde{\mathcal{A}}|}$.
Given a partition $\Pi$, for agent $a_i$, *the number of additional co-workers tolerable in its coalition* is defined as: $$\Delta_{\Pi(i)} := \min\limits_{{S}_j \in \Pi \setminus \{{S}_{\Pi(i)}\}} \max\limits_{\Delta \in \mathbb{Z}} \big\{ \Delta \mid (t_{\Pi(i)},|{S}_{\Pi(i)}| + \Delta) \succeq_i (t_j,|{S}_j \cup \{a_i\}|) \big\}.$$ Due to the SPAO preference relation, this value satisfies the following characteristics: (a) if $\Pi$ is Nash stable, for every agent $a_i$, it holds that $\Delta_{\Pi(i)} \ge 0$; (b) if $\Delta_{\Pi(i)} < 0$, then agent $a_i$ is willing to deviate to another coalition at the next iteration; and (c) for the agent $a_i$ who deviated at the last iteration and updated the partition as $\Pi'$, it holds that $\Delta_{\Pi'(i)} \ge 0$.
From Definition \[SPAO\], it is clear that the new instance ${ (\tilde{\mathcal{A}},\mathcal{T},\mathcal{P}) }$ still holds SPAO. Let ${\Pi_0}$ denote a Nash stable partition in the original instance ${ (\mathcal{A},\mathcal{T},\mathcal{P}) }$. When a new agent $a_r \notin \mathcal{A}$ decides to execute one of [the]{} tasks in $\mathcal{T}$ and creates a new partition ${ {\Pi}_1}$, it holds that $\Delta_{{\Pi}_1(r)} \ge 0$, as shown in (c). If there is no existing agent $a_q \in \mathcal{A}$ whose $\Delta_{{\Pi}_1(q)} < 0$, then the new partition ${ {\Pi}_1}$ is Nash stable.
Suppose that there exists at least an agent $a_q$ whose $\Delta_{{\Pi}_1(q)} < 0$. Then, the agent must be one of [the existing]{} members in the coalition that agent $a_r$ selected in the last iteration. As agent $a_q$ moves to another coalition and creates a new partition ${ {\Pi}_2}$, the previously-deviated agent $a_r$ [holds]{} $\Delta_{{\Pi}_2(r)} \ge 1$. In other words, an agent who deviates to a coalition and expels one of the existing agents in that coalition will not deviate again even if another agent joins the coalition in a next iteration. This implies that at most ${|\tilde{\mathcal{A}}|}$ of iterations are required to hold $\Delta_{\tilde{{\Pi}}(i)} \ge 0$ for every agent $a_i \in \tilde{\mathcal{A}}$, where the partition $\tilde{\Pi}$ is Nash stable.
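The quantity $\Delta_{\Pi(i)}$ used in this proof is easy to evaluate numerically; the sketch below does so for the equal-share utility assumed in the earlier snippets (again an illustrative assumption), and property (a) can be verified by evaluating it on a Nash stable assignment.

```python
def delta(i, assignment, u, n_t, n_a):
    """Largest number of extra co-workers agent a_i tolerates in its current
    coalition before it would rather join another coalition (may be negative)."""
    counts = [assignment.count(j) for j in range(n_t + 1)]
    cur = assignment[i]
    u_best_alt = max(u(i, j, counts[j] + 1) for j in range(n_t + 1) if j != cur)
    feasible = [d for d in range(1 - counts[cur], n_a + 1)
                if u(i, cur, counts[cur] + d) >= u_best_alt]
    return max(feasible) if feasible else None

u = lambda i, j, p: 0.0 if j == 0 else (0.0, 10.0, 8.0)[j] / p   # assumed utility
nash = [1, 1, 2, 2]                                              # Nash stable for this utility
print([delta(i, nash, u, n_t=2, n_a=4) for i in range(4)])       # every entry is >= 0
```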
Lemma \[lemma\_1\] is essential not only for the existence of and convergence to a Nash stable partition but also for fast adaptability to dynamic environments.
\[NASH\] If ${ (\mathcal{A},\mathcal{T},\mathcal{P}) }$ is an instance of GRAPE holding SPAO, then a Nash stable partition always exists.
This theorem will be proved by induction. Let $M(n)$ be the following mathematical statement: [for]{} $|\mathcal{A}| = n$, if an instance ${ (\mathcal{A},\mathcal{T},\mathcal{P}) }$ of GRAPE is SPAO, then there exists a Nash stable partition.
*Base case*: When ${n=1}$, there is only one agent in an instance. This agent is allowed to participate in its most preferred coalition, and the resultant partition is Nash stable. Therefore, $M(1)$ is true.
*Induction hypothesis*: Assume that ${M(k)}$ is true for a positive integer $k$ such that ${|\mathcal{A}|=k}$. *Induction step*: Suppose that a new agent ${a_i \notin \mathcal{A}}$ whose preference relation regarding every task in $\mathcal{T}$ is SPAO joins the instance ${ (\mathcal{A},\mathcal{T},\mathcal{P}) }$. This induces a new instance ${ (\tilde{\mathcal{A}},\mathcal{T},\mathcal{P}) }$ where ${\tilde{\mathcal{A}} = \mathcal{A} \cup \{a_i\} }$ and ${ |\tilde{\mathcal{A}}|=k+1}$. From Lemma \[lemma\_1\], it is clear that the new instance also satisfies SPAO and has a Nash stable partition ${ \tilde{\Pi} }$. Consequently, ${M(k+1)}$ is true. By mathematical induction, ${M(n)}$ is true for all positive integers $n \ge 1$.
\[Nash\_poly\] If ${ (\mathcal{A},\mathcal{T},\mathcal{P}) }$ is an instance of GRAPE holding SPAO, then the number of iterations required to determine a Nash stable partition is at most ${ |\mathcal{A}|\cdot(|\mathcal{A}|+1)/2}$.
Suppose that, given a Nash stable partition in an instance where there exists only one agent, we add another arbitrary agent and find a Nash stable partition for this new instance, and repeat the procedure until all the agents in $\mathcal{A}$ are included. From Lemma \[lemma\_1\], if a new agent joins an instance in which the current partition is Nash stable, then the maximum number of iterations required to find a new Nash stable partition is the number of the existing agents plus [one]{}. Therefore, it is trivial that the maximum number of iterations to find a Nash stable partition of an instance ${ (\mathcal{A},\mathcal{T},\mathcal{P}) }$ is given as $${\sum_{k=1}^{|\mathcal{A}|} k = |\mathcal{A}| \cdot (|\mathcal{A}|+1)/2}.$$
Note that this polynomial-time convergence still holds even if the agents are initialized to a random partition. Suppose that we have the following setting: all the agents in $\mathcal{A}$ are at first not movable from the existing partition, except a set of free agents $\mathcal{A'} \subseteq \mathcal{A}$; whenever the agents in $\mathcal{A}'$ find a Nash stable partition $\Pi'$, one arbitrary agent $a_r \in \mathcal{A} \setminus \mathcal{A}'$ additionally becomes liberated and deviates from its current coalition $S_{\Pi'(r)}$ to another coalition in $\Pi'$. In this setting, from the viewpoint of the agents in $\mathcal{A}' \setminus S_{\Pi'(r)}$, the newly liberated agent is considered as a new agent, as in Lemma \[lemma\_1\]. Accordingly, we can still utilize the lemma for the agents in $\mathcal{A}' \setminus S_{\Pi'(r)} \cup \{a_r\}$. The agents can also find a Nash stable partition if one of them moves to $S_{\Pi'(r)}$ during the process, because, due to $a_r$, it holds that $\Delta_{\Pi'(i)} \ge 1$ for every agent $a_i \in S_{\Pi'(r)} \setminus \{a_r\}$. In a nutshell, the agents $\mathcal{A}' \cup \{a_r\}$ can converge to a Nash stable partition within $|\mathcal{A}' \cup \{a_r\}|$ iterations, which is equivalent to Lemma \[lemma\_1\]. Hence, Theorem \[NASH\] and this theorem are also valid for the case when the initial partition of the agents is randomly given.
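The bound of Theorem \[Nash\_poly\] can also be probed empirically. The sketch below runs the sequential process (one agent revises its choice at a time, in a random order) with an assumed equal-share SPAO utility and counts the coalition changes until no agent wants to deviate; for such instances the count stays far below $n_a(n_a+1)/2$, in line with the conservativeness of the bound.

```python
import random

def converge(n_a, n_t, rewards, seed=0):
    """Sequential revision with an assumed equal-share utility.
    Returns the number of coalition changes and the final (Nash stable) assignment."""
    rng = random.Random(seed)
    u = lambda i, j, p: 0.0 if j == 0 else rewards[j] / p
    assign, changes = [0] * n_a, 0
    while True:
        moved = False
        for i in rng.sample(range(n_a), n_a):            # random revision order
            counts = [assign.count(j) for j in range(n_t + 1)]
            cur = assign[i]
            size = lambda j: counts[j] + (0 if j == cur else 1)
            best = max(range(n_t + 1), key=lambda j: u(i, j, size(j)))
            if u(i, best, size(best)) > u(i, cur, counts[cur]):
                assign[i] = best
                changes += 1
                moved = True
        if not moved:                                    # a full pass without deviation: Nash stable
            return changes, assign

changes, assign = converge(n_a=20, n_t=3, rewards=[0.0, 20.0, 15.0, 10.0])
print(changes, "coalition changes; the bound n_a(n_a+1)/2 =", 20 * 21 // 2)
```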
[Decentralized]{} Algorithm {#sec:algorithm}
---------------------------
In the previous section, it was assumed that only one agent is somehow chosen to make a decision at each iteration under the fully-connected network. On the contrary, in this section, we propose a decentralized algorithm, as shown in Algorithm \[algorithm\], in which every agent makes decisions based on its local information and affects its neighbors simultaneously under a strongly-connected network. Despite that, we show that Theorems \[NASH\] and \[Nash\_poly\] still hold thanks to our proposed distributed mutex subroutine shown in Algorithm \[algorithm:async\]. The details of the decentralized main algorithm are as follows.
*Algorithm \[algorithm\]* (decentralized GRAPE, executed by each agent $a_i$):

*// Initialisation*
$\mathsf{satisfied} \leftarrow false$; $r^i \leftarrow 0$; $s^i \leftarrow 0$ \[line:local\_variable\_1\]
$\Pi^i \leftarrow \{S_{\phi} = \mathcal{A}, S_j = \phi \ \forall t_j \in \mathcal{T}\}$ \[line:local\_variable\_2\]

*// Decision-making process begins (the following steps are repeated)*
*// Make a new decision if necessary*
*if* $\mathsf{satisfied} = false$ *then* \[line:decision\_making\]
  $(t_{j*},|S_{j*}|) \leftarrow \arg\max_{\forall S_j \in \Pi^i} (t_j, |S_j \cup \{a_i\}|)$ \[line:choose\_best\]
  *if* $(t_{j*},|S_{j*} \cup \{a_i\}|) \succ_i (t_{\Pi^i(i)},|S_{\Pi^i(i)}|)$ *then* \[line:decision\_1\]
    Join $S_{j*}$ and update $\Pi^i$
    $r^i \leftarrow r^i + 1$ \[line:iteration\_increase\]
    $s^i \in \mathrm{unif}[0,1]$ \[line:decision\_2\]
  *end if*
  $\mathsf{satisfied} \leftarrow true$ \[line:satisfied\]
*end if* \[line:decision\_making2\]

*// Broadcast the local information to neighbor agents*
Broadcast $M^i = \{r^i, s^i, \Pi^i\}$ and receive $M^k$ from its neighbors $\forall a_k \in \mathcal{N}_i$ \[line:communication\]

*// Select the valid partition from all the received messages*
Construct $\mathcal{M}^i_{rcv}=\{M^i, \forall M^k\}$
$\{r^i, s^i, \Pi^i\}, \mathsf{satisfied} \leftarrow$ the distributed mutex subroutine of Algorithm \[algorithm:async\] applied to $\mathcal{M}^i_{rcv}$ \[line:decision\_making\_end\]
Each agent $a_i$ has local variables such as $\Pi^i$, $\mathsf{satisfied}$, $r^i$, and $s^i$ (Line \[line:local\_variable\_1\]–\[line:local\_variable\_2\]). Here, $\Pi^i$ is the agent’s locally-known partition; $\mathsf{satisfied}$ is a binary variable that indicates whether or not the agent [is satisfied]{} with $\Pi^i$ [such that it does not want to deviate from its current coalition]{}; $r^i \in \mathbb{Z}^+$ is an integer variable to represent how many times $\Pi^i$ has evolved (i.e., the number of iterations happened for updating $\Pi^i$ until that moment); and $s^i \in [0,1]$ is a uniform-random variable that is generated when[ever]{} $\Pi^i$ is newly updated (i.e., a random time stamp). Given $\Pi^i$, agent $a_i$ examines which coalition is the most preferred [among]{} others, assuming that [the]{} other agents remain at the existing [coalitions]{} (Line \[line:choose\_best\]). Then, the agent joins the newly found coalition if it is strongly preferred than [its current]{} coalition. In this case, the agent updates $\Pi^i$ to reflect its new decision, increases $r^i$, and generates a new random time stamp $s^i$ (Line \[line:decision\_1\]–\[line:decision\_2\]). In any case, since the agent ascertained that the currently-selected coalition is the most preferred, the agent becomes satisfied with $\Pi^i$ (Line \[line:satisfied\]). Then, agent $a_i$ generates [and sends]{} a message $M^i := \{r^i, s^i, \Pi^i\}$ to [its]{} [neighbor]{} agents, and vice versa (Line \[line:communication\]).
Since every agent locally updates its locally-known partition simultaneously, one of the partitions should be regarded as if it were the partition updated by a deciding agent at the previous iteration. We refer to this partition as *the valid partition* at the iteration. The distributed mutex subroutine in Algorithm \[algorithm:async\] enables the agents to recognize the valid partition among all the locally-known current partitions even under a strongly-connected network and in asynchronous environments. Before executing this subroutine, each agent $a_i$ collects all the messages received from its neighbor agents $\forall a_k \in \mathcal{N}_i$ (including $M^i$) as $\mathcal{M}^i_{rcv}= \{M^i, \forall M^k\}$. Using this message set, the agent examines whether or not its own partition $\Pi^i$ is valid. If there exists any other partition $\Pi^k$ such that $r^k > r^i$, then the agent considers $\Pi^k$ more valid than $\Pi^i$. This also happens if $r^k = r^i$ and $s^k > s^i$, which indicates the case where $\Pi^k$ and $\Pi^i$ have evolved the same number of times, but the former has a higher time stamp. Since $\Pi^k$ is considered as more valid, agent $a_i$ will need to re-examine whether there is a more preferred coalition given $\Pi^k$ in the next iteration. Thus, the agent sets $\mathsf{satisfied}$ as $false$ (Line \[alg\_mutex:line1\]–\[alg\_mutex:line2\] in Algorithm \[algorithm:async\]). After completing this subroutine, depending on $\mathsf{satisfied}$, each agent proceeds the decision-making process again (i.e., Line \[line:decision\_making\]–\[line:decision\_making2\] in Algorithm \[algorithm\]) and/or just broadcasts the existing locally-known partition to its neighbor agents (Line \[line:communication\] in Algorithm \[algorithm\]).
*Algorithm \[algorithm:async\]* (distributed mutex subroutine, input: $\mathcal{M}^i_{rcv}$):

$\mathsf{satisfied} \leftarrow true$ \[alg\_mutex:line1\]
*for* each $M^k \in \mathcal{M}^i_{rcv}$ *do*
  *if* $r^k > r^i$, or $r^k = r^i$ and $s^k > s^i$, *then*
    $r^i \leftarrow r^k$; $s^i \leftarrow s^k$; $\Pi^i \leftarrow \Pi^k$
    $\mathsf{satisfied} \leftarrow false$ \[alg\_mutex:line2\]
  *end if*
*end for*
*return* $\{r^i, s^i, \Pi^i\}$, $\mathsf{satisfied}$
In a nutshell, the distributed mutex algorithm makes sure that there is only one valid partition that dominates (or will finally dominate, depending on the communication network) any other partition. In other words, multiple partitions evolve locally, but only one of them eventually survives as long as a strongly-connected network is given. From each partition’s viewpoint, it can be regarded as being evolved by a random sequence of the agents under the fully-connected network. Thus, the partition becomes Nash stable within the polynomial time shown in Theorem \[Nash\_poly\]. In an extreme case, we may encounter multiple Nash stable partitions at the very end. Nevertheless, thanks to the mutex algorithm, one of them can be distributedly selected by the agents. All these features imply that agents using Algorithm \[algorithm\] can find a Nash stable partition in a decentralized manner and that Theorems \[NASH\] and \[Nash\_poly\] still hold.
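To illustrate Algorithms \[algorithm\] and \[algorithm:async\] end to end, the following self-contained sketch simulates the decentralized process on a bidirectional ring network. The equal-share utilities (hence SPAO), the network, and the synchronous round structure are simplifying assumptions made only for demonstration; each agent exchanges $\{r^i, s^i, \Pi^i\}$ with its neighbors and the simulation stops once every agent is satisfied with one common partition.

```python
import random

n_a, n_t = 10, 3
rewards = [0.0, 20.0, 15.0, 10.0]                 # index 0 is the void task t_phi (assumed)

def u(i, j, p):
    """Assumed individual utility (equal share of the task reward): SPAO."""
    return 0.0 if j == 0 else rewards[j] / p

# strongly-connected example network: a bidirectional ring
neighbors = {i: [(i - 1) % n_a, (i + 1) % n_a] for i in range(n_a)}

# local state of each agent: locally-known assignment, counter r, time stamp s, satisfied flag
state = [{"assign": [0] * n_a, "r": 0, "s": 0.0, "sat": False} for _ in range(n_a)]

def decide(i):
    """Decision-making block of Algorithm 1 for agent a_i."""
    st = state[i]
    if st["sat"]:
        return
    counts = [st["assign"].count(j) for j in range(n_t + 1)]
    cur = st["assign"][i]
    size = lambda j: counts[j] + (0 if j == cur else 1)
    best = max(range(n_t + 1), key=lambda j: u(i, j, size(j)))
    if u(i, best, size(best)) > u(i, cur, counts[cur]):
        st["assign"][i] = best                     # join the strictly preferred coalition
        st["r"] += 1
        st["s"] = random.random()
    st["sat"] = True

def mutex(i):
    """Algorithm 2: adopt the dominant (most evolved, then highest stamp) partition."""
    st = state[i]
    for k in neighbors[i]:
        nb = state[k]
        if (nb["r"], nb["s"]) > (st["r"], st["s"]):
            st["assign"], st["r"], st["s"] = nb["assign"][:], nb["r"], nb["s"]
            st["sat"] = False

for rnd in range(1000):                            # generous cap; convergence is expected far earlier
    for i in range(n_a):
        decide(i)
    for i in range(n_a):
        mutex(i)
    if all(st["sat"] for st in state) and len({tuple(st["assign"]) for st in state}) == 1:
        print(f"converged after {rnd + 1} rounds:", state[0]["assign"])
        break
```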
Analysis {#Analysis}
========
Algorithmic Complexity (Scalability) {#sec:algorithm_complexity}
------------------------------------
Firstly, let us discuss the running time for the proposed framework to find a Nash stable partition. This paper refers to the unit time required for each agent to execute the main loop of Algorithm \[algorithm\] (Line \[line:decision\_making\]-\[line:decision\_making\_end\]) as a *time step*. Depending on the communication network considered, especially if it is not fully-connected, it may be possible that some of the given agents have to execute this loop just to propagate their locally-known partition information without affecting $r_i$ as in Line \[line:iteration\_increase\]. Because this process also spends a unit time step, we call it a *dummy iteration* to distinguish it from a *(normal) iteration*, which increases $r_i$.
Notice that such dummy iterations happen [sequentially]{} at most $d_G$ times before a normal iteration occurs, where $d_G$ is the graph diameter of the communication network. Hence, thanks to Theorem \[Nash\_poly\], the total required time steps until finding a Nash stable partition is $O(d_G n_a^2)$. For the fully-connected network case, it becomes $O(n_a^2)$ because of $d_G = 1$. Note that this algorithmic complexity is less than that of the [centralized]{} algorithm, i.e., $O(n_a^2 \cdot n_t)$, in [@Darmann2015].
Every agent at each iteration investigates ${n_t+1}$ of selectable task-coalition pairs including [$t_{\phi}$]{} given a locally-known valid partition (as shown in Line \[line:choose\_best\] in Algorithm \[algorithm\]). Therefore, the computational overhead for an agent is ${O(n_t)}$ per any iteration. With consideration of the total required time steps, the running time of the proposed approach for an agent can be bounded by ${O(d_G n_t n_a^2)}$. Note that the running time in practice can be much less than the bound since Theorem \[Nash\_poly\] was conservatively [analyze]{}d, as described in the following remark.
\[remark:num\_iter\_practice\] Algorithm \[algorithm\] allows the entire agents in $\mathcal{A}$ to [be involved in]{} the decision-making process, whereas, in the proof for Theorem \[Nash\_poly\], a new agent can be involved after a Nash stable partition of existing agents is found. Since agents using Algorithm \[algorithm\] do not need to find every Nash stable partition for each subset of the agents, unnecessary iterations can be reduced. Hence, the number of required iterations in practice may become less than that shown in Theorem \[Nash\_poly\], which is also supported by the experimental results in Section [\[sec:result\_scalability\]]{}.
Let us now discuss the communication overhead for each agent per iteration. Given a network, agent $a_i$ should communicate with its $|\mathcal{N}_i|$ neighbors, and the size of each message grows with $n_a$. Hence, the communication overhead of the agent is $O(|\mathcal{N}_i| \cdot n_a)$. It could be quadratic if $|\mathcal{N}_i|$ increases in proportion to $n_a$. However, this would rarely happen in practice due to the spatial distribution of agents and physical limits on communication such as range limitations. Instead, $|\mathcal{N}_i|$ would most likely saturate in practice.
\[remark:comm\_overhead\] To reduce the communication overhead, we may impose *the maximum number of transactions per iteration*, denoted by $n_c$, on each agent. Even so, Theorems \[NASH\] and \[Nash\_poly\] are still valid as long as the union of underlying graphs of the communication networks over time intervals becomes connected. However, in return, the number of dummy iterations may increase, so does the framework’s running time. In an extreme case where $n_c = 1$ (i.e., unicast mode), dummy iterations may happen in a row at most $n_a$ times. Thus, the total required time steps until finding a Nash stable partition could be $O(n_a^3)$, whereas the communication overhead is $O(n_a)$. In short, the running time of the framework can be traded off against the communication overhead for each agent per iteration.
Suboptimality {#sec:suboptimality}
-------------
This section investigates the *suboptimality lower bound* (or can be called *approximation ratio*) of the proposed framework in terms of the global utility, i.e., the objective function in Equation (\[eqn:obj\_ftn\]). Given a partition $\Pi$, the global utility value can be equivalently rewritten as $$\label{Objective_function}
J = \sum_{\forall a_i \in \mathcal{A}} u_i(t_{\Pi(i)},|S_{\Pi(i)}|).$$ Note that we can simply derive $\{x_{ij}\}$ for Equation (\[eqn:obj\_ftn\]) from $\Pi$ for Equation (\[Objective\_function\]), and vice versa. Let ${J_{GRAPE}}$ and $J_{OPT}$ represent the global utility of a Nash stable partition obtained by the proposed framework and the optimal value, respectively. This paper refers to the fraction of ${J_{GRAPE}}$ with respect to $J_{OPT}$ as the *suboptimality* of GRAPE, denoted by $\alpha$, i.e., $$\label{eqn:approx_ratio}
\alpha := J_{GRAPE}/J_{OPT}.$$
The lower bound of the suboptimality can be determined by the following theorem.
\[OPT\_Bound\] Given a Nash stable partition ${\Pi}$ obtained by GRAPE, its suboptimality in terms of the global utility is lower bounded as follows: $$\label{Eq_OPT_1}
\alpha \ge J_{GRAPE}/(J_{GRAPE}+\lambda),$$ where $$\label{eqn:lambda}
\lambda \equiv \sum_{\forall S_j \in \Pi} \max_{a_i \in \mathcal{A}, p \le |\mathcal{A}|} \big\{ p \cdot \big[ u_i(t_j,p) - u_i(t_j,|S_j \cup \{a_i\}|) \big] \big\} $$
Let ${\Pi^{*}}$ denote the optimal partition for the objective function in Equation (\[Objective\_function\]). Given a Nash stable partition $\Pi$, from Definition \[Nash\_stable\], it holds that, $\forall a_i \in \mathcal{A}$, $$\label{Eq_OPT_2}
u_i (t_{\Pi(i)},|S_{\Pi(i)}|) \ge u_i ( t^{*}_{j \gets i} , | S_{j} \cup \{a_i\} | ) ,$$ where ${t^{*}_{j \gets i}}$ indicates task ${t_j \in \mathcal{T}}$ to which agent ${a_i}$ should have joined according to the optimal partition $\Pi^*$; and ${S_{j} \in \Pi}$ is the coalition for task ${t_j}$ whose participants follow the Nash stable partition $\Pi$. The right-hand side of the inequality in Equation (\[Eq\_OPT\_2\]) can be rewritten as $$\begin{split}
u_i ( & t^{*}_{j \gets i} , | S_{j} \cup \{a_i\} | ) = u_i ( t^{*}_{j \gets i} , | S^{*}_{j}| ) - \\
& \big\{ u_i ( t^{*}_{j \gets i} , | S^{*}_{j}| ) - u_i ( t^{*}_{j \gets i} , | S_{j} \cup \{a_i\} | ) \big\},
\label{Eq_OPT_3}
\end{split}$$ where ${S^{*}_{j} \in \Pi^{*}}$ is the ideal coalition of task ${t^{*}_{j \gets i}}$ that [maximize]{}s the objective function.
By summing over all the agents, the inequality in Equation (\[Eq\_OPT\_2\]) can be rewritten as
$$\begin{split}
\sum_{\forall a_i \in \mathcal{A}} u_i (t_{\Pi(i)},|S_{\Pi(i)}|) \ge\ & \sum_{\forall a_i \in \mathcal{A}} u_i ( t^{*}_{j \gets i} , | S^{*}_{j}| ) \\
& - \sum_{\forall a_i \in \mathcal{A}} \big\{ u_i ( t^{*}_{j \gets i} , | S^{*}_{j}| ) - u_i ( t^{*}_{j \gets i} , | S_{j} \cup \{a_i\} | ) \big\}.
\label{Eq_OPT_4}
\end{split}$$
The left-hand side of the inequality in Equation (\[Eq\_OPT\_4\]) represents the objective function value of the Nash stable partition $\Pi$, i.e., $J_{GRAPE}$, and the first term of the right-hand side is the optimal value, i.e., $J_{OPT}$. The second term on the right-hand side can be interpreted as the summation of the utility loss of each agent caused by its belated decision to join its optimal task, provided that the other agents still follow the Nash stable partition. The upper bound of the second term is given by $$\sum_{j=1}^{n_t} | S^{*}_{j}| \cdot \max_{a_i \in S^{*}_j} \{ u_i ( t^{*}_{j \gets i} , | S^{*}_{j}| ) - u_i ( t^{*}_{j \gets i} , | S_{j} \cup \{a_i\} | ) \}.
\label{Eq_OPT_5}$$ This is at most $$\sum_{\forall S_j \in \Pi} \max_{a_i \in \mathcal{A}, p \le |\mathcal{A}|} L_{ij}[p] \equiv \lambda,
\label{Eq_OPT_6}$$ where $L_{ij}[p] = p \cdot ( u_i ( t_{j} , p ) - u_i ( t_{j} , | S_{j} \cup \{a_i\} | ) )$.
Hence, the inequality in Equation (\[Eq\_OPT\_4\]) can be rewritten as $$J_{GRAPE} \ge J_{OPT} - \lambda.$$ Rearranging this as $J_{OPT} \le J_{GRAPE} + \lambda$ and dividing $J_{GRAPE}$ by both sides yields the suboptimality lower bound of the Nash stable partition, as given by Equation (\[Eq\_OPT\_1\]).
Although Theorem \[OPT\_Bound\] does not provide a fixed-value lower bound, the bound can be computed as long as a Nash stable partition and the agents’ individual utility functions are given. Nevertheless, as a special case, if the social utility for any coalition is non-decreasing (or monotonically increasing) in terms of the number of co-working agents, then we can obtain a fixed-value lower bound for the suboptimality of a Nash stable partition.
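As a concrete illustration, the sketch below evaluates the bound of Theorem \[OPT\_Bound\] from a given Nash stable partition. It is only an illustrative sketch: the callable `u(i, j, p)`, the dictionary-based partition, and the function name are assumptions of the sketch, not part of GRAPE itself.

```python
def suboptimality_lower_bound(u, partition, agents):
    """Evaluate the bound of Theorem [OPT_Bound]: alpha >= J / (J + lambda).

    u(i, j, p): individual utility of agent i for task j with p participants.
    partition : dict mapping each task j to the set of agents assigned to it
                (a Nash stable partition returned by GRAPE).
    agents    : iterable of all agent identifiers.
    """
    agents = list(agents)
    n_a = len(agents)
    # global utility J_GRAPE of the given partition, Eq. (Objective_function)
    j_grape = sum(u(i, j, len(s)) for j, s in partition.items() for i in s)
    # lambda of Eq. (eqn:lambda): worst-case p-weighted utility drop per coalition
    lam = sum(
        max(p * (u(i, j, p) - u(i, j, len(s | {i})))
            for i in agents for p in range(1, n_a + 1))
        for j, s in partition.items()
    )
    return j_grape / (j_grape + lam)
```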
\[optimality\] Given an instance $(\mathcal{A},\mathcal{T},\mathcal{P})$ of GRAPE, if (i) the social utility for any coalition is non-decreasing with regard to the number of participants, i.e., for any $S_j \subseteq \mathcal{A}$ and $a_l \in \mathcal{A}\setminus S_j$, it holds that $$\sum_{\forall a_i \in S_j} u_i(t_j, |S_j|) \le \sum_{\forall a_i \in S_j \cup \{a_l\}} u_i(t_j, |S_j \cup \{a_l\}|),$$ and (ii) all the individual utilities can be transformed to SPAO preference relations, then a Nash stable partition $\Pi$ obtained by GRAPE provides at least $50\%$ of the optimal global utility.
Firstly, we introduce some definitions and notation that facilitate the description of this proof. Given a partition $\Pi$ of an instance $(\mathcal{A},\mathcal{T},\mathcal{P})$, the global utility is denoted by $$\label{eqn:social_util}
\begin{split}
V(\Pi) := \sum_{\forall a_i \in \mathcal{A}} u_i(t_{\Pi(i)},|S_{\Pi(i)}|).
\end{split}
$$
We use the operator $\oplus$ as follows. Given any two partitions $\Pi^{A} = \{S^A_0, ..., S^A_{n_t}\}$ and $\Pi^{B} = \{S^B_0, ..., S^B_{n_t}\}$, $$\Pi^{A} \oplus \Pi^{B} := \{S_0^A \cup S_0^B, \ S_1^A \cup S_1^B, ..., \ S_{n_t}^A \cup S_{n_t}^B \}.$$ Since $\cup^{n_t}_{j=0}S^A_j=\cup^{n_t}_{j=0}S^B_j=\mathcal{A}$, the same agent $a_i$ may appear in two different coalitions of $\Pi^{A} \oplus \Pi^{B}$. For instance, suppose that $\Pi^{A} = \{ \{a_1\}, \{a_2\}, \{a_3\}\}$ and $\Pi^{B} = \{ \emptyset, \{a_1, a_3\}, \{a_2\}\}$. Then, $\Pi^{A} \oplus \Pi^{B} = \{ \{a_1\}, \{a_1, a_2, a_3\}, \{a_2, a_3\}\}$. We regard such an agent as two different agents in $\Pi^{A} \oplus \Pi^{B}$. Accordingly, the operation may increase the total number of agents in the resultant partition.
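For illustration only, the $\oplus$ operation can be sketched as follows, with partitions represented as lists of agent-id sets indexed by task (a representation assumed for this sketch):

```python
def oplus(partition_a, partition_b):
    """Coalition-wise union of two partitions (the operator defined above);
    an agent appearing in two different coalitions of the result is regarded
    as two distinct agents when counting."""
    return [s_a | s_b for s_a, s_b in zip(partition_a, partition_b)]

# the example from the text
pi_a = [{"a1"}, {"a2"}, {"a3"}]
pi_b = [set(), {"a1", "a3"}, {"a2"}]
assert oplus(pi_a, pi_b) == [{"a1"}, {"a1", "a2", "a3"}, {"a2", "a3"}]
```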
Using the definitions described above, condition (i) implies that $$\label{eqn:socialutil_nondec}
V(\Pi^A) \le V(\Pi^A \oplus \Pi^B).$$
From now on, we will show that $\frac{1}{2} V(\Pi^*) \le V(\hat{\Pi})$, where $\Pi^{*} = \{S^*_0, S^*_1, ...,S^*_{n_t}\}$ is an optimal partition and $\hat{\Pi} = \{\hat{S}_0, \hat{S}_1, ...,\hat{S}_{n_t}\}$ is a Nash stable partition. By doing so, this theorem can be proved. From the definition in Equation (\[eqn:social\_util\]), it can be said that $$\label{eqn:eqn_15}
\begin{split}
V(\hat{\Pi} \oplus \Pi^*)= & \sum_{\forall {a}_i \in {\mathcal{A}}} u_i(t_{\hat{\Pi}(i)}, |\hat{S}_{\hat{\Pi}(i)} \cup {S}^*_{\hat{\Pi}(i)}|) \\
& + \sum_{\forall {a}_i \in {\mathcal{A}}^{-}} u_i(t_{\Pi^*(i)}, |\hat{S}_{\Pi^*(i)} \cup {S}^*_{\Pi^*(i)}|),
\end{split}$$ where $\mathcal{A}^-$ is the set of agents whose decisions follow the optimal partition $\Pi^*$ rather than the Nash stable partition $\hat{\Pi}$. Due to condition (ii), the first term of the right-hand side in Equation (\[eqn:eqn\_15\]) is no more than $$\sum_{\forall {a}_i \in {\mathcal{A}}} u_i(t_{\hat{\Pi}(i)}, |\hat{S}_{\hat{\Pi}(i)}|) \equiv V(\hat{\Pi}).$$ Likewise, the second term is also at most $$\sum_{\forall {a}_i \in {\mathcal{A}}^{-}} u_i(t_{\Pi^*(i)}, |\hat{S}_{{\Pi}^*(i)} \cup \{a_i\}|).$$ By the definition of Nash stability (i.e., for every agent $a_i \in \mathcal{A}$, $u_i(t_{\hat{\Pi}(i)}, |\hat{S}_{\hat{\Pi}(i)}|) \ge u_i(t_j, |\hat{S}_j \cup \{a_i\}|)$, $\forall {\hat{S}_j \in \hat{\Pi}}$), the above expression is at most $$\sum_{\forall {a}_i \in {\mathcal{A}}^{-}} u_i(t_{\hat{\Pi}(i)}, |\hat{S}_{\hat{\Pi}(i)}|),$$ which, because $\mathcal{A}^- \subseteq \mathcal{A}$, is also no more than $$\sum_{\forall {a}_i \in {\mathcal{A}}} u_i(t_{\hat{\Pi}(i)}, |\hat{S}_{\hat{\Pi}(i)}|) \equiv V(\hat{\Pi}).$$
Accordingly, the left-hand side of Equation (\[eqn:eqn\_15\]) satisfies the following inequality: $$V(\hat{\Pi} \oplus \Pi^*) \le 2 V(\hat{\Pi}).$$ Thanks to Equation (\[eqn:socialutil\_nondec\]), it follows that $$V(\Pi^*) \le V(\hat{\Pi} \oplus \Pi^*).$$ Therefore, $V(\Pi^*) \le 2 V(\hat{\Pi})$, which completes the proof.
Adaptability
------------
Our proposed framework is also adaptable to dynamic environments such as the unexpected addition or loss of agents or tasks, owing to its fast convergence to a Nash stable partition. Thanks to Lemma \[lemma\_1\], if a new agent joins an ongoing mission in which an assignment has already been determined, the number of iterations required to converge to a new Nash stable partition is at most the total number of agents. Responding to any environmental change, the framework is able to establish a new agreed task assignment within polynomial time.
[Robustness in Asynchronous Environments]{}
-------------------------------------------
In the proposed framework, at every iteration, each agent neither needs to wait until nor to ensure that its locally-known information has been propagated to a certain neighbor group. Instead, as described in Remark \[remark:comm\_overhead\], it is enough for the agent to receive the local information from one of its neighbors, make a decision, and send the updated partition back to some of its neighbors. Temporary disconnection or non-operation of some agents may cause additional dummy iterations. However, this does not affect the existence of, the convergence toward, or the suboptimality of a Nash stable partition under the proposed framework, which is also supported by Section \[sec:result\_robust\].
GRAPE with Minimum Requirements {#sec:min_rqmt}
===============================
This section addresses another task allocation problem where each task may require at least a certain number of agents for its completion. This problem can be defined as follows.
\[prob\_minrqmt\] Given a set of agents $\mathcal{A}$ and a set of tasks $\mathcal{T}$, the objective is to find an assignment such that $$\label{eqn:obj_minrqmt}
\max_{\{x_{ij}\}} \sum_{\forall a_i \in \mathcal{A}} \sum_{\forall t_j \in \mathcal{T}} u_{i}(t_j, p) x_{ij},$$ subject to $$\label{eqn:prob_minrqmt_min_rqmt}
\sum_{\forall a_i \in \mathcal{A}} x_{ij} \ge R_j, \quad \forall t_j \in \mathcal{T},$$ $$\sum_{\forall t_j \in \mathcal{T}} x_{ij} \le 1, \quad \forall a_i \in \mathcal{A},$$ $$x_{ij} \in \{0,1\}, \quad \forall (a_i,t_j) \in \mathcal{A} \times \mathcal{T},$$ where $R_j \in \mathbb{N} \cup \{0\}$ is the minimum number of required agents for task $t_j$, and all the other variables are defined identically to those in Problem \[prob\_basic\]. Here, it is considered that, for $\forall a_i \in \mathcal{A}$ and $\forall t_j \in \mathcal{T}$, $$\label{eqn:no_reward}
u_i (t_j, p) = 0 \quad \text{if $p < R_j$}$$ because task $t_j$ cannot be completed in this case. Note that any task $t_j$ without such a requirement is regarded as having $R_j = 0$.
For each task $t_j$ having $R_j > 0$, even if $u_i(t_j,p)$ is monotonically decreasing for $p \ge R_j$, the individual utility cannot be simply transformed to a preference relation holding SPAO because of Equation (\[eqn:no\_reward\]). Thus, we need to modify the utility function to yield alternative values for the case when $p < R_j$. We refer to the modified utility as the *auxiliary individual utility* $\tilde{u}_i$, which is defined as $$\label{eqn:aux_util}
\tilde{u}_{i}(t_j, p) = \begin{cases}
{u}^0_{i}(t_j,p) & \text{if $p \le R_j$} \\
{u}_{i}(t_j, p) & \text{otherwise,}
\end{cases}$$ where ${u}^0_{i}(t_j,p)$ is the *dummy utility* of agent $a_i$ with regard to task $t_j$ when $p \le R_j$.
The dummy utility is intentionally used also for the case when $p = R_j$ in order to find an assignment that satisfies Equation (\[eqn:prob\_minrqmt\_min\_rqmt\]). For this, the auxiliary individual utility should satisfy the following condition.
\[con:min\_rqmt\] For every agent $a_i \in \mathcal{A}$, its preference relation $\mathcal{P}_i$ holds that, for any two tasks $t_j, t_k \in \mathcal{T}$, $$(t_j,R_j) \succ_i (t_k,R_k+1).$$ This condition enables every agent to prefer a task for which the number of co-working agents is less than its minimum requirement, over any other tasks whose requirements are already fulfilled. Under this condition, as long as the agent set $\mathcal{A}$ is such that $|\mathcal{A}| \ge \sum_{\forall t_j \in \mathcal{T}} R_j$ and a Nash stable partition is found, the resultant assignment satisfies Equation (\[eqn:prob\_minrqmt\_min\_rqmt\]).
\[prop:min\_rqmt\] Given an instance of Problem \[prob\_minrqmt\] where $u_i(t_j, p)$ $\forall i$ $\forall j$ is a monotonically decreasing function with regard to $\forall p \ge R_j$, if the dummy utilities ${u}^0_{i}(t_j,p)$ $\forall i$ $\forall j$ in (\[eqn:aux\_util\]) are set to satisfy Condition \[con:min\_rqmt\] and SPAO for $\forall p \le R_j$, then all the resultant auxiliary individual utilities $\tilde{u}_i(t_j,p)$ $\forall i$ $\forall j$ $\forall p$ can be transformed to an $n_a$-tuple of preference relations $\mathcal{P}$ that hold Condition \[con:min\_rqmt\] as well as SPAO for $\forall p \in \{1,...,n_a\}$. In the corresponding instance of GRAPE $(\mathcal{A},\mathcal{T}, \mathcal{P})$, a Nash stable partition can be determined in polynomial time as shown in Theorems \[NASH\] and \[Nash\_poly\] because of SPAO, and the resultant partition can satisfy Equation (\[eqn:prob\_minrqmt\_min\_rqmt\]) due to Condition \[con:min\_rqmt\].
Let us give an example. Suppose that there exist 100 agents $\mathcal{A}$, and 3 tasks $\mathcal{T} = \{t_1, t_2, t_3\}$ where only $t_3$ has its minimum requirement $R_3 = 5$; for every agent $a_i \in \mathcal{A}$, individual utilities for $t_1$ and $t_2$, i.e., $u_i(t_1,p)$ and $u_i(t_2,p)$, are much higher than that for $t_3$ in $\forall p \in \{1,...,100\}$. We can find a Nash stable partition for this example, as described in Proposition \[prop:min\_rqmt\], by setting ${u}^0_{i}(t_j,p) = \max_{\forall t_j}\{{u}_{i}(t_j,R_j + 1)\} + \beta$ for $\forall p \le R_j$, $\forall a_i \in \mathcal{A}$, where $\beta > 0$ is an arbitrary positive constant.
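A minimal sketch of this construction is given below; the callable `u(i, j, p)`, the mapping `R`, and the constant `beta` are hypothetical names, and the constant dummy value mirrors the choice made in the example above.

```python
def make_auxiliary_utility(u, R, tasks, beta=1.0):
    """Auxiliary individual utility of Eq. (eqn:aux_util) with the constant
    dummy utility used in the example above.

    u(i, j, p): original individual utility;  R[j]: minimum requirement of j.
    """
    def u_tilde(i, j, p):
        if R[j] > 0 and p <= R[j]:
            # dummy utility: larger than any utility attainable once all
            # requirements are met, which enforces Condition [con:min_rqmt]
            return max(u(i, k, R[k] + 1) for k in tasks) + beta
        return u(i, j, p)
    return u_tilde
```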
After a Nash stable partition is found, in order to compute the objective function value in (\[eqn:obj\_minrqmt\]), the original individual utility function $u_i$ should be used instead of the auxiliary one $\tilde{u}_i$.
\[prop:subbound\_minrqmt\] Given a Nash stable partition $\Pi$ obtained by implementing Proposition \[prop:min\_rqmt\], its suboptimality bound $\alpha$ is such that $$\label{eqn:bound_minrqmt}
\alpha \ge \frac{J_{GRAPE}}{J_{GRAPE} + \tilde{\lambda}} \cdot \frac{J_{GRAPE}}{J_{GRAPE} + \delta}.$$ Here, $\delta \equiv \tilde{J}_{GRAPE} - J_{GRAPE}$, where $\tilde{J}_{GRAPE}$ (or ${J}_{GRAPE}$) is the objective function value in (\[eqn:obj\_minrqmt\]) using $\tilde{u}_i$ (or using ${u}_i$) given the Nash stable partition. Likewise, $\tilde{\lambda}$ is the value in (\[eqn:lambda\]) using $\tilde{u}_i$. In addition to this, if every $\tilde{u}_i$ satisfies the conditions for Theorem \[optimality\], then $$\label{eqn:bound2_minrqmt}
\alpha \ge \frac{1}{2} \cdot \frac{J_{GRAPE}}{J_{GRAPE} + \delta}.$$
Since the Nash stable partition $\Pi$ is obtained by using $\tilde{u}_i$, it can be said from Equations (\[eqn:approx\_ratio\]) and (\[Eq\_OPT\_1\]) that $$\label{eqn:proposition_2}
\frac{\tilde{J}_{GRAPE}}{\tilde{J}_{OPT}} \ge \frac{\tilde{J}_{GRAPE}}{\tilde{J}_{GRAPE} + \tilde{\lambda}}.$$ Due to the fact that $\tilde{u}_i(t_j,p) \ge {u}_i(t_j,p)$ for $\forall i,j,p$, it is clear that $\tilde{J}_{GRAPE} \ge {J}_{GRAPE}$ and $\tilde{J}_{OPT} \ge {J}_{OPT}$. By letting $\delta := \tilde{J}_{GRAPE} - J_{GRAPE}$, the left term in (\[eqn:proposition\_2\]) is at most $({J}_{GRAPE} + \delta)/{{J}_{OPT}}$. Besides, the right term in (\[eqn:proposition\_2\]) is a monotonically-increasing function with regard to $\tilde{J}_{GRAPE}$, and thus, it is lower bounded by ${J}_{GRAPE}/(J_{GRAPE} + \tilde{\lambda})$. From this, Equation (\[eqn:proposition\_2\]) can be rewritten as Equation $(\ref{eqn:bound_minrqmt})$ by multiplying both sides by $J_{GRAPE}/(J_{GRAPE}+\delta)$.
Likewise, for the case when every $\tilde{u}_i$ satisfies the conditions for Theorem \[optimality\], it can be said that $\tilde{J}_{GRAPE} \ge 1/2 \cdot \tilde{J}_{OPT}$, which can be transformed into Equation (\[eqn:bound2\_minrqmt\]) as shown above.
Notice that if $\delta = 0$ for the Nash stable partition in Proposition \[prop:subbound\_minrqmt\], then the suboptimality bounds become equivalent to those in Theorems \[OPT\_Bound\] and \[optimality\].
Simulation and Results {#Results}
======================
This section validates the performance of the proposed framework with respect to its scalability, suboptimality, adaptability to dynamic environments, and robustness in asynchronous environments.
Mission Scenario and Settings {#sec:setting}
-----------------------------
### Utility functions
Firstly, we introduce the social and individual utilities used in this numerical experiment. We consider that if multiple robots execute a task together as a coalition, then they are given a certain level of reward for the task. The amount of the reward varies depending on the number of co-working agents. The reward is shared among the agents, and each agent’s individual utility is considered as the shared reward minus the cost the agent personally spends on the task (e.g., fuel consumption for movement). In this experiment, *the equal fair allocation rule* [@Saad2009; @Saad2011] is adopted. Under this rule, a task’s reward is equally shared among the members. Therefore, the individual utility of agent $a_i$ executing task $t_j$ with coalition $S_j$ is defined as $$\label{basic_utility_ftn}
u_i(t_j,|S_j|)=r(t_j, |S_j|)/|S_j|-c_i(t_j),$$ where $r(t_j, |S_j|)$ is the reward from task ${t_j}$ when it is executed by $S_j$ together, and $c_i(t_j)$ is the cost that agent ${a_i}$ needs to pay for the task. Here, we simply set the cost as a function of the distance from agent $a_i$ to task $t_j$. We set that if $u_i(t_j,|S_j|)$ is not positive, agent $a_i$ prefers to join $S_{\phi}$ over $S_j$.
This experiment considers two types of tasks. For the first type, a task’s reward becomes higher as the number of participants gets close to a specific desired number. We refer to such a task as a *peaked-reward* task, and its reward can be defined as $$\label{peaked_task}
r(t_j, |S_j|)=\frac{r^{\max}_{j} \cdot |S_j|}{n^d_j} \cdot e^{-|S_j|/n^d_j + 1},$$ where ${n^d_j}$ represents the desired number of agents, and ${r^{\max}_j}$ is the peak reward obtained when $n^d_j$ agents are involved. Consequently, the individual utility of agent $a_i$ with regard to task $t_j$ becomes $$\label{utility_ftn_1}
u_i(t_j,|S_j|)=\frac{r^{\max}_j}{n^d_j} \cdot e^{-|S_j|/n^d_j + 1}- c_i(t_j).$$
For the second type, a task’s reward becomes higher as more agents are involved, but the corresponding marginal gain decreases. This type of task is said to be *submodular-reward*, and the reward can be defined as $$\label{submodular_task}
r(t_j, |S_j|)=r^{\min}_j \cdot \log_{{\epsilon}_j}(|S_j|+{\epsilon}_j-1),$$ where ${r^{\min}_j}$ indicates the reward obtained if only one agent is involved, and ${{\epsilon}_j} > 1$ is the design parameter regarding the diminishing marginal gain. The resultant individual utility becomes as follows: $$\label{utility_ftn_2}
u_i(t_j,|S_j|)=r^{\min}_j \cdot \log_{{\epsilon}_j}(|S_j|+{\epsilon}_j-1)/|S_j| - c_i(t_j).$$
Figure \[fig\_utility\_ftn\] illustrates examples of the social utilities and individual utilities for the task types introduced above. For simplification, agents’ costs are ignored in the figure. We set ${r^{\max}_j}$, ${n^d_j}$, ${r^{\min}_j}$ and ${\epsilon_j}$ to be 60, 15, 10, and 2, respectively. Notice that the individual utilities are monotonically decreasing in both cases, as depicted in Figure \[fig\_utility\_ftn\](b). Therefore, given a mission that entails these task types, we can generate an instance ${(\mathcal{A},\mathcal{T},\mathcal{P})}$ of GRAPE that holds SPAO.
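The two individual utilities of Equations (\[utility\_ftn\_1\]) and (\[utility\_ftn\_2\]) can be sketched as follows; the function names are ours, and the parameter values are those quoted for Figure \[fig\_utility\_ftn\] with costs ignored.

```python
import numpy as np

def peaked_individual_utility(n, r_max, n_d, cost=0.0):
    """Individual utility for a peaked-reward task, Eq. (utility_ftn_1)."""
    return (r_max / n_d) * np.exp(-n / n_d + 1.0) - cost

def submodular_individual_utility(n, r_min, eps, cost=0.0):
    """Individual utility for a submodular-reward task, Eq. (utility_ftn_2)."""
    return r_min * (np.log(n + eps - 1.0) / np.log(eps)) / n - cost

n = np.arange(1, 41, dtype=float)                      # coalition sizes |S_j|
u_peak = peaked_individual_utility(n, r_max=60.0, n_d=15.0)
u_sub = submodular_individual_utility(n, r_min=10.0, eps=2.0)
# SPAO: both individual utilities decrease monotonically with the coalition size
assert np.all(np.diff(u_peak) < 0) and np.all(np.diff(u_sub) < 0)
```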
### Parameters generation
In the following sections, we mainly utilize Monte Carlo simulations. At each run, $n_t$ tasks and $n_a$ agents are uniform-randomly located in a $1000 \ m \times 1000 \ m$ arena and in a $250 \ m \times 250 \ m$ arena within it, respectively. For a scenario including peaked-reward tasks, $r^{\max}_j$ is randomly generated from a uniform distribution over $[1000 , 2000] \times n_a/n_t$, and $n^d_j$ is set to the rounded value of $(r^{\max}_j/{\sum_{\forall t_k \in \mathcal{T^*}} r^{\max}_k}) \times n_a$. For a scenario including submodular-reward tasks, $\epsilon_j$ is set as 2, and $r^{\min}_j$ is uniform-randomly generated over $[1000, 2000] \times 1/\log_{{\epsilon}_j}{(n_a/n_t + 1)}$.
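One possible reading of this parameter-generation procedure is sketched below; the centring of the agents' sub-arena and the function interface are assumptions of the sketch rather than details given in the text.

```python
import numpy as np

def generate_instance(n_a, n_t, task_type="peaked", seed=None):
    """Randomly generate scenario parameters following the description above."""
    rng = np.random.default_rng(seed)
    task_pos = rng.uniform(0.0, 1000.0, size=(n_t, 2))      # 1000 m x 1000 m arena
    agent_pos = rng.uniform(375.0, 625.0, size=(n_a, 2))    # 250 m x 250 m sub-arena
    if task_type == "peaked":
        r_max = rng.uniform(1000.0, 2000.0, size=n_t) * n_a / n_t
        n_d = np.rint(r_max / r_max.sum() * n_a).astype(int)
        return task_pos, agent_pos, {"r_max": r_max, "n_d": n_d}
    eps = 2.0
    r_min = rng.uniform(1000.0, 2000.0, size=n_t) / (np.log(n_a / n_t + 1.0) / np.log(eps))
    return task_pos, agent_pos, {"r_min": r_min, "eps": eps}
```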
### Communication network
Given a set of agents, their communication network is strongly connected in a way that it only contains a bidirectional minimum spanning tree constructed with consideration of the agents’ positions. Furthermore, we also consider the fully-connected network in some experiments in order to examine the influence of the network. The communication network is randomly generated for each instance and is assumed to be sustained during a mission, except in the robustness test simulations in Section \[sec:result\_robust\].
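One way to generate such a network, sketched below under the assumption that `networkx` is available, is to keep only the Euclidean minimum spanning tree over the agents' positions; this is an illustration rather than the exact construction used in the experiments.

```python
import numpy as np
import networkx as nx

def spanning_tree_network(agent_positions):
    """Bidirectional network containing only the minimum spanning tree of the
    complete Euclidean graph over the agents' positions."""
    pos = np.asarray(agent_positions, dtype=float)
    g = nx.Graph()
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            g.add_edge(i, j, weight=float(np.linalg.norm(pos[i] - pos[j])))
    return nx.minimum_spanning_tree(g)
```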
Scalability {#sec:result_scalability}
-----------
To investigate the effect of $n_t$ and $n_a$ upon the scalability of the proposed approach, we conduct Monte Carlo simulations with 100 runs for the scenarios introduced in Section \[sec:setting\] with a fixed $n_t = 20$ and various $n_a \in \{80, 160, 240, 320\}$ and for those with $n_a = 160$ and $n_t \in \{5, 10, 15, 20\}$. Figure \[fig\_result\_scalability\] shows the statistical results using box-and-whisker plots, where the green boxes indicate the results from the scenarios with the peaked-reward tasks and the magenta boxes are those with the submodular-reward tasks. The blue and red lines connecting the boxes represent the average value for each test case $(n_a, n_t)$ under a strongly-connected network and the fully-connected network, respectively.
The left subfigure in Figure \[fig\_result\_scalability\](a) shows that the ratio of the number of required (normal) iterations to that of agents linearly increases as more agents are involved. This implies that the proposed framework has quadratic complexity with regard to the number of agents (i.e., $C_1 n_a^2$), as stated in Theorem \[Nash\_poly\], but with $C_1$ being much less than $\frac{1}{2}$, which is the value from the theorem. $C_1$ can become even lower (e.g., $C_1 = 5 \times 10^{-4}$ in the experiments) under the fully-connected network. Such a $C_1$ smaller than $\frac{1}{2}$ may be explained by Remark \[remark:num\_iter\_practice\]: the algorithmic efficiency of Algorithm \[algorithm\] can reduce unnecessary iterations that may be induced in the procedure of the proof of Theorem \[Nash\_poly\].
On the other hand, the left subfigure in Figure \[fig\_result\_scalability\](b) shows that the number of required iterations decreases with regard to the number of tasks. This trend may be caused by the fact that more selectable options provided to the fixed number of agents can reduce possible conflicts between the agents. Furthermore, in the two results, the trends regarding either $n_a$ or $n_t$ have higher slopes under a strongly-connected network than those under the fully-connected network. This is because the former condition is more sensitive to conflicts between agents, and thus causes additional iterations. For example, agents at the middle nodes of the network may change their decisions (and thus increase the number of iterations) while the local partition information of the agent at one end node is being propagated to the other end nodes. Such unnecessary iterations in the middle might not have occurred if the agents at all the end nodes were directly connected to each other.
The right subfigures in Figure \[fig\_result\_scalability\](a) and (b) indicate that, under a strongly-connected network, approximately 3–4 times as many dummy iterations as normal iterations are additionally needed. Noting that the mean values of the graph diameter $d_G$ for the instances with $n_a \in \{80, 160, 240, 320\}$ are $36, 58, 75$ and $92$, respectively, the results show that the number of dummy iterations that occurred is much less than the bound value, which is $d_G$ as pointed out in Section \[sec:algorithm\_complexity\]. On the contrary, under the fully-connected network, there is no need for such dummy iterations, and thus the required number of iterations and that of time steps are the same.
Suboptimality {#suboptimality}
-------------
This section examines the suboptimality of the proposed framework by using Monte Carlo simulations with 100 instances. In each instance, there are $n_t = 3$ tasks and $n_a = 12$ agents that are strongly connected. Figure \[fig\_result\_minimum\_bound\] presents the true suboptimality of each instance, which is the ratio of the global utility obtained by the proposed framework to that obtained by a brute-force search, i.e., $J_{GRAPE}/J_{OPT}$, and the lower bound given by Theorem \[OPT\_Bound\]. A blue circle and a red cross in the figure indicate the true suboptimality and the lower bound, respectively. The results show that the framework provides near-optimal solutions in almost all cases and that the suboptimality of each Nash stable partition is bounded from below by the corresponding lower bound.
The suboptimality may be improved if the agents are allowed to investigate a larger search space, for example, possible coalitions caused by co-deviation of multiple agents. However, this strategy may in return increase communication transactions between the agents because they have to notify each other of their willingness unless their individual utility functions are known to each other, which is in contradiction to Assumption \[assum:agents\_util\]. Besides, the computational overhead for each agent per iteration also becomes more expensive than $O(n_t)$, which is the complexity of unilateral searching, as shown in Section \[sec:algorithm\_complexity\]. Hence, the resultant algorithm’s complexity may hinder its practical applicability to a large-scale multi-agent system. Figure \[fig\_result\_minimum\_bound\_large\] depicts the suboptimality lower bounds for the large-size problems that were previously addressed in Section \[sec:result\_scalability\]. It is clearly shown that the agent communication network does not have any effect on the suboptimality lower bound of a Nash stable partition. Although there is no universal trend of the suboptimality with regard to $n_a$ and $n_t$ for either utility type, it is suggested that the features of the lower bound given by Theorem \[OPT\_Bound\] can be influenced by the utility functions considered. In the experiments, the suboptimality bound remains above 60–70% on average.
Adaptability
------------
This section discusses the adaptability of our proposed framework in response to dynamic environments such as the unexpected inclusion or loss of agents or tasks. Suppose that there are 10 tasks and 160 agents in a mission, and that a Nash stable partition was already found as a baseline. During the mission, the number of agents (or tasks) changes; the range of the change is from losing 50% of the existing agents (or tasks) to additionally including new ones amounting to 50% of them. For each dynamic environment, a Monte Carlo simulation with 100 instances is performed by randomly including or excluding a subset of the corresponding number of agents or tasks. Here, we consider a strongly-connected communication network.
Figure \[fig\_result\_adaptability\](a) illustrates that the more agents are additionally involved, the more iterations are required for re-converging to a new Nash stable partition. This is because the inclusion of a new agent may lead to additional iterations of at most the total number of agents including the new agent (as shown in Lemma \[lemma\_1\]). On the contrary, the loss of existing agents does not seem to have any apparent relation with the number of iterations. A possible explanation is that the exclusion of an existing agent is favorable to the other agents due to SPAO preferences. This stimulates only a limited number of agents who prefer to move to the coalition where the excluded agent was. This feature induces fewer additional iterations to reach a new Nash stable partition, compared with the case of adding a new agent.
Figure \[fig\_result\_adaptability\](b) shows that eliminating existing tasks causes more iterations than including new tasks. This can be explained by the fact that removing any task sets the agents performing that task free, which results in extra iterations of up to the number of freed agents. On the other hand, adding new tasks induces relatively fewer additional iterations because only some of the existing agents are attracted to these tasks.
In summary, as the ratio of the number of agents to that of tasks increases, the number of additional iterations for convergence to a new Nash stable partition also increases. This result corresponds to the trend described in Section \[sec:result\_scalability\], i.e., the left subfigures in Figure \[fig\_result\_scalability\](a) and (b). In all the cases of this experiment, the number of additionally induced iterations still remains on the same order as the number of the given agents, which implies that the proposed framework provides excellent adaptability.
Robustness in Asynchronous Environments {#sec:result_robust}
---------------------------------------
This section investigates the robustness of the proposed framework in asynchronous environments. This scenario assumes that a certain fraction of the given agents, which are randomly chosen at each time step, somehow cannot execute Algorithm \[algorithm\] and cannot even communicate with other normally-working neighbor agents. We refer to such agents as *non-operating* agents. Given that $n_t = 5$ and $n_a = 40$, the fractions of the non-operating agents are set as $\{0, 0.2, 0.4, 0.6, 0.8\}$. In each case, we conduct 100 instances of Monte Carlo experiments in which the submodular-reward tasks are used.
Figure \[fig\_result\_async\](a) shows that the number of (normal) iterations required for converging to a Nash stable partition remains at the same level regardless of the fraction of non-operating agents. Despite that, the required time steps increase as more agents become non-operating, as shown in Figure \[fig\_result\_async\](b). Note that the *time steps growth rate* means the ratio of the total required time steps to those for the case when all the agents operate normally. These findings indicate that, due to the communication discontinuity caused by the non-operating agents, the framework may take more time to wait for these agents to operate again and then to disseminate locally-known partition information over the entire set of agents. As such, dummy iterations may increase in asynchronous environments, though the proposed framework is still able to find a Nash stable partition. Furthermore, the resultant Nash stable partition’s suboptimality lower bound obtained by Theorem \[OPT\_Bound\] is not affected, as presented in Figure \[fig\_result\_async\](c).
Visualization
-----------------
We have $n_a = 320$ agents and $n_t = 5$ tasks. The initial locations of the given agents are randomly generated, and the overall formation shape is different in each test scenario, namely a circle, a skewed circle, and a square (denoted by Scenario \#1, \#2, and \#3, respectively). The tasks are also randomly located away from the agents. In this simulation, each agent is able to communicate with its nearby agents within a radius of 50 $m$. Here, the submodular-reward tasks are used.
Figure \[fig\_result\_visual\] shows the visualized task allocation results, where the circles and the squares indicate the positions of the agents and the tasks, respectively. The lines between the circles represent the communication networks of the agents. The colored agents are assigned to the task of the same color; for example, yellow agents belong to the team executing the yellow task. The size of a square indicates the reward of the corresponding task. The cost for an agent with regard to a task is considered as a function of the distance from the agent to the task. The allocation results seem to be reasonable considering the task rewards and the costs.
The number of iterations required to find a Nash stable partition is 1355, 1380, and 1295 for Scenario \#1, \#2, and \#3, respectively. The number of dummy iterations that occurred is just $20$–$30\%$ of the number of iterations. This value is much smaller than the results in Figure \[fig\_result\_scalability\] because the networks considered here are more connected than those in Section \[sec:result\_scalability\].
Conclusion {#Conclusion}
==========
This paper proposed a novel game-theoretical framework that addresses a task allocation problem for a robotic swarm consisting of self-interested agents. We showed that selfish agents whose individual interests are transformable to SPAO preferences can converge to a Nash stable partition by using the proposed simple decentralized algorithm, which is executable even in asynchronous environments and under a strongly-connected communication network. We analytically and experimentally showed that the proposed framework provides scalability, a certain level of guaranteed suboptimality, adaptability, robustness, and a potential to accommodate different interests of agents.
As this framework can be considered a new sub-branch of self-organized approaches, one of our ongoing works is to compare it with one of the existing methods. Defining a fair scenario for both methods is non-trivial and requires careful consideration; otherwise, an unsuitable scenario may provide biased results. Secondly, another natural progression of this study is to relax the anonymity of agents and thus to consider a combination of the agents’ identities. Experimentally, we have often observed that heterogeneous agents with social inhibition can also converge to a Nash stable partition. More research would be needed to analyze the quality of a Nash stable partition obtained by the proposed framework in terms of $\min\max$ because our various experiments showed that the outcome provides individual utilities to agents in a balanced manner.
Acknowledgment {#acknowledgment .unnumbered}
==============
The authors gratefully acknowledge that this research was supported by the International Joint Research Programme with Chungnam National University (No. EFA3004Z).
IEEE Copyright Notice $\copyright$ 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
**Accepted to be Published in: IEEE Transactions on Robotics**
[^1]: Inmo Jang, Hyo-Sang Shin, and Antonios Tsourdos are with Centre for Autonomous and Cyber-Physical Systems, Cranfield University, MK43 0AL, United Kingdom (e-mail: [email protected]; [email protected]; [email protected]).
[^2]: Note that the definition of *iteration* is described in Definition \[def:iteration\]. This comparison assumes the fully-connected communication network because the algorithm in [@Darmann2015] is centralized.
---
abstract: 'Employing a nonequilibrium Green’s function approach, we examine the effects of long-range hole-impurity scattering on spin-Hall current in $p$-type bulk semiconductors within the framework of the self-consistent Born approximation. We find that, contrary to the null effect of short-range scattering on spin-Hall current, long-range collisions do produce a nonvanishing contribution to the spin-Hall current, which is independent of impurity density in the diffusive regime and relates only to hole states near the Fermi surface. The sign of this contribution is opposite to that of the previously predicted disorder-independent spin-Hall current, leading to a sign change of the total spin-Hall current as hole density varies. Furthermore, we also make clear that the disorder-independent spin-Hall effect is a result of an interband polarization directly induced by the dc electric field with contributions from all hole states in the Fermi sea.'
author:
- 'S. Y. Liu'
- 'Norman J. M. Horing'
- 'X. L. Lei'
title: 'Long range scattering effects on spin Hall current in $p$-type bulk semiconductors'
---
Introduction
============
Recently, there have been extensive studies of the physics of the spin-orbit (SO) interaction in condensed matter. The most intriguing phenomenon induced by SO coupling is the spin-Hall effect (SHE): when a dc electric field is applied, the SO interaction may result in a net nonvanishing spin current flow along the transverse direction. The SHE is classified into two types according to its origin, an [*extrinsic*]{} spin-orbit Hamiltonian term induced by carrier-impurity scattering potentials[@DP; @HS] and an [*intrinsic*]{} spin-orbit Hamiltonian term arising from free carrier kinetics.[@Zhang; @Sinova] The intrinsic spin-Hall effect was originally thought to be independent of carrier-impurity scattering. Experimentally, the SHE was observed in a $n$-type bulk semiconductor[@Kato] and in a two-dimensional (2D) heavy-hole system.[@Wunderlich]
However, further studies have indicated that the spin-Hall effect associated with the intrinsic mechanism can be strongly affected by carrier-impurity scattering (disorder).[@Loss1; @Nomura; @Loss2; @Dimitrova; @Bauer; @Raimondi; @Khaetskii; @Halperin; @Liu1; @Nagaosa; @Liu2; @Bernevig; @Liu3; @Halperin2; @Murakami; @Chen] (To avoid confusion, we use the term “intrinsic SHE” to refer to the total spin-Hall effect arising from the SO coupling terms of the Hamiltonian that do not explicitly involve scattering; ultimately, this is corrected by scattering, but the part that is unaffected by scattering will be termed the intrinsic “disorder-independent” SHE.) In diffusive 2D semiconductors, there always exists a contribution to the intrinsic spin-Hall current which arises from spin-conserving electron-impurity scattering, but it is independent of impurity density within the diffusive regime. For 2D [*electron*]{} systems with Rashba SO coupling, this disorder-related spin-Hall current leads to the vanishing of the total intrinsic spin-Hall current, irrespective of the specific form of the scattering potential, of the collisional broadening, and of temperature.[@Liu2] In 2D Rashba [*heavy-hole*]{} systems, disorder affects the intrinsic SHE in a different fashion: contributions from short-range collisions to the SHE vanish,[@Bernevig] while long-range electron-impurity scattering produces a nonvanishing disorder-related spin-Hall current, whose sign changes with variation of the hole density.[@Liu3; @Halperin2]
To date, the effect of disorder on the intrinsic spin-Hall current in $p$-type bulk semiconductors has been studied relatively little. Employing a Kubo formula, Murakami found a null disorder effect on the intrinsic SHE for short-range hole-impurity collisions.[@Murakami] The crossover of the SHE from the diffusive to the hopping regime has been investigated by modeling finite-size samples (with a maximum of $50 \times 50\times 50$ lattice sites) by Chen, [*et al*]{}.[@Chen] In this paper, we employ a nonequilibrium Green’s function approach to study the effect of more realistic [*long-range*]{} hole-impurity scattering on the intrinsic spin-Hall current in a diffusive $p$-type bulk semiconductor. We find that, in such a system, the contribution of hole-impurity collisions to the intrinsic spin-Hall current is finite and it is independent of impurity density within the diffusive regime. Moreover, this disorder contribution has its sign opposite to that of the disorder-independent one, leading to a sign change of the total spin-Hall current as the hole density varies. Furthermore, we make clear that the disorder-independent spin-Hall effect arises from an interband polarization process directly induced by the dc electric field and it involves all hole states below the Fermi surface. In contrast to this, the disorder contribution to the intrinsic SHE originates from a disorder-mediated polarization between two hole bands and is associated only with hole states in the vicinity of the Fermi surface. Also, we numerically examine the hole-density dependencies of the spin-Hall conductivity and mobility.
This paper is organized as follows. In Sec. II, we derive the kinetic equation for the nonequilibrium distribution function and discuss the origins of the disorder-independent and disorder-related spin-Hall currents. In Sec. III, we perform a numerical calculation to investigate the effect of long-range hole-impurity scattering on the spin-Hall current. Finally, we review our results in Sec. IV.
Formalism
=========
Kinetic equation
----------------
It is well known that for semiconductors with diamond structure (e.g. Si, Ge) or zinc blende structure (e.g. GaAs), the tops of the valence bands usually are split into fourfold degenerate $S=3/2$ and twofold degenerate $S=1/2$ states due to the spin-orbit interaction ($S$ denotes the total angular momentum of the atomic orbital). Near the top of the $S=3/2$ valence bands, the electronic structure can be described by a simplified Luttinger Hamiltonian[@Luttinger] $${\check h}_0({\bf p})=\frac 1{2m}\left [\left (\gamma_1+\frac 52 \gamma_2 \right )p^2-
2\gamma_2({\bf p}\cdot {\bf S})^2\right],\label{ham}$$ where, ${\bf p}\equiv (p_x,p_y,p_z)\equiv (p\sin\theta_{\bf
p}\cos\phi_{\bf p} ,p\sin\theta_{\bf p}\sin\phi_{\bf
p},p\cos\theta_{\bf p})$ is the three-dimensional (3D) hole momentum, $m$ is the free electron mass, ${\bf S}\equiv
(S_x,S_y,S_z)$ are the spin-$3/2$ matrices, and $\gamma_1$ and $\gamma_2$ are the material constants. (As in previous studies,[@Zhang; @Murakami; @Chen; @Zhang2] we simplified by setting $\gamma_3=\gamma_2$ in the original Luttinger Hamiltonian presented in Ref.).
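For reference, the structure of Hamiltonian (\[ham\]) can be checked with the short numerical sketch below; the values of $\gamma_1$ and $\gamma_2$ are illustrative only, and units with $\hbar=1$ are assumed.

```python
import numpy as np

# spin-3/2 matrices in the basis {|3/2>, |1/2>, |-1/2>, |-3/2>}
Sp = np.diag([np.sqrt(3.0), 2.0, np.sqrt(3.0)], k=1)   # raising operator S_+
Sx = (Sp + Sp.T) / 2.0
Sy = (Sp - Sp.T) / 2.0j
Sz = np.diag([1.5, 0.5, -0.5, -1.5])

def luttinger_h0(p, gamma1, gamma2, m=1.0):
    """Simplified Luttinger Hamiltonian of Eq. (ham), hbar = 1."""
    p = np.asarray(p, dtype=float)
    p2 = p @ p
    pS = p[0] * Sx + p[1] * Sy + p[2] * Sz
    return ((gamma1 + 2.5 * gamma2) * p2 * np.eye(4) - 2.0 * gamma2 * (pS @ pS)) / (2.0 * m)

# eigenvalues come in twofold-degenerate heavy/light pairs eps_H(p) and eps_L(p)
gamma1, gamma2, m = 6.9, 2.1, 1.0          # illustrative values only
p = np.array([0.3, -0.5, 0.7])
eps_H = (gamma1 - 2.0 * gamma2) * (p @ p) / (2.0 * m)
eps_L = (gamma1 + 2.0 * gamma2) * (p @ p) / (2.0 * m)
evals = np.linalg.eigvalsh(luttinger_h0(p, gamma1, gamma2, m))
assert np.allclose(np.sort(evals), np.sort([eps_H, eps_H, eps_L, eps_L]))
```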
By a local unitary spinor transformation, $ U_{\bf
p}=\exp({-iS_z\phi_{\bf p}})\exp({-iS_y\theta_{\bf p}})$, Hamiltonian (\[ham\]) can be diagonalized as ${\hat h}_0({\bf
p}) = U^+_{\bf p}\check h_0({\bf p}) U_{\bf p}={\rm
diag}[\varepsilon_{H}(p), \varepsilon_{L}(p), \varepsilon_{L}(p),
\varepsilon_{H}(p)]$. Here, $\varepsilon_{H}
(p)=\frac{\gamma_1-2\gamma_2}{2m}p^2$ and $\varepsilon_{L}
(p)=\frac{\gamma_1+2\gamma_2}{2m}p^2$ are, respectively, the dispersion relations of the heavy- and light-hole bands. Physically, this transformation corresponds to a change from a spin basis to a helicity basis.
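Continuing the sketch above, the transformation $U_{\bf p}$ and the resulting diagonal form can also be checked numerically (using `scipy.linalg.expm` for the spin rotations):

```python
from scipy.linalg import expm

def u_transform(p):
    """Local spinor transformation U_p = exp(-i S_z phi_p) exp(-i S_y theta_p)."""
    p = np.asarray(p, dtype=float)
    theta = np.arccos(p[2] / np.linalg.norm(p))
    phi = np.arctan2(p[1], p[0])
    return expm(-1j * Sz * phi) @ expm(-1j * Sy * theta)

U = u_transform(p)
h0_hel = U.conj().T @ luttinger_h0(p, gamma1, gamma2, m) @ U
# diagonal in the helicity basis: diag[eps_H, eps_L, eps_L, eps_H]
assert np.allclose(h0_hel, np.diag([eps_H, eps_L, eps_L, eps_H]), atol=1e-10)
```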
In a realistic 3D system, holes experience scattering by impurities. We assume that this interaction between holes and impurities can be characterized by an isotropic potential, $V(|{\bf p}-{\bf k}|)$, which corresponds to scattering a hole from state ${\bf p}$ to state ${\bf k}$. In the helicity basis, the scattering potential takes the transformed form, $\hat T ({\bf
p},{\bf k})=U^+_{\bf p}V(|{\bf p}-{\bf k}|)U_{\bf k}$.
We are interested in the spin-Hall current in a bulk hole system driven by a dc electric field ${\bf E}$ along the $z$ axis. In Coulomb gauge, this electric field can be described by the scalar potential, $V\equiv -e{\bf E}\cdot {\bf r}$, with ${\bf r}$ as the hole coordinate. Without loss of generality, we specifically study a spin current, $J_{y}^{x}$, that is polarized along the $x$ axis and flows along the $y$ axis. In the spin basis, the conserved single-particle spin-Hall operator, ${\check j}_{y}^{x}$, is defined as[@Zhang2] $$\check j_{y}^{x}({\bf p})
=\frac 16 \left \{\frac{\partial \check h_0}{\partial p_y}, P_{\bf p}^LS_xP_{\bf p}^L+P_{\bf p}^HS_xP_{\bf p}^H\right
\},$$ with $P_{\bf p}^L$ and $P_{\bf p}^H$, respectively, as projection operators onto the states of light- and heavy-hole bands: $P_{\bf p}^L=\frac
98 -\frac{1}{2p^2}({\bf p}\cdot {\bf S})^2$, $P_{\bf p}^H=1-P_{\bf
p}^L$. Taking a statistical ensemble average, the observed net spin-Hall current is given by $$J_{y}^{x}=\sum_{\bf p}{\rm Tr}[\check j_{y}^{x}({\bf p})\check\rho({\bf p})],$$ where $\check \rho({\bf p})$ is the distribution function related to the nonequilibrium “lesser” Green’s function, ${\check {\rm
G}}^<({\bf p},\omega)$, as given by $\check \rho({\bf p})=-i\int \frac
{d\omega}{2\pi} \check {\rm G}^{<}({\bf p},\omega)$. Also, $J_y^x$ can be determined in helicity basis via $$J_y^x=\sum_{{\bf p}}{\rm Tr}[{\hat j}_y^x({\bf p}){\hat \rho}({\bf p})],\label{JYX}$$ with ${\hat j}_y^x({\bf p})=U_{{\bf p}}^+\check j_y^x({\bf
p})U_{{\bf p}}$ and ${\hat \rho}({\bf p}) =U^+({\bf p})\check
\rho({\bf p}) U({\bf p})$ being the helicity-basis single-particle spin current operator and distribution function, respectively. Explicitly, Eq.(\[JYX\]) can be rewritten as ($\hat \rho_{\mu\nu}({\bf p})$ are the matrix elements of $\hat \rho({\bf p})$ in helicity basis; $\mu,\nu=1,2,3,4$) $$\begin{aligned}
J_y^x&=&\frac{\sqrt{3}\gamma_2}{m}\sum_{\bf p}p\left \{4\cos^2\phi_{\bf p}\sin \theta_{\bf p} {\rm Im}[\hat \rho_{12}({\bf p})+\hat \rho_{34}({\bf p})]
\right.\nonumber\\
&&-\sin(2\phi_{\bf p})\sin(2 \theta_{\bf p}) {\rm Re}[\hat \rho_{12}({\bf p})+\hat \rho_{34}({\bf p})]
\nonumber\\
&&
+2\cos(2 \phi_{\bf p})\cos\theta_{\bf p}{\rm Im}[\hat \rho_{13}({\bf p})-\hat \rho_{24}({\bf p})]\nonumber\\
&&
\left .-\sin(2\phi_{\bf p})[1+\cos^2 \theta_{\bf p}] {\rm Re}[\hat \rho_{13}({\bf p})-\hat \rho_{24}({\bf p})]
\right \}.\label{JYXE}\end{aligned}$$ Here, the Hermitian property of the distribution function, [*i.e.*]{} $\hat {\rho}({\bf p})=\hat {\rho}^+({\bf p})$, has been used. It is clear from Eq.(\[JYXE\]) that contributions to the spin-Hall current arise only from those elements of the distribution function which describe the interband polarization, such as $\hat \rho_{12}(\bf p)$, $\hat \rho_{13}(\bf p)$, $\hat
\rho_{34}(\bf p)$ and $\hat \rho_{24}(\bf p)$. The vanishing of spin-Hall current contributions from the diagonal elements of the distribution function is associated with the helicity degeneracy of the hole bands in $p$-type bulk semiconductors. The diagonal elements of the distribution function for holes in the same band but with opposite helicities are the same, [*i.e.*]{} $\hat \rho_{22}({\bf p})=\hat \rho_{33}({\bf p})$ and $\hat \rho_{11}({\bf p})=\hat \rho_{44}({\bf p})$. However, the corresponding diagonal elements of the single-particle spin current have opposite signs due to opposite helicities, $(\hat j_{y}^x)_{22}({\bf p})=-(\hat j_y^x)_{33}({\bf p})$ and $(\hat j_{y}^x)_{11}({\bf p})=-(\hat j_y^x)_{44}({\bf p})$. As a result, the net contributions to the spin-Hall current from the diagonal elements of the distribution function are eliminated.
In order to carry out the calculation of the spin-Hall current, it is necessary to determine the hole distribution function.[@Wu] Under homogeneous and steady-state conditions, the spin-basis distribution, ${\check \rho}({\bf p})$, obeys a kinetic equation taking the form $$e{\bf E}\cdot [\nabla_{\bf p} {\check \rho}({\bf p})]+i[{\check
h}_0({\bf p}),{\check \rho}({\bf p})]=-\check I,\label{KE}$$ with $\check I$ as a collision term given by $$\check I= \int \frac{d\omega}{2\pi}({\check \Sigma}^r_{\bf
p}{\check {\rm G}}^<_{\bf p}+{\check \Sigma}^<_{\bf p}{\check {\rm
G}}^a_{\bf p}- {\check {\rm G}}^r_{\bf p} {\check \Sigma}^<_{\bf
p}-{\check {\rm G}}^<_{\bf p}{\check \Sigma}^a_{\bf p}).\label{CT}$$ ${\check {\rm G}}^{r,a,<}_{\bf p}$ and ${\check
\Sigma}^{r,a,<}_{\bf p}$ are, respectively, the nonequilibrium Green’s functions and self-energies. For brevity, hereafter, the argument $({\bf p},\omega)$ of these functions will be denoted by a subscript ${\bf p}$. In the kinetic equation (\[KE\]) above, the hole-impurity scattering is embedded in the self-energies, ${\check \Sigma}^{r,a,<}_{\bf p}$. In the present paper, we consider hole-impurity collisions only in the self-consistent Born approximation. It is widely accepted that this is sufficiently accurate to analyze transport properties in the diffusive regime. Accordingly, the self-energies take the forms: ${\check
\Sigma}^{r,a,<}_{\bf p}=n_i\sum_{\bf k} |V({\bf p}-{\bf k})|^2
{\check {\rm G}}_{\bf k}^{r,a,<}$, with impurity density $n_i$.
It is most convenient to study the hole distribution function in the helicity basis, $\hat {\rho}({\bf p})= U^+({\bf p})\check
{\rho }({\bf p}) U({\bf p})$, because, there, the unperturbed equilibrium distribution and the equilibrium lesser, retarded, and advanced Green’s functions are all diagonal. To derive the kinetic equation for the helicity-basis distribution, ${\hat \rho}({\bf
p})$, we multiply Eq.(\[KE\]) from left by $U_{\bf p}^+$ and from right by $U_{\bf p}$. Due to the unitarity of $U_{\bf p}$, the collision term in the helicity basis, $\hat I$, has a form similar to Eq.(\[CT\]), but with the helicity-basis Green’s functions and self-energies, $\hat {\rm G}^{r,a,<}_{{\bf
p}}=U^+({\bf p})\check {\rm G}^{r,a,<}_{{\bf p}}U({\bf p})$ and $\hat {\Sigma}^{r,a,<}_{{\bf p}}=U^+({\bf p})\check
{\Sigma}^{r,a,<}_{{\bf p}}U({\bf p})$, respectively, replacing those of the spin-basis, $\check {\rm G}^{r,a,<}_{{\bf p}}$ and $\check {\Sigma}^{r,a,<}_{{\bf p}}$. The left hand side (LHS) of Eq.(\[KE\]) is simplified by using the following facts: $U_{\bf
p}^+\nabla_{\bf p} {\check \rho}({\bf p})U_{\bf p}=\nabla_{\bf
p}{\hat \rho}({\bf p}) -\nabla_{\bf p}U_{\bf p}^+U_{\bf p}{\hat
\rho}({\bf p})-{\hat \rho}({\bf p}) U_{\bf p}^+\nabla_{{\bf p}}
U_{\bf p}$ and $\nabla_{\bf p} U_{\bf p}^+U_{\bf p}=-U_{\bf
p}^+\nabla_{\bf p}U_{\bf p}$. Thus, the kinetic equation in helicity basis may be written as $$e{\bf E}\cdot \left \{\nabla_{\bf p} \hat \rho({\bf p})+[\hat
\rho({\bf p}), \nabla_{\bf p} U^+_{\bf p}U_{\bf p}] \right
\}+i[\hat h_0(p),\hat \rho({\bf p})]=-\hat I.$$ In this equation, the helicity-basis self-energies, $\hat{\Sigma}^{r,a,<}_{\bf p}$, take the forms, $$\hat{\Sigma}^{r,a,<}_{\bf p}=n_i\sum_{{\bf k}}\hat T({\bf
p},{\bf k})\hat{\rm G}^{r,a,<}_{\bf k}\hat T^+({\bf p},{\bf
k}).\label{SE}$$
In this paper, we restrict our considerations to the linear response regime. In connection with this, all the functions, such as the nonequilibrium Green’s functions, self-energies and distribution, can be expressed as sums of two terms: $ A=
A_0+A_1$, with $ A$ as the Green’s functions, self-energies or distribution function. $A_0$ and $ A_1$, respectively, are the unperturbed part and the linear electric field part of $A$. In this way, the kinetic equation for the linear electric field part of the distribution, $\hat {\rho}_1({\bf p})$, can be written as $$e{\bf E}\cdot \nabla_{\bf p}\hat \rho_0({\bf p})-e{\bf E}\cdot
[\hat \rho_0({\bf p}), U_{\bf p}^+\nabla_{\bf p} U_{\bf p}]
+i[\hat h_0({\bf p}),\hat \rho_1({\bf p})]=-\hat
I^{(1)},\label{EQ11}$$ with $\hat I^{(1)}$ as the linear electric field part of the collision term $\hat I$: $$\begin{aligned}
\hat I^{(1)}&=& \int \frac{d\omega}{2\pi}\left [{\hat
\Sigma}^r_{1\bf p}{\hat {\rm G}}^<_{0\bf p}+{\hat \Sigma}^<_{1\bf
p}{\hat {\rm G}}^a_{0\bf p}- {\hat {\rm G}}^r_{1\bf p} {\hat
\Sigma}^<_{0\bf p}-{\hat {\rm G}}^<_{1\bf
p}{\hat \Sigma}^a_{0\bf p}\right .\nonumber\\
&& \left .+{\hat \Sigma}^r_{0\bf p}{\hat {\rm G}}^<_{1\bf p}+{\hat
\Sigma}^<_{0\bf p}{\hat {\rm G}}^a_{1\bf p}- {\hat {\rm
G}}^r_{0\bf p} {\hat \Sigma}^<_{1\bf p}-{\hat {\rm G}}^<_{0\bf
p}{\hat \Sigma}^a_{1\bf p}\right ].\end{aligned}$$
Further, we employ a two-band generalized Kadanoff-Baym ansatz (GKBA)[@GKBA; @GKBA1] to simplify Eq.(\[EQ11\]). This ansatz, which expresses the lesser Green’s function through the Wigner distribution function, has been proven sufficiently accurate to analyze transport and optical properties in semiconductors.[@Jauho] To first order in the dc field strength, the GKBA reads, $$\hat {\rm G}^<_{1{\bf p}}=-\hat {\rm G}_{0{\bf p}}^r\hat \rho_1({\bf p})+\hat \rho_1({\bf p})\hat {\rm G}_{0{\bf p}}^a
-\hat {\rm G}_{1{\bf p}}^r\hat \rho_0({\bf p})+\hat \rho_0({\bf p})\hat {\rm G}_{1{\bf p}}^a,\label{GKBA1}$$ where the equilibrium distribution, and retarded and advanced Green’s functions are all diagonal matrices: $\hat \rho_0({\bf
p})={\rm diag}[n_{\rm F}(\varepsilon_H(p)),n_{\rm
F}(\varepsilon_L(p)),n_{\rm F}(\varepsilon_L(p)),n_{\rm
F}(\varepsilon_H(p))]$ and $\hat {\rm G}_0^{r,a}({\bf p})={\rm
diag}[(\omega-\varepsilon_H(p)\pm
i\delta)^{-1},(\omega-\varepsilon_L(p)\pm
i\delta)^{-1},(\omega-\varepsilon_L(p)\pm
i\delta)^{-1},(\omega-\varepsilon_H(p)\pm i\delta)^{-1}]$, with the Fermi function $n_{\rm F}(\omega)$. We note that $\hat {\rm
G}_{1{\bf p}}^{r,a}$ in the collision term leads to a collisional broadening of the nonequilibrium distribution. In the present transport study, such collisional broadening plays a secondary role and can be ignored. Based on this, the collision term, $\hat
I^{(1)}$, no longer involves the linear electric field part of the retarded and advanced Green’s functions.
It is obvious that the driving force in Eq.(\[EQ11\]) comprises two components: the first, $e{\bf
E}\cdot\nabla_{\bf p}\hat\rho_0$, is diagonal, while the other, $-e{\bf E}\cdot [\hat \rho_0({\bf p}), U_{\bf p}^+\nabla_{\bf p}
U_{\bf p}]$, has null diagonal elements. In connection with this, we formally split the kinetic equation into two equations with $\hat \rho_1({\bf p})=\hat \rho_1^{I}({\bf p})+\hat
\rho_1^{II}({\bf p})$ as $$e{\bf E}\cdot \nabla_{\bf p} \hat \rho_0({\bf p})+i[\hat h_0({\bf
p}),\hat \rho_1^I({\bf p})]=-\hat I^{(1)},\label{EQ1}$$ $$-e{\bf E}\cdot [\hat \rho_0({\bf p}), U_{\bf p}^+\nabla_{\bf p} U_{\bf p}]
+i[\hat h_0({\bf p}),\hat \rho_1^{II}({\bf p})]=0,\label{EQ2}$$ wherein $\hat \rho_1^I({\bf p})$ and $\hat \rho_1^{II}({\bf p})$ can be approximately determined independently, as discussed below. We note that the solution of Eq.(\[EQ2\]), $\hat
\rho_1^{II}({\bf p})$, is off-diagonal and independent of impurity scattering. The matrix elements of $\hat \rho_1^{I,II}({\bf p})$ will be denoted by $(\hat \rho_1^{I,II})_{\mu\nu}({\bf p})$, and from Eqs.(\[JYX\]) and (\[JYXE\]), we correspondingly write spin-Hall conductivity contributions based on $J_y^x=\left . J_y^x\right |^I+\left . J_y^x\right |^{II}$ as $$\begin{aligned}
(\sigma^I)_{yz}^x=\left . J_y^x\right |^I/E=
\sum_{{\bf p}}{\rm Tr}[{\hat j}_y^x({\bf p}){\hat \rho}_1^I({\bf p})];
\nonumber\\
(\sigma^{II})_{yz}^x=\left . J_y^x\right |^{II}/E=
\sum_{{\bf p}}{\rm Tr}[{\hat j}_y^x({\bf p}){\hat \rho}_1^{II}({\bf p})].\end{aligned}$$
It is evident that the diagonal driving term of Eq.(\[EQ1\]), $e{\bf E}\cdot\nabla_{\bf p}\hat\rho_0$, is free of impurity scattering. Since $[\hat h_0,\hat \rho_1^I({\bf p})]$ is off-diagonal, the diagonal parts of this equation lead to diagonal $\hat \rho_1^I({\bf p})$ elements, $(\hat \rho_1^I)_{\mu\mu}({\bf
p})$ ($\mu=1...4$), of order of $(n_i)^{-1}$ in the impurity density. Substituting these diagonal elements, $(\hat
\rho_1^I)_{\mu\mu}({\bf p})$, into the off-diagonal elements of the scattering term, $\hat I^{(1)}$, and considering the fact that the terms on LHS of the off-diagonal components of Eq.(\[EQ1\]) are proportional to the off-diagonal elements of $\hat \rho_1^I({\bf p})$, we find that the leading order of the off-diagonal elements of $\hat \rho_1^I({\bf p})$ in the impurity-density expansion is of order $(n_i)^0$, [*i.e.*]{} independent of $n_i$. This result implies that, in general, there always exists a contribution to the spin-Hall current which is disorder-related but independent of impurity density within the diffusive regime. On the other hand, the off-diagonal impurity-density-independent $\hat \rho_1^I({\bf p})$ elements, as well as all the nonvanishing elements of $\hat \rho_1^{II}({\bf p})$, make contributions to the scattering term, $\hat I^{(1)}$, which are linear in the impurity density, while the $\hat I^{(1)}$ terms involving diagonal elements, $(\hat \rho_1^I)_{\mu\mu}({\bf p})$, are independent of $n_i$. Hence, the contributions to $\hat I^{(1)}$ from off-diagonal elements of $\hat \rho_1({\bf p})$ can be ignored and $\hat I^{(1)}$ effectively involves only the diagonal elements of the distribution. Correspondingly, Eqs.(\[EQ1\]) and (\[EQ2\]) are approximately independent of each other and can be solved separately.
Disorder-independent spin-Hall effect
-------------------------------------
The disorder-independent spin-Hall current is associated with $\hat \rho_1^{II}({\bf p})$, the solution of Eq.(\[EQ2\]). The nonvanishing elements of this function are given by $$\begin{aligned}
(\hat\rho_1^{II})_{12}({\bf p})&=&-(\hat\rho_1^{II})_{21}({\bf p})
=(\hat\rho_1^{II})_{34}({\bf p})=-(\hat\rho_1^{II})_{43}({\bf p})\nonumber\\
&=&\frac{\sqrt{3}m}{4\gamma_2p^3}ieE\sin\theta_{\bf
p}[f_0^H(p)-f_0^L(p)],\end{aligned}$$ with $f_0^H(p)=n_{\rm F}[\varepsilon_{H}(p)]$ and $f_0^L(p)=n_{\rm
F}[\varepsilon_{L}(p)]$, while its remaining elements, such as $(\hat \rho_1^{II})_{13}({\bf p})$, $(\hat\rho_1^{II})_{24}({\bf
p})$, [*etc.*]{} vanish. Substituting $\hat\rho_1^{II}({\bf p})$ into Eq.(\[JYXE\]), we find that the disorder-independent contribution to intrinsic spin-Hall current, $\left .J_y^x\right
|^{II}$, can be written as $$\left .J_y^x\right
|^{II}=\frac{eE}{6\pi^2}\int_0^\infty[f_0^H(p)-f_0^L(p)]dp.\label{JYX2}$$ This result agrees with that obtained in Ref..
Obviously, the nonvanishing of $\left .J_y^x\right |^{II}$ is associated with the nonzero driving term on the LHS of Eq.(\[EQ2\]), which is just the interband electric dipole moment between the heavy- and light-hole bands. Thus, the disorder-independent spin-Hall effect arises essentially from the polarization process between the two hole bands directly induced by the dc electric field. Such a polarization can also be interpreted as a two-band quantum interference process. It should be noted that this polarization process affects only those off-diagonal $\hat \rho^{II}_1({\bf p})$ elements which describe dc-field induced transitions between hole states in the light- and heavy-hole bands. Of course, such transition processes are not restricted only to hole states in the vicinity of the Fermi surface: contributions arise from all the hole states below the Fermi surface. As a result, the disorder-independent spin-Hall current given by Eq.(\[JYX2\]) is a function of the entire unperturbed equilibrium distribution, $n_{\rm F}(\omega)$, not just of its derivative, $\partial n_{\rm F}(\omega)/\partial \omega$, at the Fermi surface.
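As an added worked step (assuming zero temperature, both bands occupied with a common Fermi energy $\varepsilon_F$, and $\hbar=1$), the Fermi functions in Eq.(\[JYX2\]) reduce to step functions and the integral evaluates to $$\left .J_y^x\right |^{II}=\frac{eE}{6\pi^2}\left(p_F^{H}-p_F^{L}\right),
\qquad p_F^{H,L}=\sqrt{\frac{2m\,\varepsilon_F}{\gamma_1\mp 2\gamma_2}},$$ with $p_F^{H}>p_F^{L}$ because the heavy-hole band has the larger effective mass.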
Disorder-related spin-Hall effect
---------------------------------
To simplify Eq.(\[EQ1\]), we first analyze symmetry relations between the elements of the distribution function $\hat
\rho^I_{1}({\bf p})$ in the self-consistent Born approximation. Since the distribution function is a Hermitian matrix, only the independent elements $(\hat \rho^I_{1})_{\mu\nu}({\bf p})$ with $\mu,\nu=1...4$ and $\mu\le \nu$ need to be considered. We know that $(\hat \rho^I_1)_{11}({\bf p})$ and $(\hat
\rho^I_1)_{44}({\bf p})$ describe the distributions of the heavy holes having spins $S_z=3/2$ and $S_z=-3/2$, respectively. In equilibrium, heavy hole populations in degenerate states with $S_z=3/2$ and $S_z=-3/2$ distribute equally. Out of equilibrium, the dc electric field action on these hole populations is also the same. Hence, the nonequilibrium distribution of the heavy holes with $S_z=3/2$ is the same as that of the heavy holes with $S_z=-3/2$, [*i.e.*]{} $(\hat \rho_{1}^{I})_{11}({\bf p})=(\hat
\rho_{1}^{I})_{44}({\bf p})$. An analogous relation for light holes is also expected to be valid: $(\hat \rho_{1}^{I})_{22}({\bf
p})=(\hat \rho_{1}^{I})_{33}({\bf p})$. Indeed, substituting these symmetrically related diagonal elements of the distribution $\hat
\rho^I_1({\bf p})$ into the scattering term, we find $\hat
I^{(1)}_{11}=\hat I^{(1)}_{44}$, $\hat I^{(1)}_{22}=\hat
I^{(1)}_{33}$, and $\hat I^{(1)}_{23}=\hat I^{(1)}_{32}=\hat
I^{(1)}_{14}=\hat I^{(1)}_{41}=0$, which are consistent with the elements on the LHS of Eq.(\[EQ1\]). As another consequence of these relations ($(\hat \rho_{1}^{I})_{11}({\bf p})=(\hat
\rho_{1}^{I})_{44}({\bf p})$ and $(\hat \rho_{1}^{I})_{22}({\bf
p})=(\hat \rho_{1}^{I})_{33}({\bf p})$), we also obtain symmetry relations between the remaining off-diagonal elements of $\hat
I^{(1)}$: $\hat I^{(1)}_{12}=-\hat I^{(1)}_{34}$ and $\hat
I^{(1)}_{13}=\hat I^{(1)}_{24}$, which result in symmetry relations for the $\hat \rho_{1}^{I}({\bf p})$ elements as: $(\hat
\rho_{1}^{I})_{12}({\bf p})=(\hat \rho_{1}^{I})_{34}({\bf p})$ and $(\hat \rho_{1}^{I})_{13}({\bf p})=-(\hat \rho_{1}^{I})_{24}({\bf
p})$. Hence, to determine the disorder-related spin-Hall effect, one only needs to evaluate the diagonal elements, $(\hat
\rho_{1}^{I})_{11}({\bf p})$ and $(\hat \rho_{1}^{I})_{22}({\bf
p})$, and the off-diagonal elements, $(\hat
\rho_{1}^{I})_{12}({\bf p})$ and $(\hat \rho_{1}^{I})_{13}({\bf
p})$.
From Eq.(\[EQ1\]), it follows that the diagonal $\hat
\rho_1^I({\bf p})$ elements are determined by the integral equation $$\begin{aligned}
-e{\bf E}\cdot {\bf \nabla}_{\bf p}n_{\rm F}[\varepsilon_\mu (p)]&=&\pi\sum_{\bf k}|V({\bf p}-{\bf k})|^2
\{a_1({\bf p},{\bf k})[(\hat \rho_1^I)_{\mu\mu}({\bf p})\nonumber\\
&&-(\hat \rho_1^I)_{\mu\mu}({\bf k})]\Delta_{\mu\mu}+a_2({\bf p},{\bf k})[(\hat\rho_1^I)_{\mu\mu}({\bf p})
\nonumber\\
&&-(\hat\rho_1^I)_{\bar \mu\bar \mu}({\bf
k})]\Delta_{\mu\bar\mu}\}.\label{KEE}\end{aligned}$$ Here, $\mu=1,2$, respectively, correspond to the heavy- and light-hole bands: $\varepsilon_1(p)\equiv \varepsilon_H(p)$, $\varepsilon_2(p)\equiv \varepsilon_L(p)$, $\bar \mu=3-\mu$, $\Delta_{\mu\nu}=\delta [\varepsilon_\mu (p)-\varepsilon_\nu
(k)]$. The factors $a_1({\bf p},{\bf k})$ and $a_2({\bf p},{\bf
k})$ are associated only with the momentum angles: $$\begin{aligned}
a_1({\bf p},{\bf k})&=&\frac 14 \{2+6\cos^2 \phi_{pk}[\sin^2 \theta_{\bf p}-\cos^2 \theta_{\bf k}]
\nonumber\\
&&+6\cos^2\theta_{\bf p}\cos^2\theta_{\bf k}[1+\cos^2 \phi_{pk}]\nonumber\\
&&+3\cos \phi_{pk}\cos(2\theta_{\bf p})\cos(2\theta_{\bf k})\},\end{aligned}$$ $$a_2({\bf p},{\bf k})=2-a_1({\bf p},{\bf k}),$$ where $\phi_{pk}\equiv \phi_{\bf p}-\phi_{\bf k}$. From Eq.(\[KEE\]), we see that the dependence of $(\rho_1^I)_{\mu\mu}({\bf p})$ on the momentum angle $\phi_{\bf p}$ can be removed by redefining the angular integration variable as $\phi_{{\bf k}}\rightarrow \phi_{pk}=\phi_{\bf p}-\phi_{\bf k}$: the left-hand side of Eq.(\[KEE\]) does not depend on $\phi_{\bf p}$, and the potential $V({\bf p}-{\bf k})$, as well as the factors $a_1({\bf p},{\bf k})$ and $a_2({\bf p},{\bf k})$, depend on $\phi_{\bf p}$ and $\phi_{\bf k}$ only through the combination $\phi_{pk}$.
Analyzing the components of the scattering term in the kinetic equation for the off-diagonal elements, $(\hat \rho_1^I)_{12}({\bf
p})$ and $(\hat \rho_1^I)_{13}({\bf p})$, we find that these elements of the distribution $\hat \rho_1^I({\bf p})$ are similarly effectively independent of $\phi_{\bf p}$. In connection with this, contributions to the disorder-related spin-Hall current, $\left .J_y^x\right |^I$, from $(\hat \rho_1^I)_{13}({\bf
p})$ and ${\rm Re} [(\hat \rho_1^I)_{12}({\bf p})]$ vanish under the $\phi_{\bf p}$-integration in Eq.(\[JYXE\]), and only the imaginary part of $(\hat \rho_1^I)_{12}({\bf p})$ makes a nonvanishing contribution to $\left .J_y^x\right |^I$. Hence, $$\left .J_y^x\right|^I=\frac{8\sqrt{3}\gamma_2}{m}\sum_{\bf p}p\left \{\cos^2\phi_{\bf p}\sin \theta_{\bf p} {\rm Im}[(\hat \rho_{1}^I)_{12}({\bf p})]
\right \},\label{JYX1}$$ with $$\begin{aligned}
{\rm Im}\left [(\hat \rho_1^I)_{12}({\bf p})\right ]&=&\frac{\sqrt{3}\pi m}{4\gamma_2 p^2}\sum_{{\bf k},\mu=1,2}
|V({\bf p}-{\bf k})|^2a_3({\bf p},{\bf k})\nonumber\\
&&\times(-1)^\mu\{\Delta_{\mu\mu}[(\hat \rho_1^I)_{\mu\mu}({\bf p})-(\hat \rho_1^I)_{\mu\mu}({\bf k})]
\nonumber\\
&&-\Delta_{\mu\bar\mu}[(\hat \rho_1^I)_{\mu\mu}({\bf p})-(\hat \rho_1^I)_{\bar\mu\bar\mu}({\bf k})]\},\label{KEE1}\end{aligned}$$ and $$\begin{aligned}
a_3({\bf p},{\bf k})&=&-\frac 12 \{\sin(2\theta_{\bf p})[\cos^2\theta_{\bf k}
-\sin^2\theta_{\bf k}\cos^2\phi_{pk}]
\nonumber\\
&&+\sin(2\theta_{\bf k})\cos\phi_{pk}[1-2\cos^2\theta_{\bf p}]\}.\end{aligned}$$
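For implementation purposes, the angular factors defined above can be transcribed directly. The following Python functions are a straightforward transcription of $a_1$, $a_2$ and $a_3$ (the function names are ours, introduced for illustration).

```python
# Direct transcription of the angular factors a_1, a_2 and a_3 entering the
# scattering kernel; angles are in radians and phi_pk = phi_p - phi_k.
import numpy as np

def a1(theta_p, theta_k, phi_pk):
    """Factor a_1(p, k)."""
    return 0.25 * (2.0
                   + 6.0 * np.cos(phi_pk)**2 * (np.sin(theta_p)**2 - np.cos(theta_k)**2)
                   + 6.0 * np.cos(theta_p)**2 * np.cos(theta_k)**2 * (1.0 + np.cos(phi_pk)**2)
                   + 3.0 * np.cos(phi_pk) * np.cos(2.0 * theta_p) * np.cos(2.0 * theta_k))

def a2(theta_p, theta_k, phi_pk):
    """Factor a_2(p, k) = 2 - a_1(p, k)."""
    return 2.0 - a1(theta_p, theta_k, phi_pk)

def a3(theta_p, theta_k, phi_pk):
    """Factor a_3(p, k) entering Im[(rho_1^I)_{12}]."""
    return -0.5 * (np.sin(2.0 * theta_p) * (np.cos(theta_k)**2
                                            - np.sin(theta_k)**2 * np.cos(phi_pk)**2)
                   + np.sin(2.0 * theta_k) * np.cos(phi_pk) * (1.0 - 2.0 * np.cos(theta_p)**2))
```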
From Eqs.(\[KEE\]) and (\[KEE1\]), we see that $\left
.J_y^x\right |^I$ is independent of impurity density. In contrast to the disorder-independent case, the disorder-related spin-Hall current involves only the derivative of the equilibrium distribution function, [*i.e.*]{} $\partial n_{\rm
F}(\omega)/\partial \omega$. This implies that $\left .J_y^x\right
|^I$ consists of contributions arising only from hole states in the vicinity of the Fermi surface, in other words, from hole states involved in longitudinal transport. Physically, the holes participating in transport experience impurity scattering, producing diagonal $\hat \rho^I_1({\bf p})$ elements of order $n_i^{-1}$. Moreover, the scattering of these perturbed holes by impurities also gives rise to an interband polarization, which no longer depends on impurity density within the diffusive regime. In such a polarization process the disorder plays only an intermediary role. It should be noted that $\left
.J_y^x\right |^I$ generally depends on the form of the hole-impurity scattering potential, notwithstanding its independence of impurity density in the diffusive regime.
The fact that the total spin-Hall current, $J_y^x= \left
.J_y^x\right |^I+\left .J_y^x\right |^{II}$, consists of two parts associated with hole states below and near the Fermi surface, respectively, is similar to the well-known result of Středa[@Streda] in the context of the 2D charge Hall effect. In 2D electron systems in a normal magnetic field, the off-diagonal conductivity usually arises from two terms, one of which is due to electron states near the Fermi energy and the other is related to the contribution of all occupied electron states below the Fermi energy. A similar picture has also recently emerged in studies of the anomalous Hall effect.[@AHE1; @AHE2]
Results and discussions
=======================
To compare our results with the short-range result presented in Ref., we first consider a short-range hole-impurity scattering potential described by: $V({\bf
p}-{\bf k})\equiv u$, with $u$ as a constant. Substituting Eq.(\[KEE1\]) into Eq.(\[JYXE\]) and performing integrations with respect to the angles of $\bf p$ or $\bf k$, respectively, for terms involving $(\hat \rho_1^I)_{\mu\mu} ({\bf
k})$ or $(\hat \rho_1^I)_{\mu\mu} ({\bf p})$, we find that the contribution of short-range disorder to the spin-Hall current vanishes, [*i.e.*]{} $\left. J_y^x\right |^I=0$. This implies that for short-range hole-impurity collisions, the total spin-Hall current is just the disorder-independent one, $J_y^x=\left. J_y^x
\right |^{II}$. This result agrees with that obtained in Ref..
Furthermore, we perform a numerical calculation to investigate the effect of long-range hole-impurity collisions on the spin-Hall current in a GaAs bulk semiconductor. The long-range scattering is described by a screened Coulombic impurity potential $V(p)$: $V(p)=e^2/(\varepsilon_0\varepsilon) [p^2+1/d^2_D]^{-1}$, where $\varepsilon$ is the static dielectric constant.[@Grill] $d_D$ is a Thomas-Fermi-Debye type screening length: $d_D^2=\pi^2\varepsilon_0\varepsilon/(e^2\sqrt{2m^3E_{F}})2^{-1/3}
[(\gamma_1+2\gamma_2)^{-3/2}+(\gamma_1-2\gamma_2)^{-3/2}]^{-2/3}$, with $E_F=(3\pi^2 N_p/2)^{2/3}/(2m)$. The material parameters $\gamma_1$ and $\gamma_2$ are chosen to be $6.85$ and $2.5$, respectively.[@GAAS] In our calculation, the momentum integration is performed using a Gauss-Legendre quadrature scheme.
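A minimal sketch of these two ingredients is given below; the screening length $d_D$, the dielectric constant and the momentum cutoff are treated as assumed input parameters (illustrative values, not evaluated from the expressions above).

```python
# Sketch of the long-range calculation ingredients: the screened Coulomb
# potential V(q) and a Gauss-Legendre grid for the momentum integration.
# d_D, eps_r and p_max below are assumed, illustrative inputs.
import numpy as np

def V_screened(q, d_D, eps_r, eps0=1.0, e=1.0):
    """Screened Coulomb impurity potential V(q) = e^2/(eps0*eps_r) / (q^2 + 1/d_D^2)."""
    return e**2 / (eps0 * eps_r) / (q**2 + 1.0 / d_D**2)

def gauss_legendre_grid(p_max, n_nodes=64):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, p_max]."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    p = 0.5 * p_max * (x + 1.0)
    wp = 0.5 * p_max * w
    return p, wp

# Example: integrate a smooth function of momentum on the mapped grid.
p, wp = gauss_legendre_grid(p_max=5.0)
integral = np.sum(wp * p**2 * np.exp(-p))   # approximates Int_0^5 p^2 e^{-p} dp
```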
In the present paper, we address the spin-Hall effect at zero temperature, $T=0$. In this case, the disorder-independent spin-Hall current can be obtained analytically from Eq.(\[JYX2\]): $\left .J_y^x\right
|^{II}=eE[k_F^H-k_F^L]/(6\pi^2)$, with $k_F^H$ and $k_F^L$ as the Fermi momenta for heavy- and light-hole bands, respectively. In order to investigate the disorder-related spin-Hall effect, we need to compute the distribution function $\hat \rho_1^I({\bf p})$ at the Fermi surface. In this calculation, we employ a “singular value decomposition” method[@NR] to solve the integral equation, Eq.(\[KEE\]), for the diagonal $\hat \rho_1^I({\bf
p})$ elements. The obtained diagonal elements are then employed to determine ${\rm Im}[(\rho_1^I)_{12}({\bf p})]$ using Eq.(\[KEE1\]). Following that, we obtain the disorder-related spin-Hall current from Eq.(\[JYX1\]), performing the momentum integration.
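Schematically, the singular-value-decomposition solution of Eq.(\[KEE\]) amounts to discretizing the kinetic equation on an angular grid and applying an SVD pseudo-inverse to the resulting, possibly ill-conditioned, linear system. A generic sketch follows; the kernel and the driving term below are placeholders of our own, not the discretized collision operator of the paper.

```python
# Generic sketch of an SVD-based solution of a discretized linear integral
# equation  b_i = sum_j K_ij x_j.  The kernel and driving term are PLACEHOLDERS
# standing in for the discretized collision operator of Eq. (KEE).
import numpy as np

n = 50
theta = np.linspace(0.0, np.pi, n)                    # polar-angle grid

b = np.cos(theta)                                      # placeholder driving term
K = np.eye(n) - 0.5 / n * np.cos(theta[:, None] - theta[None, :])  # placeholder kernel

# Solve K x = b by singular value decomposition, discarding tiny singular
# values to regularize a possibly ill-conditioned system.
U, s, Vt = np.linalg.svd(K)
cutoff = 1e-10 * s.max()
s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
x = Vt.T @ (s_inv * (U.T @ b))                         # pseudo-inverse solution
```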
![Hole-density dependencies of (a) total $\sigma_{yz}^x$ and $\mu_{yz}^x$, and (b) disorder-independent $(\sigma^{II})_{yz}^x$ and $(\mu^{II})_{yz}^x$, in a bulk GaAs semiconductor. The material parameters for GaAs are: $\gamma_1=6.85$ and $\gamma_2=2.5$. The lattice temperature is $T=0$K.[]{data-label="fig1"}](fig1.eps){width="45.00000%"}
In Fig.1, the calculated total and disorder-independent spin-Hall conductivities, $\sigma_{yz}^x=J_{y}^x/E$ and $(\sigma^{II})_{yz}^x=\left .J_{y}^x\right |^{II}/E$, and the total and disorder-independent spin-Hall mobilities, $\mu_{yz}^x=\sigma_{yz}^x/N_{p}$ and $(\mu^{II})_{yz}^x
=(\sigma^{II})_{yz}^x/N_{p}$, are shown as functions of the hole density. The spin-Hall mobility, analogous to the mobility of charge transport, characterizes the average mobility of a single spin driven by the external field. This quantity has the same units in 2D and 3D systems.
From Fig.1, we see that, with increasing hole density, the total spin-Hall conductivity first increases and then decreases and even becomes negative as the hole density becomes larger than $N_{pc}=3\times 10^{24}$m$^{-3}$. This behavior of the hole-density dependence of total spin-Hall conductivity is the result of a competition between the disorder-independent and disorder-related processes. The contributions to spin-Hall conductivity from these two processes always have opposite signs and their absolute values increase with increasing hole density. Considering total spin-Hall conductivity, the disorder-related part, $(\sigma^{I})_{yz}^x$, is dominant for high hole density, while $(\sigma^{II})_{yz}^x$ is important in the low hole-density regime. Notwithstanding this hole-density dependence of $\sigma_{yz}^x$, the total spin-Hall mobility, $\mu_{yz}^x$, as well as the disorder-independent one, monotonically decreases with increasing hole density.
It should be noted that the total spin-Hall mobility in bulk systems has the same order of magnitude as that in 2D hole systems. We know that the spin-Hall conductivity in 2D hole systems is of order of $e/\pi$.[@Liu3] For a typical 2D hole density, $n_p^{(2D)}=1\times 10^{12}$cm$^{-2}$, the corresponding spin-Hall mobility is about $0.05$m$^2$/Vs.
In the present paper, we have ignored the effect of collisional broadening on spin-Hall current. Since $\left . J_y^x\right |^I$ is associated only with the hole states in the vicinity of the Fermi surface, the neglect of broadening in the disorder-related spin-Hall current is valid for $\varepsilon_{F} \tau> 1$ ($\varepsilon_F$ is the Fermi energy and $\tau$ is the larger of the relaxation times for holes in the different bands at the Fermi surface, $\tau_L(\varepsilon_F)$ and $\tau_H(\varepsilon_F)$: $\tau=\max [\tau_L(\varepsilon_F),\tau_H(\varepsilon_F)]$). This condition coincides with the usual restriction on transport in the diffusive regime and is satisfied for $p$-type bulk GaAs with mobility approximately larger than $1$m$^2$/Vs (for $N_{\bf
p}>5\times 10^{22}$m$^{-3}$). On the other hand, the disorder-independent spin-Hall conductivity involves contributions from all hole states in the Fermi sea and hence it may be strongly affected by collisional broadening. To estimate the broadening effect on the disorder-independent SHE, we add an imaginary part to $\hat h_0({\bf p})$ and use $\hat h_0({\bf p})+i\hat
\gamma({\bf p})$ instead of $\hat h_0({\bf p})$ in Eq.(\[EQ2\]) ( $\hat \gamma({\bf p})$ is a diagonal matrix describing the broadening: $(\hat \gamma)_{11}({\bf p})=(\hat
\gamma)_{44}({\bf p})=1/2\tau_H(\varepsilon_H(p))$ and $(\hat
\gamma)_{22}({\bf p})=(\hat \gamma)_{33}({\bf
p})=1/2\tau_L(\varepsilon_L(p))$). In this way, $\left.
J_y^x\right |^{II}$ takes a form similar to Eq.(\[JYX2\]) but with an additional factor, $(2\gamma_2 p^2)^2/\{[2\gamma_2
p^2]^2+[1/2\tau_H(\varepsilon_H(p))-1/2\tau_L(\varepsilon_L(p))]^2\}$, in the momentum integrand. Performing a numerical calculation, we find that, in the studied regime of hole density, the effect of collisional broadening on the disorder-independent spin-Hall current is less than 1% for $p$-type bulk GaAs semiconductors with mobility approximately larger than 5m$^2$/Vs. Thus, in such systems, the effect of collisional broadening on the total spin-Hall conductivity can be ignored. It should be noted that in our calculations, we computed $\tau_{L,H}(\varepsilon)$ by considering short-range hole-impurity scattering: $1/\tau_{L,H}(\varepsilon)=2\pi n_iu^2\nu_{L,H}(\varepsilon)$ with the densities of hole states in the light- and heavy-hole bands taken as $\nu_{L,H}(\varepsilon)=2\sum_{\bf
p}\delta(\varepsilon-\varepsilon_{L,H}(p))$. The quantity $n_iu^2$ is determined from the mobility of the system: $\mu=e[N_{p}^L\tau_L(\varepsilon_F)/m_L+N_p^H\tau_H(\varepsilon_F)/m_H]/N_p$, where $m_L=m/(\gamma_1+2\gamma_2)$ and $m_H=m/(\gamma_1-2\gamma_2)$ are the effective masses of holes and $N_p^L/N_p^H=[(\gamma_1-2\gamma_2)/(\gamma_1+2\gamma_2)]^{3/2}$ with $N_p^L$ and $N_p^H$ being the hole densities in the light- and heavy-hole bands, respectively.
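A minimal sketch of the broadening suppression factor quoted above is given below; the relaxation times are treated as assumed inputs (in the text they follow from short-range scattering and the mobility), and the interband scale $2\gamma_2 p^2$ is written in units with $m=1$.

```python
# Collisional-broadening suppression factor quoted in the text:
#   (2 gamma2 p^2)^2 / [ (2 gamma2 p^2)^2 + (1/2tau_H - 1/2tau_L)^2 ],
# with tau_H and tau_L supplied as assumed inputs (illustrative values).
def broadening_factor(p, gamma2, tau_H, tau_L):
    splitting = 2.0 * gamma2 * p**2          # 2*gamma2*p^2 (interband scale, m = 1)
    dgamma = 0.5 / tau_H - 0.5 / tau_L       # difference of the two broadenings
    return splitting**2 / (splitting**2 + dgamma**2)

factor = broadening_factor(p=1.0, gamma2=2.5, tau_H=50.0, tau_L=20.0)
```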
On the other hand, in our considerations, the impurities are taken to be so dense that we can use a statistical average over the impurity configuration. This requires that $L_D<L$ ($L$ is the characteristic size of the sample and $L_D$ is the larger of the diffusion lengths of holes in the light- and heavy-hole bands). Failing this, the behavior of the holes would become ballistic, with transport properties depending on the specific impurity configuration.
Conclusions
===========
We have employed a nonequilibrium Green’s function kinetic equation approach to investigate disorder effects on the spin-Hall current in the diffusive regime in $p$-type bulk Luttinger semiconductors. Long-range hole-impurity scattering has been considered within the framework of the self-consistent Born approximation. We have found that, in contrast to the null effect of short-range disorder on the spin-Hall current, long-range scattering produces a nonvanishing contribution to the spin-Hall current, independent of impurity density in the diffusive regime. This contribution has its sign opposite to that of the disorder-independent one, leading to a sign change of the total spin-Hall current as the hole density varies. We also made clear that the disorder-independent spin-Hall effect arises from a dc-field-induced polarization associated with all hole states in the Fermi sea, while the disorder-related one is produced by a disorder-mediated polarization and relates to only those hole states in the vicinity of the Fermi surface. The numerical calculation indicates that with increasing hole density, the total spin-Hall mobility monotonically decreases, whereas the spin-Hall conductivity first increases and then falls.
In addition to $J_y^x$, we also examined other components of the spin current. We found that the previously discovered “basic spintronics relation”,[@Zhang] which relates the $i$th component of the spin current along the direction $j$, $J_j^i$, and the applied electric field, $E_k$, by $J_j^i=\sigma_s\epsilon_{ijk}E_k$ with $\epsilon_{ijk}$ as a totally antisymmetric tensor, still holds in the presence of spin-conserving hole-impurity scattering.
This work was supported by the Department of Defense through the DURINT program administered by the US Army Research Office, DAAD Grant No. 19-01-1-0592, and by projects of the National Science Foundation of China and the Shanghai Municipal Commission of Science and Technology.
M. I. Dyakonov and V. I. Perel, Phys. Lett. [**35A**]{}, 459 (1971). J. E. Hirsch, Phys. Rev. Lett. [**83**]{}, 1834 (1999). S. Murakami, N. Nagaosa, and S. C. Zhang, Science [**301**]{}, 1348 (2003). J. Sinova, D. Culcer, Q. Niu, N. A. Sinitsyn, T. Jungwirth, and A. H. MacDonald, Phys. Rev. Lett. [**92**]{}, 126603 (2004). Y. K. Kato, R. C. Myers, A. C. Gossard, D. D. Awschalom, Science [**306**]{}, 1910 (2004). J. Wunderlich, B. Kaestner, J. Sinova, and T. Jungwirth, Phys. Rev. Lett. [**94**]{}, 047204 (2005). J. Schliemann and D. Loss, Phys. Rev. B [**69**]{}, 165315 (2004). K. Nomura, J. Sinova, T. Jungwirth, Q. Niu, and A. H. MacDonald, Phys. Rev. B [**71**]{}, 041304(R) (2005). O. Chalaev and D. Loss, Phys. Rev. B [**71**]{}, 245318 (2005). O. V. Dimitrova, Phys. Rev. B [**71**]{}, 245327 (2005). J. I. Inoue, G. E. W. Bauer, and L. W. Molenkamp, Phys. Rev. B [**70**]{}, 041303(R) (2004). R. Raimondi and P. Schwab, Phys. Rev. B [**71**]{}, 033311 (2005). A. Khaetskii, cond-mat/0408136 (unpublished). E. G. Mishchenko, A. V. Shytov, and B. I. Halperin, Phys. Rev. Lett. [**93**]{}, 226602 (2004). S. Y. Liu and X. L. Lei, cond-mat/0411629 (unpublished). N. Sugimoto, S. Onoda, S. Murakami, and N. Nagaosa, cond-mat/0503475. S. Y. Liu, X. L. Lei, and N. J. M. Horing, Phys. Rev. B [**73**]{}, 035323(2006). B. A. Bernevig and S. C. Zhang, Phys. Rev. Lett. [**95**]{}, 016801 (2005). S. Y. Liu and X. L. Lei, Phys. Rev. B [**72**]{}, 155314 (2005). A. V. Shytov, E. G. Mishchenko, and B. I. Halperin, cond-mat/0509702 (unpublished). S. Murakami, Phys. Rev. B [**69**]{}, 241202(R) (2004). W. Q. Chen, Z. Y. Weng, and D. N. Sheng, Phys. Rev. Lett. [**95**]{}, 086605 (2005). J. M. Luttinger, Phys. Rev. [**102**]{}, 1030 (1956). S. Murakami, N. Nagaosa,and S. C. Zhang, Phys. Rev. B [**69**]{}, 235206 (2004). For kinetic equations of spin-orbit-coupled systems in a spin-basis representation, see also: M. Q. Weng and M. W. Wu, Phys. Rev. B [**66**]{}, 235109 (2002), J. Appl. Phys. [**93**]{}, 410 (2003); E. G. Mishchenko and B. I. Halperin, Phys. Rev. B [**68**]{}, 045317 (2003). P. Lipavský, V. Špička, and B. Velicky, Phys. Rev. B [**34**]{}, 6933 (1986). H. Haug, Phys. Status Solidi (b) [**173**]{}, 139 (1992). H. Haug and A.-P. Jauho, [*Quantum Kinetics in Transport and Optics of Semiconductors*]{} (Springer, 1996). P. Středa, J. Phys. C [**15**]{}, L717 (1982). V. K. Dugaev, P. Bruno, M. Taillefumier, B. Canals, and C. Lacroix, Phys. Rev. B [**71**]{}, 224423 (2005). S. Y. Liu and X. L. Lei, Phys. Rev. B [**72**]{}, 195329 (2005). R. Grill, Phys. Rev. B [**46**]{}, 2092 (1992). A. Dargys and J. Kundrotas, [*Handbook on Physical Properties of Ge, Si, GaAs, and InP*]{} (Science and Encyclopedia Publishers, Vilnius, 1994). W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, [*Numerical Recipes: The Art of Scientific Computing*]{} (Cambridge University Press, Cambridge, England, 1986).
---
author:
- 'Denise C. Gabuzda, Sebastian Knuettel,'
- Annalisa Bonafede
date: 'Received ; accepted '
title: 'Evidence for a Toroidal Magnetic-Field Component in 5C4.114 on Kiloparsec Scales'
---
[ A monotonic, statistically significant gradient in the observed Faraday Rotation Measure (RM) across the jet of an Active Galactic Nucleus (AGN) reflects a corresponding gradient in the electron density and/or line-of-sight magnetic (B) field in the region of Faraday rotation. For this reason, such gradients may indicate the presence of a toroidal B field component, possibly associated with a helical jet B field. Although transverse RM gradients have been reported across a number of parsec-scale AGN jets, the same is not true on kiloparsec scales, suggesting that other (e.g. random) magnetic-field components usually dominate on these larger scales. ]{} [We wished to identify clear candidates for monotonic, transverse RM gradients across AGN jet and lobe structures on scales larger than those probed thus far, and estimate their statistical significances. ]{} [We identified an extended, monotonic transverse Faraday rotation gradient across the Northern lobe of a previously published Very Large Array (kiloparsec-scale) RM image of 5C4.114. We reanalyzed these VLA data in order to determine the significance of this RM gradient. ]{} [ The RM gradient across the Northern kiloparsec-scale lobe structure of 5C4.114 has a statistical significance of about $4\sigma$. There is also a somewhat less prominent monotonic transverse Faraday rotation gradient across the Southern jet/lobe (narrower range of distances from the core, significance $\simeq 3\sigma$). Other parts of the Faraday Rotation distribution observed across the source are patchy and show no obvious order. ]{} [ This suggests that we are observing a random RM component associated with the foreground material in the cluster in which the radio source is located and through which it is viewed, superposed on a more ordered RM component that arises in the immediate vicinity of the AGN jets. We interpret the transverse RM gradient as reflecting the systematic variations of the line-of-sight component of a helical or toroidal B field associated with the jets of 5C4.114. These results suggest that the helical field that arises due to the joint action of the rotation of the central black hole and its accretion disc and the jet outflow can survive to distances of thousands of parsec from the central engine. ]{}
Introduction
============
The radio emission of radio galaxies and Active Galactic Nuclei (AGNs) is synchrotron emission, which can be linearly polarized up to about 75% in optically thin regions, where the polarization angle $\chi$ is orthogonal to the projection of the magnetic field [**B**]{} onto the plane of the sky (Pacholczyk 1970). Linear polarization measurements thus provide direct information about both the degree of order and the direction of the [**B**]{} field giving rise to the observed synchrotron radiation.
Multi-frequency interferometric polarization observations also provide high-resolution information about the distribution of the Faraday rotation of the observed polarization angles arising between the source and observer. When the Faraday rotation occurs outside the emitting region in regions of non-relativistic plasma, the amount of rotation is given by $$\begin{aligned}
\chi_{obs} - \chi_o =
\frac{e^3\lambda^{2}}{8\pi^2\epsilon_om^2c^3}\int n_{e}
{\mathbf B}\cdot d{\mathbf l} \equiv RM\lambda^{2}\end{aligned}$$ where $\chi_{obs}$ and $\chi_o$ are the observed and intrinsic polarization angles, respectively, $-e$ and $m$ are the charge and mass of the particles giving rise to the Faraday rotation, usually taken to be electrons, $c$ is the speed of light, $n_{e}$ is the density of the Faraday-rotating electrons, $\mathbf{B} \cdot
d\mathbf{l}$ is an element of the line-of-sight magnetic field, $\lambda$ is the observing wavelength, and RM (the coefficient of $\lambda^2$) is the Rotation Measure (e.g., Burn 1966). Simultaneous multifrequency observations thus allow the determination of both the RM, which carries information about the electron density and [**B**]{} field in the region of Faraday rotation, and $\chi_o$, which carries information about the intrinsic [**B**]{}-field geometry associated with the synchrotron source.
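For concreteness, the RM and the intrinsic angle follow from a weighted linear fit of $\chi$ against $\lambda^2$. A minimal sketch is given below: the frequencies are those of the VLA data discussed later, while the polarization angles and uncertainties are invented for illustration, and the $n\pi$ ambiguity is assumed to be already resolved.

```python
# Minimal sketch of a rotation-measure fit: a weighted linear fit of the
# observed polarization angles chi_obs against lambda^2, giving RM (slope)
# and the intrinsic angle chi_0 (intercept).  Angles and errors are invented.
import numpy as np

c = 2.998e8                                        # speed of light [m/s]
freq = np.array([1.365e9, 1.516e9, 4.535e9, 4.935e9])   # observing frequencies [Hz]
lam2 = (c / freq)**2                               # lambda^2 [m^2]

chi = np.array([1.10, 0.95, 0.18, 0.15])           # observed EVPAs [rad] (invented)
sigma_chi = np.array([0.05, 0.05, 0.02, 0.02])     # angle uncertainties [rad] (invented)

# Weighted least-squares fit chi = chi_0 + RM * lambda^2
w = 1.0 / sigma_chi**2
A = np.vstack([np.ones_like(lam2), lam2]).T
cov = np.linalg.inv(A.T @ (w[:, None] * A))
chi0, RM = cov @ (A.T @ (w * chi))
sigma_RM = np.sqrt(cov[1, 1])
print(f"RM = {RM:.1f} +/- {sigma_RM:.1f} rad/m^2, chi_0 = {chi0:.2f} rad")
```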
Systematic gradients in the Faraday rotation have been reported across the parsec-scale jets of a number of AGN, interpreted as reflecting the systematic change in the line-of-sight component of a toroidal or helical jet [**B**]{} field across the jet (e.g., Hovatta et al. 2012, Mahmud et al. 2013, and Gabuzda et al. 2014b, 2015). Such fields would come about in a natural way as a result of the “winding up” of an initial “seed” field by the differential rotation of the central accreting objects (e.g. Nakamura, Uchida & Hirose 2001; Lovelace et al. 2002).
It is an interesting question whether these ordered fields can survive to larger scales as the jets propagate outward. The presence of a clear transverse RM gradient on scales of more than a hundred parsec from the jet base was reported for 3C 380 (Gabuzda et al. 2014a). However, attempts to search for possible kiloparsec-scale transverse Faraday-Rotation gradients across the jet structures of extragalactic radio sources by eye have yielded only a small number of candidates (Gabuzda et al. 2012). This is consistent with a picture in which random distributions of the [**B**]{} field and electron density in the general vicinity of the radio source (e.g., in the cluster or inter-cluster medium in which the source is located) generally dominate on these large scales. Only one claim of a transverse Faraday-rotation gradient that could be associated with a helical [**B**]{} field present on kiloparsec scales has been made (Kronberg et al. 2011).
We present here a new analysis of observations of the radio galaxy 5C 4.114, whose position is approximately coincident with the location of the Coma cluster of galaxies. The data were originally obtained and analyzed by Bonafede et al. (2010); our analysis demonstrates the presence of monotonic, statistically significant Faraday Rotation gradients across both the Northern lobe and the Southern jet/lobe. The Northern gradient is especially striking. If the gradients are associated with the azimuthal components of helical [**B**]{} fields, their orientation is consistent with the initial [**B**]{} field that is wound up having a dipolar-like configuration.
Observations and Reduction
==========================
The data used for our analysis are precisely the 1.365, 1.516, 4.535 and 4.935 GHz Very Large Array data considered by Bonafede et al. (2010), and the observations and the data calibration and reduction methods used are described in that paper.
We could not use the RM map published by Bonafede et al. (2010) directly, because the associated error map did not take into account the finding of Hovatta et al. (2012) that the uncertainties in the Stokes $Q$ and $U$ fluxes in individual pixels on-source are somewhat higher than the off-source rms fluctuations, potentially increasing the resulting RM uncertainties.
To address this, we imported the final, fully self-calibrated visibility data of Bonafede et al. (2010) into the [AIPS]{} package, then used these data to make naturally weighted $I$, $Q$ and $U$ maps at all four frequencies, with matching image sizes, cell sizes and beam parameters specified by hand in the [AIPS]{} task [IMAGR]{}. These images were all convolved with a circular Gaussian beam having a full-width at half-maximum of 1.3$^{\prime\prime}$. We then obtained maps of the polarization angle, $\chi = \frac{1}{2}\arctan(U/Q)$, and used these to construct corresponding maps of the Faraday Rotation Measure (RM) using the [AIPS]{} task [RM]{}. The uncertainties in the polarization angles used to obtain the RM fits were calculated from the uncertainties in $Q$ and $U$, which were estimated using the approach of Hovatta et al. (2012). The output pixels in the RM maps were blanked when the RM uncertainty resulting from the $\chi$ vs. $\lambda^2$ fits exceeded 8 rad/m$^2$.
Results
=======
Fig. 1 presents our 4.535-GHz intensity image of 5C4.114, with the RM image superposed in colour; these essentially reproduce the images in Fig. 12 of Bonafede et al. (2010).
A monotonic gradient in the RM across the Northern radio lobe is clearly visible by eye, highlighted by the upper black arrow in Fig. 1. A less prominent, oppositely directed RM gradient is visible across the Southern jet/lobe, highlighted by the lower black arrow in Fig. 1. Both regions are fairly well resolved, and span two to three beamwidths in the transverse direction. The ordered RM gradient crossing the Northern lobe is quite unusual for the kiloparsec-scale Faraday rotation distributions of radio galaxies and quasars, which tend to be more patchy.
The redshift of 5C4.114 is not known, but no optical identification with either a Coma cluster galaxy or a background galaxy has been found, indicating that the source’s redshift is greater than 0.023 (Bonafede et al. 2010). This implies that the projected distance from the AGN core to the location of the Northern transverse RM gradient is at least 2 kpc (assuming a Hubble constant $H_o = 71$ kms$^{-1}$Mpc$^{-1}$, $\Omega_{\Lambda} =
0.73$, and $\Omega_m = 0.27$), with the projected distance to the Southern gradient being somewhat smaller.
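For reference, the projected linear scale implied by this cosmology at the redshift lower limit $z = 0.023$ can be computed with the astropy package (a sketch; astropy is our choice of tool, not necessarily that used by the authors).

```python
# Projected linear scale at the redshift lower limit z = 0.023, for the
# cosmology quoted in the text (H0 = 71 km/s/Mpc, Omega_m = 0.27, flat).
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=71.0, Om0=0.27)
scale = cosmo.kpc_proper_per_arcmin(0.023).to(u.kpc / u.arcsec)
print(scale)   # roughly 0.5 kpc per arcsec, i.e. about 0.6 kpc per 1.3" beam
```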
Significance of the Transverse RM Gradients
-------------------------------------------
Monotonic transverse RM gradients are observed throughout the regions enclosed by the grey boxes in Fig. 1; each of the points in the right-hand panels of Fig. 1 corresponds to a monotonic transverse RM gradient at some distance from the core. The uncertainties of the RM values were determined using $\chi$ uncertainties estimated in individual pixels using the approach of Hovatta et al. (2012), without including the effect of uncertainty in the EVPA calibration, since this cannot introduce spurious RM gradients (Mahmud et al. 2009, Hovatta et al. 2012). The uncertainty of the difference between the RM values at the two ends of a slice was estimated by adding the uncertainties for the two RM values in quadrature. Comparisons of the RM values at the two ends of the RM slices considered in Fig. 1 indicate that the Northern and Southern transverse gradients have significances reaching $4.2\sigma$ and $3.6\sigma$, respectively.
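A minimal sketch of the significance estimate described above follows; the RM values and uncertainties are invented for illustration.

```python
# Significance of a transverse RM gradient: the difference between the RM
# values at the two ends of a slice, divided by their uncertainties added
# in quadrature (numbers below are invented for illustration).
import numpy as np

rm_end1, sigma1 = -45.0, 6.0     # RM and error at one end of the slice [rad/m^2]
rm_end2, sigma2 = -80.0, 5.5     # RM and error at the other end [rad/m^2]

delta_rm = rm_end2 - rm_end1
sigma_delta = np.hypot(sigma1, sigma2)
print(f"gradient significance: {abs(delta_rm) / sigma_delta:.1f} sigma")
```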
The significance of the transverse RM gradients across the Northern lobe (upper right plot in [**Fig. 1**]{}) is $\simeq 3-4.2\sigma$ throughout nearly the entire region considered, and everywhere exceeds $2\sigma$. This is thus a quite extended and coherent RM structure. The significance of the less prominent transverse RM gradients across the Southern jet/lobe (lower right plot in Fig. 1) is greater than $2\sigma$ throughout most of the region enclosed by the Southern grey box in Fig. 1, and reaches $\simeq 3-3.6\sigma$ in the vicinity of the transverse black arrow in Fig. 1.
Discussion
==========
Ability to Detect the Transverse RM Gradients
---------------------------------------------
It is important to note here that we are interested in establishing whether the transverse RM gradients visible by eye represent a statistically significant, systematic, monotonic change in the observed Faraday RM across the lobe/jet structure. We are not interested in measuring the intrinsic RM values, only in investigating the reality of the observed gradients.
The question of the resolution necessary to reliably detect the presence of a transverse RM gradient has been discussed in the literature in the context of VLBI-scale RM measurements. Taylor & Zavala (2010) had proposed that the reliable detection of a transverse RM gradient required that the observed RM gradient span at least three “resolution elements” (usually taken to mean three beamwidths) across the jet. This was tested by Mahmud et al. (2013) using Monte Carlo simulations based on model core–jet-like sources with transverse RM gradients present across their structures, which had intrinsic jet widths of 1/2, 1/3, 1/5, 1/10 and 1/20 of the beam full-width at half-maximum (FWHM) in the direction across the jet. The resulting simulations show that [*the transverse RM gradients introduced into the model visibility data remained visible in RM maps constructed from realistic “noisy” data using standard techniques, even when the intrinsic width of the jet structure was much smaller than the beam width.*]{}
These Monte Carlo simulations (and also those of Hovatta et al. (2012)) thus directly demonstrate that the three-beamwidth criterion of Taylor & Zavala (2010) is overly restrictive, and that it is not necessary or meaningful to place a limit on the width spanned by an RM gradient in order for it to be reliable: the key criteria are that the gradient be monotonic and that the difference between the RM values at either end be at least $3\sigma$. The counterintuitive result that a transverse RM gradient spanning only one beamwidth can potentially be significant essentially comes about because polarization is a vector quantity, while the intensity is a scalar. Thinking of the polarization as being composed of Stokes $Q$ and $U$, this enhanced sensitivity to closely spaced structures comes about because both $Q$ and $U$ can be positive or negative. Again, we are not speaking here of being able to accurately deconvolve the observed RM profiles to determine the intrinsic transverse RM structure — only of the ability to detect the presence of a systematic transverse RM gradient.
This means that we can consider the presence of the RM gradients detected across the Northern lobe and Southern jet/lobe of 5C4.114, which are monotonic, encompass a range of RM values of at least $3\sigma$, and span $\simeq 2-3$ beamwidths across the jet and $\simeq 0.5-1.5$ beamwidths along the jet, to be reliably detected, although it is not possible to derive the intrinsic (infinite-resolution) values of the gradients. The three-dimensional structures of the emission regions where the RM gradients are observed cannot be determined with certainty, although the general symmetry of the radio structure observed suggests that both the Northern and Southern jet/lobe structures are not very far from the plane of the sky.
Random or Ordered RM Distributions on kpc Scales?
-------------------------------------------------
It is generally believed that the RM distributions across extragalactic radio sources on kiloparsec scales should be dominated by fairly random distributions of the electron density and [**B**]{} field in plasma that is not closely related to the radio source itself, for example, plasma that is associated with the cluster or inter-cluster medium. This view is supported by the fact that most of the observed RM distributions appear irregular and “patchy” (e.g., the RM maps presented by Bonafede et al. (2010) and Govoni et al. (2010)). This was the framework in which the original study of Bonafede et al. (2010), aimed at constraining the Coma cluster magnetic-field strength, was carried out.
Based on their results for the seven FRI radio sources they considered, Bonafede et al. (2010) concluded that the observed RM distributions generally did not originate in the immediate vicinity of the sources, arising instead in intervening material of the Coma cluster. This conclusion was based on statistical analyses of the RMs and their uncertainties for different lines of sight through the cluster, and did not take into consideration possible patterns in the RM distributions for the individual objects studied.
However, the conclusions of Bonafede et al. (2010) for their sample as a whole and our own conclusions specifically for 5C4.114 need not be in contradiction. We suggest that, indeed, the observed RM distributions of extragalactic radio sources on kiloparsec scales are usually (though not always) determined by intervening magnetised plasma that is not directly related to the jets; this plasma is not uniform and possibly turbulent, giving rise to an irregular, patchy RM distribution that bears no obvious relationship to the source structure. However, the resulting irregular RM distribution can be superposed on a more ordered RM distribution brought about by magnetised plasma in the immediate vicinity of the jet structure. Although it is usually the irregular RM component that is dominant, the ordered RM component may occasionally be visible in some individual objects. We suggest that this is the case for 5C4.114.
Tentative Evidence for a Dipole-like Initial Field Structure
------------------------------------------------------------
The transverse RM gradient detected across the Southern jet/lobe structure of 5C4.114 is somewhat less prominent than the Northern RM gradient, which is very firmly detected. In this section, we explore the implications of the overall RM-gradient structure if both the (firm) Northern and (somewhat tentative) Southern RM gradients reflect the azimuthal (toroidal) component of a helical jet [**B**]{} field, brought about by the combination of the rotation of the central black hole and its accretion disk and the jet outflow. In this case, the directions of the transverse RM gradients on the sky imply particular directions for the associated azimuthal [**B**]{}-field components; this, in turn, enables us to infer whether the poloidal components of the “seed fields” that were wound up by the rotation of the central black hole and accretion disk had the same or opposite senses in the two jets.
The direction of the azimuthal field component is determined by the direction of the central rotation and the direction of the poloidal component of the initial field that is “wound up”. As is illustrated schematically in Fig. 3, the pattern shown by 5C4.114 is consistent with the poloidal [**B**]{}-field component being directed outward in one jet and inward in the other, as would be expected for a dipolar-like initial [**B**]{}-field configuration.
Conclusion
==========
We have reconstructed the Faraday RM image of 5C 4.114 initially published by Bonafede et al. (2010) in order to quantitatively analyze various gradients visible in the RM image.
Our analysis has demonstrated that the differences in the RM values encompassed by the monotonic RM gradients visible across the entire Northern lobe of the radio source and a more restricted region in the Southern jet/lobe both exceed $3\sigma$, making them statistically significant. The detection of the RM gradient across the Northern lobe is very firm, while the RM gradient across the Southern jet/lobe is slightly more tentative, due to the relatively narrow range of distances from the central AGN where the significance of the gradient exceeds $3\sigma$.
This represents firm evidence that the Northern, and possibly also the Southern, kiloparsec-scale jet of 5C4.114 carries a helical or toroidal [**B**]{}-field component. Such a component would naturally arise due to the rotation of the central black hole and its accretion disc; apparently, this helical [**B**]{}-field component can sometimes survive to distances of thousands of parsec from the central engine. Regardless of whether the regions where the RM gradients are observed represent outwardly propagating jets or lobes of material flowing backward toward the center of activity, they could contain the imprint of a helical [**B**]{}-field component that was initially carried outward by the jet outflow. The relative orientations of the Northern and Southern gradients are consistent with the pattern expected if the initial poloidal jet [**B**]{} field had a dipole-like structure.
Our new analysis together with the original analysis of Bonafede et al. (2010) suggest a picture in which the observed RM distributions of extragalactic radio sources on kiloparsec scales are usually determined by intervening inhomogeneous, possibly turbulent magnetised plasma that is not directly related to the jets, with a more ordered RM distribution associated with magnetised plasma in the immediate vicinity of the jet structure occasionally becoming dominant in some regions of individual sources. This latter, ordered RM component can bear the imprint of a helical magnetic field associated with the jet structure.
Bonafede A., Feretti L., Murgia M., Govoni F., Giovannini G., Dallacasa D., Dolag K. & Taylor G. B. 2010, A& A, 513, 30 Burn B. J. 1966, MNRAS, 133, 67 Gabuzda D. C., Christodoulou D. M., Contopoulos I. & Kazanas D. 2012, Journal of Physics: Conf. Ser., 355, id. 012019 Gabuzda D. C., Cantwell T. M. & Cawthorne T. V. 2014a, MNRAS, 438, L1 Gabuzda D. C., Knuettel S. & Reardon B. 2015, MNRAS, 450, 2441 Gabuzda D. C., Reichstein A. R. & O’Neill E. L. 2014b, MNRAS, 444, 172 Govoni F., Dolag K., Murgia M., Feretti L., Schindler S., Giovannini G., Boschin W., Vacca V. & Bonafede A. 2010, A& A, 522, 105 Hovatta T., Lister M. L., Aller M. F., Aller H. D., Homan D. C., Kovalev Y. Y., Pushkarev A. B. & Savolainen T. 2012, AJ, 144, 105 Kronberg P. P., Lovelace R. V. E., Lapenta G. & Colgate S. A. 2011, ApJ, 741, L15 Lovelace R. V. E., Li H., Koldoba A. V., Ustyugova G. V. & Romanova M. M. 2002, ApJ, 572, 445 Mahmud M., Gabuzda D. C. & Bezrukovs V. 2009, MNRAS, 400, 2 Mahmud M., Coughlan C. P., Murphy E., Gabuzda D. C. & Hallahan R. 2013, MNRAS, 431, 695 Nakamura M., Uchida Y. & Hirose S. 2001, New Astronomy, 6, 2, 61 Pacholczyk A. G. 1970, Radio Astrophysics, W. H. Freeman, San Francisco Taylor G. B. & Zavala R. 2010, ApJ, 722, L183
---
abstract: 'We consider quantum critical points (QCP) in which quantum fluctuations associated with charge rather than magnetic order induce unconventional metallic properties. Based on finite-$T$ calculations on a two-dimensional extended Hubbard model we show how the coherence scale $T^*$ characteristic of Fermi liquid behavior of the homogeneous metal vanishes at the onset of charge order. A strong effective mass enhancement reminiscent of heavy fermion behavior indicates the possible destruction of quasiparticles at the QCP. Experimental probes on quarter-filled layered organic materials are proposed for unveiling the behavior of electrons across the quantum critical region.'
author:
- 'L. Cano-Cortés$^1$, J. Merino$^1$ and S. Fratini$^2$'
title: Quantum critical behavior of electrons at the edge of charge order
---
#### Introduction.
Quantum critical points occur at zero temperature second order phase transitions in which the strength of quantum fluctuations is controlled by an external field such as pressure, magnetic field or chemical composition [@Sachdev]. In recent years intensive studies have focused on itinerant electrons at the edge of magnetic order, being the heavy fermion materials [@Coleman; @Si; @Lohneysen] prototypical examples. Much less explored but equally interesting are QCP arising from tuning the electrons close to a charge ordering instability. This situation is realized in the quarter-filled families of layered organic superconductors [@Ishiguro] of the $\alpha$, $\beta''$ and $\theta$-(BEDT-TTF)$_2$X types. Large electron effective mass enhancements and non-Fermi liquid metallicity at finite-$T$ are observed in (MeDH-TTP)$_2$AsF$_6$ and $\kappa$-(DHDA-TTP)$_2$SbF$_6$ above a critical pressure at which the charge order found at ambient pressure melts[@Yasuzuka; @Weng]. Such heavy fermion behavior may appear puzzling, considering the different $\pi$-orbitals of the organics as compared to the f-orbitals in the rare earths, but can find a natural explanation based on the universal properties of matter expected near a QCP.
Charge ordering (CO) phenomena in quarter-filled layered organic materials are observed in a wide variety of crystal structures, not limited to specific Fermi surface shapes or nesting. This indicates the importance of onsite and intersite Coulomb repulsion [@Mori; @Merino01] between $\pi$ electrons as the driving force of CO, and in turn implies that electronic correlation effects similar to those found in half-filled systems are inevitably present. These should be considered together with the quantum critical fluctuations of the order parameter to understand the metallic properties in the neighborhood of the present Coulomb-driven transition. The latter can, in principle, differ from more standard charge density wave instabilities of the Fermi surface.
In this Letter we analyze theoretically the possible existence of a QCP at a CO transition driven by the quantum fluctuations associated with strong off-site Coulomb repulsion, in the absence of Fermi surface nesting. The influence of the $T=0$ singular quantum critical point on the finite temperature metallic properties is studied based on finite-$T$ Lanczos diagonalization [@Prelovsek; @Liebsch] of an extended Hubbard model on an anisotropic triangular lattice. At quarter filling ($n=1/2$ hole per molecule), lattice frustration naturally leads to charge ordered metallic states with a single CO pattern. Our main result is the existence of a temperature scale $T^*$ in the dynamical and thermodynamic properties of the system, that is suppressed as the CO transition is approached from the homogeneous metal. The electronic specific heat coefficient at low temperatures is found to be strongly enhanced close to the QCP in analogy with studies of quantum criticality in heavy fermions. Following this analogy, non-Fermi liquid behavior should occur at finite-$T$ in quarter-filled organic conductors at the edge of a CO instability and could be directly probed by several experimentally measurable quantities. Due to the universal character of the proposed quantum critical scenario, our results could also be relevant to broader classes of systems where Coulomb-driven CO occurs, not restricted to the particular ordering pattern studied here.
#### Model.
We focus on the extended Hubbard model: $$\begin{aligned}
H=\sum_{\langle ij\rangle\sigma}t_{ij}(c^\dagger_{i\sigma} c_{j\sigma}+
h.c.) +U \sum_i n_{i\uparrow}n_{i\downarrow}
+\sum_{\langle ij \rangle}V_{ij}n_i n_j
\label{eq:model} \end{aligned}$$ on the anisotropic triangular lattice shown in Fig. \[fig:PD\](a), where $t_{ij}=(t_p,t_c)$ are the transfer integrals between nearest neighboring molecules respectively along the diagonal ($p$) and vertical ($c$) directions, $V_{ij}=(V_p,V_c)$ are the corresponding inter-molecular Coulomb interaction energies and $U$ is the intra-molecular Coulomb repulsion. The model Eq. (\[eq:model\]) has been studied via a variety of techniques due to its relevance to $\theta$-type two-dimensional organic conductors (see Ref. [@Kuroki] for a recent review). Here we follow the standard practice and neglect longer-range Coulomb interactions [@Kuroki] as well as electron-lattice effects [@Udagawa] that are however essential to recover the various ordering patterns realized in these materials. For the sake of simplicity we consider an isotropic inter-site repulsion $V_c=V_p\equiv V$ and set $t_c=0$, $t_p\equiv t>0$. This choice is representative of the $\theta$-ET$_2$X salts with X$=$CsCo(SCN)$_4$, X$=$CsZn(SCN)$_4$ and X$=$I$_3$ where the molecular orbital overlap is strongly suppressed along the $c$ direction [@Mori; @Kuroki]. These materials lie close to (on both sides of) the bandwidth controlled CO transition in Mori’s phase diagram [@Mori] and are therefore optimal candidates for the observation of an interplay between critical charge fluctuations and electronic correlation effects.
#### Phase diagram.
The phase diagram obtained at $T=0$ from the numerical diagonalization of the model Eq. (\[eq:model\]) on $N_s=12$ and $N_s=18$ site clusters is presented in Fig. \[fig:PD\](b). The different phases can be identified by analyzing the behavior of the charge correlation function $N_sC({\bf q})=N_s^{-1}\sum_{ij}\langle n_i n_j \rangle
e^{i{\bf q}\cdot {\bf R}_{ij}}$. In the thermodynamic limit, this quantity diverges at a single wavevector ${\bf Q}\neq 0$ at the onset of charge order. An accurate numerical determination of the phase boundaries relying on a proper finite-size scaling of the results is prohibitive for the fermionic system under study, due to the rapidly increasing size of the Hilbert space. We therefore identify the $T=0$ ordering transition, $V_{CO}$, as the locus of steepest variation of charge correlations upon varying the interaction parameters. An analogous procedure is used to determine the melting temperature of CO, $T_{CO}$. In the physically relevant regime explored here, $U/t\lesssim 20$ [@footnote] the phase boundaries agree on the two cluster sizes.
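A minimal sketch of how the charge structure factor defined above can be evaluated on a small cluster is given below; the site positions and the correlation matrix $\langle n_i n_j\rangle$ are random placeholders standing in for the Lanczos expectation values, and the wavevector is an arbitrary example.

```python
# Sketch of the charge correlation function used to identify the ordering,
#   N_s C(q) = (1/N_s) sum_{ij} <n_i n_j> exp(i q . R_ij),
# evaluated on a small cluster with PLACEHOLDER data.
import numpy as np

rng = np.random.default_rng(0)
Ns = 12
R = rng.random((Ns, 2))                              # placeholder site positions
nn = rng.random((Ns, Ns)); nn = 0.5 * (nn + nn.T)    # placeholder <n_i n_j> (symmetric)

def charge_structure_factor(q, R, nn):
    """Return N_s C(q) = (1/N_s) sum_ij <n_i n_j> exp(i q.(R_i - R_j))."""
    phase = np.exp(1j * (R @ q))                     # exp(i q . R_i) for each site
    return np.real(phase.conj() @ nn @ phase) / len(R)

q_example = np.array([4.0 * np.pi / 3.0, 0.0])       # arbitrary example wavevector
print(charge_structure_factor(q_example, R, nn))
```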
In the absence of nearest-neighbor repulsion, $V=0$, the system remains in a homogeneous metallic phase up to arbitrary values of the local interaction $U$, as holes can effectively avoid each other at concentrations away from integer fillings. An instability towards a charge ordered state with 3-fold periodicity is realized instead upon increasing the inter-site interaction, $V$, as was previously obtained by different approaches [@Mori03; @Kaneko; @Kuroki; @Hotta; @WatanabeVMC06; @NishimotoDMRG08]. The resulting 3-fold ordering pattern is shown in Fig. \[fig:PD\](b).
At low and moderate values of $U/t\lesssim 5$, down to $U=0$, the CO transition essentially follows the predictions of mean-field approaches [@Mori03; @Kaneko; @Kuroki]. A calculation in the random phase approximation (RPA) yields $V_{CO}=1.06t+U/6$, which is shown as a dotted line in Fig. \[fig:PD\](b). This law is correctly recovered by the numerical data at low $U$, but sizable deviations appear as soon as $U/t\gtrsim 10$ due to the increasing effects of many-body electronic correlations. The boundary obtained numerically in this region is independent of the cluster size, and our value $V_{CO}/t=3.5$ at $U/t=10$ is in good agreement with existing numerical results in larger systems [@WatanabeVMC06; @NishimotoDMRG08].
Before moving to the analysis of the correlated metallic phase at the edge of charge order, let us note that the charge correlation function also provides indications of a crossover taking place within the CO phase, separating a conventional 3-fold state from a more exotic “pinball liquid” phase [@Hotta]. The latter arises because at large $U$, mean-field-like configurations where charge-poor molecules are completely depleted become energetically unfavorable, as these imply that each charge-rich molecule should accommodate up to $3/2$ holes on average. To prevent double occupancy, part of the hole density necessarily spills out and decouples from the charge-rich sublattice, resulting in a separate fluid moving freely in the remaining sublattice [@Hotta]. This partial ordering, occurring for $V\lesssim U/3$, corresponds to a value $C({\bf Q})=n^2/3$, to be contrasted with the value $C({\bf Q})=n^2$ obtained in the 3-fold state at large $V$.
#### The correlated metal close to charge ordering.
We start by analyzing the kinetic energy of the interacting system, a quantity that provides direct information on how the motion of the charge carriers is hindered by interactions, and can be evaluated with good accuracy through finite-$T$ Lanczos diagonalization. Its importance in correlated systems has been recently recognized [@Millis04; @Qazilbash09], and resides in the fact that it can in principle be accessed from optical absorption experiments, providing a quantitative measure of many-body correlation effects.
The kinetic energy, $K$, normalized to the non-interacting band value, $K_0$, is shown in Figs. \[fig:kinetic\](a) and (b), respectively for $U/t=5$ and $U/t=15$, for several values of the intersite repulsion across the CO transition. At $U/t=5$ the kinetic energy at $T=0$ stays essentially unrenormalized, $ K/ K_0 \gtrsim 0.9$, upon increasing $V$ all the way up to the CO transition occurring at $V_{CO}=2.33t$, as expected in a weakly correlated Fermi liquid. It then suddenly drops to a value $K/ K_0 \sim 0.6$ upon entering the charge ordered phase. This residual value is ascribed to local (incoherent) hopping processes in the charge ordered pattern [@Millis04] and to the motion of remnant itinerant electrons not gapped by the ordering transition [@Kaneko; @WatanabeVMC06].
A richer behavior is revealed by the data at finite temperatures, that clearly indicate the emergence of a temperature scale $T^*$ that marks an analogous suppression of the kinetic energy occurring [*within the homogeneous metallic phase*]{} (we define $T^*$ as the locus of steepest variation of $K$ with temperature, denoted by arrows in Fig. \[fig:kinetic\]). The scale $T^*$ appears to be entirely controlled by the approach to the zero-temperature ordering transition, a behavior that is strongly reminiscent of what is expected close to a QCP. The situation is similar at $U/t=5$ and $U/t=15$ \[Fig. \[fig:kinetic\](b)\], although in the latter case the kinetic energy ratio at $T=0$ is already reduced down to values $K/K_0\sim 0.7$ before entering the CO phase at $V_{CO}=4.83t$, which is indicative of a moderately correlated electron liquid. In this case the quantum critical behavior adds up to the correlated electron picture, affecting the motion of electrons that have already been slowed down by local electronic correlations.
The above observations can be directly related to the low-energy quasiparticle properties by analyzing the temperature dependence of the integrated optical weight $I(\omega)=\int_0^\omega
\sigma(\omega^\prime) d\omega^\prime$. The low-frequency integral $I(\omega= 0.5 t)$, reported in Fig. \[fig:kinetic\](c), comprises most of the quasiparticles contributing to the Drude behavior in a normal Fermi liquid, while excluding higher-energy incoherent excitations arising from the strong electronic interactions. Comparison of Figs. \[fig:kinetic\](b) and (c) demonstrates that the strong reduction of kinetic energy above $T^*$ primarily originates from a drastic suppression of the low-energy coherent quasiparticles. Finally, Fig. \[fig:kinetic\](d) illustrates $I(\omega)$ at a given $V/t=4.8$ just below the CO transition, showing that the quasiparticle weight lost at $T^*$ is partly transferred to high-energy excitations, that are broadly distributed on the scale of $U,V$.
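A minimal sketch of the cumulative integration used to obtain $I(\omega)$ follows; the conductivity below is a placeholder (a narrow low-frequency peak plus a broad high-energy feature), not the Lanczos result.

```python
# Sketch of the integrated optical weight I(omega) = Int_0^omega sigma(w') dw',
# accumulated by trapezoidal integration on a frequency grid (units of t).
import numpy as np

omega = np.linspace(0.0, 10.0, 2001)                       # frequency grid
sigma = 1.0 / (1.0 + (omega / 0.2)**2) + 0.1 * np.exp(-((omega - 5.0) / 2.0)**2)

dI = 0.5 * (sigma[1:] + sigma[:-1]) * np.diff(omega)       # trapezoid on each panel
I = np.concatenate([[0.0], np.cumsum(dI)])                 # I(omega) on the same grid

I_low = np.interp(0.5, omega, I)                           # low-frequency weight I(0.5 t)
```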
The above results are summarized in the finite temperature phase diagram of Fig. \[fig:PDT\], that constitutes the central result of this work. Scaled units $V/V_{CO}$ are used so that the weakly ($U/t=5$) and moderately ($U/t=15$) correlated cases can be directly compared, illustrating the universality of the $T^*$ phenomenon in proximity to the CO instability. Both $T^*$ and $T_{CO}$ appear to vanish at $V_{CO}$ leading to a funnel-type ’bad’ metallic region with strong quantum critical fluctuations.
To obtain further insight into the behavior of quasiparticles near the CO instability and make contact with the established concepts of quantum criticality, we have calculated the specific heat coefficient $\gamma=C_V/T$. Our results, reported in Fig. \[fig:CV-mass\](a), resemble the behavior of nearly two-dimensional antiferromagnetic metals in which a singular increase is expected upon lowering the temperature close to the QCP, crossing over to a constant value at the onset of Fermi liquid behavior [@Lohneysen; @Moriya]. Curves similar to those in Fig. \[fig:CV-mass\](a) are commonly observed in heavy fermion systems [@Stewart; @Custers; @Lohneysen]. As a striking confirmation of the QCP scenario emerging from the preceding paragraphs, we see that there is a direct correspondence between thermodynamic and dynamical properties: the peak position in the specific heat essentially coincides with the temperature $T^*$ derived from the kinetic energy and the low-frequency optical integral near to the QCP (see Fig. \[fig:PDT\]).
Finally, we discuss the behavior of the effective mass as extracted from the low-temperature limit of the specific heat coefficient (in practice we estimate $m^*$ from the peak value of $\gamma$ to overcome the numerical limitations of the Lanczos technique at low $T$). It can be expected on general grounds that the strong electron-electron interactions responsible for the 3-fold CO will strongly affect those parts of the Fermi surface that are connected by momenta closest to the ordering wavevector ${\bf Q}$. This should lead to the emergence of “hot spots” with divergent effective mass, $m^*/m_b \propto \ln(1/|V-V_{CO}|)$, in full analogy to the situation encountered in metals close to a magnetic instability [@Moriya; @Merino06]. The effective mass reported in Fig. \[fig:CV-mass\](b) indeed shows a marked enhancement at the approach of $V_{CO}$, which adds to a moderate renormalization $m^*/m_b\lesssim 2$ provided by non-critical electronic correlations. Whether the destruction of quasiparticles remains confined to such ’hot spots’ or spreads over the whole Fermi surface is an unsettled issue that is also actively debated in the context of heavy fermion materials [@Coleman; @Si].
#### Concluding remarks.
Our results indicate the occurrence of non-Fermi liquid behavior driven by a combination of electronic correlations and quantum critical fluctuations close to a CO instability in quarter-filled organic conductors. Quantum critical behavior has been recently reported in transport studies of the quarter-filled compounds $\kappa$-(DHDA-TTP)$_2$SbF$_6$ and (MeDH-TTP)$_2$AsF$_6$, by tuning the system across the CO transition via an applied pressure [@Yasuzuka; @Weng]. A stringent verification of our theoretical predictions could be achieved in the conductor $\theta-$(BEDT-TTF)I$_3$, whose band structure is properly described by the model Eq. (1). Remarkably, this is the only salt of the $\theta$ family exhibiting superconductivity [*and*]{} is at the edge of CO. Recent optical studies [@Takenaka1] have shown a rapid loss of electron coherence upon increasing the temperature, associated with a marked reduction of kinetic energy as obtained here, and the evolution of the integrated spectral weight $I(\omega)$ with temperature found experimentally compares very well with our result in Fig. \[fig:kinetic\](d). The presence of an unexplained far infrared absorption peak whose position is controlled by the temperature scale $T$ alone could be a clue of an emergent collective excitation of the QCP [@Caprara]. This material undergoes a CO transition under pressure [@Tajima], which could be directly exploited to probe the quantum critical behavior, applying the plethora of experimental techniques that are commonly used in the study of heavy fermion materials. Hall coefficient as well as de Haas-van Alphen experiments appear to be ideal probes to test whether quasiparticles are destroyed over the whole Fermi surface or on some regions only, shedding light on the nature of the charge order QCP.
L.C. and J.M. acknowledge financial support from MICINN (CTQ2008-06720-C02-02, CSD2007-00010), and computer resources and assistance provided by BSC. The authors thank I. Paul for useful discussions.
[30]{}
S. Sachdev, [*Quantum Phase Transitions*]{}, Cambridge University Press (2001).
P. Coleman, [*Handbook of Magnetism and Advanced Magnetic Materials*]{} (Wiley, New York, 2007), Vol. 1, p. 95.
P. Gegenwart, Q. Si, and F. Steglich, Nat. Phys. [**4**]{}, 186 (2008).
H. v. Löhneysen, [*et al.*]{}, Rev. Mod. Phys. [**79**]{}, 1015 (2007).
T. Ishiguro, K. Yamaji, and G. Saito, [*Organic Superconductors*]{} (Springer, New York, 2001), 2nd ed.
S. Yasuzuka, [*et al.*]{}, J. Phys. Soc. Jpn. [**75**]{}, 083710 (2006).
Y. Weng, [*et al.*]{}, Synth. Met. [**159**]{}, 2394 (2009).
H. Mori, S. Tanaka, and T. Mori, Phys. Rev. B [**57**]{}, 12023 (1998).
R. H. McKenzie, [*et al.*]{}, Phys. Rev. B [**64**]{}, 085109 (2001).
J. Jaklic and P. Prelovsek, Phys. Rev. B [**49**]{}, 5065 (1994).
We use an Arnoldi algorithm, see: A. Liebsch, H. Ishida, and J. Merino, Phys. Rev. B [**78**]{}, 165123 (2008).
K. Kuroki, Sci. Technol. Adv. Mater. [**10**]{}, 024312 (2009).
M. Udagawa and Y. Motome, Phys. Rev. Lett. [**98**]{}, 206405 (2007).
Values of $U \approx 10t$ and $V \approx 2t$ were estimated from optical reflectivity data in \[T. Mori, Bull. Chem. Soc. Jpn. [**73**]{}, 2243 (2000)\], whereas a larger $U \approx 20t$ is extracted from the DFT bare value $U_0$ with a screening reduction of $U \sim U_0/2$ with $t=0.1$ eV \[E. Scriven and B. J. Powell, J. Chem. Phys. [**130**]{}, 104508 (2009); L. Cano-Cortés, [*et al.*]{}, Eur. Phys. J. B [**56**]{}, 173 (2007)\].
T. Mori, J. Phys. Soc. Jpn. [**72**]{}, 1469 (2003).
M. Kaneko and M. Ogata, J. Phys. Soc. Jpn. [**75**]{}, 014710 (2006).
C. Hotta and N. Furukawa, Phys. Rev. B [**74**]{}, 193107 (2006).
H. Watanabe and M. Ogata, J. Phys. Soc. Jpn. [**75**]{}, 063702 (2006).
S. Nishimoto, M. Shingai, and Y. Ohta, Phys. Rev. B [**78**]{}, 035113 (2008).
A. J. Millis, [*Optical Conductivity and Correlated Electron Physics*]{}, in [*Strong Interactions in Low Dimensions*]{}, edited by D. Baeriswyl and L. DeGiorgi (Springer Verlag, Berlin, 2004).
M. M. Qazilbash, [*et al.*]{}, Nat. Phys. [**5**]{}, 647 (2009).
T. Moriya and K. Ueda, Adv. Phys. [**49**]{}, 555 (2000).
G. R. Stewart, Rev. Mod. Phys. [**56**]{}, 755 (1984); G. R. Stewart, [*ibid.*]{} [**73**]{}, 797 (2001).
J. Custers, [*et al.*]{}, Nature [**424**]{}, 524 (2003).
J. Merino, [*et al.*]{}, Phys. Rev. Lett. [**96**]{}, 216402 (2006).
K. Takenaka, [*et al.*]{}, Phys. Rev. Lett. [**95**]{}, 227801 (2005).
S. Caprara, [*et al.*]{}, Phys. Rev. Lett. [**88**]{}, 147001 (2002); S. Caprara, [*et al.*]{}, Phys. Rev. B [**75**]{}, 140505(R) (2007).
N. Tajima, [*et al.*]{}, J. Phys. IV France [**114**]{}, 263 (2004).
---
abstract: 'In this note we show that in a commutative ring $R$ with unity, for any $n > 0$, if $I$ is an $n$-absorbing ideal of $R$, then $(\sqrt{I})^{n} \subseteq I$.'
author:
- Hyun Seung Choi and Andrew Walker
title: 'The radical of an n-absorbing ideal'
---
[*An ideal $I$ of a commutative ring $R$ is said to be [**n-absorbing**]{} if whenever $a_{1} \cdots a_{n+1} \in I$ for $a_{1},\ldots,a_{n+1}
\in R$, then $a_{1}\cdots a_{i-1}a_{i+1}\cdots a_{n+1} \in I$ for some $i \in \{1, 2, \ldots, n+1\}$.* ]{}
In [@Anderson Theorem $2.1(e)$], it is shown that if $a \in \sqrt{I}$ and $I$ is $n$-absorbing, then $a^{n} \in I$. Conjecture 2 in [@Anderson page 1669] states that more generally, if $I$ is $n$-absorbing, then $(\sqrt{I}\:)^{n} \subseteq I$. That is, if $a_1, a_2, \ldots, a_n \in \sqrt{I}$, then $a_1a_2 \cdots a_n \in I$. The object of this note is to prove this conjecture.
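As a quick sanity check (our own illustrative example, not taken from [@Anderson]): for a prime $p$, the ideal $I = (p^{n})$ of $\mathbb{Z}$ is $n$-absorbing, since if $p^{n} \mid a_{1}\cdots a_{n+1}$ then discarding a factor of smallest $p$-adic valuation leaves a product that is still divisible by $p^{n}$ (if that smallest valuation is $0$ the remaining valuations still sum to at least $n$; if it is at least $1$, each of the $n$ remaining valuations is at least $1$). Here $\sqrt{I} = (p)$ and $(\sqrt{I}\:)^{n} = (p^{n}) \subseteq I$, as the conjecture predicts.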
Throughout, all rings will be assumed to be commutative and unital. If $n$ is a positive integer, we’ll consider $\mathbb{N}^{n}_{0}$ as a totally ordered set with the lexicographic ordering. That is, if $\alpha,\beta \in \mathbb{N}^{n}_{0}$, then $\alpha \geq_{\text{lex}} \beta$ if $\alpha = \beta$ or the leftmost non-zero coordinate of $\alpha - \beta$ is positive.
Our first observation is that when considering the problem of when $(\sqrt{I}\:)^n \subseteq I$ for $I$ $n$-absorbing, we may assume without loss of generality that $I = 0$.
Suppose that $(\sqrt{0})^{n} = 0$ in every ring in which $0$ is $n$-absorbing. Then for an $n$-absorbing ideal $I$ in an arbitrary ring $R$, we have $(\sqrt{I})^{n} \subseteq I$.
Let $R' = R/I$. Then $0$ is $n$-absorbing in $R'$ [@Anderson Theorem $4.2
(a)$], so that $(\sqrt{0})^{n} = 0$. Let $f \colon R \to R'$ be the canonical map. Then $(\sqrt{I})^{n} = (f^{-1}(\sqrt{0}))^{n}
\subseteq f^{-1}((\sqrt{0})^{n}) = f^{-1}(0) = I$.
If $I$ is an $n$-absorbing ideal in a ring $R$ and $k \geq n$, then $I$ is a $k$-absorbing ideal of $R$.
[@Anderson Theorem $2.1(b)$].
Next we develop a technical result involving linear maps. If $m$ is any positive integer and $R$ any ring, then $e_{j} \in R^{m}$ refers to the $j$-th canonical basis element $e_{j} =
[0,\ldots,1,\ldots,0]^t$ of $R^m$ (where the $t$ denotes transpose). We denote by $\pi_{j} : R^{m} \to R$ the canonical projection for each $j = 1,\ldots,m$.
Let $R$ be a ring, $m \in \mathbb{N}$ and $\varphi \colon R^{m} \to
R^{m}$ an $R$-linear map. We’ll say that $\varphi$ is **projectively zero** if for any $v \in R^{m}$, $\pi_{i}\varphi(v)
= 0$ for some $i=1,\ldots,m$.
In the following example, we establish a relationship between projectively zero maps and $n$-absorbing ideals. Let’s consider the simplest interesting case, when $0$ is a $2$-absorbing ideal. We wish to show that $(\sqrt{0}\,)^{2} = 0$. That is, if $a,b \in \sqrt{0}$, then $ab = 0$. Consider the matrix $$\left[ \begin{array}{cc}
ab & b^2 \\
a^2 & ab \\
\end{array} \right].$$ We claim that this matrix represents a projectively zero map $\varphi
\colon R^{2} \to R^{2}$. By [@Anderson Theorem $2.1(e)$], we know that $a^{2} = b^{2} = 0$, so the above matrix simplifies to $$\label{equation1}
\left[ \begin{array}{cc}
ab & 0 \\
0 & ab \\
\end{array} \right].$$ Say $v = ce_{1} + c'e_{2} \in R^{2}$, where $c,c' \in R$. Then $\varphi(v) = (cab)e_{1} + (c'ab)e_{2}$. That is $$\left[
\begin{array}{cc}
ab & 0 \\
0 & ab \\
\end{array} \right]
\left[
\begin{array}{c}
c \\
c' \\
\end{array} \right]
=
\left[
\begin{array}{c}
cab \\
c'ab \\
\end{array} \right]
.$$ To show $\varphi$ is projectively zero, we need one of the monomials $cab$ or $c'ab$ to be $0$. We have $$\label{equation2}
0 = a^{2}bc + b^{2}ac' = ab(ca + c'b)
\mbox{ since } a^2 = b^2 = 0.$$
Since $0$ is $2$-absorbing and $ab(ca + c'b)$ = $0$, then at least one of $ab$, $b(ca + c'b)$, or $a(ca + c'b)$ is zero. If $ab = 0$, then both $\pi_1\varphi(v)$ = $cab$ and $\pi_2\varphi(v)$ = $c'ab$ are zero.
If $b(ca + c'b) = 0$, then since $b^{2} = 0$, we get $0$ = $cab$ = $\pi_1\varphi(v)$. Similarly, if $a(ca + c'b)$ = $0$, we get $0$ = $c'ab$ = $\pi_2\varphi(v)$. Thus $\varphi$ is projectively zero.
This will be useful since Lemma \[upper.triangular.zero.on.diagonal\] below will tell us that $ab = 0$, and thus $(\sqrt{0}\,)^{2} = 0$.
[*We say that a linear map $\varphi \colon R^{m} \to R^{m}$ is **upper-triangular** if for each $j = 1,\ldots, m$, $\pi_{i}\varphi(e_{j}) = 0$ whenever $i > j$.* ]{}
Lemma \[upper.triangular.zero.on.diagonal\] shows that certain upper-triangular matrices must have at least one zero on their diagonal.
\[upper.triangular.zero.on.diagonal\] Suppose that $\varphi \colon R^{m} \to R^{m}$ is a projectively zero upper-triangular map. Then $\pi_{j}\varphi(e_{j}) = 0$ for some $j$.
Let $ j_{1}$ = $\max \{ i \in \{1, 2, \ldots, m\} \mid \pi_{i}\varphi(e_{m}) = 0 \}$. Since $\varphi$ is projectively zero, the above set is non-empty and so $j_{1}$ is a positive integer. Similarly we can define a positive integer $j_{2} = \max \{ i \in \{1, \ldots, m\} \mid \pi_{i}\varphi(e_{j_{1}} +
e_{m}) = 0 \}$. Proceeding in the same way, we have for each $k \in \mathbb{N}$, a positive integer $j_{k}$ with $$\label{equation7}
j_{k} = \max \{ i \mid \pi_{i}\varphi(e_{j_{k-1}} +
\cdots + e_{j_{2}} + e_{j_{1}} + e_{m}) = 0\}.$$
Suppose that for each $j \in \{1, \ldots, m\}$ we have $\pi_{j}\varphi(e_{j}) \neq 0$. We then claim that the sequence $j_1, j_2, \ldots$ of positive integers constructed above is strictly decreasing. If not, then for some $k \in \mathbb{N}$ we have either $j_{k} < j_{k+1}$ or $j_{k} = j_{k+1}$. Suppose that $j_{k} < j_{k+1}$. Now by definition of $j_{k+1}$, we have $$0 = \pi_{j_{k+1}}\varphi(e_{j_{k}} + e_{j_{k-1}} +
\cdots + e_{j_{1}} + e_{m}) =$$ $$\label{equation8}
\pi_{j_{k+1}}\varphi(e_{j_{k}}) +
\pi_{j_{k+1}}\varphi(e_{j_{k-1}}) + \cdots +
\pi_{j_{k+1}}\varphi(e_{j_{1}}) + \pi_{j_{k+1}}\varphi(e_{m})$$ and the first term in (\[equation8\]) is zero since $j_k < j_{k+1}$ and $\varphi$ is upper triangular. So this is $$= 0 + \pi_{j_{k+1}}\varphi(e_{j_{k-1}}) + \cdots +
\pi_{j_{k+1}}\varphi(e_{j_{1}}) + \pi_{j_{k+1}}\varphi(e_{m}) =
\pi_{j_{k+1}}\varphi(e_{j_{k-1}} + \cdots + e_{j_{1}} + e_{m}).$$ But this contradicts how $j_{k}$ was defined in equation (\[equation7\]). So the only way for $j_{k+1} \geq j_{k}$ to happen is if $j_{k+1} = j_{k}$. But then $$0 =
\pi_{j_{k+1}}\varphi(e_{j_{k}} + e_{j_{k-1}} + \cdots + e_{j_{1}}
+ e_{m})
= \pi_{j_{k}}\varphi(e_{j_{k}} + e_{j_{k-1}} + \cdots +
e_{j_{1}} + e_{m})$$ $$= \pi_{j_{k}}\varphi(e_{j_{k}}) +
\pi_{j_{k}}\varphi(e_{j_{k-1}} + \cdots + e_{j_{1}} + e_{m})
=
\pi_{j_{k}}\varphi(e_{j_{k}}) + 0 = \pi_{j_{k}}\varphi(e_{j_{k}}),$$ which contradicts our assumption that $\pi_{j}\varphi(e_{j}) \neq 0$ for all $j$. Thus the $\{j_{k}\}$ form a strictly decreasing sequence, a contradiction since $j_k \in \{1, \ldots, m\}$ for each $k$.
We will need some partial orderings on monomials.
Let $x_{1},\ldots,x_{n}$ be indeterminates over a ring $R$. The **(unordered) multi-degree** of a monomial $M =
x^{k_{1}}_{1}\cdots x_{n}^{k_{n}}$ in $R[x_{1},\ldots,x_{n}]$ is the $n$-tuple $\alpha = (k_{\sigma(1)},\ldots, k_{\sigma(n)}) \in
\mathbb{N}^{n}_{0},$ where $\sigma$ is a permutation of $\{1,\ldots,n\}$ such that $k_{\sigma(1)} \geq
\cdots \geq k_{\sigma(n)}.$ Denote this $n$-tuple by $\text{multideg}(M)$. We’ll also write $|\alpha|$ for the **degree** $\sum_{i=1}^n k_{i}$ of the monomial $M$.
Suppose $x,y,z$ are indeterminates over $R$. Then $$\text{multideg}(x^{2}y^{4}z^{2}) = \text{multideg}(x^{4}y^{2}z^{2}) =
(4,2,2).$$
Suppose $J =
(a_{1},\ldots,a_{n})R$ is a finitely generated ideal of a ring $R$. If $\underline{x} = x_{1},\ldots,x_{n}$ is a sequence of indeterminates over $R$, we have a natural $R$-algebra homomorphism $f : R[\underline{x}] \to R$, where $x_{i} \mapsto a_{i}$. Let $H$ = $(\underline{x})R[\underline{x}]$. Then under this map, $f(H) = J$. Moreover, for any $k \in \mathbb{N}$, we have $f(H^k) = J^{k}$. Then $H^k$ is just the ideal of $R[\underline{x}]$ generated by all monomials $M$ in $R[\underline{x}]$ of degree $k$. Now grouping together all monomials of degree $k$ that have the same (unordered) multi-degree, we may write $$H^{k} = \sum_{\alpha \in
\mathbb{N}^{n}_{0},|\alpha| = k}
H_\alpha^k,$$ where $H_\alpha^k$ is the ideal of $R[\underline{x}]$ generated by all monomials $M$ with $\deg(M) = k$ and $\text{multideg}(M) = \alpha$. Thus $J^{k}$ = $f(H^k)$ = $f(\sum H_\alpha^k)$ = $\sum f(H_\alpha^k)$. For $\alpha \in \mathbb{N}^{n}_{0}$ with $|\alpha| = k$, let $J^{k}_{\alpha} =
f(H_\alpha^{k})$. So that we may write $$J^{k} =
\sum_{\alpha \in \mathbb{N}_{0}^{n}, |\alpha| = k} J_{\alpha}^{k}.$$
Let $x,y,z$ be indeterminates over a ring $R$. Then in the above notation, $$H^{3} = H_{(3,0,0)}^{3} + H_{(2,1,0)}^{3} + H_{(2,0,1)}^{3} + H_{(1,2,0)}^{3} + H_{(1,1,1)}^{3} + H_{(1,0,2)}^{3} + H_{(0,3,0)}^{3} + H_{(0,2,1)}^{3} + H_{(0,1,2)}^{3} + H_{(0,0,3)}^{3}$$$$= (x^{3},y^{3},z^{3}) + (x^{2}y,x^{2}z,y^{2}x,y^{2}z,z^{2}x,z^{2}y) + (0) + (0) + (xyz) + (0) + (0) + (0) + (0) + (0).$$ For instance, $H_{(2,0,1)}^{3} = 0$ since there are no monomials with (unordered) multi-degree $(2,0,1)$; the (unordered) multi-degree of a monomial $M \in R[x,y,z]$ is always of the form $(n,m,\ell)$, where $n \geq m \geq \ell$.
Using this notation, we are now ready to prove the main conjecture.
\[main\_theorem\] Let $0$ be an $n$-absorbing ideal in a ring $R$. Then $\left(\sqrt{0}\:\right)^{n} = 0$.
We assume $n > 1$, since the $n = 1$ case is trivial. Fix $a_{1},\ldots, a_{n} \in \sqrt{0}$ and let $J =
(a_{1},\ldots,a_{n})R$. Observe that $a_{1} \cdots a_{n} \in J_{(1,1,\ldots, 1)}^{n}$, so that it suffices to show $J_{(1,1,\ldots,1)}^{n} = 0$. Even better, we aim to show $$\label{inductionsetup}
J_{\alpha}^{k} = 0 \text{ for all }\alpha \in \mathbb{N}^{n}_{0} \text{ with }|\alpha| = k \geq n.$$ Since $a^{n}_{i} = 0$ for all $i \in \{1,\ldots, n\}$, we have $J^{k}_{\alpha} = 0$ for all $\alpha \in \mathbb{N}^{n}_{0}$ with $|\alpha| = k \geq n^{2} - n +1$. To prove (\[inductionsetup\]), it thus remains to show $$\label{inductionsetup2}
J_{\alpha}^{k} = 0 \text{ for all } \alpha \in \Delta \text{ with }|\alpha| = k,$$ where $$\Delta := \{ \alpha \in \mathbb{N}^{n}_{0} \mid n^{2} - n \geq |\alpha| \geq n \}.$$ Now for $\alpha, \beta \in \mathbb{N}^{n}_{0}$, write $\beta \succeq \alpha$ if one of the following holds:
1. $|\beta| > |\alpha|$ or
2. $|\beta| = |\alpha|$ and $\beta \geq_{\text{lex}} \alpha$.
It follows that $\succeq$ defines a total ordering on $\Delta$. We prove that (\[inductionsetup2\]) holds by means of an induction on $\Delta$ with respect to the total ordering $\succeq$. The largest element of $\Delta$ (with respect to $\succeq$) is $\gamma$, where $\gamma = (n^{2}-n,0,0\ldots,0)$. So $$J_{\gamma}^{n^{2}-n} = (a^{n^{2}-n}_{1},a^{n^{2}-n}_{2},\ldots, a^{n^{2}-n}_{n}) = 0,$$ since $n^{2} - n \geq n$ and $a^{n}_{i} = 0$ for all $i \in \{1,\ldots,n\}$. Now, say $\alpha \in \Delta$ with $|\alpha| = k$ and assume that $J_{\beta}^{s} = 0$ for any $\beta \succ\alpha$ with $\beta \in \Delta$ and $|\beta| = s$. We prove $J_{\alpha}^{k} = 0$.
Recall that $J^{k}_{\alpha}$ is generated by elements of the form $g = f(M)$, where $M$ is a monomial of $R[\underline{x}]$ with $\text{multideg}(M) = \alpha$ and $|\alpha| = k$. So write $g =
a^{k_{1}}_{\ell_{1}} \cdots a_{\ell_{m}}^{k_{m}}$, where each $k_{t} > 0$, $\Sigma_{t=1}^{m}k_{t}=k$, and each $a_{\ell_{j}}$ is a distinct element of $\{a_{1},\ldots,a_{n}
\}$. Set $y_{j} = a_{\ell_{j}}$ for each $j \in \{1,\ldots, m\}$; we may assume, without loss of generality, that $k_t \geq k_{t+1}$ for each $t$. So $g = y^{k_{1}}_{1} \cdots y_{m}^{k_{m}}$.
Let $\textbf{C}$ be the $m \times m$ matrix $\Big( \displaystyle \frac{y_{j}}{y_{i}} g \Big)_{i,j}$, where $\frac{y_{j}}{y_{i}} g$ denotes the element of $R$ obtained from $g$ by replacing one factor $y_{i}$ by $y_{j}$ (this makes sense since $k_{i}$ is positive). So $$\textbf{C} =
\left[
\begin{array}{ccccc}
\frac{y_1}{y_1}g & \frac{y_2}{y_1}g & \cdots &\frac{y_{m-1}}{y_1}g &\frac{y_m}{y_1}g \\
\frac{y_1}{y_2}g & \frac{y_2}{y_2}g & \cdots &\frac{y_{m-1}}{y_2}g &\frac{y_m}{y_2}g \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
\frac{y_1}{y_{m-1}}g&\frac{y_2}{y_{m-1}}g& \cdots &\frac{y_{m-1}}{y_{m-1}}g&\frac{y_m}{y_{m-1}}g \\
\frac{y_1}{y_m}g &\frac{y_2}{y_m}g & \cdots & \frac{y_{m-1}}{y_m}g &\frac{y_m}{y_m}g
\end{array}
\right]
=
\left[
\begin{array}{ccccc}
\frac{y_1}{y_1}g & \frac{y_2}{y_1}g & \cdots &\frac{y_{m-1}}{y_1}g & \frac{y_m}{y_1}g \\
0 & \frac{y_2}{y_2}g & \cdots &\frac{y_{m-1}}{y_2}g &\frac{y_m}{y_2}g\\
\vdots & \vdots & \vdots & \vdots &\vdots \\
0 & 0 & \cdots & \frac{y_{m-1}}{y_{m-1}}g&\frac{y_m}{y_{m-1}}g \\
0 & 0 & \cdots & 0 &\frac{y_m}{y_m}g
\end{array}
\right].$$ Indeed if $i > j$ we may write $\displaystyle \frac{y_{j}}{y_{i}} g = f(M')$, where $$M'
= \frac{x_{\ell_{j}}}{x_{\ell_{i}}}(x_{\ell_{1}}^{k_1}\cdots x_{\ell_{j}}^{k_j} \cdots x_{\ell_{i}}^{k_i}\cdots x_{\ell_{m}}^{k_m})
= x_{\ell_{1}}^{k_1}\cdots x_{\ell_{j}}^{k_j+1} \cdots x_{\ell_{i}}^{k_i-1}\cdots x_{\ell_{m}}^{k_m}$$ is a monomial of $R[\underline{x}]$ with $ \beta = \text{multideg}(M')
>_{\text{lex}} \text{multideg}(M) = \alpha$ and $|\beta| = |\alpha| = k$. Thus $\beta \in \Delta$ with $\beta \succ \alpha$, and hence $\displaystyle
\frac{y_{j}}{y_{i}} g \in J_{\beta}^{k} = 0$. So $\textbf{C}$ is upper-triangular. Let $\varphi \colon R^{m} \to R^{m}$ be the $R$-linear map defined by $v \mapsto \textbf{C}v$. Then $\varphi$ is upper triangular. Moreover, $\varphi$ is projectively zero. Indeed, given any $v = \sum c_{j}e_{j} \in R^m$ we have that for each $i$, $$\pi_{i}\varphi(v) = \sum^{m}_{j=1} c_{j}\frac{y_{j}}{y_{i}}g.$$ On the other hand, we note $J^{k+1} = 0$ by our induction hypothesis (or by our previous remarks if $k = n^{2} - n$), so $g \in (J^{k+1} \colon J) =
(0 \colon J)$. Then $\displaystyle g \Big(\sum^{m}_{j=1}
c_{j}y_{j}\Big) \in gJ = 0$. Now since $0$ is $n$-absorbing and $g$ is the product of $k
\geq n$ elements, we must have that for some $i$ (if $g$ is not zero), $$0 =
\frac{1}{y_{i}}g \Big(\sum^{m}_{j=1} c_{j}y_{j}\Big) =
\sum^{m}_{j=1} c_{j}\frac{y_{j}}{y_{i}}g = \pi_{i}\varphi(v).$$ So $\varphi$ is a projectively zero upper-triangular map. Thus by Lemma \[upper.triangular.zero.on.diagonal\], $\pi_{j}\varphi(e_{j}) = 0$ for some $j$. But $\pi_{j}\varphi(e_{j}) = \displaystyle\frac{y_{j}}{y_{j}}g = g$. Thus $g$ = $0$ and the induction is complete.
If $I$ is $3$-absorbing with $\sqrt{I} = P$ a prime ideal and $x \in P$, then $I_{x} = (I :_{R} x)$ is a $2$-absorbing ideal of $R$.
We must show that if $abc \in I_{x}$, then $ab$, $ac$ or $bc \in I_{x}$. Since $I$ is $3$-absorbing and $abcx \in I$, either $abc \in I$, $abx \in I$, $acx \in I$ or $bcx \in I$. In the latter three cases we immediately obtain $ab$, $ac$ or $bc \in I_{x}$, so we may assume $abc \in I$.
Since $abc \in I \subseteq P$ and $P$ is prime, we may assume without loss of generality that $a \in P$ as well. Since $P^{3} \subseteq I$ by Theorem \[main\_theorem\], $xbc(a + x^2) \in I$, so, as $I$ is $3$-absorbing, we are left with four possibilities: $xbc$, $xc(a+x^{2})$, $xb(a+x^{2})$, or $bc(a+x^{2}) \in I$. From the first three choices we can conclude $xbc$, $xca$, or $xba \in I$, respectively, so we may assume $bc(a+x^{2}) \in I$, from which it follows that $bcx^{2} \in I$. Again since $I$ is $3$-absorbing, this implies $bcx$, $bx^{2}$ or $cx^{2} \in I$. If $bcx \in I$ we are done, so we may assume $bx^{2}$ or $cx^{2} \in I$. If $bx^{2} \in I$, then $abx(x+ c) \in I$ implies that one of $abx$, $ab(x+c)$, $bx(x+c)$, or $ax(x+c) \in I$. In any of these cases, we can deduce that $abx$, $bcx$ or $acx \in I$. On the other hand, if $cx^{2} \in I$, then $acx(x+ b) \in I$ implies that one of $acx$, $ac(x+b)$, $cx(x+b)$, or $ax(x+b) \in I$. In any of these cases, we can deduce that $abx$, $bcx$ or $acx \in I$, and we are done.
Suppose that $I$ is a $3$-absorbing ideal of a ring $R$ and $\sqrt{I} = P$ is prime. If $x,y,z \in
P$, then either $I_{xz} \subseteq I_{xy}$ or $I_{xy} \subseteq I_{xz}$. Furthermore, $I_{xy}$ is $1$-absorbing.
We may assume $xy, xz \notin I$; otherwise there is nothing to prove, since then $I_{xy} = R$ or $I_{xz} = R$. We have that $I_{x}$ is $2$-absorbing by the previous result, so the collection $\{ I_{xa} = (I :_{R} xa) \mid a \in \sqrt{I_{x}} \setminus I_{x} \}$ is a totally ordered set of $1$-absorbing ideals [@Badawi Theorems 2.5, 2.6]. Since $z,y \in \sqrt{I} \subseteq \sqrt{I_{x}}$ and $z,y \notin I_{x}$ by our assumption, the claim follows.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors thank Youngsu Kim and Paolo Mantero for their helpful comments and suggestions during the writing of this note.
[99]{}
D. F. Anderson and A. Badawi, [*On $n$-absorbing ideals of commutative rings*]{}, Comm. in Alg., [**39(5)**]{}, 1646-1672 (2011).
Ayman Badawi, [*On $2$-absorbing ideals of commutative rings*]{}, Bull. Austral. Math. Soc., [**75(3)**]{}, 417-429 (2007).
---
abstract: |
Respondent-Driven Sampling (RDS) is a variant of link-tracing, a sampling technique for surveying hard-to-reach communities that takes advantage of community members’ social networks to reach potential participants. While the RDS sampling mechanism and associated methods of adjusting for the sampling at the analysis stage are well-documented in the statistical sciences literature, methodological focus has largely been restricted to estimation of population means and proportions (e.g. prevalence). As a network-based sampling method, RDS is faced with the fundamental problem of sampling from population networks where features such as homophily and differential activity (two measures of the social connectedness of individuals with similar traits) are sensitive to the choice of a simulation and sampling method. Though not clearly described in the RDS literature, simple methods exist to simulate RDS data with a small number of covariates when the focus is on estimating simple estimands. There is little to no comprehensive framework on how to simulate realistic RDS samples so as to study multivariate analytic approaches such as regression.

In this paper, we present strategies for simulating RDS samples with known network and sample characteristics, so as to provide a foundation from which to expand the study of RDS analyses beyond the univariate framework. We conduct an analysis to assess the accuracy of simulated RDS samples, in terms of their ability to generate the desired levels of homophily, differential activity, and relationships between covariates. The results show that RDS samples more accurately reflect the data-generating conditions when (1) homophily plays little to no role in the formation of links and (2) groups, defined by traits, are equally active and equally represented in the population. We demonstrate that control over all features is difficult; our proposed ‘indirect’ method of simulation allows better control over treatment effects and covariate relationships than the traditional approach to RDS simulation, at the cost of some bias in the level of homophily. We use this approach to mimic features of the Engage Study, a respondent-driven sample of gay, bisexual and other men who have sex with men in Montréal.
author:
- 'Mamadou Yauck[^1]'
- 'Erica E. M. Moodie'
- Herak Apelian
- 'Marc-Messier Peet'
- Gilles Lambert
- Daniel Grace
- 'Nathan J. Lachowsky'
- Trevor Hart
- Joseph Cox
title: '**Sampling from Networks: Respondent-Driven Sampling**'
---
[*Keywords: Network data; Respondent-driven sampling; Simulation.*]{}
Introduction
============
Hard-to-reach communities such as sex workers, people who use drugs, or men who have sex with men may be unwilling to participate in a research study, often because of social stigma [@Hec97]. Members of such communities are, however, often connected through a social network. Current sampling strategies such as snowball sampling [@Good61], a variant of link-tracing sampling, take advantage of those social relationships to reach members of the study population who are not easily accessible to researchers. The snowball sampling mechanism is non-probabilistic, and can result in selection or sampling biases that may affect the accuracy of any estimates calculated via the produced samples [@Gile11; @Gile15b]. Because of this problem, snowball samples are often referred to as ‘convenience samples’ that lack any valid basis for inferential methods whose results might generalize to the underlying population of interest [@Bier81].
*Respondent-Driven Sampling* (RDS) was introduced [@Hec97] as a form of link-tracing sampling that aimed to combine the advantages of probabilistic sampling and snowball sampling, with the idea that ‘those best able to access members of hidden populations are their own peers’. In RDS, the study recruitment protocol leads to the generation of a large number of recruitment *waves*, with each successive participant being asked to recruit additional participants starting from initial ‘seed’ participants that are purposively selected. RDS offers several advantages over traditional link-tracing methods. First, the RDS recruitment occurs through a number of waves, allowing the process to sample further from the seeds and reducing the dependence of the final sample on the initial sample. Second, allowing respondents to recruit their peers reduces the confidentiality concerns associated with listing respondents’ social network contacts. Finally, RDS adjusts all analyses for the (self-reported) social connectivity of participants, thereby ensuring the resulting estimates account for the relative over- (or under-) sampling of those members of the community who are more (or less) socially connected and thus more (or less) likely to be invited to participate in the study.
Current research on RDS is focused mainly on estimating population means and proportions [@Gile15b; @Gile18] while overlooking two important issues. First, while the RDS sampling mechanism and associated methods of adjusting for sampling weights or clustering at the analysis stage are well documented in the statistical literature, little to no technical consideration is given to multivariate modeling. Second, being a network-based sampling method, RDS is faced with the fundamental problem of sampling from population networks where network features such as homophily and differential activity, two measures of social ‘connectedness’ of individuals with similar traits, are sensitive to the choice of a sampling method [@Cost03]. Though not clearly and thoroughly described in the RDS literature, there are simple methods for simulating RDS data with a small number of covariates when the focus is on estimating simple estimands. There is little to no comprehensive framework on how to simulate realistic RDS samples so as to conduct multivariate studies such as regression. Moreover, sensitivity analyses in current RDS research are concerned with population estimators (means and proportions) in regard to network and sampling assumptions [@Gile15b] while failing to address the accuracy of RDS samples. This paper presents strategies for simulating RDS samples with known network and sample characteristics, so as to provide a basis from which to extend RDS analyses beyond the univariate framework. A sensitivity analysis is conducted to assess the accuracy of RDS samples, in terms of their ability to recover the true levels of homophily, differential activity and association between covariates.
In the next section, we introduce RDS as a network-based sampling technique and describe its sampling mechanism. In Section \[sec:methodo\], we present the methodology for simulating RDS samples from population networks. Section \[sec:simulation\] presents a simulation study assessing the accuracy of simulated RDS samples. In Section \[sec: casestudy\], we analyze a real-world dataset about HIV transmission among gay, bisexual and other men who have sex with men (GBM), recruited in Montréal using RDS.
Background
==========
Simulation of RDS samples requires two steps. First, a population network with known characteristics and relational structures must be simulated. Second, an RDS sample with prespecified characteristics is drawn from the population network [@Spil18; @Gile10]. In this section, we describe the required assumptions and key characteristics of the population and the sampling process.
The population network {#sec:popnetwork}
----------------------
Consider a target population of $N$ individuals. Following [@Hec97], [@Sal04] and [@Gile11], we assume that the individuals in the population, or *nodes* in the network, are connected by social ties.
The social network connecting members of the target population exists and is an undirected graph $G = (V, E)$ with no parallel edges or self-loops.
*Parallel edges* are two edges with the same pair of end vertices, and an edge is a *self-loop* if both of its end vertices coincide. The elements in the set $V$ of vertices are the $|V|=N$ individuals (or nodes) in the population, while the edges in $E$ represent social ties between members of the population. Let $\bm{y}$ be an $N\times N$ adjacency matrix representing the dyadic relationships in the network, with elements $y_{ij}=y_{ji}$ indicating the presence of an edge between nodes $i$ and $j$. Each node $i \in V$ is assigned a degree $d_i=\sum_{j=1}^N y_{ij}$, defined as the number of edges connected to that node. We assume that the degrees over the entire network are distributed according to $D=\left(D_0,\ldots,D_K\right)$, where $D_k=\sum_{i=1}^N I\left(d_i=k\right)$ represents the number of nodes with degree $k$, $K$ represents the maximum degree, and $D$ can be viewed as a population-level frequency table. The degree distribution is subject to the following consistency constraint: $\sum_{k=0}^K kD_k=\sum_{i=1}^N d_i=2|E|$, where $|E|$ is the number of edges in $G$. Finally, we define $\bar{d}=\frac{1}{N}\sum_{i=1}^N d_i$ as the mean degree of the network.
Let $\bm{z}$ be a $1\times N$ vector of two-valued attributes $z_i \in \{0,\;1\}$ for the $i-th$ individual in the population, and $p=\frac{1}{N}\sum_{i=1}^N z_i$ the proportion of individuals with attribute $z_i=1$. We focus on two-valued nodal attributes, but the results presented in this paper can be extended to categorical attributes. We define two additional features of the network structure on nodal attributes: *differential activity* ($D_a$) and *homophily* ($h$). Differential activity is equal to the ratio of mean degrees by attribute, $$D_a=\frac{\sum_{i=1}^N z_id_i}{\sum_{i=1}^N (1-z_i)d_i} \frac{1-p}{p},$$ and can be thought of as a measure of the relative ‘social connectedness’ of those with and without the trait. Homophily is the tendency for individuals with similar traits to share social ties. Several measures of homophily have been used in the RDS literature [@Gile18]. In this work, two related measures of homophily in the population network will be considered. Let $z_{ji}=1$ if node $i$ has attribute $j$. A fairly intuitive measure of homophily is Newman’s assortativity coefficient for categorical attributes [@Newm02], defined as $$\label{assortnewman}
h=\frac{\sum_{j\in\{0,\;1\}}p_{jj}-\sum_{j\in\{0,\;1\}}p_{j\bullet}p_{\bullet j}}{1-\sum_{j\in\{0,\;1\}}p_{j\bullet}p_{\bullet j}},$$ where $$p_{jk}=\frac{1}{|E|}\sum_{i=1}^N \sum_{l=1}^N y_{il}z_{ji}\left(1-z_{kl}\right),\,j\neq k,\,i \neq l$$ is the proportion of network edges linking a node of attribute $j$ to one of attribute $k$, $$p_{00}= \frac{1}{|E|}\sum_{i=1}^N \sum_{l<i} y_{il}\left(1-z_{ji}\right)\left(1-z_{kl}\right),\mbox{~~and~~}p_{11}= \frac{1}{|E|}\sum_{i=1}^N \sum_{l<i} y_{il}z_{ji}z_{kl},$$ Further, $p_{j\bullet}=\sum_{k\in \{0,\,1\}} p_{jk}$ is the fraction of network edges originating from a node of attribute $j$ and $p_{\bullet j}$ is the proportion of edges terminating in a node of category $j$. Newman’s assortativity coefficient ranges from $-1$ to $1$. A value of $h=1$ indicates a perfect assortative mixing, $h=0$ depicts no assortative mixing, while at $h=-1$ the network is said to be dissortative. The level of homophily can also be measured by $R=p_{11}/p_{10}$ ($p_{10}\neq 0$) [@Gile11]. These two metrics are related since, for the undirected graph $G$, Equation (\[assortnewman\]) can be expressed as $$h=\frac{R}{1+R}-\frac{2}{1+\eta\left(1+2R\right)},$$ where $\eta=\frac{1}{D_a}\frac{1-p}{p}$. Even though formulation (\[assortnewman\]) is easier to interpret, we will primarily rely on the measure $R$ in the simulations of Section \[sec:simulation\] for ease of implementation. An illustration of a network with a single nodal attribute and prespecified levels of prevalence, homophily and differential activity is presented in Figure \[fig:popnet\].
![A population network with minority group in grey, and with $N=12$, $p=0.33$, $\bar{d}=2.16$, $D_a=1.16$, $R=0.40\,(h=-0.20)$.[]{data-label="fig:popnet"}](popnet.pdf)
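To make these quantities concrete, the following minimal Python sketch (our own illustration, using $\textsf{networkx}$ rather than the $\textsf{statnet}$ toolchain used later in the paper; the toy graph and the attribute assignment are arbitrary) computes the prevalence $p$, mean degree $\bar{d}$, differential activity $D_a$, and both homophily measures $h$ and $R$ for a small network with a single binary attribute.

```python
import networkx as nx

# Toy population network with a binary nodal attribute z (1 = trait present).
# The graph and the attribute assignment are illustrative only.
G = nx.erdos_renyi_graph(n=50, p=0.15, seed=1)
z = {i: 1 if i < 15 else 0 for i in G.nodes()}        # roughly 30% prevalence
nx.set_node_attributes(G, z, "z")

N = G.number_of_nodes()
p = sum(z.values()) / N                                # prevalence of z = 1
dbar = 2 * G.number_of_edges() / N                     # mean degree

# Differential activity: ratio of the mean degrees of the two groups.
deg = dict(G.degree())
n1 = sum(z.values())
d1 = sum(deg[i] for i in G if z[i] == 1) / n1
d0 = sum(deg[i] for i in G if z[i] == 0) / (N - n1)
Da = d1 / d0

# Homophily: Newman's assortativity coefficient h and the ratio R = p11 / p10
# (the factor 1/|E| cancels in the ratio, so edge counts suffice).
h = nx.attribute_assortativity_coefficient(G, "z")
e11 = sum(1 for u, v in G.edges() if z[u] == 1 and z[v] == 1)
e10 = sum(1 for u, v in G.edges() if z[u] != z[v])
R = e11 / e10 if e10 else float("inf")

print(f"N={N}, p={p:.2f}, dbar={dbar:.2f}, Da={Da:.2f}, h={h:.2f}, R={R:.2f}")
```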
The RDS sampling process {#sec:RDSsample}
------------------------
RDS is a network-based sampling technique in which members of a hidden community reach across their personal social network to recruit other members into a study. We begin by describing how RDS works in practice, before considering how this translates to the implementation of RDS in simulation.
In practice, generating an RDS sample proceeds as follows:
1. Sampling starts with the selection of a fixed number of nodes, or *seeds*, in the network. In practice, seeds are often chosen purposefully so as to be as heterogeneous as possible with respect to nodal attributes. Seeds recruit members to the study by (i) inviting them to participate, and (ii) giving their invited social contacts a *coupon* that they return to the researcher, so that the researcher can track the social links in the recruitment process. Coupons have unique identifying numbers and/or letters to link recruiters to their recruits. Seeds, and successive participants, all receive a fixed number of coupons.
2. Each seed recruits further participants, up to the total number of coupons received. Individuals may only participate once in the study.
3. Each successive implementation of Step 2 is called a *wave*. The recruitment continues, through a number of waves, until the desired sample size is reached.
To consider how the above practical implementation of study recruitment can be formalized, we first state the assumptions on the graphical structure of RDS.
The RDS recruitment is conducted across edges of the undirected graph G.
No node in the social network can be sampled more than once.
Successive samples are obtained by sampling among the remaining unsampled neighbours (social contacts) of sampled nodes in the population. Each sampled node selects up to a predetermined fixed number of unsampled neighbors. The recruitment process stops when the desired sample size is attained. An example of the RDS sampling process is illustrated in Figure \[fig:RDSsampling\].
The general procedure to simulate an RDS sample is as follows:
1. Sample, without replacement, a fixed number of seeds from the $N$ nodes of the population network. The selection of seeds is either dependent or independent of $\bm{z}$. When the selection is conditional on nodal attributes, there is potential ‘seed bias’ induced by the selection of the initial sample, especially when there is strong homophily on $\bm{z}$.
2. For each seed, sample up to a number of nodes bounded by the number of coupons, without replacement, from among their unsampled neighbors. If the selection depends on $\bm{z}$, then there is *differential recruitment* in the sampling process.
3. Repeat step 2 until the desired sample size is reached. In practice, the target sample size is usually linked to the diversity of the RDS samples with respect to the population characteristics upon which sampling focuses.
The aim of this general procedure is to generate RDS samples with the desired network and attribute-related characteristics. Samples drawn from population networks often lead to sampling errors mainly driven by differential activity and homophily [@Wag17]; we will consider these biases further in Section \[sec:simulation\]. One key advantage of the RDS sampling process over other link-tracing sampling methods is that, through successive waves, the dependence of the final sample on the initial sample is reduced or eliminated. This relies on the approximation of the RDS sampling process as a regular Markov process [@Hec97; @Sal04]. From a practical standpoint, one needs to decide on the number of seeds to sample and coupons to distribute, as the latter is inversely proportional to the number of waves of sampling [@WHORDS2013] under the assumption that recruits accept an invitation randomly. When homophily on $\bm{z}$ is weak, the process will likely reach equilibrium after a small number of waves [@Gile10]. In this case, the decision on whether to distribute a small (large) number of coupons to a large (small) number of seeds will not have a great impact on the final sample. When the network is highly clustered, sampling should be conducted in a way that allows a broad range of individuals to recruit from their networks through many waves. This can be achieved by distributing a small number of coupons to an initial sample as diverse as possible with respect to the characteristics of the target population.
![Illustration of the RDS sampling process with one seed (dashed circle), two coupons per sampled node and a sample of size $8$. The minority group is represented in grey. The RDS recruitment chain navigating through the population network is illustrated in **(a)** and the resulting RDS recruitment graph is illustrated in **(b)**, indicating 5 waves of recruitment.[]{data-label="fig:RDSsampling"}](RDSsamplingtikz.pdf)
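A minimal Python sketch of this recruitment procedure is given below (again relying on $\textsf{networkx}$; the function name, the toy population network and all parameter values are our own illustrative choices, and the sketch assumes no seed bias and no differential recruitment, i.e. seeds are drawn uniformly and coupons are redeemed uniformly at random among unsampled neighbours).

```python
import random
import networkx as nx

def rds_sample(G, n, s=5, c=2, seed=None):
    """Draw an RDS sample of size (at most) n from graph G without replacement,
    starting from s uniformly chosen seeds and giving c coupons to every
    participant; returns the sampled nodes in order and each node's recruiter."""
    rng = random.Random(seed)
    seeds = rng.sample(list(G.nodes()), s)      # seed selection independent of attributes
    sampled, in_sample = list(seeds), set(seeds)
    recruiter = {v: None for v in seeds}
    wave = list(seeds)                          # current wave of recruiters
    while wave and len(sampled) < n:
        next_wave = []
        for node in wave:
            if len(sampled) >= n:
                break
            candidates = [u for u in G.neighbors(node) if u not in in_sample]
            rng.shuffle(candidates)
            for u in candidates[:c]:            # up to c coupons are redeemed at random
                if len(sampled) >= n:
                    break
                sampled.append(u)
                in_sample.add(u)
                recruiter[u] = node
                next_wave.append(u)
        wave = next_wave                        # recruitment may die out before n is reached
    return sampled, recruiter

# Illustration on a toy population network (no controlled homophily or activity).
G = nx.gnp_random_graph(1000, 0.1, seed=2)
nodes, recruiter = rds_sample(G, n=200, s=5, c=2, seed=3)
print(f"sampled {len(nodes)} nodes")
```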
Methodology {#sec:methodo}
===========
In this section, we describe a classical approach to simulating data from a population network, and a new approach that strictly controls inter-variable characteristics at the cost of giving up direct control of the network parameters.
Simulating a population {#sec:SimNetwork}
-----------------------
### A classical approach: Exponential random graphs {#sec:classic}
A common approach to generating the social network of the population is by simulating using Exponential Random Graph models [@harris2014introduction], a class of generative models based on exponential family distribution theory for modeling network dependence. Let $\bm{Y}$ be the random adjacency matrix for the population network. The joint distribution of its elements is defined as $$\label{ergm}
\mbox{P}\left(\bm{Y}=\bm{y}|\bm{z}\right)=\frac{\mbox{exp}\left\lbrace \bm{\theta} g\left(\bm{y},\bm{z}\right) \right\rbrace }{\bm{\kappa\left(\bm{\theta}\right)}},$$ where $g\left(\bm{y},\bm{z}\right) $ is a vector of network statistics and $\bm{\theta}$ its corresponding vector of parameters, $\bm{\kappa\left(\bm{\theta}\right)}=\sum_{\bm{y}} \mbox{exp}\left\lbrace \bm{\theta} g\left(\bm{y},\bm{z}\right) \right\rbrace$ is a normalizing constant. The main structural features of the network are fully captured in model (\[ergm\]) by choosing statistics to represent density, degree distribution by attribute and homophily on nodal attributes. The sufficient statistic for the network density is $g_0(\bm{y})=\sum_{i=1}^N\sum_{j=1}^Ny_{ij}=|E|$. The statistics for the degree distribution by attribute are obtained by counting the number of times a node with such attributes appears in an edge: $$g_1\left(\bm{y},\bm{z}\right) = \sum_{j=1}^N \sum_{k<j} y_{jk}z_jz_k+\sum_{j=1}^N \sum_{k=1}^N y_{jk} z_j\left(1-z_k\right),\mbox{~~and~~}$$
$$g_2\left(\bm{y},\bm{z}\right) =\sum_{j=1}^N \sum_{k<j} y_{jk}\left(1-z_j\right)\left(1-z_k\right)+ \sum_{j=1}^N \sum_{k=1}^N y_{jk}z_j\left(1-z_k\right).$$
The sufficient statistics for homophily are represented by the joint distribution of the node and neighbour’s attribute (also called the *mixing matrix*). For a $2\times2$ mixing matrix of an undirected graph, one needs to specify the following statistics: $$g_3\left(\bm{y},\bm{z}\right) = \sum_{j=1}^N \sum_{k<j} y_{jk}z_jz_k \mbox{~~and~~} g_4\left(\bm{y},\bm{z}\right)= \sum_{j=1}^N \sum_{k<j} y_{jk}\left(1-z_j\right)\left(1-z_k\right).$$ By expressing (\[ergm\]) in terms of the conditional log-odds of a tie between two nodes, one can show that $\bm{\theta}$ represents the log-odds of a tie conditional on all others [@harris2014introduction]. If we assume that $Y_{ij}$ and $Y_{kl}$ are independent for any $\left(i,\,j\right) \neq \left(k,\,l\right)$, then $g\left(\bm{y},\bm{z}\right) =g\left(\bm{y}\right)$=$\sum_{i=1}^N\sum_{j=1}^Ny_{ij}$ and (\[ergm\]) reduces to $$\label{ergm-bernou}
\mbox{P}\left(\bm{Y}=\bm{y}|\bm{z}\right)=\frac{\mbox{exp}\left( \theta |E| \right) }{\bm{\kappa\left(\theta\right)}},$$ which corresponds to the simplest random graph model, also called the Bernoulli model [@durrett_2006]. In model (\[ergm-bernou\]), the probability of a tie between any two nodes, $\frac{\exp\left(\theta\right)}{ 1+\exp\left(\theta\right) }$, equals the density of the network.
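The Bernoulli special case in Eq. (\[ergm-bernou\]) is easy to simulate directly; the following Python sketch (an illustration of ours, separate from the $\textsf{statnet}$ workflow described next; the value of $\theta$ is arbitrary) draws one such graph and checks that the realized density matches the tie probability.

```python
import math
import networkx as nx

# Bernoulli special case of the ERGM: with g(y) = |E| as the only statistic,
# every dyad is an independent edge with probability exp(theta) / (1 + exp(theta)).
theta = -2.0                                           # illustrative log-odds of a tie
p_edge = math.exp(theta) / (1.0 + math.exp(theta))
G = nx.gnp_random_graph(n=1000, p=p_edge, seed=4)
print(f"tie probability {p_edge:.3f}, realized density {nx.density(G):.3f}")
```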
ERGMs can be fitted via $\textsf{statnet}$ [@handcock:statnet], a suite of $\textsf{R}$ packages, including $\textsf{ergm}$ [@Hunter08], $\textsf{sna}$ [@Carter08-1] and $\textsf{network}$ [@Carter08-2], for the modeling of network data. The structural features of the population network are included as *terms* in the function $\textsf{ergm}$ of the same package. Homophily on nodal attributes and differential activity are specified in the function call to $\textsf{ergm}$ by using terms $\textsf{nodematch}$ and $\textsf{nodecov}$ respectively.
If there is more than one nodal covariate, a two-step procedure is used to control both the relationship between covariates and the network structure on each covariate. First, we generate covariates with known dependence and marginal distributions using the package $\textsf{GenOrd}$ [@Ale17]. Then we compute network statistics corresponding to homophily and differential activity for each covariate as inputs for $\textsf{ergm}$. We refer to this as the *classical* simulation approach.
Once the model is fully specified and fitted, one can simulate an undirected network from the distribution of all undirected networks that are consistent with the target statistics.
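As a sketch of the first step of the two-step procedure above, the following Python snippet generates a pair of binary covariates with prescribed marginals and a tunable association by thresholding a latent bivariate normal (a simple Gaussian-copula device standing in for the $\textsf{GenOrd}$ machinery; all function and parameter names are ours).

```python
import numpy as np

def correlated_binaries(n, p_x, p_z, rho_latent, seed=None):
    """Draw two binary covariates with marginal prevalences p_x and p_z by
    thresholding a bivariate normal with latent correlation rho_latent.
    Note: the realized correlation between the binaries is attenuated relative
    to rho_latent; GenOrd instead searches for the latent correlation that
    achieves a target correlation on the discrete scale."""
    rng = np.random.default_rng(seed)
    latent = rng.multivariate_normal([0.0, 0.0],
                                     [[1.0, rho_latent], [rho_latent, 1.0]],
                                     size=n)
    x = (latent[:, 0] <= np.quantile(latent[:, 0], p_x)).astype(int)
    z = (latent[:, 1] <= np.quantile(latent[:, 1], p_z)).astype(int)
    return x, z

# Example: 30% and 50% prevalences with a moderately strong latent correlation.
x, z = correlated_binaries(n=1000, p_x=0.3, p_z=0.5, rho_latent=0.6, seed=5)
print(np.mean(x), np.mean(z), np.corrcoef(x, z)[0, 1])
```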
### An indirect method: Layering known relationships onto exponential random graphs {#sec:indirect}
The classical approach to simulating a population network focuses on control of the network properties. Such approaches are particularly useful, and frequently employed, in methodological studies that aim to examine the properties of estimators for single-variable parameters such as the prevalence or mean [@Gile10; @Gile11; @Spil18]. These approaches are satisfactory for their intended purpose, but are not suitable for methodological study of multi-variable approaches such as regression analyses. There is a pressing need to develop simulation methods to study regression within an RDS context, as regression is frequently employed [@Ram13; @Rho15; @Spi09] with little methodological understanding of the properties of the methods used.
We propose an approach that begins with an ERGM that directly controls the network properties of several covariates $\bm{x}$, and then generates one or more additional covariates $\bm{z}$ with known dependence on $\bm{x}$. This approach gives the researcher direct control over the relationships between covariates, and hence over the ‘regression parameter’ and any additional covariates (including, for example, confounders), while a network structure on these additional covariates is induced by the strength of their relationship with $\bm{x}$. We call this an *indirect* simulation approach.
Let $\bm{x}$ be a $N\times q$ matrix of additional nodal covariates for the population network described in Section \[sec:popnetwork\]. Let $\rho_j$, $j=1,\dots, q$, be a measure of the association between $\bm{x_j}$ and $\bm{z}$. In Equation (\[ergm\]), homophily on $\bm{z}$ is dealt with by including parameters corresponding to the cells of the mixing matrix directly into the ERGM model. We consider a new setting in which homophily is defined on each of the $q$ nodal covariates. We investigate the ‘spill-over’ effect of homophily from $\bm{x}$ to $\bm{z}$ through $\bm{\rho}=\left(\rho_1,\, \dots,\, \rho_q\right)$ at the population level.
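A minimal sketch of this layering step for a single additional covariate ($q=1$) is given below; the conditional probabilities, and the way the association between $\bm{x}$ and $\bm{z}$ is parameterized, are our own illustrative choices rather than the exact scheme used in Section \[sec:simulation\].

```python
import random

def layer_binary_covariate(x_attrs, p1, p0, seed=None):
    """Given a dict of binary x values keyed by node, draw a binary z for each
    node with P(z=1 | x=1) = p1 and P(z=1 | x=0) = p0.  The gap p1 - p0 tunes
    the association between x and z, and hence the homophily indirectly induced
    on z through the network structure already imposed on x."""
    rng = random.Random(seed)
    return {i: int(rng.random() < (p1 if xi == 1 else p0))
            for i, xi in x_attrs.items()}

# Toy illustration with a plain dict of x values (30% prevalence); in practice
# x_attrs would be the node attribute of a simulated ERGM network.
x_attrs = {i: int(i < 300) for i in range(1000)}
z_attrs = layer_binary_covariate(x_attrs, p1=0.8, p0=0.2, seed=6)
```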
Simulating a study that applies RDS to the population network {#sec:SimNetwork}
-------------------------------------------------------------
In a simulation, RDS samples are drawn from the synthetic undirected population networks using a two-step procedure. First, $s$ nodes are sampled (sequentially) without replacement as *seeds*. In this work, we assume that there is no ‘seed bias’ as the selection regime of the initial sample does not depend on $\bm{z}$. Successive sampling *waves* are obtained by sampling sequentially, and without replacement, up to $c$ nodes from among the unsampled neighbours of each selected node. We assume that there is no differential recruitment. The process is halted once the sample size reaches $n$. The parameters of the RDS sampling process are defined in Table \[table:parameter\_RDS\].
|                         | Notation  |
|-------------------------|-----------|
| **Population network**  |           |
| Population size         | $N$       |
| Prevalence              | $p$       |
| Mean degree             | $\bar{d}$ |
| Differential activity   | $D_a$     |
| Homophily               | $R$       |
| **RDS sample**          |           |
| Number of seeds         | $s$       |
| Number of coupons       | $c$       |
| Sample size             | $n$       |

\[table:parameter\_RDS\]
Simulation and RDS sample evaluation {#sec:simulation}
====================================
The goal of the simulation study is to assess the accuracy of RDS samples drawn from population networks using the methodology described in Section \[sec:methodo\]. In this work, we define an ‘accurate’ RDS sample as a sample that preserves $(i)$ the mean degree, $(ii)$ the differential activity and $(iii)$ the level of homophily of the original network. We simulated population networks, with a single nodal attribute $\bm{z}$, from which RDS samples were drawn for the set of characteristics defined in Table \[table:parameter\_values\]. We performed 500 simulation runs for each set of characteristics.
|                         | Values              |
|-------------------------|---------------------|
| **Population network**  |                     |
| $N$                     | $1000$              |
| $p$                     | $0.1,\,0.5,\,0.8$   |
| $\bar{d}$               | $99.9$              |
| $D_a$                   | $0.5,\,1,\,4$       |
| $R$                     | $1,\,5$             |
| **RDS sample**          |                     |
| $s$                     | $5$                 |
| $c$                     | $2$                 |
| $n$                     | $200,\,400,\,800$   |

\[table:parameter\_values\]
We first study the accuracy of sampling from an ERGM with a single covariate, as this is relevant both to the classical data-generation approach and as the first step in the indirect data-generation approach. We then turn to the indirect approach to simulating network data to showcase how a variable can be generated dependent on an ERGM so as to directly control the covariate relationship while inducing homophily on the variable generated outside of the ERGM.
Degree distribution and homophily biases
----------------------------------------
The distribution of the relative biases for each set of network and sample characteristics are illustrated in Figures \[fig:deg.bias\]-\[fig:hmp.bias\] for the estimators of mean degree, differential activity and homophily, respectively. All cases are compared to the ideal setting in which $R=1$, $D_a=1$ and $p=50\%$. In this setting, there is negligible bias in the three estimators regardless of sample size.
Figure \[fig:deg.bias\] shows the distribution of the relative bias for the mean degree of the population network. When both groups are equally active ($D_a=1$), the bias is small to negligible for any level of homophily and prevalence. When $R=1$ and $p=80\%$, the population network is moderately heterophilic ($h\approx -0.5$) and the RDS sample exhibits small positive bias. This can be explained by the fact that in moderate heterophilic networks, the nodes in the majority group are attracted by high degree nodes in both groups. This mixing process helps nodes in the majority group gain more popularity, causing asymmetry in the degree distribution.
When the $\bm{z}$-present ($\bm{z}=1$) group is four times more active than the $\bm{z}$-absent group (so that $D_a=4$), the bias is more pronounced than in the baseline scenario ($D_a=1$). When $R=1$ and $p=10\%$, the mean bias is approximately $35\%$ for a moderate sample size ($n=200$). This can be explained by the fact that the population network is almost neutral ($h \approx 0.05$), prompting the more active minority group to mix with high degree nodes in both groups. The bias decreases as the $\bm{z}$-present group goes from minority to majority group and the network from neutral to moderately heterophilic ($h \approx -0.4$). When the $\bm{z}$-absent group is twice as active as the $\bm{z}$-present group ($D_a=0.5$), the reverse is observed but the bias is less pronounced than in the previous scenario. Note that the relative bias decreases as the sample size increases.
In Figure \[fig:da.bias\], we show the distribution of the relative bias for the level of activity in the network. The bias is negligible in all cases. There is more variability when the $\bm{z}$-absent minority group is twice as active as the $\bm{z}$-present group, which decreases as more nodes are sampled.
Figure \[fig:hmp.bias\] shows the bias distribution for homophily. When both groups are equally active, the bias is negligible when $R=1$. The bias is higher on average when the $\bm{z}$-present group, whether in the minority or the majority, is four times more active than the $\bm{z}$-absent group. Furthermore, the estimation of homophily exhibits more variability as the difference in the level of activity between the $\bm{z}$-present and the $\bm{z}$-absent groups increases. An important observation is that the estimator deteriorates as the sampling fraction increases. A similar result was observed by [@lin2013sampling] in a comparison of different sampling techniques on social networks. A possible explanation is that the RDS sampling process described in Section \[sec:RDSsample\] can be seen as degree-based conditional on the population network. Thus, by navigating through the population network, the RDS recruitment chain captures more of the original network structure (degree distribution and activity level) and less of its mixing structure (mixing matrix and homophily) and, as sampling progresses, the process becomes equivalent to probability-proportional-to-degree (without replacement) sampling.
Overall, RDS samples are more accurate when both groups ($\bm{z}$-present and $\bm{z}$-absent) are equally active and equally represented in the population. For mean degree and differential activity, the estimation becomes more accurate as the sample fraction increases. The RDS sampling process performs well in terms of recovering the true level of homophily for small to medium sampling fractions but deteriorates as the sampling fraction increases. The variability in the mean degree and homophily increases as the gap in activity level between the two groups widens.
![Mean degree bias for a population network of size $N=1000$. The x-axis shows the levels of population prevalence, $p=0.1,\,0.5,\,0.8$. The y-axis visualizes the relative bias of the mean degree estimator $\hat{\bar{d}}$ for a target value of $\bar{d}=99.9$. The $2\times 3$ grid depicts the levels of differential activity ($D_a=0.5,\,1,\,4$) and homophily ($R=1,\,5$). The boxplots for each set of characteristics are presented for three sample sizes ($n=200,\,400,\,800$). The number of seeds and coupons are set to $s=5$ and $c=2$, respectively.[]{data-label="fig:deg.bias"}](md.pdf)
![Differential activity bias for a population network of size $N=1000$ and mean degree $\bar{d}=99.9$. The x-axis shows the levels of population prevalence, $p=0.1,\,0.5,\,0.8$. The y-axis visualizes the relative bias of the differential activity estimator $\hat{D}_a$. The $2\times 3$ grid depicts the levels of differential activity ($D_a=0.5,\,1,\,4$) and homophily ($R=1,\,5$). The boxplots for each set of characteristics are presented for three sample sizes ($n=200,\,400,\,800$). The number of seeds and coupons are set to $s=5$ and $c=2$, respectively.[]{data-label="fig:da.bias"}](da.pdf)
![Homophily bias for a population network of size $N=1000$ and mean degree $\bar{d}=99.9$. The x-axis shows the levels of population prevalence, $p=0.1,\,0.5,\,0.8$. The y-axis visualizes the relative bias of the homophily estimator $\hat{h}$. The $2\times 3$ grid depicts the levels of differential activity ($D_a=0.5,\,1,\,4$) and homophily ($R=1,\,5$). The boxplots for each set of characteristics are presented for three sample sizes ($n=200,\,400,\,800$). The number of seeds and coupons are set to $s=5$ and $c=2$, respectively.[]{data-label="fig:hmp.bias"}](hmp.pdf)
Indirect homophily on $\bm{z}$ {#sec:indirecthomo}
------------------------------
As described above, in this indirect data generation approach, we control the association between a directly simulated network variable $\bm{z}$ and $q$ additional variables $\bm{x}$ via $\rho_j$. For simplicity, we consider the case where $q=1$. Let $\bm{x}$ and $\bm{z}$ be vectors of binary covariates. We set homophily parameters on $\bm{x}$ to $h_x=-0.25,\,-0.5,\,0.25,\,0.5,\,0.75$ and $\rho=0.1,\,0.5,\,0.8,\,0.9,\,-0.1,\,-0.5,\,-0.8,\,-0.9$.
Figure \[fig:indhomo\] displays the evolution of the level of homophily on $\bm{z}$, $h_z$, as a function of the coefficient $\rho$, for different levels of homophily on $\bm{x}$. For homophilic networks ($h_x=0.25,\,0.5,\,0.75$), homophily on $\bm{z}$ increases (decreases) as the positive (negative) association between $\bm{z}$ and $\bm{x}$ becomes stronger. The reverse is observed when the network is heterophilic ($h_x=-0.25,\,-0.5$). This result shows that (1) homophily on an attribute can be ‘transferred’ to another and (2) the magnitude of the transfer depends on the strength of the association between the two attributes. One could naturally then ‘layer’ additional covariates onto the network, as needed. For example, to understand the properties of a causal estimator in the presence of confounding, one could generate the confounding variables $\bm{x}$ to have a particular network structure, then simulate an exposure $\bm{z}$ as a function of $\bm{x}$, and further simulate an outcome that is affected by both the confounders and the exposure.
![Indirect homophily on $\bm{z}$ for a population network of size $N=1000$ and mean degree $\bar{d}=99.9$. The x-axis displays the strength of the association, $\rho=0.1,\,0.5,\,0.8,\,0.9,\,-0.1,\,-0.5,\,-0.8,\,-0.9$. The y-axis visualizes the level of homophily on $\bm{z}$, $h_z$, for different levels of homophily on $\bm{x}$, $h_x=-0.25,\,-0.5,\,0.25,\,0.5,\,0.75$.[]{data-label="fig:indhomo"}](INDHP.pdf)
Case Study {#sec: casestudy}
==========
We now turn to data collected through the Engage study, a national cross-sectional study undertaken in three large Canadian cities, Montréal, Toronto and Vancouver. The main goal of the Engage study is to determine the individual, social and community-level factors that impact HIV and STI transmission and related behaviours among GBM [@DRSP19]; for the Engage Montréal recruitment network, see Figure \[fig:engagenet\]. In this example, we focus on data collected in Montréal, and aim to generate, as a proof of concept, synthetic samples that mimic a subset of the observed data in terms of key covariate features including mean degree, differential activity, homophily, and the correlation between the variables.
Participants in the Engage study were recruited using RDS. The process started with the selection of $s=27$ seeds with ages ranging from 16 to 80, who identified as French Canadian (17), English Canadian (1), European (4), Caribbean (1), Arab (1), South-East Asian (1) and mixed (2), with four participants living with HIV. Seeds were selected following a formative assessment and community mapping, and to be as heterogeneous as possible with respect to the diversity (e.g., HIV status, ethnicity) of the GBM community. At the end of their interview, participants received $c=6$ uniquely identified coupons, along with monetary and non-monetary incentives (complete STI screening), to recruit their peers into the study. The study was conducted from February 2017 through June 2018 for a total of $n=1179$ GBM recruits. Approximately $55\%$ of the recruited individuals who were given coupons, including $6$ seeds, did not recruit anyone, while $82\%$ of the effective recruiters recruited between $1$ and $3$ members.
![Representation of the RDS network among $n=1179$ gay, bisexual and other men who have sex with men (GBM) in Montréal, 2018. On the left, a sphere representation of the network. On the right, a representation of the recruitment tree in which nodes are aligned by wave.[]{data-label="fig:engagenet"}](engagenet.pdf)
Descriptive statistics of the RDS sample are displayed in Table \[table:descstat\]. About $33\%$ of respondents were aged less than 30 years, seven out of ten were born in Canada, two-thirds were French or English Canadian, $30\%$ had a high school diploma or lower, and around $58\%$ earned less than $30\,000\$$ in annual income. Around $86\%$ of respondents described themselves as gay and two out of five reported being in a relationship with a main partner. In the past six months, almost $14\%$ of GBM recruits declare using crack cocaine, and less than $1\%$ reported using a syringe used by someone else in the past six months. About two-thirds of respondents reported having anal sex without a condom during the past six months, and almost $17\%$ reported that they were living with HIV.
The mean degree of the observed RDS network is $\bar{d}=51.77$. The level of homophily on covariates described in Table \[table:descstat\] ranged from 0.08 (Use of a syringe used by someone else) to 0.46 (Age), depicting a small to moderate homophilic network on average, with respect to nodal attributes. The differential activity level ranged from 0.67 (Age) to 1.40 (Place of birth). Respondents with a college degree and those with a lower diploma were almost equally active (as measured in terms of their degree, or number of social links), while respondents who reported living with HIV were $32\%$ more active than those who reported an HIV negative status.
The goal of this example is to simulate RDS samples with network and sample characteristics similar to those of the Engage population network. First, we simulate $1000$ population networks with three nodal covariates: condomless anal sex (CAS), currently in a relationship with a main partner (CIR) and HIV status (HIV+), using two methods. The first method, labeled *classical*, is described in Section \[sec:classic\]. The second method, labeled *indirect*, takes advantage of the ‘spill-over’ effect of homophily and is described in Section \[sec:indirect\]. The true size of the population is set to $N=40400$ [@ISQ16].
For each nodal attribute, we estimate the prevalence, the level of homophily and the differential activity by adjusting for each individual’s reported social network size using the RDS-II estimator [@Hec02] and take these values to be the true values in the population. Then, for each simulated population network, we simulate an RDS sample with characteristics that mimic those of the empirical RDS sample. The summary statistics of the network and RDS sample characteristics are presented in Table \[table:parameter\_Engage\] (see Appendix). The association matrix for the three nodal attributes is presented in Table \[table:correlation\] and displayed in the Appendix. There is a positive and significant association between having sex without a condom and being in a relationship with a main partner. Having a positive HIV status is not significantly associated with having sex without a condom or being in a relationship. We compute relative biases for differential activity, homophily, mean degree and association coefficients on each of the three nodal covariates. Figure \[fig:engage.appli\] shows the bias distribution for differential activity, homophily, mean degree and the coefficient of association.
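The degree adjustment mentioned above can be sketched as follows; the snippet assumes the usual inverse-degree form of the RDS-II estimator, in which each respondent is weighted by the reciprocal of the reported network size, and the data are invented for illustration.

``` python
import numpy as np

# RDS-II style prevalence estimate: weight each respondent by 1/degree.
degree = np.array([40, 55, 62, 30, 80, 45], dtype=float)   # reported network sizes
y      = np.array([ 1,  0,  1,  0,  1,  0])                # indicator of the trait

w = 1.0 / degree
p_rds2  = np.sum(w * y) / np.sum(w)    # degree-adjusted prevalence estimate
p_naive = y.mean()                     # unweighted sample proportion, for comparison
print(p_rds2, p_naive)
```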
The relative bias for the level of activity is small to negligible for the three covariates. The bias is more pronounced for HIV+ ($-2.9\%$ on average) as this group is $32\%$ more active than the HIV- group. The bias is negligible for the nodal attribute CIR ($<1\%$ on average) as both groups are approximately equally represented ($p=44\%$) and equally active ($D_a=0.97$).
The homophily bias is small (for HIV+) to negligible (for CAS and CIR) on average. Overall, this result was expected as groups for both CAS and CIR are approximately neutral, equally active and equally represented in the network. Although small ($-2.56\%$ on average), the ‘magnitude’ of the relative homophily bias for HIV status was also expected as the HIV+ group is $32\%$ more active, exhibits medium homophilic behavior ($h=0.34$) and represents $17\%$ of the network population.
Interpretation of the mean degree bias is more complicated as all nodal covariates contribute to defining the structure of the population network. We hypothesize that the bias is mainly driven by HIV status as the minority HIV positive group is more active than the HIV negative group while exhibiting a medium homophilic behavior, prompting high degree nodes to mix mainly with other high degree nodes, thus causing asymmetry in the degree distribution.
Overall, the *classical* and *indirect* methods of simulation offer similar performances for differential activity, homophily and mean degree. The indirect method performs somewhat better at recovering the true level of association between nodal attributes, at the cost of some control over the level of homophily. This trade-off may nevertheless be useful to methodologists wishing to examine the properties of estimators of associations between variables that arise from network data collected via RDS.
                                                        **$n$**   **%**    **$h$**   **$D_a$**
  ----------------------------------------------------- --------- -------- --------- -----------
  **Socio-demographic characteristics**                                               
  Age less than 30                                        $384$     $32.6$   $0.46$    $0.67$
  Born in Canada                                          $820$     $69.6$   $0.35$    $1.40$
  Not French or English Canadian                          $445$     $37.8$   $0.32$    $0.71$
  Highest diploma lower than college's                    $352$     $29.9$   $0.23$    $0.98$
  Less than $30\,000\$$ in annual income                  $678$     $57.5$   $0.20$    $0.77$
  **Sexuality**                                                                        
  Describe oneself as gay                                 $1016$    $86.2$   $0.27$    $1.35$
  Currently in a relationship with a main partner         $508$     $43.1$   $0.09$    $0.95$
  Anal sex without a condom during the past 6 months      $761$     $64.5$   $0.17$    $1.18$
  **Drug Use**                                                                         
  Use of crack cocaine                                    $158$     $13.4$   $0.27$    $1.27$
  Use of a syringe used by someone else                   $68$      $0.06$   $0.08$    $1.16$
  **Health Status**                                                                    
  HIV positive                                            $200$     $16.9$   $0.38$    $1.32$
  ----------------------------------------------------- --------- -------- --------- -----------

\[table:descstat\]
![ Differential activity ($D_a$), homophily ($h$), association coefficient ($\rho$) and mean degree ($\bar{d}$) biases for the underlying population network of gay, bisexual and other men who have sex with men (GBM) recruits. The nodal attributes are *Condomless anal sex* (CAS), *Currently in a relationship with a main partner* (CIR) and *HIV positive* (HIV+).[]{data-label="fig:engage.appli"}](Engage_appli.pdf)
Discussion
==========
The simulation of RDS samples from population networks is an important methodological issue. While previous simulation studies of RDS have focused on the ability of various estimators to recover network parameters, there has been little study of the impact of those network parameters on the resulting samples, independent of any estimation approach. This article shows that sampling errors in networks for RDS occur when there is (1) a difference in the level of activity between the two groups defined by a nodal attribute and (2) homophily on nodal attributes.
To date, studies that have used RDS have employed various forms of regression analysis, and yet there is little literature to guide best practices. Variously, studies have treated the data as collected from random sampling and applied linear and logistic regressions [@Ram13]. Some studies have included RDS weights only [@Johnston2010TheAO], while others have ignored RDS weights but adjusted for the seed as a random effect [@Rho15]. [@Spi09] proposed a mixed effects model on features such as the recruitment tree and the recruiter-recruit dyad to account for dependence, using weights at different levels of clustering when appropriate. This work was presented as a general guideline on how to perform regression analysis using RDS, but lacked technical details on how the proposed methods would perform under different RDS settings.
We have presented a new approach to simulating network data that allows for the ‘spill-over’ effect of homophily between two nodal attributes through their level of association, which can be directly controlled. The resulting RDS simulation method is more accurate than the classical approach in terms of recovering the true level of association between nodal attributes, with some cost in terms of the variability of homophily. The proposed method will allow the researcher to directly control the relationship between covariates, as in a regression setting, and to accommodate additional covariates (confounding, for example) and covariates of various types (continuous, for example). This is an important first step in developing simulation methods to perform regression analysis in an RDS context. This result will be particularly useful to RDS methodologists, who aim to provide new inferential tools or validate the approaches currently being used in practice.
Appendix {#appendix .unnumbered}
========
                                                         Estimated value   $95\%$ CI
  ----------------------------------------------------- ----------------- -------------------
  **Population network**                                                    
  Population size                                         $40400$           
  Mean degree                                             $16.63$           
  Prevalence ($\%$)                                                         
  *Condomless anal sex in the past six months*            $57.9$            $[52.7,\,63.0]$
  *Currently in a relationship*                           $43.9$            $[38.8,\,49.0]$
  *HIV positive*                                          $12.7$            $[9.3,\,16.0]$
  Differential activity                                                     
  *Condomless anal sex in the past six months*            $1.32$            
  *Currently in a relationship*                           $0.97$            
  *HIV positive*                                          $1.40$            
  Homophily                                                                 
  *Condomless anal sex in the past six months*            $0.17$            
  *Currently in a relationship*                           $0.12$            
  *HIV positive*                                          $0.34$            
  **RDS sample**                                                            
  Number of seeds                                         $27$              
  Number of recruits per participant                                        
  0                                                       $651$             
  1                                                       $236$             
  2                                                       $117$             
  3                                                       $81$              
  4                                                       $49$              
  5                                                       $27$              
  6                                                       $18$              
  Sample size                                             $1179$            
  ----------------------------------------------------- ----------------- -------------------

\[table:parameter\_Engage\]
[1. CAS ]{} [2. CIR]{} [3. HIV+ ]{}
--------------------------------------- ------------- ------------------- -------------------
1\. Condomless anal sex (CAS) 1 $0.104$ ($0.115$) $0.023$ ($0.018$)
2\. Currently in a relationship (CIR) 1 $0.046$ ($0.002$)
3\. HIV positive (HIV+) 1
  : Correlation matrix of three nodal covariates for the Engage RDS sample. Unweighted and weighted correlations are displayed, with weighted correlations in parentheses.
p-value$<0.001$.
\[table:correlation\]
Acknowledgment {#acknowledgment .unnumbered}
--------------
The authors would like to thank the Engage study participants, office staff, and community engagement committee members, as well as our community partner agencies REZO, ACCM and Maison Plein Coeur. The authors also wish to acknowledge the support of David M. Moore, Nathan J. Lachowsky and Jody Jollimore and their contributions to the work presented here. Engage/Momentum II is funded by the Canadian Institutes for Health Research (CIHR, TE2-138299), the CIHR Canadian HIV/AIDS Trails Network (CTN300), the Canadian Foundation for AIDS Research (CANFAR, Engage), the Ontario HIV Treatment Network (OHTN, 1051), the Public Health Agency of Canada (Ref: 4500370314), Canadian Blood Services (MSM2017LP-OD), and the Ministère de la Santé et des Services sociaux (MSSS) du Québec.
Erica E. M. Moodie acknowledges a chercheur boursier senior career award from the Fonds de recherche du Québec – Santé and a Discovery Grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada (RGPIN-2019-04230).
References (surviving links only):

<https://doi.org/10.1080/03610918.2016.1146758>\
<https://www.jstatsoft.org/v024/i02>\
<https://www.jstatsoft.org/v024/i06>\
<http://www.stat.gouv.qc.ca/statistiques/sante/etat-sante/sante-globale/sante-quebecois-2014-2015.pdf>\
<http://statnetproject.org>\
<https://books.google.ca/books?id=scQ_AwAAQBAJ>\
[santemontreal.qc.ca/professionnels/drsp/sujets-de-a-a-z/harsah/documentations/engage-men.ca/fr/montreal](santemontreal.qc.ca/professionnels/drsp/sujets-de-a-a-z/harsah/documentations/engage-men.ca/fr/montreal)
[^1]: E-mail: *[email protected]*
---
abstract: 'Only 5 Of?p stars have been identified in the Galaxy. Of these, 3 have been studied in detail, and within the past 5 years magnetic fields have been detected in each of them. The observed magnetic and spectral characteristics are indicative of organised magnetic fields, likely of fossil origin, confining their supersonic stellar winds into dense, structured magnetospheres. The systematic detection of magnetic fields in these stars strongly suggests that the Of?p stars represent a general class of magnetic O-type stars.'
---
Introduction
============
The enigmatic Of?p stars are identified by a number of peculiar and outstanding observational properties. The classification was first introduced by Walborn (1972) according to the presence of C [iii]{} $\lambda 4650$ emission with a strength comparable to the neighbouring N [iii]{} lines. Well-studied Of?p stars are now known to exhibit recurrent, and apparently periodic, spectral variations (in Balmer, He [i]{}, C [iii]{} and Si [iii]{} lines) with periods ranging from days to decades, strong C [iii]{} $\lambda 4650$ in emission, narrow P Cygni or emission components in the Balmer lines and He [i]{} lines, and UV wind lines weaker than those of typical Of supergiants (see Nazé et al. 2010 and references therein).
Only 5 Galactic Of?p stars are known (Walborn et al. 2010): HD 108, HD 148937, HD 191612, NGC 1624-2 and CPD$-28^{\rm o} 2561$. Three of these stars - HD 108, HD 148937 and HD 191612 - have been studied in detail. In recent years, HD 191612 was carefully examined for the presence of magnetic fields (Donati et al. 2006), and was clearly detected. Recent observations, obtained chiefly within the context of the Magnetism in Massive Stars (MiMeS) Project (Martins et al. 2010; Wade et al., in prep) have furthermore detected magnetic fields in HD 108 and HD 148937, thereby confirming the view of Of?p stars as a class of slowly rotating, magnetic massive stars.
HD 191612
=========
HD 191612 was the first Of?p star in which a magnetic field was detected (Donati et al. 2006). Subsequent MiMeS observations with ESPaDOnS@CFHT (Wade et al., in prep) confirm the existence of the field, and demonstrate the sinusoidal variability of the longitudinal field with the H$\alpha$ and photometric period of 537.6 d. As shown in Fig. 1, the longitudinal field, H$\alpha$ and photometric extrema occur simultaneously when folded according to the 537.6 d period. This implies a clear relationship between the magnetic field and the circumstellar envelope. We interpret these observations in the context of the oblique rotator model, in which the stellar wind couples to the kilogauss dipolar magnetic field, generating a dense, structured magnetosphere, resulting in all observables varying according to the stellar rotation period.
HD 108
======
HD 108 was the second Of?p star in which a magnetic field was detected (Martins et al. 2010). Based on long-term photometric and spectroscopic monitoring, HD 108 is suspected to vary on a timescale of 50-60 y (Nazé et al. 2001). The magnetic observations acquired by Martins et al. from 2007-2009 show at most a marginal increase of the longitudinal field during more than 2 years of observation. This supports the proposal that the variation timescale is in fact the stellar rotational period, and that HD 108 is a magnetic oblique rotator that has undergone extreme magnetic braking.
HD 148937
=========
HD 148937 was recently observed intensely by the MiMeS Collaboration, resulting in the detection of circular polarisation within line profiles indicative of the presence of an organised magnetic field of kilogauss strength (Wade et al., in prep). Although the field is consistently detected in the observations, no variability is observed, in particular according to the 7.03 d spectral period. This result supports the proposal by Nazé et al. (2010) that HD 148937 is observed with our line-of-sight near the stellar rotational pole.
Donati et al., 2006, *MNRAS*, 365, 6\
Martins et al., 2010, *MNRAS*, in press (arXiv:1005.1854)\
Nazé et al., 2010, *A&A*, in press (arXiv:1006.2054)\
Nazé et al., 2001, *A&A*, 372, 195\
Walborn et al., 2010, *ApJ*, 711, 143
---
abstract: 'The quasi one-dimensional compound [BaCu$_2$Si$_2$O$_7$]{} demonstrates numerous spin-reorientation transitions both for a magnetic field applied along the easy axis of magnetization, and for a magnetic field applied perpendicular to it. The magnetic phase diagram for all three principal orientations is obtained by magnetization and specific heat measurements. Values of all critical fields and low-temperature values of magnetization jumps are determined for all transitions.'
address:
- '$^1$ Laboratorium für Festkörperphysik, ETH Zurich, 8093 Zürich, Schweiz'
- '$^2$ P. L. Kapitza Institute for Physical Problems RAS, 119334 Moscow, Russia'
- '$^3$ Laboratoire de Physico-Chimie de l’Etat Solide, Université Paris-Sud, 91405 Orsay cedex, France'
author:
- 'V.N.Glazkov$^{1,2}$, G.Dhalenne$^3$ and A.Revcolevschi $^3$, A.Zheludev$^1$'
title: 'Multiple spin-flop phase diagram of [BaCu$_2$Si$_2$O$_7$]{}.'
---
Introduction.
=============
A spin-flop transition is a well-known characteristic feature of easy-axis antiferromagnets. It is caused by a competition between the anisotropy energy, which is minimized by aligning the order parameter along the easy axis, and the Zeeman energy, which is minimized for an antiferromagnet by aligning the order parameter perpendicularly to the field. Thus, at a certain field value, the order parameter re-orients itself, going away from the easy axis. For a collinear antiferromagnet only one transition of such sort is usually observed when the magnetic field is applied along the easy axis.
The quasi-one-dimensional compound [BaCu$_2$Si$_2$O$_7$]{} has attracted much interest due to its “excessive” spin-flop transitions. The main exchange integral in this compound is J=24.1meV, while inter-chain interactions are a factor of 100 smaller [@kenzelman-prb-2001]. It orders antiferromagnetically at $T_N=9.2$K. The ordered local magnetic moment at zero field was found to be equal to 0.15$\mu_B$ [@kenzelman-prb-2001]. The easy axis of magnetization is aligned along the $c$ direction of the orthorhombic crystal. This is confirmed by magnetization measurements [@tsukada-prl-2001] and by neutron diffraction results [@zheludev-prb-2002]. Instead of the expected single spin-flop transition, two spin-flop transitions were observed on the magnetization curves when the field was applied along the easy axis ${\mathbf{H}}||c$ [@tsukada-prl-2001] at $\mu_0 H_{c1}=2.0$T and $\mu_0 H_{c2}=4.9$T. Later, ultrasonic studies [@poirier-prb-2002] have revealed another phase transition when the field was applied perpendicularly to the easy axis ${\mathbf{H}}||b$, with a critical field value of $\mu_0
H_{c3}=7.8$T. Finally, a spin-reorientation transition in the third orientation ${\mathbf{H}}||a$ was observed by electron-spin-resonance [@glazkov-prb-2005] with a critical field of $\mu_0 H_{c4}=11$T. The exact reason for these numerous phase transitions is not completely clear yet. It was suggested that a strong reduction of the ordered moment could increase the importance of the anisotropy of the transverse susceptibility [@glazkov-prb-2005].
Despite these unusual properties, a detailed characterization of the magnetic phase diagram of this compound remains incomplete. The phase diagram in the ${\mathbf{H}}||c$ orientation was determined by magnetization measurements [@tsukada-prl-2001] and ultrasonic studies [@poirier-prb-2002]. For the ${\mathbf{H}}||b$ orientation, only ultrasonic data are available [@poirier-prb-2002]. Finally, the phase transition in the ${\mathbf{H}}||a$ orientation was reported only at a temperature of 1.5K in electron-spin resonance experiments [@glazkov-prb-2005]. No magnetization studies of the phase transitions for ${\mathbf{H}}||a,b$ were reported to date.
In the present paper we fill in these gaps and report results of magnetization and specific heat studies of the phase diagram of [BaCu$_2$Si$_2$O$_7$]{}.
Sample preparation and experimental details.
=============================================
We used single crystalline samples of [BaCu$_2$Si$_2$O$_7$]{} taken from the same batch as those used in Ref.[@glazkov-prb-2005]. All measurements were performed on the same sample of approximate size $2\times3\times0.5$ mm$^3$. Sample orientation was checked by X-ray diffraction on a Bruker APEX-II diffractometer. The misalignment of the mounted sample is estimated to be within 5$^\circ$.
Magnetization was measured by a Quantum Design MPMS-XL SQUID-magnetometer and a Quantum Design PPMS system equipped with a vibrating sample magnetometer.
Specific heat was measured using Quantum Design PPMS system. It was measured by applying a controlled heat pulse to the platform with the sample, which is connected to the cryostat with a stable heat-link. The sample temperature relaxation curve $T(t)$ was then fitted by a two-$\tau$ model, yielding the value of the specific heat. To improve the precision of the transition temperature determination, a slope analysis technique was additionally used. This technique makes use of the thermal balance equation $P=C{\frac{dT_{sample}}{dt}}+K(T_{sample}-T_{cryo})$ (where $C$ is the total specific heat of the sample and the platform, $P$ — the heater power, $K$ — the heat link thermal conductivity, $T_{sample}$ and $T_{cryo}$ — being the temperatures of the sample and of the cryostat, respectively).
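The sketch below illustrates how the slope analysis recovers the specific heat from a relaxation curve using the thermal balance equation quoted above; the numerical values (heat-link conductivity, base temperature, decay amplitude) are invented for illustration and do not correspond to the actual measurement.

``` python
import numpy as np

# During the decay (heater off, P = 0) the balance P = C*dT/dt + K*(T - T_cryo)
# gives C = -K*(T - T_cryo)/(dT/dt).  All numbers below are assumed.
K, T_cryo, C_true = 2.0e-7, 2.00, 1.0e-6           # W/K, K, J/K
t = np.linspace(0.0, 30.0, 3001)                    # s
T = T_cryo + 0.05 * np.exp(-t * K / C_true)         # ideal single-tau decay

dTdt = np.gradient(T, t)
mask = np.abs(dTdt) > 1e-6                          # avoid dividing by a flat tail
C_est = -K * (T[mask] - T_cryo) / dTdt[mask]
print(C_est.mean())                                 # ~1e-6 J/K, recovering C_true
```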
Experimental results and discussion.
====================================
Examples of magnetization and specific heat curves are shown in Figures \[fig:m(h)\] and \[fig:14T\_c(t)\]. Both demonstrate clear anomalies at the phase transitions. This allows us to build phase diagrams in all orientations of the applied magnetic field, as shown in Figure \[fig:t-h\].
The zero-field transition temperature is $T_N|_{H=0T}=9.19\pm0.01$K. The transition temperature is field dependent, without strong anomalies at the crossings of the phase transition lines. For ${\mathbf{H}}||a$, the transition temperature slightly increases with field, while for the other two principal orientations, it decreases with field. The transition temperatures at a field of 14T are: for ${\mathbf{H}}||a$ $T_N|_{H=14T}=9.35\pm0.01$K, for ${\mathbf{H}}||b$ $T_N|_{H=14T}=9.09\pm0.02$K, for ${\mathbf{H}}||c$ $T_N|_{H=14T}=8.21\pm0.02$K.
The low-temperature values of the critical fields, as determined by our magnetization measurements, are: for ${\mathbf{H}}||c$ $H_{c1}=(1.89\pm0.10)$T, $H_{c2}=(4.72\pm0.03)$T; for ${\mathbf{H}}||b$ $H_{c3}=(7.39\pm0.03)$T; and for ${\mathbf{H}}||a$ $H_{c4}=(11.40\pm0.08)$T. These values are in good agreement with published data [@tsukada-prl-2001; @poirier-prb-2002; @glazkov-prb-2005]. The critical field values are weakly temperature dependent: the corresponding phase transition lines are almost horizontal.
The measured magnetic susceptibilities of the higher-field phases are always higher than those of lower-field phases (see Figure \[fig:m(h)\]). This confirms the identification of the field-induced phase transitions as spin-reorientation transitions, caused by the competition between the order parameter anisotropy and the Zeeman energy. Susceptibility jumps $\Delta\chi=\Delta(M/H)$ at these transitions, as measured at 2K, are: for ${\mathbf{H}}||c$ $(3.54\pm0.05)\cdot 10^{-4}$emu/(mole Cu) and $(1.35\pm0.05)\cdot10^{-4}$emu/(mole Cu), for the first and second spin-flops, respectively; for ${\mathbf{H}}||b$ $(0.813\pm0.020)\cdot10^{-4}$emu/(mole Cu); and for ${\mathbf{H}}||a$ $(0.085\pm0.020)\cdot10^{-4}$emu/(mole Cu).
A model that describes all low-temperature phase transitions and antiferromagnetic resonance frequency-field dependences was proposed in Ref.[@glazkov-prb-2005]. This model suggests that anisotropic contributions to the transverse susceptibility are unusually large in [BaCu$_2$Si$_2$O$_7$]{}, probably due to the strong reduction of the ordered magnetic moment. The low-energy dynamics of the antiferromagnetic order parameter is then described by the potential energy:
$$\begin{aligned}
U&=&-\frac{1}{2}{\bigl[\mathbf{l}\times\mathbf{H}\bigr]}^2+a_1l_x^2+a_2l_y^2+\xi_1{\bigl(\mathbf{H}\cdot\mathbf{l}\bigr)}H_xl_x
+\nonumber\\
&&+\xi_2{\bigl(\mathbf{H}\cdot\mathbf{l}\bigr)}H_yl_y
-(\xi_1+\xi_2){\bigl(\mathbf{H}\cdot\mathbf{l}\bigr)}H_zl_z-\nonumber\\
&&-B_1H_x^2(l_y^2-l_z^2)-B_2H_y^2(l_x^2-l_z^2)-B_3H_z^2(l_x^2-l_y^2)+\nonumber\\
&&+C_1H_yH_zl_yl_z+C_2H_xH_zl_xl_z+C_3H_xH_yl_xl_y\label{eqn:energy}\end{aligned}$$
Here, Cartesian coordinates are chosen as $x||a$, $y||b$ and $z||c$ and ${\mathbf{l}}$ is the antiferromagnetic order parameter. The exchange part of the transverse susceptibility is set to unity for the sake of convenience. The phase transitions are described in terms of this potential energy as rotations of the order parameter ${\mathbf{l}}$. In the low-field phases, the order parameter is aligned along the easy axis, ${\mathbf{l}}||z$. When the field is applied along the easy axis $z$, it rotates at $H_{c1}$ towards the second easy axis, ${\mathbf{l}}||y$, and at $H_{c2}$, towards the hard axis ${\mathbf{l}}||x$. When the field is applied along $y$ or $x$, the order parameter rotates towards the $x$ or the $y$ axis at $H_{c3}$ and $H_{c4}$, respectively. These orientations of the order parameter are shown schematically in Figure \[fig:t-h\].
The parameter values, all determined from best fits of the electron spin resonance data [@glazkov-prb-2005], are: $\gamma=2.82$GHz/kOe, $a_1=400$kOe$^2$, $a_2=118$kOe$^2$, $B_1$=0.0047, $B_2=0.0370$, $B_3=0.0614$, $\xi_1=0.135$, $\xi_2=-0.03$. The $C_i$ constants could be ignored in the principal orientations of the magnetic field.
Our magnetization measurements allow an independent check of this model, since susceptibility jumps at the phase transitions are related to certain parameters of the potential (\[eqn:energy\]). Predicted susceptibility jumps at spin-reorientation transitions are: $$\begin{aligned}
\Delta\chi_1&=&1-2B_3-2(\xi_1+\xi_2)\\
\Delta\chi_2&=&4B_3\\
\Delta\chi_3&=&4B_2\\
\Delta\chi_4&=&4B_1\end{aligned}$$
Here, $\Delta\chi_i$ are susceptibility jumps at the corresponding critical field $H_{c~i}$. In order to exclude the scaling factor involved in the choice of energy units in Eqn.\[eqn:energy\] they can be normalized to $\Delta\chi_1$.
A comparison of the calculated and measured values is given below:
measured calculated
----------------------------- ----------------- ------------
$\Delta\chi_2/\Delta\chi_1$ $0.381\pm0.015$ 0.365
$\Delta\chi_3/\Delta\chi_1$ $0.230\pm0.007$ 0.220
$\Delta\chi_4/\Delta\chi_1$ $0.024\pm0.006$ 0.028
The correspondence between experimental and model values is close to perfect. Thus, the magnetization measurements are fully compatible with the proposed form of potential energy. This supports the model of phase transitions proposed in Ref.[@glazkov-prb-2005] and points to the non-trivial effects of spin-reduction on spin-reorientation transitions in low-dimensional antiferromagnets.
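As a quick numerical cross-check, the “calculated” column of the comparison can be reproduced directly from the parameter values quoted above (with the exchange susceptibility set to unity); a minimal sketch:

``` python
# Susceptibility jumps predicted by the model, using the quoted parameters.
B1, B2, B3 = 0.0047, 0.0370, 0.0614
xi1, xi2 = 0.135, -0.03

d1 = 1 - 2 * B3 - 2 * (xi1 + xi2)     # jump at H_c1
d2, d3, d4 = 4 * B3, 4 * B2, 4 * B1   # jumps at H_c2, H_c3, H_c4

print(d2 / d1, d3 / d1, d4 / d1)      # approx 0.37, 0.22, 0.03
```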
The work was supported by RFBR Grant No.09-02-00736.
M. Kenzelmann, A. Zheludev, S. Raymond, E. Ressouche, T. Masuda, P. Böni, K. Kakurai, I. Tsukada, K. Uchinokura, and R. Coldea, Phys. Rev. B **64**, 054422 (2001).
I. Tsukada, J. Takeya, T. Masuda, and K. Uchinokura, Phys. Rev. Lett. **87**, 127203 (2001)
A. Zheludev, E. Ressouche, I. Tsukada, T. Masuda, and K. Uchinokura, Phys. Rev. B **65**, 174416 (2002)
M. Poirier, M. Castonguay, A. Revcolevschi, and G. Dhalenne, Phys. Rev. B **66**, 054402 (2002)
V. N. Glazkov, A. I. Smirnov, A. Revcolevschi, and G. Dhalenne, Phys. Rev. B **72**, 104401 (2005)
---
abstract: 'We consider the steady fractional Schrödinger equation $L u + V u = f$ posed on a bounded domain $\Omega$; $L$ is an integro-differential operator, like the usual versions of the fractional Laplacian $(-\Delta)^s$; $V\ge 0$ is a potential with possible singularities, and the right-hand side data are integrable functions or Radon measures. We reformulate the problem via the Green function of $(-\Delta)^s$ and prove well-posedness for functions as data. If $V$ is bounded or mildly singular a unique solution of $(-\Delta)^s u + V u = \mu$ exists for every Borel measure $\mu$. On the other hand, when $V$ is allowed to be more singular, but only on a finite set of points, a solution of $(-\Delta)^s u + V u = \delta_x$, where $\delta_x$ is the Dirac measure at $x$, exists if and only if $h(y) = V(y) |x - y|^{-(n-2s)}$ is integrable on some small ball around $x$. We prove that the set $Z = \{x \in \Omega : \textrm{no solution of } (-\Delta)^s u + Vu = \delta_x \textrm{ exists}\}$ is relevant in the following sense: a solution of $(-\Delta)^s u + V u = \mu$ exists if and only if $|\mu| (Z) = 0$. Furthermore, $Z$ is the set of points where the strong maximum principle fails, in the sense that for any bounded $f$ the solution of $(-\Delta)^s u + Vu = f$ vanishes on $Z$.'
author:
- 'D. Gómez-Castro [^1]'
- 'J.L. Vázquez [^2]'
title: |
The fractional Schrödinger equation\
with singular potential and measure data
---
[Keywords.]{} Nonlocal elliptic equations, bounded domains, Schrödinger operators, singular potentials, measure data.
[Mathematics Subject Classification]{}. 35R11, 35J10, 35D30, 35J67, 35J75.
Introduction and outline of results
===================================
We study equations of the form $$\tag{P$_V$}
\label{eq1}
\begin{dcases}
L u + V u = f & \Omega, \\
u = 0 & \partial \Omega \ (\textrm {resp. } \Omega^c)\,,
\end{dcases}$$ where $L$ is an integro-differential operator, we are thinking of the usual Laplacian or one the usual fractional Laplacians ${(-\Delta)^s}$ posed on a bounded domain $\Omega$ of $\mathbb R^n$, where $n \ge 3$ and $0<s\le 1$. $V$ (the potential) is a nonnegative Borel measurable function. In the paper we will assume Dirichlet boundary conditions to focus on the most relevant setting, but this is in no way essential. We recall that for nonlocal operators boundary conditions are usually replaced by exterior conditions. There are excellent references to nonlocal elliptic equations, both linear and nonlinear, see e.g. [@BucurValdi; @CabreSire2014; @CaffSilv2007; @FelKassV2015; @Ros-Oton2016].
We have recently studied Problem in [@diaz+g-c+vazquez2018], in the case where $L$ is the so-called restricted fractional Laplacian on a bounded domain. The problem was solved for all locally integrable potentials $V\ge 0$ and all right-hand data $f$ in the weighted space $L^1(\Omega, \mbox{dist} (\cdot, \Omega^c)^s)$, which turns out to be optimal for existence and uniqueness of so-called very weak solutions.
The aim of the present paper is to extend the theory in two directions. Firstly, we want to consider a general class of operators for which a common theory can be constructed. This part of the paper encounters no major obstacles once the proper functional setting is found involving the properties of the Green functions.
Secondly, we want to extend the theory from integral functions $f$ to Radon measures $\mu$. In doing that we will find a delicate existence problem when the potential $V$ is singular and $\mu$ is a measure, since $V$ and $\mu$ may be incompatible. We want to understand this difficulty by characterizing and describing the situation when nonexistence happens. We start by introducing a suitable concept of generalized solution obtained from natural approximations. This kind of approximation process gives rise to candidate solutions often known as *SOLA solutions or limit solutions when they are admissible solutions.*
Finally, we describe what happens to the approximations in case of nonexistence: the limit solves the modified problem corresponding to a reduced measure $\mu_r$ instead of $\mu$. Reduced measures are compatible with $V$ and the solution to the problem with $V$ and $\mu_r$ is a kind of closest admissible problem to the original one.
#### Redefinition of the problem for general operators.
We will follow a trend that has been successfully used in the recent literature on elliptic and parabolic equations involving fractional Laplacians, cf. [@Bonforte+Sire+Vazquez2015; @Bonforte+Vazquez2016; @bonforte+figalli+vazquez2018] which consists in recalling that the main fractional operators that appear in the literature have a Green operator ${\mathcal{G}}: f \mapsto \overline u$, where $\overline u$ is the unique solution of the inverse problem $$\label{eq:Laplace with L}
\tag{P$_0$}
\begin{dcases}
{(-\Delta)^s}\overline u = f& \Omega, \\
\overline u = 0 & \partial \Omega \ (\textrm {resp. } \Omega^c).
\end{dcases}$$ This solution is given by $$\tag{G}
\label{eq:integral expression of Green}
\overline u(x)= {\mathcal{G}}(f) (x) = \int_ \Omega {\mathbb{G}}(x,y) f (y) dy.$$ The important point is that ${\mathcal{G}}$ has very good functional properties acting on classes of continuous or $L^p$ data $f$. We will list below, in Section \[sec:hypothesis Green\], the specific assumptions that determine the class of operators ${\mathcal{G}}$ that we can consider. In Section \[sec:examples laplacians\] we make sure that the main examples of fractional operators are included. The Green operator approach is quite efficient and leads us to propose a suitable definition of solution.
A dual solution of \eqref{eq1} for data $f\in L^1(\Omega)$ is a function $u \in L^1 (\Omega)$ such that
\[eq:fixed point formulation\] $$\begin{gathered}
Vu \in L^1 (\Omega) \\
u = {\mathcal{G}}( f - Vu )
\end{gathered}$$
In Section \[sec:definitions\] we show how this definition matches previous notions: very weak solutions and weak-dual solutions. See in this respect previous proposals like those of [@Bonforte+Sire+Vazquez2015] and [@bonforte+figalli+vazquez2018], dealing with nonlinear parabolic problems and elliptic problems, respectively.
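As a purely illustrative discrete analogue (not part of the analysis in this paper), the dual formulation $u = {\mathcal{G}}(f - Vu)$ can be solved directly once ${\mathcal{G}}$ is replaced by a matrix. The sketch below uses the inverse of a one-dimensional finite-difference Dirichlet Laplacian as a stand-in for ${\mathcal{G}}$ and a capped singular potential; it only shows the algebra of the formulation.

``` python
import numpy as np

# Toy stand-in for the Green operator: inverse of a discrete Dirichlet Laplacian
# on a uniform 1D grid. (The paper works in dimension n >= 3 with (-Delta)^s;
# this is only meant to illustrate the dual formulation.)
m = 200
h = 1.0 / (m + 1)
A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
G = np.linalg.inv(A)                                  # discrete Green matrix

x = np.linspace(h, 1.0 - h, m)
V = 1.0 / np.maximum(np.abs(x - 0.5), 1e-12)          # hypothetical singular potential
f = np.ones(m)                                        # right-hand side

# Dual formulation u = G (f - V u)   <=>   (I + G diag(V)) u = G f
u = np.linalg.solve(np.eye(m) + G @ np.diag(V), G @ f)

print(u.min() >= -1e-12, h * np.sum(np.abs(V * u)))   # positivity and discrete ||Vu||_{L^1}
```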
Outline of results
------------------
We state the main contributions.
#### Results for operators without potentials.
Section \[sec:L1 theory for G\] contains general facts about the action of the operators ${\mathcal{G}}$, with attention to covering the examples of operators introduced in Section \[sec:examples laplacians\]. Using the regularizing properties of ${\mathcal{G}}$, we show by duality that ${\mathcal{G}}: {{\mathcal M}}(\Omega) \to L^1 (\Omega)$ and, hence, we can extend the theory to the case where $f \in L^1 (\Omega)$ is replaced by a measure $\mu \in {{\mathcal M}}(\Omega)$. In Section \[sec:definitions\] we discuss the definition of dual, weak-dual and very weak solutions for the problem with and without a potential $V$.
#### Results for operators with bounded potentials.
Next we present the general existence and uniqueness theory under the assumptions that $V$ is bounded while $f$ is merely integrable. In other words, we construct the operator ${\mathcal{G}}_V$ for $V \in L^\infty$. The solution is constructed by a fixed-point argument.
#### Uniqueness for general potentials.
We then prove that, under some assumptions on ${\mathcal{G}}$, there exists at most one solution of \eqref{eq1}. When it exists, it will be obtained as ${\mathcal{G}}_V (\mu)$. The difficult question is whether this solution exists in the sense of our definitions. In Section \[sec.exist.L1L1\] we prove uniqueness for $V\ge 0$ and $f$ merely integrable.
#### Results for integrable potentials and data.
Next we deal with the case $f, V \in L^1 (\Omega)$. In paper [@diaz+g-c+vazquez2018] we were interested in understanding the effect of a singularity of $V$ at the boundary, and so we chose $V \in L^1_{loc} (\Omega)$, $f \mathrm{d} (x, \Omega^c)^s \in L^1 (\Omega)$, and we also studied the Restricted Fractional Laplacian ($(-\Delta)^s_{\mathrm{RFL}}$) as operator. Under those circumstances we proved existence in all cases, because we restricted to functions. Our approach of double limit used in that paper will still work here, for general ${(-\Delta)^s}$, when $(f,V) \in L^1 (\Omega) \times L^1_+(\Omega)$.
[**Interaction of singular potentials and measures.**]{} We now turn our attention to the existence theory when the integrable function $f$ is replaced by a measure $\mu$. The problem lies in the interaction of the measure with an unbounded potential $V\ge 0$. We find an obstacle to existence if $V$ is too singular at points where the measure has a discrete component.
In order to focus on the main obstacle, we consider only potentials $V\ge 0$ with isolated singularities. The precise condition is as follows: $V$ will be singular, at most, at a finite set $S \subset \Omega$ and $$V : \Omega \to [0, + \infty] \textrm{ is measurable and } L^\infty\left( \Omega \setminus \bigcup_{x \in S} B_\rho (x) \right) \textrm{ for all } \rho > 0, \nonumber
\tag{V1}
\label{eq:V singular 0}$$ Notice that we specify no particular rate of blow-up at the points of $S$.
In Section \[sec:existence for V singular at 0\] we introduce the approximation method by means of bounded regularized potentials $V_k= V \wedge k$, that will lead us to the existence of a well-defined limit, that we call the Candidate Solution Obtained as Limit of Approximation (CSOLA). This works for all Radon measures $\mu$ as right-hand side. In the case where $f\in L^p (\Omega)$ we prove existence of a dual solution as a limit of ${\mathcal{G}}_{V_k} (f)$, and we study the limit operator ${\mathcal{G}}_V$.
#### Characterizing solvability and describing non-existence
In Section \[sec.nonex\] we address the question of nonexistence when $\mu$ and $V$ turn out to be incompatible. As the most representative instance, we first address the case where $\mu$ is a point mass and describe what happens when no solution exists in the form of a [*concentration phenomenon*]{} for $Vu$. In that case, it happens that if $u_k$ is the sequence of approximate solutions, then $$u_k\to 0, \quad \mbox{and} \quad V_k u_k\to \delta_{x_0}.$$ This allows us to introduce the set $Z$ of incompatible points $$Z = \{ x \in \Omega : \textrm{ there is no dual solution of \eqref{eq:fixed point formulation} when } \mu = \delta_x \}.$$ We also have the concept of [*reduced measure*]{}. For a measure $\mu$ with $|\mu|(Z) > 0$, the obtained CSOLA is not a solution of \eqref{eq1} with data $\mu$, but it is the solution corresponding to a [*reduced measure*]{} associated to $\mu$, $V$ and ${\mathcal{G}}$, which is given by $$\mu_r = \mu - \sum_{x \in Z} \mu (\{x\}) \delta_x.$$ The notion of reduced measure was introduced by Brezis, Marcus and Ponce [@Brezis2004a; @brezis+marcus+ponce2007] in the study of the nonlinear Poisson equation $-\Delta u +g(u) =\mu$. See precedents in [@Brezis2003; @vazquez_1983]. An excellent general reference is [@Ponce2016].
#### Properties of the solution operator when $V$ is singular.
We study the limit operator $\widetilde {\mathcal{G}}_V : {{\mathcal M}}(\Omega) \to L^1 (\Omega)$ that we call the CSOLA operator. This leads to the questions of the next paragraph.
#### $Z$ and the loss of the strong maximum principle.
In Section \[sec.Z\] we address the problem of better understanding $Z$. First, we relate the solvability of the problem with a delta measure at a point $x_0\in S$ with the set of points where the Strong Maximum Principle does not hold for solutions with bounded data. In this investigation we follow ideas developed by Orsina and Ponce for the classical Laplacian [@Orsina2018]. More precisely, we show that the set of universal zeros is precisely the set of incompatible points, i.e. $$Z = \{ x \in \Omega : {\mathcal{G}}_V(f) (x) = 0 \quad \forall f \in L^\infty (\Omega) \}.$$ This can be easily explained by the fact that the kernel ${\mathbb{G}}_V$ of the operator ${\mathcal{G}}_V$ vanishes: $$x \in Z \iff {\mathbb{G}}_V(x, y) = 0, \quad \textrm{ a.e. } y \in \Omega.$$ In fact the kernel ${\mathbb{G}}_V$ induces an operator $\widetilde {\mathcal{G}}_V$ which extends ${\mathcal{G}}_V$, but does not necessarily give solutions of \eqref{eq1}. Furthermore, $${\mathcal{G}}_V (\delta_x) \textrm{ is defined } \iff \widetilde {\mathcal{G}}_V (\delta_x) \ne 0.$$ The existence of this set $Z$ is caused by $V$.
Work in this direction for the classical Laplacian using capacity can be found in [@Rakotoson2018].
#### Complete characterization of $Z$.
Finally, under our assumption that $V$ has only isolated singular points, $Z$ is completely characterized by the condition $$x \notin Z \iff \int_{ B_\rho (x) } \frac{V(y)}{|x-y|^{n-2s}}dy <+ \infty \quad \textrm{ for some } \rho>0 \textrm{ small enough}.$$ Notice that, naturally, $Z \subset S$.
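As an illustration of this criterion (our own computation, based on the interior behaviour ${\mathbb{G}}(x,y) \asymp |x-y|^{2s-n}$ contained in \eqref{eq:estimate for G}), consider a potential with a power-law singularity at an isolated point, say $V(y) \asymp |x-y|^{-\beta}$ on $B_\rho(x)$ with $\beta > 0$. Then, up to a dimensional constant, $$\int_{B_\rho (x)} \frac{V(y)}{|x-y|^{\,n-2s}}\, dy \asymp \int_0^\rho r^{-\beta}\, r^{2s-n}\, r^{n-1}\, dr = \int_0^\rho r^{2s-\beta-1}\, dr,$$ which is finite precisely when $\beta < 2s$. Thus, in this illustrative example, the point is compatible ($x \notin Z$) exactly for the milder singularities $\beta < 2s$.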
#### Comments.
Our results on singular potentials extend to fractional operators the results in [@Orsina2018] when $S = \{ x : V(x) = +\infty \} $ is a discrete set. However, our approach to the proof is completely different. We prove a solution exists if and only if it is the limit of approximating sequences corresponding to a cut-off $V_k = V \wedge k$, and we carefully study this limit. We explain what the limit is in all cases. Actually, we have seen that in the case of nonexistence, a degenerate situation happens where a part of the singular data $\mu$ remains concentrated as the singular part of the limit of the potential term $Vu$.
Basic hypothesis on ${\mathcal{G}}$ {#sec:hypothesis Green}
-----------------------------------
We list the properties that we will use in the study. All of them are satisfied by the Green operators that are inverse to the usual Laplacians with zero Dirichlet boundary or external conditions.
\(i) ${\mathcal{G}}$ is symmetric and self-adjoint in the sense that $$\tag{G1}
\label{eq:G is symmetric}
{\mathbb{G}}(x,y) = {\mathbb{G}}(y,x).$$
\(ii) We assume $n \ge 3$ and we have the estimate $$\tag{G2}
\label{eq:estimate for G}
{\mathbb{G}}(x,y) \asymp \frac{1}{|x-y|^{n-2s}} \left( \frac{{\delta}(x) {\delta}(y)}{|x-y|^{2}} \wedge 1 \right)^\gamma.$$ We call $0 < s \le 1$ the fractional order of the operator by copying from what happens for the standard of fractional Laplacians, while $0 < \gamma \le 1$ distinguishes between the different known cases fractional Laplacians via the boundary behaviour.
In some cases it could be sufficient to require that for every compact $K \Subset \Omega $ and all $x, y \in K$ we have $$0 < \frac{ c_K }{|x-y|^{n-2s}} \le {\mathbb{G}}(x,y) \le \frac{ C_K } {|x-y|^{n-2s}},$$ but this is not generally used.
\(iii) Furthermore, we need positivity in the sense that $$\tag{G3}
\label{eq:coercivity}
\int_ \Omega f {\mathcal{G}}(f) \ge 0 \qquad \forall f \in L^2 (\Omega)$$ The hypothesis above often follows from the stronger property of coercivity that holds for the standard versions of fractional Laplacian in forms like $$\| (-\Delta)^{\frac s 2} u \|_{L^2} \le \int_ \Omega u {(-\Delta)^s}u.$$ Putting $f = L u$ so that $u = {\mathcal{G}}(f)$, we get $$\| {\mathcal{G}}(f)\|^2 \le \int_ \Omega {\mathcal{G}}(f) f.$$
\(iv) Lastly, we assume ${\mathcal{G}}$ is regularizing in the sense that $$\tag{G4}
\label{eq:regularization}
{\mathcal{G}}: L^\infty (\Omega) \to {{\mathcal C}}(\overline \Omega).$$ Conditions for this property to hold are well-known for the main fractional operators (see, e.g., [@Ros-Oton2016] and the references therein). In the case of the most common choice, Restricted Fractional Laplacian (RFL) we refer to [@Ros-Oton2014]). For the Spectral Fractional Laplacian (SFL) a convenient reference is [@Caffarelli+Stinga2016].
Interior regularity is usually higher (see [@Cozzi2017]). A general reference to fractional Sobolev spaces, embeddings and related topics is, for instance, [@DiNezza2012].
Usual examples of admissible operators {#sec:examples laplacians}
--------------------------------------
### The classical Laplacian $-\Delta$
In this case it is known
1. \eqref{eq:estimate for G} holds with $s = 1$ and $\gamma = 1$.
2. \eqref{eq:coercivity} is well known.
3. The regularization is a classical result. See, e.g., [@Evans1998; @Gilbarg+Trudinger2001].
### Restricted Fractional Laplacian $(-\Delta)^s_{\mathrm{RFL}}$
This operator is given by $$\label{eq:RFL}
(-\Delta)^s_{\mathrm{RFL}}u (x) =c_{n,s} \int_{\mathbb R^n} \frac{u(x)-u(y)}{|x-y|^{n+2s}} dy$$ where $u$ is extended by $0$ outside $\Omega$. In this case it is known
1. \eqref{eq:estimate for G} holds with $0 < s < 1$ and $\gamma = s$.
2. \eqref{eq:coercivity} holds since, for $f \in L^\infty (\Omega)$, $$\int_ \Omega f {\mathcal{G}}(f) = \int_ \Omega (- \Delta)^s ({\mathcal{G}}(f)) f = \int_ \Omega | (-\Delta)^{s/2} ({\mathcal{G}}(f)) |^2 \ge 0.$$ For the remaining functions we apply density.
3. The regularization is proven via Hörmander theory. See, e.g. [@Grubb2015; @Ros-Oton2014].
### Spectral Fractional Laplacian $(-\Delta)^s_{\mathrm{SFL}}$
This operator is given by $$(-\Delta)^s_{\mathrm{SFL}}u (x) = \sum_{i=1}^{+\infty} \lambda_i^{s} u_i \varphi_i (x)$$ where $(\varphi_i,\lambda_i )$ is the spectral sequence of the Laplacian with homogeneous Dirichlet boundary condition and $u_i = \int_ \Omega u \varphi_i$. In this case it is known
1. \eqref{eq:estimate for G} holds with $0 < s < 1$ and $\gamma = 1$.
2. \eqref{eq:coercivity} holds since, for $f \in L^2 (\Omega)$, $$\int_ \Omega f {\mathcal{G}}(f) = \sum_{i=1}^{+\infty}\lambda_i^{-s} f_i^2 \ge 0 .$$
3. The regularization can be found in [@Caffarelli+Stinga2016].
### Other examples
There are a number of other operators that can be considered, like the Censored (or Regional) Fractional Laplacian, which is described in many references, e.g. [@bonforte+figalli+vazquez2018].
The elliptic equation without potential {#sec:L1 theory for G}
=======================================
Immediate properties
--------------------
The following are immediate consequences of the kernel representation \eqref{eq:integral expression of Green}.
Assume that ${\mathbb{G}}(x,y) \ge 0$. Then, the Green operator is monotone in the sense that $$\begin{gathered}
\label{eq:monotonicity of Green}
0 \le f \in L^\infty (\Omega) \implies 0 \le {\mathcal{G}}(f).
\end{gathered}$$ If, furthermore, \eqref{eq:G is symmetric} holds, then ${\mathcal{G}}$ is self-adjoint: $$\begin{gathered}
\label{eq:Green self adjoint}
\int_ \Omega {\mathcal{G}}(f) g = \int_ \Omega f {\mathcal{G}}(g) \qquad \forall f , g \in L^\infty (\Omega) .
\end{gathered}$$
For the monotonicity we simply take into account that ${\mathbb{G}}\ge 0$ and therefore ${\mathbb{G}}(x,y) f(y) \ge 0$. To show that it is self-adjoint we compute explicitly $$\begin{aligned}
\int_ \Omega {\mathcal{G}}(f) (x) g(x) dx &= \int_ \Omega \left( \int_ \Omega {\mathbb{G}}(x,y) f(y) dy \right) g(x) dx \nonumber\\
&= \int_ \Omega \int_ \Omega {\mathbb{G}}(x,y) f(y) g(x) dy dx \nonumber\\
&= \int_ \Omega \int_ \Omega {\mathbb{G}}(y,x) f(y) g(x) dx dy \nonumber\\
&= \int_ \Omega f(y) \left( \int_ \Omega {\mathbb{G}}(y,x) g(x) dx \right) dy \nonumber \\
&= \int_ \Omega f(y) {\mathcal{G}}(g) (y) dy
\end{aligned}$$ This completes the proof.
Regularization {#sec:regularization}
--------------
\[thm:regularization\] If $f \in L^p (\Omega)$ then ${\mathcal{G}}(f) \in L^{ q }(\Omega)$ for all $1 \le q < Q(p) = \frac{n}{n-2s}p$. Furthermore ${\mathcal{G}}: L^p (\Omega) \to L^{q} (\Omega)$ is continuous.
Our aim is to apply the Riesz-Thorin interpolation theorem (see, e.g., [@Triebel]).
Let $T$ be a linear operator such that $$\begin{aligned}
T&:L^{p_i} (\mathbb R^n) \to L^{q_i} (\mathbb R^n), \qquad i = 0,1
\end{aligned}$$ is continuous for some $1 \le p_0 ,p_1 , q_0, q_1 \le +\infty$ and let, for $\theta \in (0,1)$ define $$\frac{1}{p_\theta} = \frac{1- \theta}{p_0} + \frac{\theta}{p_1}, \qquad \frac{1}{q_\theta} = \frac{1- \theta}{q_0} + \frac{\theta}{q_1}.$$ Then $$T : L^{p_\theta} (\mathbb R^n) \to L^{q_\theta} (\mathbb R^n)$$ is continuous. Furthermore $$\| T \|_{\mathcal L (L^{p_\theta} , L^{q_\theta})} \le \| T \|_{\mathcal L (L^{p_0} , L^{q_0})}^{1-\theta}\| T \|_{\mathcal L (L^{p_1} , L^{q_1})}^{\theta}.$$
\[thm:regularization from L1\] Let $f \in L^1 (\Omega)$. Then ${\mathcal{G}}(f) \in L^{q} (\Omega)$ for $1 \le q < Q(1) = \frac{n}{n-2s}$ and the map ${\mathcal{G}}: L^1 (\Omega) \to L^q (\Omega)$ is continuous.
We split the proof into several lemmas. The first two lemmas can be found in [@bonforte+figalli+vazquez2018] and are given here for the reader's convenience.
$$\int_ \Omega |{\mathbb{G}}(x,y)|^{q} dy \le C, \textrm{ where } 1 \le q < \frac{ n } { n - 2s}$$
and $C$ does not depend on $x \in \Omega$.
We take $R$ large enough so that $\Omega \subset B_R (x)$ for every $x \in \Omega$. We have that $$\begin{aligned}
\int_ \Omega |{\mathbb{G}}(x,y)|^{ q } dy &\le C \int_\Omega |x-y|^{(2s-n)q} dy \nonumber \\
&\le \int_{B_R (x)} |x-y|^{(2s-n)q} dy \nonumber \\
&\le C \int_0^R r^{(2s-n)q} r^{n-1}dr
\le C
\end{aligned}$$ if $(2s-n)q + n > 0$. In other words if $q < \frac{n}{n-2s}$. This completes the proof.
Through duality it is immediate that
${\mathcal{G}}: L^{q'} (\Omega) \to L^\infty (\Omega)$ is continuous for all $1 \le q < Q(1)$.
Through Hölder’s inequality $$|{\mathcal{G}}(f) (x)| = \left| \int_ \Omega {\mathbb{G}}(x,y) f(y) \right| \le \int_ \Omega {\mathbb{G}}(x,y) |f(y)| \le \| {\mathbb{G}}(x, \cdot) \|_{q} \| f \|_{q'} \le C \| f \|_{q'} .$$ and this holds uniformly on $x \in \Omega$.
We can now prove the theorem.
Due to the Riesz-Thorin interpolation theorem, since ${\mathcal{G}}: L^1 (\Omega) \to L^{\gamma} (\Omega)$ with $1 \le \gamma < Q(1)$ and ${\mathcal{G}}: L^\infty (\Omega) \to L^{\infty} (\Omega)$, we have ${\mathcal{G}}: L^p (\Omega) \to L^{\gamma p } (\Omega)$. Therefore ${\mathcal{G}}: L^p (\Omega) \to L^{q} (\Omega)$ where $1 \le q < p Q(1) = Q(p)$.
\[rem:eigenfunctions are continuous\] Notice that this immediately implies that eigenfunctions are in ${{\mathcal C}}(\Omega)$. Indeed, let $$\underline Q (1) = \frac{1 + Q(1)}{2} \in (1, Q(1)), \qquad \underline Q(p) = p \underline Q(1) \in (p, Q(p) ).$$ Let $u$ be an eigenfunction, so that $u = \lambda {\mathcal{G}}(u)$. If $u \in L^1 (\Omega)$ then ${\mathcal{G}}(u) \in L^{\underline Q(1)} (\Omega)$ and so $u \in L^{\underline Q(1)} (\Omega)$. Analogously $u \in L^{\underline Q^k(1)} (\Omega)$ for every $k \ge 1$. After a finite number of iterations we have $\underline Q^k(1) > (Q(1))'$. Therefore $u \in L^\infty (\Omega)$. But then $u = \lambda {\mathcal{G}}(u) \in \mathcal C(\Omega)$.
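The bootstrap in this remark is just an exponent iteration; a small numerical sketch (with example values of $n$ and $s$ chosen by us) shows that finitely many steps suffice:

``` python
# Exponent iteration behind the bootstrap: multiply by underline-Q(1) = (1+Q(1))/2
# until the exponent passes the conjugate Q(1)' = Q(1)/(Q(1)-1).
n, s = 3, 0.5                      # example values (any n >= 3, 0 < s <= 1)
Q1 = n / (n - 2 * s)
uQ1 = (1 + Q1) / 2
dual = Q1 / (Q1 - 1)

p, steps = 1.0, 0
while p <= dual:
    p *= uQ1
    steps += 1
print(steps, p)                    # finitely many iterations suffice
```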
Dunford-Pettis property of ${\mathcal{G}}$ {#sec:Dunford-Pettis}
------------------------------------------
The aim of this section is to prove that
\[thm:Dunford-Pettis for G\] We have that, for any $0 < \beta < \frac{2s}n$ $$\int_A |{\mathcal{G}}(f)| \le C |A |^\beta \| f \|_{L^1 (\Omega)}, \qquad \forall f \in L^1 (\Omega).$$ for some $C > 0$. In particular, for every bounded sequence $f_n \in L^1 (\Omega)$ the sequence ${\mathcal{G}}(f_n)$ is equiintegrable. In particular, there exists a weakly convergent subsequence ${\mathcal{G}}(f_{n_k}) \rightharpoonup u$ in $L^1 (\Omega)$.
For this we introduce the following auxiliary estimate
We have that $$\label{eq:Green of indicator function}
\| {\mathcal{G}}({\mathbf 1}_A) \|_{L^\infty} \le C |A|^\beta, \qquad \textrm{ for any } 0 < \beta < \frac{2s}{n}, \ \forall A \subset \Omega .$$ where $C$ depends on $\beta$ but not on $A$.
We have that ${\mathcal{G}}: L^{p} (\Omega) \to L^\infty (\Omega)$ for $p > Q(1)'$. Hence $$\| {\mathcal{G}}({\mathbf 1}_A) \|_{L^\infty} \le C \| {\mathbf 1}_A \|_{L^{p} (\Omega)} = \left( \int_A 1^{{p}} \right)^{1 / p} = C |A|^{1 / p}.$$ Taking $\beta = \frac 1 {p}$ we complete the proof.
We prove that ${\mathcal{G}}(f)$ satisfies $$\begin{aligned}
\int_A |{\mathcal{G}}(f)| &= \int_{ \Omega } |{\mathcal{G}}(f) | {\mathbf 1}_A \le \int_ \Omega {\mathcal{G}}(|f|) {\mathbf 1}_A
= \int_{ \Omega } |f| {\mathcal{G}}({\mathbf 1}_A) \le \| f \|_{L^1} \| {\mathcal{G}}({\mathbf 1}_A) \|_{L^\infty} \nonumber \\
& \le C |A|^\beta \| f \|_{L^1 (\Omega)}.
\end{aligned}$$ This completes the proof.
Using Marcinkiewicz spaces, the results above can be proved up to the endpoint $\beta = \frac{2s}{n}$. The required information about Marcinkiewicz spaces can be found in [@Benilan+Brezis+Crandall1975].
Extension of ${\mathcal{G}}$ to ${{\mathcal M}}(\Omega)$
--------------------------------------------------------
To use data in $\mathcal M (\Omega)$ we need the stronger assumptions \eqref{eq:G is symmetric} and \eqref{eq:regularization}, which we have not fully used until now.
We will extend our results by approximation. This philosophy has been applied successfully over the years (see, e.g., [@Kuusi+Mingione+Sire2015a] for relevant recent work in the nonlocal case).
\[thm:extension of G to measures\] Let ${\mathcal{G}}$ satisfy \eqref{eq:G is symmetric} and \eqref{eq:regularization}. Then, there exists an extension $${\mathcal{G}}: {{\mathcal M}}(\Omega) \to L^1 (\Omega),$$ which is linear and continuous. Furthermore, this extension is unique and self-adjoint. The function $u = {\mathcal{G}}(\mu)$ is the unique function such that $u \in L^1 (\Omega)$ and $$\begin{gathered}
\label{eq:Laplace vwf}
\int_ \Omega u \psi = \int_\Omega {\mathcal{G}}(\psi) \mathrm{d} \mu, \qquad \forall \psi \in L^\infty_c (\Omega).
\end{gathered}$$
Let $\mu \in {{\mathcal M}}(\Omega)$. By density, let $f_n \in L^\infty (\Omega)$ be such that $f_n \, {\mathrm{d}x}\rightharpoonup \mu$ and $(f_n)$ is bounded in $L^1 (\Omega)$. Due to Theorem \[thm:Dunford-Pettis for G\] there exists a subsequence $u_{n_k} = {\mathcal{G}}(f_{n_k})$ converging weakly in $L^1(\Omega)$. Let $u$ be its limit. Furthermore $$\| u \|_{L^1 (\Omega)} \le \liminf_k \| u_{n_k} \|_{L^1 (\Omega)} \le C \liminf_k \| f_{n_k} \|_{L^1 (\Omega)} = C \| \mu \|_{{{\mathcal M}}(\Omega)}.$$ Due to \eqref{eq:Green self adjoint}, $$\int_{ \Omega } u_{n_k} \psi = \int_ \Omega {\mathcal{G}}(\psi) f_{n_k} {\mathrm{d}x}.$$ Passing to the limit, since ${\mathcal{G}}(\psi ) \in {{\mathcal C}}(\overline \Omega)$, we deduce \eqref{eq:Laplace vwf}.
There is at most one element with this property. If there were two, $u_1$ and $u_2$, then letting $w = u_1-u_2$ we would have $$\int_ \Omega w \psi = 0 , \qquad \forall \psi \in L^\infty (\Omega).$$ Taking $\psi = \operatorname{sign} w$ we deduce $ w = 0$, so $u_1 = u_2$.
Hence, our definition $\widetilde {\mathcal{G}}(\mu) = u$ is consistent.
Linearity is clear. To show continuity we prove boundedness. For $\mu\in {{\mathcal M}}(\Omega)$ we have $$\int_{ \Omega } \widetilde {\mathcal{G}}(\mu) \psi = \int_\Omega {\mathcal{G}}(\psi ) \mathrm{d} \mu \le \| {\mathcal{G}}(\psi) \|_{{{\mathcal C}}} \| \mu \|_{{{\mathcal M}}(\Omega)}.$$ Taking $\psi = \operatorname{sign}(\widetilde {\mathcal{G}}(\mu))$ we deduce $$\| \widetilde {\mathcal{G}}(\mu) \|_{L^1(\Omega)} \le \|{\mathcal{G}}({\mathbf 1}_\Omega) \|_{{{\mathcal C}}} \| \mu \|_{{{\mathcal M}}(\Omega)}.$$ Furthermore, we have shown that $\widetilde {\mathcal{G}}(\mu)$ satisfies \eqref{eq:Laplace vwf}.
For every $\mu \in {{\mathcal M}}(\Omega)$ $$\int_A {\mathcal{G}}(\mu) \le c |A|^\beta \| \mu \|_{{{\mathcal M}}}$$
If $\mu_n \rightharpoonup \mu$ weakly in ${{\mathcal M}}(\Omega)$ then ${\mathcal{G}}( \mu_ n) \rightharpoonup {\mathcal{G}}(\mu)$ in $L^1 (\Omega)$.
However, the following is stronger:
\[prop:G continuous weak star measure to weak L1\] If $\mu_n \rightharpoonup \mu$ weak-$\star$ in ${{\mathcal M}}(\Omega)$ then ${\mathcal{G}}( \mu_ n) \rightharpoonup {\mathcal{G}}(\mu)$ in $L^1 (\Omega)$.
If $\mu_n \rightharpoonup \mu$ weak-$\star$ then $\| \mu_n \|_{{{\mathcal M}}} $ is bounded. Thus, ${\mathcal{G}}(\mu_n)$ is equiintegrable. Taking a convergent subsequence ${\mathcal{G}}(\mu_n) \rightharpoonup {\underline u}$. Substituting in the formulation $$\int_ \Omega {\mathcal{G}}( \mu_n) \psi = \langle {\mathcal{G}}(\psi) , \mu_n \rangle .$$ Passing to the limit $$\int_ \Omega {\underline u}\psi = \langle {\mathcal{G}}(\psi) , \mu \rangle .$$ Thus ${\underline u}= {\mathcal{G}}(\mu)$. Since the limit of every convergent subsequence coincides, the whole sequence converges.
Local scaling
-------------
The scaling of $
\int_{ B_ \rho } {\mathcal{G}}( \mu ) {\mathrm{d}x}$ as $\rho \to 0$ will be very significant.
### Away from $\operatorname{supp}\mu$
If $\operatorname{supp}\mu \cap B_R (x) = \emptyset$ then $$\int_ { B _\rho (x) } |{\mathcal{G}}( \mu)| \, {\mathrm{d}x}\le C (R-\rho)^{2s-n} \rho^n \, \| \mu \|_{{{\mathcal M}}(\Omega)} , \qquad \forall \rho < R.$$
Notice that this is the natural behaviour at a Lebesgue point, since it implies that $$\limsup_{ \rho \to 0 } \frac{1}{|B_\rho|} \int_{ B_\rho (x)} |{\mathcal{G}}(\mu)| \le C R^{2s-n} \| \mu \|_{{{\mathcal M}}(\Omega)}.$$
### The sequence ${\mathcal{G}}({\mathbf 1}_{B_\rho})$
Our aim is to show
Let $x_0 \in \Omega$ and $B_\rho = B_\rho (x_0)$. The following hold
\[eq:G one rho\] $$\begin{aligned}
\label{eq:G one rho at 0}
\frac{ {\mathcal{G}}({\mathbf 1}_{B_\rho}) }{ \rho^{2s} }(x_0) & \ge c > 0\\
\label{eq:G one rho L1}
\frac{ {\mathcal{G}}({\mathbf 1}_{B_\rho}) }{ \rho^{2s} } &\to 0 \qquad L^1( \Omega ) \\
\label{eq:G one rho Linf}
\frac{ {\mathcal{G}}({\mathbf 1}_{B_\rho}) }{ \rho^{2s} } &\rightharpoonup 0 \qquad L^\infty( \Omega )\textrm{-weak-}\star \\
\label{eq:G one rho pointwise}
\frac{ {\mathcal{G}}({\mathbf 1}_{B_\rho}) }{ \rho^{2s} } &\to 0 \qquad \textrm{ pointwise in } \Omega \setminus \{x_0\}. \\
\label{eq:G one rho measure}
\int_ \Omega \frac{ {\mathcal{G}}({\mathbf 1}_{B_\rho}) }{ \rho^{2s} } \mathrm{d} \mu & \to 0 \qquad \textrm{ for every } \mu \in {{\mathcal M}}(\Omega) \textrm{ such that } \mu (\{x_0\}) = 0.
\end{aligned}$$
$$\begin{aligned}
0 \le {\mathcal{G}}({\mathbf 1}_{B_\rho}) (x) \le \int_ {B_\rho} {\mathbb{G}}(x,y) dy \le C \int_ {B_\rho} |x-y|^{2s-n} dy \le C \int_{ B_\rho } |y|^{2s - n}dy = C \rho^{2s}.
\end{aligned}$$
Furthermore, at $x = x_0$ these inequalities hold in reverse order (except for the first one), and \eqref{eq:G one rho at 0} is proven. Therefore $\rho^{-2s} \| {\mathcal{G}}({\mathbf 1}_{B_\rho}) \|_{L^\infty}$ is bounded. Furthermore $$\int_ \Omega \frac{ {\mathbf 1}_{B_\rho} }{\rho^{2s}} = C \rho^{n-2s} \to 0 .$$ Therefore, due to the strong continuity of ${\mathcal{G}}$ in $L^1(\Omega)$, \eqref{eq:G one rho L1} is proven. But then the limit coincides with the weak-$\star$ limit in $L^\infty$, so \eqref{eq:G one rho Linf} is proven. For $x \neq x_0$ we have the sharper estimate, for $\rho < |x-x_0|$, $$\begin{aligned}
\frac{ {\mathcal{G}}({\mathbf 1}_{B_\rho}) (x)}{\rho^{2s}} \le C \rho ^{-2s} \int_ {B_\rho} |x-y|^{2s-n} dy \le C (|x-x_0| - \rho)^{2s-n} |\rho|^{n-2s} \to 0.
\end{aligned}$$
To prove \eqref{eq:G one rho measure} we assume first that $\mu \ge 0$. When $\mu (\{x_0\}) = 0$ we have that $$0 \le \frac{ {\mathcal{G}}({\mathbf 1}_{B_\rho}) (x)}{\rho^{2s}} \le C$$ and, by \eqref{eq:G one rho pointwise}, $$\mu \left( \left \{ x \in \Omega : \frac{ {\mathcal{G}}({\mathbf 1}_{B_\rho}) (x)}{\rho^{2s}} \not\to 0 \right \} \right ) \le \mu (\{x_0\}) = 0.$$ Therefore, the convergence holds $\mu$-almost everywhere. By the Dominated Convergence Theorem we have \eqref{eq:G one rho measure}. When $\mu$ changes sign we reproduce the argument for $\mu^+$ and $\mu^-$ and the result is proven.
Notice that $\rho^{2s}$, and not $|B_\rho| = c\rho^n$, is the natural normalization here; obviously ${\mathbf 1}_{B_\rho(x)} / |B_\rho (x)| \rightharpoonup \delta_x$, and the above scaling is consistent with the behaviour of ${\mathcal{G}}(\delta_x)$ obtained below.
### Near the $\operatorname{supp}\mu$
\[thm:local integral of G mu near support\] Let $\mu \in {{\mathcal M}}(\Omega)$. Then $$\lim_{ \rho \to 0 }\rho^{-2s} \int \limits_{ B_\rho (x) } {\mathcal{G}}(\mu) {\mathrm{d}x}\asymp \mu (\{x\}).$$
Assume $\mu(\{x\}) = 0$. Since ${\mathcal{G}}$ is self-adjoint, $$\rho^{-2s} \int_{ B_\rho (x) } {\mathcal{G}}(\mu) =\rho^{-2s} \int_{ \Omega } {\mathcal{G}}(\mu) {\mathbf 1}_{B_ \rho (x)} = \rho^{-2s} \int_ \Omega {\mathcal{G}}( {\mathbf 1}_{B_ \rho (x)} ) d \mu \to 0$$ due to \[eq:G one rho measure\].
On the other hand let us compute $$\begin{aligned}
\int_{ B_\rho (x) } {\mathcal{G}}({\delta}_x) &= \int_{ B_\rho (x) } {\mathbb{G}}(y, x) \mathrm dy \asymp \int_{ B_\rho (x) } |x-y|^{2s-n} \mathrm dy \nonumber \\
& = C \int_{0}^\rho r^{2s-n} r^{n-1} \mathrm dr = C \rho^{2s}.
\end{aligned}$$
Therefore, for a general measure $\mu$ we can decompose $${\mathcal{G}}( \mu ) = {\mathcal{G}}\Big(\mu - \mu(\{x\}) \, {\delta}_x \Big) + \mu(\{x\}) \, {\mathcal{G}}( {\delta}_x ).$$ Applying the two preceding parts the result is proven.
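As a simple illustration of this dichotomy (our addition, not part of the original argument), consider the two extreme cases. For $\mu = \delta_{x_0}$ the computation above gives $\rho^{-2s}\int_{B_\rho(x_0)}{\mathcal{G}}(\delta_{x_0}) \asymp 1$, while for an absolutely continuous measure $\mu = g\,{\mathrm{d}x}$ with $g \in L^\infty(\Omega)$ one has $$0 \le \rho^{-2s}\int_{B_\rho(x)}{\mathcal{G}}(g) \le \rho^{-2s}\,|B_\rho|\,\|g\|_{L^\infty}\|{\mathcal{G}}(1)\|_{L^\infty} = C\rho^{n-2s} \to 0,$$ so the local scaling detects exactly the atomic part of the datum.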
Almost everywhere approximation of ${\mathcal{G}}(\delta_{x_0})$
----------------------------------------------------------------
\[lem:ae aprox of G delta 0\] We have that $${\mathcal{G}}\left( \frac{{\mathbf 1}_{B_{\frac 1 k} (x_0)} }{|B_{\frac 1 k} (x_0)|} \right) \to {\mathcal{G}}(\delta_{x_0} ) , \qquad \textrm{ a.e. in } \Omega.$$
Assume $x \neq x_0$. For $k_0$ large enough $x \notin \overline {B_{\frac 1 {k_0}} (x_0)}$. Then, due to the kernel estimates, ${\mathbb{G}}(x, \cdot)$ is bounded and continuous on $B_{\frac 1 {k_0}} (x_0)$; hence $x_0$ is a Lebesgue point of ${\mathbb{G}}(x, \cdot)$. Due to the Lebesgue differentiation theorem we have that $${\mathcal{G}}\left( \frac{{\mathbf 1}_{B_{\frac 1 k} (x_0)} }{|B_{\frac 1 k} (x_0)|} \right) (x) = \frac{1}{|B_{\frac 1 k} (x_0)|} \int_{B_{\frac 1 k} (x_0)} {\mathbb{G}}(x,y) dy \to {\mathbb{G}}(x,x_0) = {\mathcal{G}}( {\delta}_{x_0} ) (x).$$ This completes the proof.
Equivalent definitions of solution {#sec:definitions}
==================================
We discuss the definition of dual, weak-dual and very weak solutions for the problem with and without a potential $V$.
Problem .
---------
Brezis introduced the notion of *very weak solution* for the classical case $s=1$ as $$\int_ \Omega u (-\Delta \varphi) = \int_ \Omega f \varphi, \qquad \forall \varphi \in W^{2,\infty}(\Omega) \cap W^{1,\infty}_0 (\Omega)$$ Chen and Véron [@chen+veron2014] extended this definition to the Restricted Fractional Laplacian as $$\begin{gathered}
\int_ \Omega u (-\Delta)^s_{\mathrm{RFL}}\varphi = \int_ \Omega f \varphi, \qquad \forall \varphi \in \mathbb X_s
\end{gathered}$$ where $$\mathbb X_s = \{ \varphi \in {{\mathcal C}}^s( \mathbb R^n ) : \varphi = 0 \textrm{ in } \mathbb R^n \setminus \Omega \textrm{ and } (-\Delta)^s \varphi \in L^\infty (\Omega) \}$$
Letting $\psi = (-\Delta)^s_{\mathrm{RFL}}\varphi \in L^\infty (\Omega)$, which implies that $\varphi = {\mathcal{G}}(\psi)$, this is equivalent to writing $$\int_ \Omega u \psi = \int_ \Omega f {\mathcal{G}}( \psi ) \qquad \forall \psi \in L^\infty (\Omega).$$
In some texts (see [@bonforte+figalli+vazquez2018]) the authors have used this as a new definition of solution for more general operators, usually called a *weak dual solution*. It has the advantage that one need not worry about delicate spaces of test functions, but only about the nature of ${\mathcal{G}}$. Furthermore, the treatment of the different fractional Laplacians is unified.
Notice that, whenever ${\mathcal{G}}(f)$ is defined, since ${\mathcal{G}}$ is self-adjoint this is equivalent to $$\int_ \Omega u \psi = \int_ \Omega {\mathcal{G}}(f) \psi \qquad \forall \psi \in L^\infty (\Omega).$$ and since $u$ and ${\mathcal{G}}(f)$ are in $L^1(\Omega)$ this is simply $$u = {\mathcal{G}}(f)$$
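For readers who want a concrete check, the following minimal numerical sketch (our addition; it uses the classical one-dimensional Dirichlet Laplacian on $(0,1)$ as a stand-in for $\mathrm L$, which is an assumption of the sketch, not the general fractional setting) verifies that the dual solution $u={\mathcal{G}}(f)$ satisfies the weak-dual identity $\int_\Omega u\psi=\int_\Omega f\,{\mathcal{G}}(\psi)$ for an arbitrary bounded $\psi$, using only the symmetry of the discrete Green matrix.

```python
import numpy as np

# Minimal sketch: discrete Green operator of -u'' on (0,1) with Dirichlet
# boundary conditions (classical case s = 1, n = 1, chosen only for illustration).
N = 200
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)

# Second-difference matrix A ~ -d^2/dx^2 and its inverse G = A^{-1}.
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
G = np.linalg.inv(A)            # (G f)_i ~ integral of g(x_i, y) f(y) dy

f = np.sin(np.pi * x)           # integrable datum
u = G @ f                       # dual solution u = G(f)

# Weak-dual identity: sum_i u_i psi_i h == sum_i f_i (G psi)_i h,
# which only uses that G is self-adjoint.
rng = np.random.default_rng(0)
psi = rng.standard_normal(N)    # an arbitrary bounded test function
lhs = h * (u @ psi)
rhs = h * (f @ (G @ psi))
print(abs(lhs - rhs))           # ~ machine precision: the formulations coincide
print(np.max(np.abs(G - G.T)))  # symmetry (self-adjointness) of the kernel
```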
Problem .
---------
For the Schrödinger problem the notion of very weak solution for the classical case was used multiple times in the literature (see, e.g., [@diaz+gc+rakotoson+temam:2018veryweak] and the references therein) as
$$\begin{gathered}
Vu \in L^1 (\Omega), \\
\int_ \Omega u (-\Delta \varphi) + \int_ \Omega Vu \varphi = \int_ \Omega f \varphi, \qquad \forall \varphi \in W^{2,\infty}(\Omega) \cap W^{1,\infty}_0 (\Omega)\end{gathered}$$
We extended this notion in [@diaz+g-c+vazquez2018] to the case $(-\Delta)^s_{\mathrm{RFL}}$ by using the definition
$$\begin{gathered}
Vu \in L^1 (\Omega), \\
\int_ \Omega u (-\Delta)^s_{\mathrm{RFL}}\varphi + \int_ \Omega Vu \varphi = \int_ \Omega f \varphi, \qquad \forall \varphi \in \mathbb X_s
\end{gathered}$$
The corresponding notion of weak-dual solution is very naturally
$$\begin{gathered}
Vu \in L^1 (\Omega), \\
\int_ \Omega u \psi + \int_ \Omega Vu {\mathcal{G}}( \psi) = \int_ \Omega f {\mathcal{G}}(\psi), \qquad \forall \psi \in L^\infty (\Omega).
\end{gathered}$$
Again, this notion is equivalent to our definition of *dual* solution.
Theory for $(f, V) \in L^1 (\Omega) \times L^\infty_+ (\Omega)$ {#sec:existence for V bounded}
===============================================================
Rather complete results are obtained for bounded potentials and integrable data.
Existence. Fixed-point approach
-------------------------------
Here we show the following
\[eq:existence fixed point\] Let $f \in L^1 (\Omega)$ and $V \in L^\infty_+(\Omega)$. Then, there exists a solution $u$ of the problem, and it satisfies $$|u| \le {\mathcal{G}}(|f|).$$ Furthermore, if $f \ge 0$ then $u \ge 0$.
**Step 1.** Assume $f \ge 0$. We construct the following sequence. $u_0 = 0$, $u_1 = {\mathcal{G}}(f) \ge 0$, $$\begin{aligned}
u_2 &= {\mathcal{G}}\Bigg( \Big(f - V u_{1}\Big)_+ \Bigg), \\
u_i &= {\mathcal{G}}(f - V u_{i-1}), \qquad i > 2.
\end{aligned}$$
**Step 1a.** We prove that $$u_0 \le u_2 \le u_3 \le u_1.$$ Clearly $u_0 \le u_1$, and $$0 \le (f - V u_1 )_+ \le f .$$ Thus, applying ${\mathcal{G}}$, $u_0 \le u_2 \le u_1$. Therefore $$f- V u_1 \le f- V u_2 \le f - V u_0.$$ Applying ${\mathcal{G}}$ again we have $$u_2 \le u_3 \le u_1.$$
**Step 1b.** We show, by induction, that $$u_{2i} \le u_{2i + 2} \le u_{2i + 3} \le u_{2i+1}, \qquad \forall i \ge 0.$$ The result is true for $i = 0$ by the previous step. Assume the result is true for $i$: $$u_{2i} \le u_{2i + 2} \le u_{2i + 3} \le u_{2i+1}.$$ Then we have that $$f - V u_{2i+1} \le f - V u_{2i+3} \le f - V u_{2i+2} \le f - V u_{2i}.$$ Applying ${\mathcal{G}}$ we have that $$u_{2(i+1)} \le u_{2(i+1)+ 2} \le u_{2i+3} \le u_{2i+1}.$$ Repeating the process, $$f - Vu_{2i+1} \le f - Vu_{2i+3} \le f - Vu_{2(i+1)+2} \le f- Vu_{2(i+1)}.$$ Applying ${\mathcal{G}}$, $$u_{2(i+1)} \le u_{2(i+1)+ 2} \le u_{2(i+1) + 3} \le u_{2(i+1)+1}.$$ Then the result is true for $i+1$, and this step is proven.
**Step 1c.** By the Monotone Convergence Theorem $u_{2i} \nearrow \underline u$ in $L^1 (\Omega)$ while $u_{2i+1} \searrow \overline u$ in $L^1 (\Omega)$. Clearly $0 \le \underline u \le \overline u \le {\mathcal{G}}(f)$. Since $V \in L^\infty (\Omega)$, $V u_{2i}$ and $V u_{2i+1}$ also converge in $L^1 (\Omega)$. Since ${\mathcal{G}}$ is continuous in $L^1 (\Omega)$ we have $$\begin{gathered}
\overline u = {\mathcal{G}}(f - V \underline u), \\
\underline u = {\mathcal{G}}(f - V \overline u).
\end{gathered}$$ Therefore $u = \frac 1 2 (\underline u + \overline u)$ is a solution of $$u = {\mathcal{G}}(f - Vu).$$
**Step 2.** Assume now that $f$ changes sign. We decompose $f = f_+ - f_-$ and solve for each $f_+$ and $f_-$, to obtain $u_1$ and $u_2$. Then, clearly $u = u_1 - u_2$ is a solution of the problem. Furthermore $$|u| = |u_1-u_2| \le u_1 + u_2 \le {\mathcal{G}}(f_+) + {\mathcal{G}}(f_-) = {\mathcal{G}}(|f|).$$
This completes the proof.
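The construction of Step 1 can be tested numerically; the following sketch (our addition, built on the same discretized classical one-dimensional operator as in the earlier snippet, with a hypothetical bounded potential) runs the alternating scheme and compares $\tfrac12(\underline u+\overline u)$ with the direct solution of the dual equation $u+{\mathcal{G}}(Vu)={\mathcal{G}}(f)$.

```python
import numpy as np

# Discrete Green matrix of -u'' on (0,1) with Dirichlet conditions (illustration only).
N = 200
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
G = np.linalg.inv(A)

f = np.sin(np.pi * x)                    # f >= 0, as in Step 1
V = 2.0 + np.cos(3 * np.pi * x)          # bounded potential V >= 0 (hypothetical choice)

# Alternating scheme: u_0 = 0, u_1 = G(f), u_2 = G((f - V u_1)_+),
# u_i = G(f - V u_{i-1}) for i > 2.
iterates = [np.zeros(N), G @ f]
iterates.append(G @ np.maximum(f - V * iterates[1], 0.0))
for i in range(3, 60):
    iterates.append(G @ (f - V * iterates[-1]))

u_under, u_over = iterates[-2], iterates[-1]   # tails of the even / odd subsequences
u_fix = 0.5 * (u_under + u_over)               # candidate solution (1/2)(underline u + overline u)

# Direct discretization of the dual equation u + G(Vu) = G(f).
u_direct = np.linalg.solve(np.eye(N) + G @ np.diag(V), G @ f)

print(np.max(np.abs(u_fix - u_direct)))        # small: the two computations agree
print(bool(np.all(u_direct >= 0)))             # positivity, as in the theorem
```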
Uniqueness
----------
\[thm:uniqueness fixed point\] Let $V \in L^\infty_+ (\Omega)$. There exists at most one solution $u \in L^1 (\Omega)$ of .
Let $u_1, u_2$ be two solutions, and let $u = u_1-u_2 \in L^1(\Omega)$; it satisfies $u = -{\mathcal{G}}(Vu)$. Then $V u \in L^1 (\Omega)$ and $u \in L^{\underline Q(1)} (\Omega)$. Repeating this bootstrap we deduce that $u \in L^2 (\Omega)$.
Therefore $Vu^2 = -Vu\, {\mathcal{G}}(Vu) \in L^1 (\Omega)$. We deduce $$0 \le \int_ \Omega V u^2 = - \int_ \Omega Vu {\mathcal{G}}(Vu) \le 0,$$ since $\int_\Omega \psi \,{\mathcal{G}}(\psi) \ge 0$. Hence $Vu^2 = 0$, and so $V u= 0$. But then $u = - {\mathcal{G}}(0) = 0$, i.e. $u_1 = u_2$.
The solution operator ${\mathcal{G}}_V$
---------------------------------------
Let $V \in L^\infty_+ (\Omega)$. We consider the solution operator $${\mathcal{G}}_V : f \in L^1 (\Omega) \mapsto u \in L^1(\Omega)\,,$$ where $u$ is the unique solution of $u = {\mathcal{G}}(f- Vu)$. It is well-defined, linear and continuous.
We leave the easy details to the reader.
Equi-integrability independently of $V$
---------------------------------------
\[thm:Dunford-Pettis for G V with V bounded\] For any $0<\beta < {2s}/{n}$ we have $$\int_A |{\mathcal{G}}_V (f)| \le C |A |^\beta \| f \|_{L^1 (\Omega)}, \qquad \forall f \in L^1 (\Omega).$$ In particular, for every bounded sequence $f_n \in L^1 (\Omega)$ the sequence ${\mathcal{G}}_V(f_n)$ is equiintegrable, and hence there exists a weakly convergent subsequence ${\mathcal{G}}_V(f_{n_k}) \rightharpoonup u$ in $L^1 (\Omega)$.
Estimate of $Vu$ in $L^1 (\Omega)$
----------------------------------
In order to have an extension to an $L^1$ theory we introduce the following estimate
Let $V \in L^\infty_+ (\Omega)$ and $u = {\mathcal{G}}_V (f)$. Then, for every $K \Subset \Omega$ $$\label{eq:estimate norm Vu}
\int_ \Omega V |u| \le \| {\mathcal{G}}(1) \|_{L^\infty(\Omega)} \left( \frac{1}{\inf_K {\mathcal{G}}(1)} + \| V \|_{L^\infty(\Omega \setminus K)} \right) \int_\Omega |f|.$$
Assume first that $f \ge 0$. Using $\psi = {\mathcal{G}}(1)$ as a test function we deduce $$\int_ \Omega u + \int_ \Omega V u {\mathcal{G}}(1) = \int_ \Omega f {\mathcal{G}}(1).$$ Clearly ${\mathcal{G}}(1) |_K \ge c > 0$. Hence $$\int_K Vu \le \frac 1 c \int_K Vu {\mathcal{G}}(1) \le \frac 1 c \int_\Omega f {\mathcal{G}}(1) \le C \int_ \Omega f.$$ On the other hand, $$\int_{ \Omega \setminus K } Vu \le \| V \|_{L^\infty (\Omega \setminus K)} \int_ \Omega u \le C \int_ \Omega f.$$ Thus, $$\int_ \Omega V u \le C \int_ \Omega f.$$ If $f$ changes sign we decompose it as $f = f_+ - f_-$ and apply the result above to $u_1 = {\mathcal{G}}_V (f_+)$ and $u_2 = {\mathcal{G}}_V (f_-)$. Then $u = u_1 - u_2$, and so $|u| \le u_1 + u_2$. Hence, $$\int_\Omega V|u| \le \int_ \Omega V u_1 + \int_ \Omega Vu_2 \le C \int_ \Omega f_+ + C \int_ \Omega f_- = C \int_ \Omega |f|.$$ This completes the proof.
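For orientation (this concrete computation is only an illustration, in the classical case $s=1$, $n=1$, and not part of the original text): on $\Omega=(0,1)$ one has ${\mathcal{G}}(1)(x)=\tfrac{x(1-x)}{2}$, so $\|{\mathcal{G}}(1)\|_{L^\infty}=\tfrac18$ and, choosing $K=[\tfrac14,\tfrac34]$, $\inf_K{\mathcal{G}}(1)=\tfrac{3}{32}$; the estimate \[eq:estimate norm Vu\] then reads $$\int_0^1 V|u| \le \frac18\left(\frac{32}{3} + \|V\|_{L^\infty((0,1)\setminus K)}\right)\int_0^1|f|.$$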
Uniqueness for general $V \ge 0$ {#sec:uniqueness of Schrodinger}
=================================
Assume $|\{ V = +\infty \}| = 0$. There exists, at most, one solution $u \in L^1 (\Omega)$ of .
Let $u_1, u_2 \in L^1 (\Omega)$ be two solutions. Then $u = u_1 - u_2 \in L^1 (\Omega)$ is a solution of $u = - {\mathcal{G}}(Vu)$.
For $k \in \mathbb N$ we define $V_k = V \wedge k \in L^\infty_+ (\Omega)$. We write $$u = {\mathcal{G}}( (V_k - V) u - V_k u ) = {\mathcal{G}}(f_k - V_k u)$$ where $f_k = (V_k - V) u \in L^1 (\Omega)$. Hence, by the theory for bounded potentials, $u$ is the unique solution of $u + {\mathcal{G}}(V_k u) = {\mathcal{G}}(f_k)$, and we deduce that $$\| u \|_{L^1 (\Omega)} \le C \| f _ k \|_{L^1 (\Omega)}.$$
On the other hand, we have that $|(V-V_k) u | \le |V - V_k| |u| \le V|u| \in L^1 (\Omega)$. Since $V_k \to V$ a.e., we deduce that $(V-V_k)u \to 0$ a.e. in $\Omega$. Thus, due to the Dominated Convergence Theorem we have $(V-V_k) u \to 0$ in $L^1 (\Omega)$, and so $u = 0$.
Existence for $(f,V) \in L^1 (\Omega) \times L^1_+ (\Omega)$ {#sec.exist.L1L1}
============================================================
\[thm:existence when f and V in L1\] If $(f,V) \in L^1 (\Omega) \times L^1_+ (\Omega)$, there exists a solution.
The proof of this replicates the double limit argument in our previous paper [@diaz+g-c+vazquez2018] for more general operators.
If $V_1 \le V_2$ and $f_1 \ge f_2$ then ${\mathcal{G}}_{V_1} (f_1) \ge {\mathcal{G}}_{V_2} (f_2)$.
**Step 1. Assume $f_2 \ge 0$.** Let $u_i \ge 0$ be the unique solutions of $u_i = {\mathcal{G}}(f_i - V_i u_i)$. Let $w = u_1 - u_2$. It satisfies $$w + {\mathcal{G}}(V_1 w) = {\mathcal{G}}(f_1 - f_2 + (V_2 - V_1) u_2 ).$$ Letting $F= f_1 - f_2 + (V_2 - V_1) u_2 \ge 0$ we have that $w$ is the unique solution of $w = {\mathcal{G}}(F - V_1 w )$, and therefore $w \ge 0$ and, hence $u_1 \ge u_2$.
**Step 2. $f_2$ has no sign.** Then we decompose into positive and negative parts, $f_i = (f_i)_+ - (f_i)_-$. It is clear that $$(f_1)_+ \ge (f_2)_+ \ge 0 , \qquad 0 \le (f_1)_- \le (f_2)_-.$$ Applying the previous step we have $${\mathcal{G}}((f_1)_+) \ge {\mathcal{G}}((f_2)_+) , \qquad {\mathcal{G}}((f_1)_- ) \le {\mathcal{G}}((f_2)_-).$$ Therefore $${\mathcal{G}}(f_1) = {\mathcal{G}}((f_1)_+ - (f_1)_-) \ge {\mathcal{G}}((f_2)_+ - (f_2)_-) = {\mathcal{G}}(f_2).$$ This completes the proof.
**Step 1.** $f \ge 0$. We define $$V_k = V \wedge k , \qquad f_m = f \wedge m.$$ We define $u_{k,m} = {\mathcal{G}}_{V_k} (f_m) \in L^\infty (\Omega)$. Let $U_m = {\mathcal{G}}(f_m) \in L^\infty (\Omega)$.
**Step 1a. $k \to +\infty$.** Clearly $u_{k,m}$ is a non-increasing sequence in $k$ such that $0 \le u_{k,m} \le U_m$; hence $u_{k,m} \to u_m$ in $L^1 (\Omega)$, due to the Monotone Convergence Theorem. On the other hand $$V_{k} u_{k,m} \le V U_m \in L^1 (\Omega)$$ and we have $$V_k u_{k,m} \to V u_m \qquad \textrm { a.e. in } \Omega.$$ Therefore, due to the Dominated Convergence Theorem and to the estimate $$\int_ \Omega V_k u_{k,m} \delta^\gamma \le C \int_ \Omega f_m \delta^\gamma$$ we have that $$V_k u_{k,m} \to V u_m \qquad \textrm{in } L^1 (\Omega , {\delta^\gamma}).$$
Hence $$u_m = \lim_{k} u_{k,m} = \lim_{k} {\mathcal{G}}(f_m - V_k u_{k,m}) = {\mathcal{G}}(f_m - V u_m)$$ and $u_m$ is the solution corresponding to $(f_m, V)$.
**Step 1b. $m \to + \infty$.** The sequence $u_m$ is increasing. Since $\int_{ \Omega } u_m \le C$, due to the Monotone Convergence Theorem we have $u_m \to u$ in $L^1 (\Omega)$. Analogously $V u_m \to Vu$ in $L^1 (\Omega,{\delta^\gamma})$. Furthermore ${u_m} = {\mathcal{G}}(f_m - V u_m) \to {\mathcal{G}}(f - Vu)$.
[**Step 2. $f$ has no sign.**]{} We decompose $f = f_+ - f_-$ and we apply Step 1.
Singular potential and measure data: CSOLAs {#sec:existence for V singular at 0}
===========================================
Once the theory for integrable data $f$ and integrable potentials is complete, we address the novel question of measure data and possibly non-integrable potentials, and the consequences of their interaction for the existence theory.
CSOLA: Limit of approximating sequences. Reduced measures
---------------------------------------------------------
We regularize the potential by putting $$V_{\varepsilon}(x) = V(x) \wedge \frac 1 {\varepsilon}.$$ Since $V_{\varepsilon}(x) \in L^\infty (\Omega)$, a Green kernel in the standard sense exists.
For the remainder of this section we fix a measure $\mu \in {{\mathcal M}}(\Omega)$. We want to understand what happens to $$u_{\varepsilon}= {\mathcal{G}}_ {V_{\varepsilon}} (\mu),$$ i.e., to the solution of $Lu + V_{\varepsilon}u=\mu$, as ${\varepsilon}\to 0$. We say that $${\underline u}= \lim_{ {\varepsilon}\to 0 } u_{\varepsilon}$$ is a Candidate Solution Obtained as Limit of Approximations (CSOLA). We will prove that such a convergence holds, at least, in $L^1 (\Omega)$. The main problem is to decide when the CSOLA is an actual dual solution.
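A minimal numerical sketch of the regularization (our addition, again on the discretized classical one-dimensional operator, with a hypothetical power-type potential) is given below. Note that on a fixed grid the truncation $V_{\varepsilon}=V\wedge\frac1{\varepsilon}$ eventually saturates, so the sketch only illustrates the computation of $u_{\varepsilon}$ and its monotonicity in ${\varepsilon}$; it cannot capture the limit concentration phenomenon discussed in the rest of this section.

```python
import numpy as np

# Discrete Green matrix of -u'' on (0,1) with Dirichlet conditions (illustration only).
N = 400
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
G = np.linalg.inv(A)

x0 = 0.5 + 0.3 * h                          # location of the Dirac mass (off-grid)
mu = np.zeros(N)
mu[np.argmin(np.abs(x - x0))] = 1.0 / h     # discrete delta with unit mass
V = np.abs(x - x0) ** (-1.5)                # singular potential (hypothetical choice)

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    V_eps = np.minimum(V, 1.0 / eps)        # truncation V_eps = V /\ (1/eps)
    # dual equation u_eps + G(V_eps u_eps) = G(mu): (I + G diag(V_eps)) u = G mu
    u_eps = np.linalg.solve(np.eye(N) + G @ np.diag(V_eps), G @ mu)
    print(eps, h * u_eps.sum())             # these masses are non-increasing as eps decreases
```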
\[sec:existence of SOLA\] We prove the following
\[thm:existence of CSOLA\] Assume that $V\ge0 $ satisfies condition from the introduction and let $\mu \ge 0$ be a nonnegative Radon measure. Then, there exist an integrable function ${\underline u}\ge 0$ and constants $(\alpha_\mu^x)_{x \in S} \in \mathbb R$ such that:
1. \[it:convergence uee\] $u_{\varepsilon}\searrow {\underline u}$ in $L^1 (\Omega)$
2. \[it:convergence Vee uee away from 0\] $V_{\varepsilon}u_{\varepsilon}\to V {\underline u}$ in $L^1 (\Omega \setminus B_\rho({S}), {\delta^\gamma})$ for any $\rho > 0$
3. \[it:convergence Vee uee as measure\] $V_{\varepsilon}u_{\varepsilon}\rightharpoonup V {\underline u}+ \sum_{x \in S} \alpha_\mu^x {\delta}_x$ weakly in ${{\mathcal M}}(\Omega, {\delta^\gamma})$.
4. \[it:problem for ulim\] The limit satisfies the equation $${\underline u}+ {\mathcal{G}}(V {\underline u}) = {\mathcal{G}}(\mu_r),$$ where $\mu_r$ is the reduced measure $$\mu_r = \mu - \sum_{x \in S} \alpha_ \mu ^x {\delta}_x.$$
It is important to notice that, according to point (iv), ${\underline u}$ is the solution of corresponding to the reduced measure $\mu_r$. We do not assert having solved with data $\mu$.
Let us prove \[it:convergence uee\]). It is immediate that $u_{\varepsilon}\ge 0$. Since the sequence $V_{\varepsilon}$ is pointwise increasing, the sequence $u_{\varepsilon}$ is pointwise decreasing. Thus, due to the Monotone Convergence Theorem, it has an $L^1(\Omega)$ limit ${\underline u}\ge 0$.
To prove \[it:convergence Vee uee away from 0\]) we recall that $V$ is bounded on $\Omega \setminus B_\rho(S)$, so that the convergence $u_{\varepsilon}\to {\underline u}$ in $L^1 (\Omega)$ is sufficient.
To prove \[it:convergence Vee uee as measure\]) we start by indicating that $V_{\varepsilon}u_{\varepsilon}\ge 0$. On the other hand, $\int_ \Omega V_{\varepsilon}u_{\varepsilon}{\delta^\gamma} \le C \int_{ \Omega }\delta^\gamma d\mu$. Indeed, take $$K = \bigcup_{x \in S} \overline{ B_{\rho_x} (x)}$$ where $0 < \rho _ x < \mathrm{dist}(S, \partial \Omega) / 2$ is small enough so that $$\overline{ B_{\rho_x} (x)} \cap S = \{x\}.$$ We have $S \subset \mathrm{int} (K)$, and the estimate \[eq:estimate norm Vu\] is preserved in this setting.
Thus, there exists a limit $\gamma \in {{\mathcal M}}_+ (\Omega)$ in the sense of measures, $$V_{\varepsilon}u _{\varepsilon}\rightharpoonup \gamma \textrm{ in } {{\mathcal M}}_+ (\Omega, {\delta^\gamma}).$$ Due to the pointwise convergence away from $S$, the regular part of $\gamma$ is $V \underline u$. On the other hand, the measure $\gamma - V{\underline u}$ is supported, at most, on $S$. Thus, the singular part is a combination of ${\delta}$ measures. Hence, $$\gamma = \sum_{x \in S} \alpha_\mu^x {\delta}_x + V {\underline u}.$$ Then, $$\begin{aligned}
\mu - V_{\varepsilon}u_{\varepsilon}\rightharpoonup \mu - \sum_{x \in S} \alpha_{ \mu }^x {\delta}_ x - V {\underline u}\qquad \textrm{ weak}-\star-{{\mathcal M}}(\Omega,{\delta^\gamma}).
\end{aligned}$$ Hence, $$\begin{aligned}
u_{\varepsilon}= {\mathcal{G}}\left( \mu - V_{\varepsilon}u_{\varepsilon}\right) \rightharpoonup {\mathcal{G}}\left( \mu - \sum_{x \in S} \alpha_{ \mu }^x {\delta}_ x - V {\underline u}\right) \qquad \textrm{ weakly in } L^1 (\Omega).
\end{aligned}$$ Due to the uniqueness of the limit, $${\underline u}= {\mathcal{G}}\left( \mu - \sum_{x \in S} \alpha_{ \mu }^x {\delta}_ x - V {\underline u}\right).$$ In other words, $${\underline u}+ {\mathcal{G}}(V {\underline u}) = {\mathcal{G}}\left( \mu - \sum_{x \in S} \alpha_{ \mu }^x {\delta}_ x \right).$$ This completes the proof.
Every solution is a CSOLA
-------------------------
\[prop:if Green V exists it is the limit\] Assume that a solution $u$ of exists, then $u_{\varepsilon}\to u$ in $L^1 (\Omega)$.
Let $u_{\varepsilon}= {\mathcal{G}}_{V_{\varepsilon}} (\mu)$. Clearly $u = {\mathcal{G}}_V (\mu) \ge 0$. Subtracting the two problems and letting $w_{\varepsilon}= u_{\varepsilon}- u$, we get $$\begin{aligned}
w_{\varepsilon}&= {\mathcal{G}}(\mu - V_{\varepsilon}u_{\varepsilon}) - {\mathcal{G}}(\mu - V u) \nonumber\\
&= {\mathcal{G}}(V u - V_{\varepsilon}u_{\varepsilon}) \nonumber\\
&= {\mathcal{G}}((V-V_{\varepsilon}) u - V_{\varepsilon}w_{\varepsilon})
\end{aligned}$$ Thus $w_{\varepsilon}= {\mathcal{G}}_{V_{\varepsilon}} ((V-V_{\varepsilon})u)$. Since $V \ge V_{\varepsilon}$ we have $(V-V_{\varepsilon})u \ge 0$. Also $(V - V_{\varepsilon})u \le V u \in L^1 (\Omega)$. Thus, by the Dominated Convergence Theorem and taking into account the pointwise limit we have $$(V-V_{\varepsilon}) u \to 0 \qquad L^1 (\Omega).$$ Hence $$0 \le w_{\varepsilon}\le {\mathcal{G}}((V-V_{\varepsilon})u ) \to 0$$ in $L^1 (\Omega)$.
CSOLAs are solutions if $\mu (S) = 0$. Solutions for $f \in L^1 (\Omega)$
-------------------------------------------------------------------------
Let $\mu \ge 0$. Then,
1. \[it:scaling at 0\] We can estimate the scaling at each $x \in S$ as $$\label{eq:scaling estimate with respect to alpha mu}
\lim_{\rho \to 0} \rho^{-2s} \int_{B_\rho (x)} {\underline u}= c_x \Big(\mu(\{x\}) - \alpha_ \mu^x \Big)$$ for some $c_x > 0$. In particular $\alpha_\mu^x \le \mu(\{x\})$.
2. \[it:existence if mu 0 is 0\] If $\mu(\{x\}) = 0$ then $\alpha_{\mu}^x = 0$.
We prove \[it:scaling at 0\]). Fix $x_0 \in S$. We rearrange the identity satisfied by ${\underline u}= {\mathcal{G}}_V (\mu - \sum_{x \in S} \alpha_\mu^x {\delta}_x)$, tested against $\psi \in L^\infty(\Omega)$, as $$\sum_{x \in S} \alpha_\mu^x {\mathcal{G}}(\psi) (x) = \int_\Omega {\mathcal{G}}(\psi) \mathrm{d} \mu - \int_{\Omega} {\underline u}\psi - \int_{\Omega} V {\underline u}{\mathcal{G}}(\psi) .$$ We subtract $\mu( \{x_0\}) {\mathcal{G}}(\psi) (x_0) $ to deduce $$\Big (\alpha_\mu^{x_0} - \mu(\{x_0\}) \Big) {\mathcal{G}}(\psi) (x_0) + \sum_{x_0 \ne x \in S} \alpha_\mu^x {\mathcal{G}}(\psi) (x) = \int_\Omega {\mathcal{G}}(\psi) d(\mu - \mu(\{x_0\}){\delta}_{x_0}) - \int_{\Omega} {\underline u}\psi - \int_{\Omega} V {\underline u}{\mathcal{G}}( \psi ) .$$ Taking $\psi = \rho^{-2s}\,{\mathbf 1}_{ B_ \rho (x_0)}$ and letting $\rho \to 0$, we deduce, due to \[eq:G one rho\], that $$\begin{aligned}
c(\alpha_\mu^{x_0} - \mu(\{x_0\})) &= -\lim_{\rho \to 0} \rho^{-2s} \int_{B_\rho (x_0)} {\underline u}\le 0,
\end{aligned}$$ where $c>0$.\
We now prove \[it:existence if mu 0 is 0\]). If $\mu (\{x_0\})=0$ we can apply \[thm:local integral of G mu near support\], together with $0 \le {\underline u}\le {\mathcal{G}}(\mu)$, to deduce $$0 \le \lim_{ \rho \to 0 } \rho^{-2s} \int_{B_\rho (x_0)} {\underline u}\le \lim_{ \rho \to 0 } \rho^{-2s} \int_{B_\rho (x_0)} {\mathcal{G}}( \mu ) = 0.$$ Combining this with \[it:scaling at 0\] we deduce that $\alpha_ \mu^{x_0} = 0$.
\[cor:CSOLA measures away from S\] If $\mu (S) = 0$ then $ {\mathcal{G}}_{V_{\varepsilon}} (\mu) \to {\underline u}$ in $L^1(\Omega)$, where ${\underline u}$ satisfies ${\underline u}= {\mathcal{G}}(\mu - V {\underline u})$.
Let $V$ satisfy . Then, for every $0 \le f \in L^1 (\Omega)$ there is a solution ${\underline u}\in L^1 (\Omega)$. It is the unique solution of .
The operator ${\mathcal{G}}_V : L^1 (\Omega) \to L^1 (\Omega)$
--------------------------------------------------------------
Let $V$ satisfy . Then, ${\mathcal{G}}_V : L^1 (\Omega) \to L^q (\Omega)$ for all $q < Q(1)$ is linear and continuous and ${\mathcal{G}}_V(f)$ is the unique dual solution of .
For $f \in L^1 (\Omega)$ it is clear that the measure $\mu = f {\mathrm{d}x}$ satisfies $\mu( S ) = 0$. In particular ${\mathcal{G}}_V (f)$ is defined. Furthermore, due to the strong $L^1$ convergence we have that $$\| {\mathcal{G}}_V (f) \|_{L^1} = \lim_{ {\varepsilon}\to 0} \| {\mathcal{G}}_{V _{\varepsilon}} (f) \|_{L^1} \le \| {\mathcal{G}}(|f|) \|_{L^1} \le C \| f \|_{L^1 }.$$ Linearity is trivial and the result is proven.
In fact, since $| {\mathcal{G}}_{V_{\varepsilon}} (f)| \le {\mathcal{G}}(|f|)$ we have, for $ 1\le q < Q(1)$, $$\| {\mathcal{G}}_{V_{\varepsilon}} (f) \|_{L^q (\Omega)} \le \| {\mathcal{G}}(|f|) \|_{L^q (\Omega) }.$$ For $q>1$ we have weak $L^q(\Omega)$ compactness, and hence $$\| {\mathcal{G}}_V (f) \|_{L^q} \le \liminf_{ {\varepsilon}\to 0} \| {\mathcal{G}}_{V _{\varepsilon}} (f) \|_{L^q} \le \| {\mathcal{G}}(|f|) \|_{L^q} \le C \| f \|_{L^1 }.$$ This proves the result.
In fact, a ${\mathcal{G}}_V : L^1 \to L^1$ theory can be constructed simply under the hypothesis ${\mathcal{G}}: L^\infty \to L^\infty$. However, the aim of this paper is the study of measures.
Solvability. Characterization of the reduced measure {#sec.nonex}
====================================================
We now address the cases where $\mu$ and $V$ are not compatible. We start with point masses.
Concentration of measures when $\mu = {\delta}_x$. Possible non-existence
-------------------------------------------------------------------------
When the measure $\mu$ is precisely a Dirac delta at a point $x$, we show that non-existence is due to a concentration of measure. We remind the reader that we define the set $Z$ of incompatible points as $$Z = \{ x \in \Omega : \textrm{ there is no solution of } u = {\mathcal{G}}({\delta}_x - Vu) \textrm{ such that } u, Vu \in L^1 (\Omega) \}.$$
\[thm:concentration of measures\] Assume . Let $u_{\varepsilon}={\mathcal{G}}_{V_{\varepsilon}} ({\delta}_x)$, i.e. the solution of $u_{\varepsilon}= {\mathcal{G}}({\delta}_x - V_{\varepsilon}u_{\varepsilon})$. Then $$u_{\varepsilon}= {\mathcal{G}}_{V_{\varepsilon}} ({\delta}_x) \searrow {\underline u}\quad \textrm{ where }
\begin{dcases}
{\underline u}\textrm{ is the unique solution of } u = {\mathcal{G}}(\delta_x - Vu) & \textrm{if } x \notin Z, \\
{\underline u}= 0 & \textrm{if }x \in Z.
\end{dcases}$$ Furthermore, we have $$V_{\varepsilon}u_{\varepsilon}\rightharpoonup
\begin{dcases}
V {\underline u}& \textrm{if }x \notin Z, \\
{\delta}_x, & \textrm{if } x \in Z.
\end{dcases}$$ weak-$\star$ in ${{\mathcal M}}(\Omega)$.
\(i) If $x \notin S$ we apply \[cor:CSOLA measures away from S\] and deduce that there is a solution of $u = {\mathcal{G}}({\delta}_x - Vu)$. Therefore $x \notin Z$.
\(ii) If $x \in S$ we know $$V_{\varepsilon}u_{\varepsilon}\rightharpoonup V {\underline u}+ \alpha_{ {\delta}_x }^x \delta_x.$$ Since it will not lead to confusion, let us just use $\alpha = \alpha_{ {\delta}_x }^x$. The reduced measure is $$({\delta}_x)_r = (1 - \alpha) {\delta}_x.$$
\(iii) If $\alpha = 1$ then $({\delta}_x)_r = 0$, and so ${\underline u}= 0$. Clearly ${\underline u}= 0 \ne {\mathcal{G}}(\delta_x) = {\mathcal{G}}(\delta_x - V {\underline u})$. By \[prop:if Green V exists it is the limit\], if there were a solution of $u = {\mathcal{G}}(\delta_x - Vu)$ then it would equal ${\underline u}$; hence there is no solution.
\(iv) If $\alpha \ne 1$ we define $$U := \frac{{\underline u}}{1- \alpha} = \frac{1}{1-\alpha} {\mathcal{G}}\left( ({\delta}_x)_r - V {\underline u}\right)= \frac{1}{1-\alpha} {\mathcal{G}}\left( (1-\alpha) {\delta}_x - V {\underline u}\right) = {\mathcal{G}}({\delta}_x - V U).$$ Hence, by \[prop:if Green V exists it is the limit\], we have $\underline u = U$ and, therefore, $ \alpha = 0$.
Characterization of the reduced measure
---------------------------------------
We obtain an immediate consequence of the point mass analysis.
\[thm:characterization of reduced measure\] Assume . Then, $$\label{eq:characterization of reduced measure}
\mu_r = \mu - \sum_{x \in Z } \mu (\{x\}) {\delta}_x.$$
We write the decomposition $$\mu = \mu - \sum_{x \in S} \mu (x) \delta_x + \sum_{x \in S \setminus Z} \mu (x) \delta_x + \sum_{x \in Z} \mu (x) \delta_x$$ and solve the approximating problems by superposition $${\mathcal{G}}_{V_{\varepsilon}} (\mu) = {\mathcal{G}}_{V_{\varepsilon}} \left( \mu - \sum_{x \in S} \mu (x) \delta_x \right) + \sum_{x \in S \setminus Z} \mu (x) {\mathcal{G}}_{V_{\varepsilon}} (\delta_x) + \sum_{x \in Z} \mu (x) {\mathcal{G}}_{V_{\varepsilon}} ({\delta}_x).$$ From \[thm:existence of CSOLA\] we know that $${\mathcal{G}}_{V_{\varepsilon}} (\mu) \to {\underline u}, \qquad {\underline u}= {\mathcal{G}}( \mu_r - V {\underline u}).$$ Using \[cor:CSOLA measures away from S\] and \[thm:concentration of measures\] we deduce that $$\begin{aligned}
{\mathcal{G}}_{V_{\varepsilon}} \left( \mu - \sum_{x \in S} \mu (x) \delta_x \right) &\to u_1, \qquad u_1 = {\mathcal{G}}\left( \mu - \sum_{x \in S} \mu (x) \delta_x - Vu_1 \right) \\
\sum_{x \in S \setminus Z} \mu (x) {\mathcal{G}}_{V_{\varepsilon}} (\delta_x) &\to u_2, \qquad u_2 = {\mathcal{G}}\left( \sum_{x \in S \setminus Z} \mu (x) \delta_x - Vu_2 \right)\\
\sum_{x \in Z} \mu (x) {\mathcal{G}}_{V_{\varepsilon}} ({\delta}_x)& \to 0.
\end{aligned}$$ Hence ${\underline u}= u_1 + u_2$ and we have that $$\begin{gathered}
{\mathcal{G}}(\mu_r - V{\underline u}) = {\mathcal{G}}\left( \mu - \sum_{x \in S} \mu (x) \delta_x - Vu_1 \right) + {\mathcal{G}}\left( \sum_{x \in S \setminus Z} \mu (x) \delta_x - Vu_2 \right)\\
{\mathcal{G}}( \mu_r - V {\underline u}) = {\mathcal{G}}\left( \mu - \sum_{x \in Z} \mu (x) \delta_x - V(u_1 + u_2) \right) \\
{\mathcal{G}}( \mu_r ) = {\mathcal{G}}\left( \mu - \sum_{x \in Z} \mu (x) \delta_x \right)\\
\sum_{x \in S} \alpha_x^\mu {\mathcal{G}}(\delta_x) = \sum_{x \in Z} \mu (x) {\mathcal{G}}(\delta_x).
\end{gathered}$$ Using the local scaling of ${\mathcal{G}}(\delta_x)$ near $x$ (see \[thm:local integral of G mu near support\]) we deduce that $$\alpha_x^\mu =
\begin{dcases}
\mu(x) & x \in Z, \\
0 & x \notin Z.
\end{dcases}$$ This completes the proof.
Necessary and sufficient condition for existence of solution
------------------------------------------------------------
In this way we get the necessary and sufficient condition for existence of solution of .
There exists a dual solution of with data $\mu \in {{\mathcal M}}(\Omega)$ if and only if $|\mu| (Z) = 0$.
By \[thm:existence of CSOLA\] we know that the CSOLA exists and that it solves the problem with the reduced measure. By \[prop:if Green V exists it is the limit\], if a solution exists it is the CSOLA. Therefore ${\mathcal{G}}(\mu_r - Vu) = u = {\mathcal{G}}(\mu - Vu)$. Hence ${\mathcal{G}}(\mu) = {\mathcal{G}}(\mu_r)$. Then, due to \[eq:characterization of reduced measure\], this implies that $$\sum_{x \in Z} \mu(\{x\}) {\mathcal{G}}({\delta}_x) = 0.$$ This is equivalent to $\mu(\{x\}) = 0$ for all $x \in Z$. Since $Z$ is countable, this is equivalent to $|\mu|(Z) = 0$. Conversely, if $|\mu|(Z) = 0$ then $\mu_r = \mu$ by \[eq:characterization of reduced measure\], and the CSOLA provides a solution.
Properties and representation of ${\mathcal{G}}_V$ {#sec:representation of G_V for general V}
==================================================
Extension of ${\mathcal{G}}_V$. The CSOLA operator
--------------------------------------------------
We can define the CSOLA operator, $\widetilde {\mathcal{G}}_V$, which can be understood both as the limit of ${\mathcal{G}}_{V_{\varepsilon}}:{{\mathcal M}}(\Omega) \to L^1 (\Omega)$ or as the extension of ${\mathcal{G}}_V:L^1 (\Omega) \to L^1 (\Omega)$ to the space of measures: $$\widetilde {\mathcal{G}}_V (\mu) = {\mathcal{G}}_V (\mu_r).$$
Notice that, due to \[thm:concentration of measures\], $$\widetilde {\mathcal{G}}_V ( \delta_{x} ) = \begin{dcases}
{\mathcal{G}}_V (\delta_x) & \textrm{ if } x \notin Z, \\
0 & \textrm{ if } x \in Z .
\end{dcases}$$
The operator $\widetilde {\mathcal{G}}_V : {{\mathcal M}}(\Omega) \to L^1 (\Omega)$ is a linear continuous extension of ${\mathcal{G}}_V$.
From the proof of \[thm:existence of CSOLA\] it is easy to see that $\alpha_{\mu}^x$ is linear in $\mu$.
For $\mu \ge 0$, $$\int_ \Omega |\widetilde {\mathcal{G}}_V (\mu)| = \int_ \Omega {\mathcal{G}}_V (\mu_r) \le \int_ \Omega {\mathcal{G}}(\mu_r) \le \int_ \Omega {\mathcal{G}}(\mu) \le C \| \mu \|_{\mathcal M (\Omega)}.$$ For general $\mu$ we repeat the argument for the positive and negative parts to deduce $$\| \widetilde {\mathcal{G}}_V (\mu) \|_{L^1 (\Omega)} \le C \| \mu \|_{{{\mathcal M}}(\Omega)}.$$ This completes the proof.
\[prop:GV continuous weak star M to weak L1\] If $\mu_n \rightharpoonup \mu$ weak-$\star$ in ${{\mathcal M}}(\Omega)$ we have $$\widetilde {\mathcal{G}}_V (\mu_n) \rightharpoonup \widetilde {\mathcal{G}}_V (\mu) \textrm{ in }L^1 (\Omega).$$
Due to linearity we assume $\mu = 0$.
**Step 1.** Assume $\mu_n \ge 0$ and $\mu = 0$. Let $\psi \in L^\infty (\Omega)$. $$\begin{aligned}
0 \le \langle \widetilde {\mathcal{G}}_V (\mu_{n}) , \psi_+ \rangle & \le \langle {\mathcal{G}}(\mu_n) , \psi_+ \rangle \to 0 , \\
0 \le \langle \widetilde {\mathcal{G}}_V (\mu_n) , \psi_- \rangle & \le \langle {\mathcal{G}}(\mu_n) , \psi_- \rangle \to 0.
\end{aligned}$$ Thus $$\langle \widetilde {\mathcal{G}}_V (\mu_n) , \psi \rangle \to 0,$$ i.e. $\widetilde {\mathcal{G}}_V (\mu_n) \rightharpoonup 0$ in $L^1 (\Omega)$.
**Step 2.** Assume $\mu_n$ can change sign. The sequences $(\mu_n)_+$ and $(\mu_n)_-$ are bounded. Take a convergent subsequence of $(\mu_n)_+$ and, out of that subsequence, a convergent subsequence of $(\mu_n)_-$. Hence, there exist $\lambda_1, \lambda_2$ such that $$(\mu_n)_+ \rightharpoonup \lambda_1, \qquad
(\mu_n)_- \rightharpoonup \lambda_2.$$ By uniqueness of the limit, $\mu = \lambda_1 - \lambda_2$. We apply the first part of the proof to deduce the result.
Regularization ${\mathcal{G}}_V : L^\infty (\Omega) \to \mathcal C(\overline \Omega)$ and kernel representation
---------------------------------------------------------------------------------------------------------------
\[thm:Green V L inf to continuous\] ${\mathcal{G}}_V : L^\infty (\Omega) \to \mathcal C(\overline \Omega)$ is continuous. Furthermore $${{\mathcal{G}}_V (f)} (x) = \int_ \Omega {\mathbb{G}}_V (x,y) f(y) \, \mathrm{d}y \quad \textrm{ where } {\mathbb{G}}_V (x,y) = \widetilde {\mathcal{G}}_V (\delta_x) (y).$$
For $f, \psi \in L^\infty (\Omega)$ we have $$\int_ \Omega {\mathcal{G}}_V (f) \psi = \int_ \Omega f {\mathcal{G}}_V (\psi) = \int_ \Omega f \widetilde {\mathcal{G}}_V (\psi).$$ Let $\psi_{\varepsilon}= \frac{1}{|B_{\varepsilon}(x)|} {\mathbf 1}_{B_{\varepsilon}(x)} \rightharpoonup {\delta}_x$ for $x \in \Omega$. Then $\widetilde {\mathcal{G}}_V (\psi_{\varepsilon}) \rightharpoonup \widetilde {\mathcal{G}}_V ({\delta}_x)$ weakly in $L^1(\Omega)$, by \[prop:GV continuous weak star M to weak L1\]. Since ${\mathcal{G}}_V (f) \in L^1(\Omega)$, at every Lebesgue point $x$ the left-hand side with $\psi = \psi_{\varepsilon}$ converges to the precise representative $\widehat{{\mathcal{G}}_V (f)} (x)$. In particular $$\widehat{{\mathcal{G}}_V (f)} (x) = \int_ \Omega f \widetilde {\mathcal{G}}_V ({\delta}_x).$$
Let $x_n \to x$ in $\overline \Omega$. We have that $${\delta}_{x_n} \rightharpoonup {\delta}_x \qquad \textrm{ weak}-{\star}-{{\mathcal M}}(\Omega).$$ Due to \[prop:GV continuous weak star M to weak L1\] we have $$\widehat{{\mathcal{G}}_V (f)} (x_n) = \int_ \Omega f \widetilde {\mathcal{G}}_V ({\delta}_{x_n}) \to \int_ \Omega f \widetilde {\mathcal{G}}_V ({\delta}_x) = \widehat{{\mathcal{G}}_V (f)} (x).$$ Hence $\widehat{{\mathcal{G}}_V (f)}$ is continuous on $\bar \Omega$, and we can express ${\mathcal{G}}_V (f)$ through its precise representative.
The kernel ${\mathbb{G}}_V$ as limit of ${\mathbb{G}}_{V_{\varepsilon}}$
------------------------------------------------------------------------
It is clear that ${\mathbb{G}}_{V_{\varepsilon}} (x,y)$ is a pointwise non-increasing sequence. Thus there is a limit $${\mathbb{G}}_{V_{\varepsilon}} \searrow \underline {{\mathbb{G}}_{V}} \qquad \textrm{ in } L^2 (\Omega \times \Omega).$$ What we have proven in the previous section can be understood as follows: $$\underline {{\mathbb{G}}_{V}} (x,y) = 0 \textrm{ for a.e. } y, \qquad \textrm { if } \widetilde {\mathcal{G}}_V ({\delta}_x) = 0.$$
But we know that ${\mathcal{G}}_{V_{\varepsilon}} (f) \to {\mathcal{G}}_V (f)$ for $f \in L^1 (\Omega)$; therefore $$\underline {\mathbb{G}}_V (x,y) = {\mathbb{G}}_V (x,y).$$ Furthermore, since symmetry holds for ${\mathbb{G}}_{V_{\varepsilon}}$, this gives yet another proof of the symmetry $${\mathbb{G}}_V (x,y) = {\mathbb{G}}_V (y,x).$$
Characterization of $Z$. Maximum principle. {#sec.Z}
===========================================
We first recall the results of Ponce and Orsina [@Orsina2018] about the set $Z$ and the failure of the strong maximum principle for bounded data in the case $L=-\Delta$, and adapt them to our fractional setting. We then proceed with the actual characterization of $Z$ in our setting.
Set of universal zeros. Failure of the strong maximum principle {#sec:universal zero-set}
---------------------------------------------------------------
Ponce and Orsina formalized the notion of a set of universal zeros (or universal zero-set in their notation): $$Z_0 = \{ x \in \Omega : {{\mathcal{G}}_V (f)} (x) = 0 \quad \forall f \in L^\infty (\Omega) \}$$ in the context $s = 1$. As noted in their paper, this is a failure of the strong maximum principle. For ${(-\Delta)^s}= -\Delta$, the universal zero-set is characterized in [@Orsina2018] as $$\label{eq:universal zero-set terms of delta}
Z_0 = Z.$$ Furthermore, the authors show that ${\mathcal{G}}_V(\mu)$ exists for $L=-\Delta$ if and only if $|\mu| (Z) = 0$. This leads them to indicate that if $Z \ne \emptyset$ then *the Green kernel does not exist*. However, the authors do indicate that, when $|\mu|(Z) = 0$, then (in our notation) the unique solution is written $${\mathcal{G}}_V (\mu) (x) = \int_ \Omega {\mathcal{G}}_V ({\delta}_x) (y) \mathrm{d} \mu (y).$$
In order to connect these assertions with the results of the previous sections, in this paragraph we prove the following:
\[thm:Z V decomposed\] Assume the standing assumptions on $V$ and on the kernel ${\mathbb{G}}$. It holds that $$\label{eq:kernel G V as Green V of delta x}
\widetilde {\mathcal{G}}_V ( \delta_x ) (y) = {\mathbb{G}}_V (y,x).$$ Then, the following are equivalent:
1. \[it:zero of Green V of delta x\] $\widetilde {\mathcal{G}}_V ({\delta}_x) = 0$ (i.e. $x \in Z$)
2. \[it:zero of GV of x\] ${\mathbb{G}}_V (x, \cdot) = 0$ a.e. in $\Omega$.
3. \[it:no max principle\] ${\mathcal{G}}_V (f) (x) = 0$ for all $f \in L^\infty (\Omega)$.
4. \[it: zero of Green V of 1\] ${\mathcal{G}}_V ({\mathbf 1}_\Omega) (x) = 0$.
It is easy to see that $$\widetilde {\mathcal{G}}_V ( \delta_x ) (y) = \int_ \Omega {\mathbb{G}}_V (y,z) d\delta_x (z) = {\mathbb{G}}_V (y,x).$$
We prove that: \[it:zero of Green V of delta x\] $\iff$ \[it:zero of GV of x\] $\implies$ \[it:no max principle\] $\implies$ \[it: zero of Green V of 1\] $\implies$ \[it:zero of GV of x\].
The equivalence between \[it:zero of Green V of delta x\] and \[it:zero of GV of x\] is immediate from .
Assume that \[it:zero of GV of x\]. Then, for $f \in L^\infty (\Omega)$ we have that $${\mathcal{G}}_V (f) (x) = \int_ \Omega {\mathbb{G}}_V (x,y) f(y) dy = \int_ \Omega 0 f(y) dy = 0.$$ This is precisely \[it:no max principle\].
Since the function ${\mathbf 1}_\Omega \in L^\infty (\Omega)$ clearly \[it:no max principle\] implies \[it: zero of Green V of 1\].
Assume \[it: zero of Green V of 1\]. Then $$0 = {\mathcal{G}}_V ({\mathbf 1}_ \Omega) (x) = \int_ \Omega {\mathbb{G}}_V (x,y) dy = \int_ \Omega |{\mathbb{G}}_V (x,y)| dy.$$ Hence, \[it:zero of GV of x\] holds.
Necessary and sufficient condition on $V$ so that $x \in Z$
-----------------------------------------------------------
We now state and prove the final result that characterizes nonexistence in terms of the integrability of $V$.
\[thm:Z depending on V\] Assume . Then $$x \notin Z \iff V {\mathcal{G}}( \delta_x) \in L^1 (B_\rho (x)) \textrm{ for some } \rho > 0.$$ In particular, $Z \subset S$.
Notice that $$V {\mathcal{G}}( \delta_x) \in L^1 (B_\rho (x)) \iff \int_ {B_\rho (x)} \frac{ V(y) }{|x-y|^{n-2s}} dy < + \infty.$$
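As an illustration (this explicit example is our addition), for a power-type potential the criterion becomes completely explicit: if $V(y) = |y - x|^{-\beta}$ near $x$, then $$\int_{B_\rho(x)} \frac{V(y)}{|x-y|^{n-2s}}\, dy = C \int_0^\rho r^{-\beta}\, r^{2s-n}\, r^{n-1}\, dr = C \int_0^\rho r^{2s - \beta - 1}\, dr,$$ which is finite if and only if $\beta < 2s$. Hence $x \notin Z$ precisely when $\beta < 2s$, while for $\beta \ge 2s$ a Dirac mass at $x$ is incompatible with $V$.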
We may take $x=0$ for convenience. Let $U = {\mathcal{G}}({\delta}_0) \in L^1 (\Omega)$.
\(i) Assume first that $V U \in L^1 (B_\rho (0))$ for some $\rho > 0$. Then, for the approximating sequence $u_{\varepsilon}= {\mathcal{G}}_{V_{\varepsilon}}({\delta}_0)$ we have $$V_{\varepsilon}u _{\varepsilon}\le V U \in L^1 (B_\rho (0)).$$ Thus, due to the Dominated Convergence Theorem, $$V_{\varepsilon}u_{\varepsilon}\to V {\underline u}\qquad \textrm{in } L^1 (B_\rho (0)),$$ and the same convergence holds in the sense of measures on $B_\rho(0)$; hence no mass concentrates at $0$, i.e. $\alpha_{{\delta}_0}^0 = 0$. Since $\alpha_{{\delta}_0}^x \le {\delta}_0(\{x\}) = 0$ for every other $x \in S$, the reduced measure is ${\delta}_0$ itself and ${\underline u}$ satisfies ${\underline u}= {\mathcal{G}}({\delta}_0 - V {\underline u})$. Therefore ${\mathcal{G}}_V( {\delta}_0 )$ is defined, that is, $0 \notin Z$.
\(ii) Conversely, assume $0 \notin Z$. Taking $f = {\mathbf 1}_ \Omega$, $$\int_ \Omega V {\mathcal{G}}_V ({\mathbf 1}_ \Omega) {\mathcal{G}}(\psi) \le \int_ \Omega {\mathcal{G}}_V ({\mathbf 1}_ \Omega) \psi + \int_ \Omega V {\mathcal{G}}_V ({\mathbf 1}_ \Omega) {\mathcal{G}}(\psi) = \int_ \Omega {\mathcal{G}}(\psi), \qquad \forall 0 \le \psi \in L^\infty (\Omega).$$ Since, by construction, $V {\mathcal{G}}_V ({\mathbf 1}_ \Omega) \in L^1 (\Omega)$, we can take the sequence $$0\le \psi_k = \frac{ {\mathbf 1}_ { B_ {1/k} (0) } } {|B_ {1/k} (0)|}.$$ Due to \[lem:ae aprox of G delta 0\], ${\mathcal{G}}(\psi_k) \to U$ a.e. in $\Omega$. Due to Fatou’s lemma, $$\int_ \Omega V U {\mathcal{G}}_V ({\mathbf 1}_ \Omega) \le \int_ \Omega U.$$ Since $0 \notin Z$, by \[thm:Z V decomposed\] we have ${\mathcal{G}}_V ({\mathbf 1}_ \Omega) (0) > 0$ and, by the continuity provided by \[thm:Green V L inf to continuous\], ${\mathcal{G}}_V ({\mathbf 1}_ \Omega) \ge c > 0$ on $B_\rho$ for some $\rho > 0$. But then $$c \int_ {B_\rho} V U \le \int_ \Omega U < +\infty,$$ using that $U \in L^1 (\Omega)$. Since $V$ is bounded on $\Omega \setminus B_\rho$, we also have that $$\int_{\Omega \setminus B_\rho} VU \le \| V \|_{L^\infty (\Omega \setminus B_\rho)} \int_ \Omega U < +\infty.$$ Thus, $VU \in L^1 (\Omega)$.
Extensions and open problems
============================
The theory that has been developed in this paper can be extended in different directions.
- We may also treat the problems in space dimensions $n=1,2$ which, as is well known, are somewhat special for the standard Laplacian. Here, there are some difficulties only in the case $n - 2s \le 0$ (which corresponds to $n = 1$ and $s \ge 1/2$, or $n = 2$ and $s = 1$) since, otherwise, the kernels have the same form. Thus, for $n - 2s < 0$ the kernel is not singular at $x=y$ and, for $n = 2s$, it has a logarithmic singularity. In [@Bonforte+Vazquez2016] the information on the estimates for the different typical operators is gathered, and some of the sources we cite include $n=1,2$ (see, for instance, Corollary 1.4 of [@KimKim2014]). Our computations can be adapted to these cases, as is done in the standard theory for the usual Laplace operator.
- We may consider more general operators $\mathrm L$, like those considered in the references above: one can replace $|x-y|^{-(n+2s)}$ by a different kernel $\mathbb K (x,y)$ under suitable conditions. Furthermore, a similar logic applies to other spectral-type operators, like $(-\Delta + m I)_{\mathrm{SFL}}^s$.
- We can replace the condition $f\in L^1(\Omega)$ by inclusion in a weighted space $f\in L^1(\Omega, w)$ like we did in [@diaz+g-c+vazquez2018], where the optimal weight was $w=\mbox{dist}(x,\Omega^c)^s$. The weight depends on the operator.
- There is an interest in studying the interaction of singular potentials with diffuse measures. See, for instance, [@Ponce2017] in the case of the classical Laplacian.
- Problems with a combination of linear and nonlinear zero-order terms, like $$\mathrm L u + Vu = f(u).$$
- An interesting line is to consider the corresponding parabolic problems: $$u_t+\mathrm Lu+Vu=f\,.$$
- Study of more general functions $V$. We will give a more detailed account of the following development elsewhere. It is natural to consider the case of $V \ge 0$ a Borel measurable function. Let us define a linear continuous operator $$\widetilde {\mathcal{G}}_V : {{\mathcal M}}(\Omega) \to L^1 (\Omega)$$ given by $$\widetilde {\mathcal{G}}_V (\mu) = \lim_{{\varepsilon}\to 0} {\mathcal{G}}_{V_{\varepsilon}} (\mu).$$ When a solution of the problem exists, it is, as before, $\widetilde {\mathcal{G}}_V (\mu)$.
This new operator is given by a kernel ${\mathbb{G}}_V$. Furthermore $${\mathbb{G}}_{V_{\varepsilon}} \searrow {\mathbb{G}}_V \qquad \textrm { in } L^1 (\Omega \times \Omega) = L^1 (\Omega; L^1 (\Omega)).$$ We define the sets $$Z = \{x \in \Omega: {\mathbb{G}}_V (x,y) = 0 \textrm { for a.e. } y\in \Omega \}.$$
Given a measure $\mu$ we can split $\mu = \mu_{Z} + \underline \mu$ where $$\mu_{Z} (A) = \mu (A \cap Z), \qquad \underline \mu (A) = \mu (A \setminus Z).$$
For $x_0 \in Z$ we have that $\widetilde {\mathcal{G}}_V (\delta_{x_0}) = 0$, but this is not a solution of the problem with data $\delta_{x_0}$, since $x_0 \notin Z_0$. Therefore ${\mathcal{G}}_V (\delta_{x_0})$ does not exist. Analogously, if $ \mu_Z \ne 0$, then ${\mathcal{G}}_V(\mu)$ is not defined, while $\widetilde {\mathcal{G}}_V (\mu_Z) = 0$.
It remains to see that ${\mathcal{G}}_V (\underline \mu)$ exists.
For a general $\mu$ we will have $$V_{\varepsilon}{\mathcal{G}}_{V_{\varepsilon}} (\mu) \rightharpoonup V \widetilde {\mathcal{G}}_V (\mu) + \lambda_\mu.$$ This new measure $\lambda_\mu$ may be complicated and have a strange support. The expected result is $$\lambda_\mu = 0 \iff \mu_Z = 0.$$ In the case $Z = \{0\}$ it holds that $\lambda_\mu = \mu_Z$, so this result is expected to carry over.
This is equivalent to the natural extension of the results in [@Orsina2018], whose result, in the classical setting, reads $${\mathcal{G}}_V (\mu) \textrm{ is defined } \iff \mu (Z) = 0.$$
Acknowledgements {#acknowledgements .unnumbered}
================
The first author is funded by MTM2017-85449-P (Spain). The second author is partially funded by Project MTM2014-52240-P (Spain). Part of this work was performed while visiting the Univ. Complutense de Madrid.
P. B[é]{}nilan and H. Brezis. . , 3(4):673–770, 2003. .
P. Bénilan, H. Brezis, and M. G. Crandall. . , 2(4):523–555, 1975.
M. Bonforte, A. Figalli, and J. V[á]{}zquez. . , 57(2):1–34, 2018. .
M. Bonforte, Y. Sire, and J. L. V[á]{}zquez. . , 35(12):5725–5767, 2015, . .
M. Bonforte and J. L. V[á]{}zquez. . , 131:363–398, 2016, . .
H. Brezis, M. Marcus, and A. C. Ponce. . , 339(3):169–174, 2004. .
H. Brezis, M. Marcus, and A. C. Ponce. Nonlinear elliptic equations with measures revisited. , 163:55–110, 2007.
C. Bucur and E. Valdinoci. , volume 20 of [*Lecture Notes of the Unione Matematica Italiana*]{}. Springer, \[Cham\]; Unione Matematica Italiana, Bologna, 2016. .
X. Cabré and Y. Sire. Nonlinear equations for fractional [L]{}aplacians, [I]{}: [R]{}egularity, maximum principles, and [H]{}amiltonian estimates. , 31(1):23–53, 2014. .
L. A. Caffarelli and L. Silvestre. An extension problem related to the fractional [L]{}aplacian. , 32(7-9):1245–1260, 2007. .
L. A. Caffarelli and P. R. Stinga. . , 33(3):767–807, 2016, . .
H. Chen and L. V[é]{}ron. . , 257(5):1457–1486, 2014, . .
M. Cozzi. . , 196(2):555–578, 2017, . .
E. [Di Nezza]{}, G. Palatucci, and E. Valdinoci. . , 136(5):521–573, 2012, . .
J. I. D[í]{}az, D. G[ó]{}mez-Castro, J.-M. Rakotoson, and R. Temam. . , 38(2):509–546, 2018, . .
J. I. D[í]{}az, D. G[ó]{}mez-Castro, and J. V[á]{}zquez. . , pages 1–36, 2018, . .
L. C. Evans. . American Mathematical Society, Providence, Rhode Island, 1998.
M. Felsinger, M. Kassmann, and P. Voigt. The [D]{}irichlet problem for nonlocal operators. , 279(3-4):779–809, 2015. .
D. Gilbarg and N. S. Trudinger. . Springer-Verlag, Berlin, 2001.
G. Grubb. . , 268:478–528, 2015. .
K.-Y. Kim and P. Kim. Two-sided estimates for the transition densities of symmetric [M]{}arkov processes dominated by stable-like processes in [$C^{1,\eta}$]{} open sets. , 124(9):3055–3083, 2014. .
T. Kuusi, G. Mingione, and Y. Sire. , volume 337. 2015. .
L. Orsina and A. C. Ponce. . 2018, .
A. C. Ponce. . European Mathematical Society Publishing House, Zurich, 2016.
A. C. Ponce and N. Wilmet. . , 263(6):3581–3610, 2017. .
J. M. Rakotoson. . pages 1–37, 2018, . <http://arxiv.org/abs/1812.04061>.
X. Ros-Oton. . , 60:3–26, 2016.
X. Ros-Oton and J. Serra. . , 101(3):275–302, 2014, . .
H. Triebel. . North-Holland, Amsterdam, 1978.
J. L. Vázquez. . , 95(3-4):181–202, 1983. .
[^1]: Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid. [[email protected]]([email protected])
[^2]: Departamento de Matemáticas, Universidad Autónoma de Madrid. [[email protected]]([email protected])
---
address:
- 'Independent Moscow University, 11 Bolshoj Vlasjevskij pereulok, Moscow 121002 Russia'
- 'Independent Moscow University, 11 Bolshoj Vlasjevskij pereulok, Moscow 121002 Russia'
- 'Dept. of Mathematics and Statistics, University of Massachusetts at Amherst, Amherst MA 01003-4515, USA'
author:
- Michael Finkelberg
- Alexander Kuznetsov
- Ivan Mirković
title: 'The Singular Supports of IC sheaves on Quasimaps’ Spaces are irreducible'
---
Introduction
============
Let $C$ be a smooth projective curve of genus 0. Let $\CB$ be the variety of complete flags in an $n$-dimensional vector space $V$. Given an $(n-1)$-tuple $\alpha\in\BN[I]$ of positive integers one can consider the space $\CQ_\alpha$ of algebraic maps of degree $\alpha$ from $C$ to $\CB$. This space is noncompact. Some remarkable compactifications $\CQ^D_\alpha$ (Quasimaps), $\CQ^L_\alpha$ (Quasiflags) of $\CQ_\alpha$ were constructed by Drinfeld and Laumon respectively. In [@k] it was proved that the natural map $\pi:\ \CQ^L_\alpha\to
\CQ^D_\alpha$ is a small resolution of singularities. The aim of the present note is to study the singular support of the Goresky-MacPherson sheaf $IC_\alpha$ on the Quasimaps’ space $\CQ^D_\alpha$.
Namely, we prove that this singular support $SS(IC_\alpha)$ is irreducible. The proof is based on the [*factorization property*]{} of Quasimaps’ space and on the detailed analysis of Laumon’s resolution $\pi:\ \CQ^L_\alpha\to\CQ^D_\alpha$.
We are grateful to P. Schapira for the illuminating correspondence.
This note is a sequel to [@k] and [@fk]. In fact, the local geometry of $\CQ^D_\alpha$ was the subject of [@k]; the global geometry of $\CQ^D_\alpha$ was the subject of [@fk], while the microlocal geometry of $\CQ^D_\alpha$ is the subject of the present work. We will freely refer the reader to [@k] and [@fk].
Reductions of the main theorem
==============================
Notations
---------
### {#not}
We choose a basis $\{v_1,\ldots,v_n\}$ in $V$. This choice defines a Cartan subgroup $H\subset G=SL(V)=SL_n$ of matrices diagonal with respect to this basis, and a Borel subgroup $B\subset G$ of matrices upper triangular with respect to this basis. We have $\CB=G/B$.
Let $I=\{1,\ldots,n-1\}$ be the set of simple coroots of $G=SL_n$. Let $R^+$ denote the set of positive coroots, and let $2\rho=\sum_{\theta\in R^+}\theta$. For $\alpha=\sum a_ii\in\BN[I]$ we set $|\alpha|:=\sum a_i$. Let $X$ be the lattice of weights of $G,H$. Let $X^+\subset X$ be the set of dominant (with respect to $B$) weights. For $\lambda\in X^+$ let $V_\lambda$ denote the irreducible representation of $G$ with the highest weight $\lambda$.
Recall the notations of [@k] concerning Kostant’s partition function. For $\gamma\in\BN[I]$ a [*Kostant partition*]{} of $\gamma$ is a decomposition of $\gamma$ into a sum of positive coroots with multiplicities. The set of Kostant partitions of $\gamma$ is denoted by $\fK(\gamma)$.
There is a natural bijection between the set of pairs $1\leq q\leq p\leq n-1$ and $R^+$, namely, $(p,q)$ corresponds to $i_q+i_{q+1}+\ldots+i_p$. Thus a Kostant partition $\kappa$ is given by a collection of nonnegative integers $(\kappa_{p,q}), 1\leq q\leq p\leq n-1$. Following [*loc. cit.*]{} (9) we define a collection $\mu(\kappa)$ as follows: $\mu_{p,q}=\sum_{r\leq q\leq p\leq s}\kappa_{s,r}$.
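For instance (a toy example added for orientation, not taken from [*loc. cit.*]{}), let $n=3$, so that $R^+=\{i_1,\,i_2,\,i_1+i_2\}$, and take $\gamma=i_1+i_2$. Then $\fK(\gamma)$ consists of two Kostant partitions: $\kappa'$ with $\kappa'_{2,1}=1$ (the single coroot $i_1+i_2$) and $\kappa''$ with $\kappa''_{1,1}=\kappa''_{2,2}=1$ (the coroots $i_1$ and $i_2$). The corresponding collections are $\mu(\kappa')_{1,1}=\mu(\kappa')_{2,2}=\mu(\kappa')_{2,1}=1$, while $\mu(\kappa'')_{1,1}=\mu(\kappa'')_{2,2}=1$ and $\mu(\kappa'')_{2,1}=0$.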
Recall that for $\gamma\in\BN[I]$ we denote by $\Gamma(\gamma)$ the set of all partitions of $\gamma$, i.e. multisubsets (subsets with multiplicities) $\Gamma=\lbr \gamma_1,\ldots,\gamma_k\rbr $ of $\BN[I]$ with $\sum_{r=1}^k\gamma_r=\gamma,\ \gamma_r>0$ (see e.g. [@k], 1.3).
The configuration space of colored effective divisors of multidegree $\gamma$ (the set of colors is $I$) is denoted by $C^\gamma$. The diagonal stratification $C^\gamma=\sqcup_{\Gamma\in\Gamma(\gamma)}
C^\gamma_\Gamma$ was introduced e.g. in [*loc. cit.*]{} Recall that for $\Gamma=\lbr \gamma_1,\ldots,\gamma_k\rbr $ we have $\dim C^\gamma_\Gamma=k$.
### {#section-1}
For the definition of Laumon’s Quasiflags’ space $\CQ^L_\alpha$ the reader may consult [@la] 4.2, or [@k] 1.4. It is the space of complete flags of locally free subsheaves $$0\subset E_1\subset\dots\subset E_{n-1}\subset V\otimes\CO_C=:\CV$$ such that rank$(E_k)=k$, and $\deg(E_k)=-a_k$.
It is known to be a smooth projective variety of dimension $2|\alpha|+\dim\CB$.
### {#section-2}
For the definition of Drinfeld’s Quasimaps’ space $\CQ^D_\alpha$ the reader may consult [@k] 1.2. It is the space of collections of invertible subsheaves $\CL_\lambda\subset V_\lambda\otimes\CO_C$ for each dominant weight $\lambda\in X^+$ satisfying Plücker relations, and such that $\deg\CL_\lambda=-\langle\lambda,\alpha\rangle$.
It is known to be a (singular, in general) projective variety of dimension $2|\alpha|+\dim\CB$.
The open subspace $\CQ_\alpha\subset\CQ^D_\alpha$ of genuine maps is formed by the collections of line subbundles (as opposed to invertible subsheaves) $\CL_\lambda\subset V_\lambda\otimes\CO_C$. In fact, it is an open stratum of the stratification by the [*type of degeneration*]{} of $\CQ^D_\alpha$ introduced in [@k] 1.3: $$\CQ^D_\alpha=\bigsqcup_{\beta\leq\alpha}^{\Gamma\in\Gamma(\alpha-\beta)}
\fD^{\beta,\Gamma}_\alpha$$ We have $\fD^{\alpha,\emptyset}_\alpha=\CQ_\alpha$, and $\fD^{\beta,\Gamma}_\alpha=
\CQ_\beta\times C^{\alpha-\beta}_\Gamma$ (see [*loc. cit.*]{} 1.3.5).
The space $\CQ^D_\alpha$ is naturally embedded into the product of projective spaces $$\BP_\alpha=\prod_{1\leq p\leq n-1}
\BP(\Hom(\CO_C(-\langle\omega_p,\alpha\rangle),
V_{\omega_p}\otimes\CO_C))$$ and is closed in it (see [*loc. cit.*]{} 1.2.5). Here $\omega_p$ stands for the fundamental weight dual to the coroot $i_p$. The fundamental representation $V_{\omega_p}$ equals $\Lambda^pV$.
{#main}
We will study the characteristic cycle of the Goresky-MacPherson perverse sheaf (or the corresponding regular holonomic $D$-module) $IC_\alpha$ on $\CQ^D_\alpha$. As $\CQ^D_\alpha$ is embedded into the smooth space $\BP_\alpha$, we will view this characteristic cycle $SS(IC_\alpha)$ as a Lagrangian cycle in the cotangent bundle $T^*\BP_\alpha$. [*A priori*]{} we have the following equality: $$SS(IC_\alpha)=\overline{T^*_{\CQ_\alpha}\BP_\alpha}+
\sum_{\beta<\alpha}^{\Gamma\in\Gamma(\alpha-\beta)}m^{\beta,\Gamma}_\alpha
\overline{T^*_{\fD^{\beta,\Gamma}_\alpha}\BP_\alpha},$$ closures of conormal bundles with multiplicities.
[**Theorem.**]{} $SS(IC_\alpha)=\overline{T^*_{\CQ_\alpha}\BP_\alpha}$ is irreducible.
In the following subsections we will reduce the Theorem to a statement about geometry of Laumon’s resolution.
{#section-3}
We fix a coordinate $z$ on $C$ identifying it with the standard $\BP^1$. We denote by $\CQ^\infty_\alpha\subset\CQ^D_\alpha$ the open subspace formed by quasimaps which are genuine maps in a neighbourhood of the point $\infty\in C$. In other words, $(\CL_\lambda\subset V_\lambda\otimes\CO_C)_{\lambda\in X^+}
\in\CQ^\infty_\alpha$ iff for each $\lambda$ the invertible subsheaf $\CL_\lambda\subset V_\lambda\otimes\CO_C$ is a line subbundle in some neighbourhood of $\infty\in C$.
Evidently, $\CQ^\infty_\alpha$ intersects all the strata $\fD^{\beta,\Gamma}_\alpha$. Thus it suffices to prove the irreducibility of the singular support of the Goresky-MacPherson sheaf of $\CQ^\infty_\alpha$.
There is a well-defined map of evaluation at $\infty\in C$: $$\Upsilon_\alpha:\ \CQ^\infty_\alpha\lra\CB$$ It is compatible with the stratification of $\CQ^\infty_\alpha$ and realizes $\CQ^\infty_\alpha$ as a (stratified) fibre bundle over $\CB$. Indeed, $G$ acts naturally both on $\CQ^\infty_\alpha$ (preserving the stratification) and on $\CB$; the map $\Upsilon_\alpha$ is equivariant, and $\CB$ is homogeneous. We denote the fiber $\Upsilon_\alpha^{-1}(B)$ over the point $B\in\CB$ by $\CZ_\alpha$.
It inherits the stratification $$\CZ_\alpha=
\bigsqcup_{\beta\leq\alpha}^{\Gamma\in\Gamma(\alpha-\beta)}
\CZ\fD^{\beta,\Gamma}_\alpha$$ from $\CQ^\infty_\alpha$ and $\CQ^D_\alpha$. It is just the transversal intersection of the fiber $\Upsilon_\alpha^{-1}(B)$ with the stratification of $\CQ^\infty_\alpha$. As in [@k] 1.3.5 we have $\CZ\fD^{\beta,\Gamma}_\alpha\iso\CZ_\beta\times
(C-\infty)^{\alpha-\beta}_\Gamma$.
Hence it suffices to prove the irreducibility of the singular support $SS(IC(\CZ_\alpha))$ of Goresky-MacPherson sheaf $IC(\CZ_\alpha)$ of $\CZ_\alpha$.
Factorization
-------------
The Theorem 6.3 of [@fm] admits the following immediate Corollary. Let $(\phi_\beta,\gamma_1x_1,\ldots,\gamma_kx_k)=\phi_\alpha\in
\CZ_\beta\times(C-\infty)^{\alpha-\beta}_\Gamma=\CZ\fD^{\beta,\Gamma}_\alpha
\subset\CZ_\alpha$. Consider also the points $(\phi_r,\gamma_rx_r)=
\phi_{\gamma_r}\in\CZ_0\times(C-\infty)^{\gamma_r}_{\{\{\gamma_r\}\}}=
\CZ\fD^{0,\{\{\gamma_r\}\}}_{\gamma_r}\subset\CZ_{\gamma_r},\ 1\leq r\leq k$.
[**Proposition.**]{} There is an analytic open neighbourhood $U_\alpha$ (resp. $U_\beta$, resp. $U_{\gamma_r},\ 1\leq r\leq k$) of $\phi_\alpha$ (resp. $\phi_\beta$, resp. $\phi_{\gamma_r},\ 1\leq r\leq k$) in $\CZ_\alpha$ (resp. $\CZ_\beta$, resp. $\CZ_{\gamma_r},\ 1\leq r\leq k$) such that $$U_\alpha\iso U_\beta\times\prod_{1\leq r\leq k}U_{\gamma_r}$$ $\Box$
Recall the nonnegative integers $m^{\beta,\Gamma}_\alpha$ introduced in \[main\]. The Proposition implies the following Corollary.
[**Corollary.**]{} $m^{\beta,\Gamma}_\alpha=\prod_{1\leq r\leq k}
m^{0,\{\{\gamma_r\}\}}_{\gamma_r}$. $\Box$
Thus to prove that all the multiplicities $m^{\beta,\Gamma}_\alpha$ vanish, it suffices to check the vanishing of $m^{0,\{\{\gamma\}\}}_\gamma$ for arbitrary $\gamma>0$.
{#section-4}
It remains to prove that the conormal bundle $T^*_{\fD^{0,\{\{\gamma\}\}}_\gamma}\BP_\gamma$ to the closed stratum of $\CQ^D_\gamma$ enters the singular support $SS(IC_\gamma)$ with multiplicity 0. To this end we choose a point $(B,\gamma\cdot 0)=\phi\in\CB\times C=
\CQ_0\times C^\gamma_{\{\{\gamma\}\}}=\fD_\gamma^{0,\{\{\gamma\}\}}
\subset\CQ^D_\gamma\subset\BP_\gamma$. We also choose a sufficiently generic meromorphic function $f$ on $\BP_\gamma$ regular around $\phi$ and vanishing on $\fD_\gamma^{0,\{\{\gamma\}\}}$. According to the Proposition 8.6.4 of [@ks], the multiplicity in question is 0 iff $\Phi_f(IC_\gamma)_\phi=0$, i.e. the stalk of the vanishing cycles sheaf at the point $\phi$ vanishes.
To compute the stalk of the vanishing cycles sheaf we use the following argument, borrowed from [@bfl] §1. As $\pi:\ \CQ^L_\gamma\lra\CQ^D_\gamma$ is a small resolution of singularities, up to a shift, $IC_\gamma=\pi_*\uQ$. By the proper base change, $\Phi_f\pi_*\uQ=\pi_*\Phi_{f\circ\pi}\uQ$. So it suffices to check that $\Phi_{f\circ\pi}\uQ|_{\pi^{-1}(\phi)}=0$.
Let us denote the differential of the function $f$ at the point $\phi$ by $\xi$ so that $(\phi,\xi)\in T^*_{\fD_\gamma^{0,\{\{\gamma\}\}}}\BP_\gamma$. Then the support of $\Phi_{f\circ\pi}\uQ|_{\pi^{-1}(\phi)}$ is [*a priori*]{} contained in the [*microlocal fiber*]{} over $(\phi,\xi)$ which we define presently.
### Definition
Let $\varpi:\ A\to B$ be a map of smooth varieties. For $a\in A$ let $d_a^*\varpi:\ T^*_{\varpi(a)}B\lra T^*_aA$ denote the codifferential, and let $(b,\eta)$ be a point in $T^*B$. Then the [*microlocal fiber*]{} of $\varpi$ over $(b,\eta)$ is defined to be the set of points $a\in \varpi^{-1}(b)$ such that $d^*_a\varpi(\eta)=0$.
### {#prop}
Thus we have reduced the Theorem \[main\] to the following Proposition.
[**Proposition.**]{} For a sufficiently generic $\xi$ such that $(\phi,\xi)
\in T^*_{\fD_\gamma^{0,\{\{\gamma\}\}}}\BP_\gamma$, the microlocal fiber of Laumon’s resolution $\pi$ over $(\phi,\xi)$ is empty. Equivalently, the cone $\cup_{E_\bullet\in\pi^{-1}(\phi)}\Ker (d^*_{E_\bullet}\pi)$ is a proper subvariety of the fiber of $T^*_{\fD_\gamma^{0,\{\{\gamma\}\}}}\BP_\gamma$ at $\phi$.
Piecification of a simple fiber
-------------------------------
The fiber $\pi^{-1}(\phi)$ was called the [*simple fiber*]{} in [@k] §2. It was proved in [*loc. cit.*]{} 2.3.3 that $\pi^{-1}(\phi)$ is a disjoint union of (pseudo)affine spaces $\fS(\mu(\kappa))$ where $\kappa$ runs through the set $\fK(\gamma)$ of Kostant partitions of $\gamma$ (for the notation $\mu(\kappa)$ see \[not\] or [@k] (9)). Another way to parametrize these pseudoaffine pieces was introduced in [@fk] 2.11. Let us recall it here.
We define nonnegative integers $c_p, 1\leq p\leq n-1$, so that $\gamma=\sum_{p=1}^{n-1}c_pi_p$.
### Definition
$\CalD(\gamma)$ is the set of collections of nonnegative integers $(d_{p,q})_{1\leq q\leq p\leq n-1}$ such that
a\) For any $1\leq q\leq p\leq r\leq n-1$ we have $d_{r,q}\leq d_{p,q}$;
b\) For any $1\leq p\leq n-1$ we have $\sum_{q=1}^pd_{p,q}=c_p$.
### Lemma
The correspondence $\kappa=(\kappa_{p,q})_{1\leq q\leq p
\leq n-1}\mapsto(d_{p,q}:=\sum_{r=p}^{n-1}\kappa_{r,q})_{1\leq q\leq p
\leq n-1}$ defines a bijection between $\fK(\gamma)$ and $\CalD(\gamma)$. $\Box$
### {#section-5}
Using the above Lemma we can rewrite the parametrization of the pseudoaffine pieces of the simple fiber as follows: $$\pi^{-1}(\phi)=\bigsqcup_{\fd\in\CalD(\gamma)}\fS(\fd)$$ In these terms the dimension formula of [@k] 2.3.3 reads as follows: for $\fd=(d_{p,q})_{1\leq q\leq p\leq n-1}$ we have $\dim\fS(\fd)=
\sum_{1\leq q<p\leq n-1}d_{p,q}$.
Note also that $\sum_{1\leq q\leq p\leq n-1}d_{p,q}=
\sum_{1\leq p\leq n-1}c_p=|\gamma|$.
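### Example

The following small example is added here only as an illustration and is not used in the sequel. Take $n=3$ and $\gamma=2i_1+2i_2$, so that $c_1=c_2=2$. The conditions a), b) force $d_{1,1}=2$ and $d_{2,1}+d_{2,2}=2$ with $d_{2,1}\leq d_{1,1}$, hence $$\CalD(\gamma)=\{(d_{1,1},d_{2,1},d_{2,2})\}=\{(2,0,2),\ (2,1,1),\ (2,2,0)\},$$ matching the three Kostant partitions of $\gamma$ (here $\kappa_{2,1}=d_{2,1}$, $\kappa_{1,1}=2-d_{2,1}$, $\kappa_{2,2}=d_{2,2}$). The corresponding pieces $\fS(\fd)$ have dimensions $\sum_{1\leq q<p\leq n-1}d_{p,q}=d_{2,1}=0,1,2$. Note also that in general the equality $\sum_{1\leq q\leq p\leq n-1}d_{p,q}=|\gamma|$ gives $$|\gamma|-\sum_{1\leq p\leq n-1}d_{p,p}=\sum_{1\leq q<p\leq n-1}d_{p,q},$$ the combination appearing in the dimension counts below.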
Proposition {#red}
-----------
For arbitrary $\fd=(d_{p,q})_{1\leq q\leq p\leq n-1}\in\CalD(\gamma)$ and arbitrary quasiflag $E_\bullet\in\fS(\fd)\subset\pi^{-1}(\phi)$ we have $\dim\Ker (d_{E_\bullet}\pi)<\sum_{1\leq p\leq n-1}d_{p,p}
+\sum_{1\leq q\leq p\leq n-1}d_{p,q}-1$.
This Proposition implies the Proposition \[prop\] straightforwardly. In effect, $\codim\Ker (d_{E_\bullet}\pi)=\dim\CQ^L_\gamma-\dim\Ker
(d_{E_\bullet}\pi)>2|\gamma|+\dim\CB-\sum_{1\leq p\leq n-1}d_{p,p}
-\sum_{1\leq q\leq p\leq n-1}d_{p,q}+1=
\dim\CB+1+\sum_{1\leq q<p\leq n-1}d_{p,q}$. Hence the subspace $\Ker (d^*_{E_\bullet}\pi)\subset T^*_\phi\BP_\gamma$ has codimension greater than $\dim\CB+1+\sum_{1\leq q<p\leq n-1}d_{p,q}$. Recall that $\dim\fD_\gamma^{0,\{\{\gamma\}\}}=\dim\CB+1$. Hence the codimension of $\Ker (d^*_{E_\bullet}\pi)\cap
T^*_{\fD_\gamma^{0,\{\{\gamma\}\}}}\BP_\gamma$ in the fiber of $T^*_{\fD_\gamma^{0,\{\{\gamma\}\}}}\BP_\gamma$ at $\phi$ is greater than $\sum_{1\leq q<p\leq n-1}d_{p,q}=\dim\fS(\fd)$. Hence the cone $\cup_{E_\bullet\in\fS(\fd)}\Ker (d^*_{E_\bullet}\pi)$ is a proper subvariety of the fiber of $T^*_{\fD_\gamma^{0,\{\{\gamma\}\}}}\BP_\gamma$ at $\phi$.
The union of these proper subvarieties over $\fd\in\CalD(\gamma)$ is again a proper subvariety of the fiber of $T^*_{\fD_\gamma^{0,\{\{\gamma\}\}}}\BP_\gamma$ at $\phi$ which concludes the proof of the Proposition \[prop\].
Fixed points
------------
It remains to prove the Proposition \[red\]. To this end recall that the Cartan group $H$ acts on $V$ and hence on $\CQ^L_\alpha$. The group $\BC^*$ of dilations of $C={\Bbb P}^1$ preserving $0$ and $\infty$ also acts on $\CQ^L_\alpha$ commuting with the action of $H$. Hence we obtain the action of a torus $\BT:=H\times\BC^*$ on $\CQ^L_\alpha$.
It preserves the simple fiber $\pi^{-1}(\phi)$ and its pseudoaffine pieces $\fS(\fd),\ \fd\in\CalD(\gamma)$, for evident reasons. It was proved in [@fk] 2.12 that each piece $\fS(\fd),\ \fd=(d_{p,q})_{1\leq q\leq p\leq n-1}$ contains exactly one $\BT$-fixed point $\delta(\fd)=(E_1,\ldots,E_{n-1})$. Here $${\arraycolsep=1pt
\begin{array}{llrlcrlcccrlc}
E_1 & = E_{1,1} \\
E_2 & = E_{2,1} &\opl& E_{2,2} \\
\ \vdots && \vdots &&& \vdots \\
E_{n-1} & = E_{n-1,1} &\opl& E_{n-1,2} &\opl& \dots &\opl& E_{n-1,n-1} \\
\end{array}
}$$ and $E_{p,q}=\CO(-d_{p,q})\subset\CO v_q\subset\CV=V\otimes\CO_C$ with quotient sheaf $\dfrac\CO{\CO(-d_{p,q})}$ concentrated at $0\in C$.
### {#key}
Now the $\BT$-action contracts $\fS(\fd)$ to $\delta(\fd)$. Since the map $\pi$ is $\BT$-equivariant, and the dimension of $\Ker (d_{E_\bullet}\pi)$ is upper semicontinuous (it can only jump up under specialization), the Proposition \[red\] follows from the next one.
[**Key Proposition.**]{} For arbitrary $\fd=(d_{p,q})_{1\leq q\leq p\leq n-1}\in\CalD(\gamma)$ ($\gamma\ne0$) we have $\dim\Ker (d_{\delta(\fd)}\pi)<\sum_{1\leq p\leq n-1}d_{p,p}
+\sum_{1\leq q\leq p\leq n-1}d_{p,q}-1$.
The proof will be given in the next section.
### Remark
In general, the pieces $\fS(\fd)$ of the simple fiber are not equisingular, i.e. $\dim\Ker (d_{E_\bullet}\pi)$ is not constant along a piece. The simplest example occurs for $G=SL_3,\ \gamma=2i_1+2i_2$. Then the simple fiber is a singular 2-dimensional quadric. Its singular point is the fixed point of the 1-dimensional piece $\fS(\fd)$ where $d_{1,1}=2,d_{2,1}=d_{2,2}=1$. At this point we have $\dim\Ker (d_{\delta(\fd)}\pi)=3$ while at the other points in this piece we have $\dim\Ker (d_{E_\bullet}\pi)=2$.
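For illustration (this is only a restatement of the description of $\delta(\fd)$ given above), the fixed point of the 1-dimensional piece in this example, i.e. of $\fd$ with $d_{1,1}=2,\ d_{2,1}=d_{2,2}=1$, is the quasiflag $E_1=E_{1,1}=\CO(-2)\subset\CO v_1$, $E_2=E_{2,1}\opl E_{2,2}$ with $E_{2,1}=\CO(-1)\subset\CO v_1$ and $E_{2,2}=\CO(-1)\subset\CO v_2$.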
The proof of the Key Proposition
================================
Tangent spaces
--------------
Let $\Omega$ be the following quiver: $\Omega=1\lra 2\lra\ldots\lra n-1$. Thus the set of vertices coincides with $I$. A quasiflag $(E_1\hra E_2\hra
\ldots\hra E_{n-1}\subset\CV)\in\CQ^L_\gamma$ may be viewed as a representation of $\Omega$ in the category of coherent sheaves on $C$. If we denote the quotient sheaf $\CV/E_p$ by $Q_p,\ 1\leq p\leq n-1$, we have another representation of $\Omega$ in coherent sheaves on $C$, namely, $$Q_\bullet:=(Q_1\twoheadrightarrow Q_2\twoheadrightarrow
\ldots\twoheadrightarrow Q_{n-1})$$
### Exercise
$T_{E_\bullet}\CQ^L_\gamma=\Hom_\Omega(E_\bullet,
Q_\bullet)$ where $\Hom_\Omega(?,?)$ stands for the morphisms in the category of representations of $\Omega$ in coherent sheaves on $C$.
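As a consistency check, added only for illustration: for $n=2$ and a quasiflag which is a genuine map, $E_1=\CO(-d)\subset\CV=\CO^{\opl2}$ with $d=|\gamma|$, the quotient $Q_1=\CV/E_1$ is a line bundle of degree $d$, so that $\Hom_\Omega(E_\bullet,Q_\bullet)=\Hom(\CO(-d),\CO(d))$ is $(2d+1)$-dimensional, in agreement with the dimension $2|\gamma|+\dim\CB$ of $\CQ^L_\gamma$ used below.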
### {#section-6}
Consider a point $\CL_\bullet=
(\CL_1,\ldots,\CL_{n-1})\in\BP_\gamma$. Here $\CL_p\subset V_{\omega_p}\otimes\CO_C$ is an invertible subsheaf, the image of morphism $\CO_C(-\langle\omega_p,\gamma\rangle)\hra V_{\omega_p}
\otimes\CO_C$.
[*Exercise.*]{} $T_{\CL_\bullet}\BP_\gamma=\prod_{p=1}^{n-1}\Hom(\CL_p,
V_{\omega_p}\otimes\CO_C/\CL_p)$.
### {#section-7}
Recall that for $E_\bullet\in\CQ^L_\gamma$ we have $\pi(E_\bullet)=\CL_\bullet\in\BP_\gamma$ where $\CL_p=\Lambda^pE_p$ for $1\leq p\leq n-1$.
[*Exercise.*]{} For $h_\bullet=(h_1,\ldots,h_{n-1})\in
T_{E_\bullet}\CQ^L_\gamma$ we have $d_{E_\bullet}\pi(h_\bullet)=
(\Lambda^1h_1,\Lambda^2h_2,\ldots,\Lambda^{n-1}h_{n-1})\in
T_{\CL_\bullet}\BP_\gamma$.
{#section-8}
From now on we fix $\gamma>0,\ \fd\in\CalD(\gamma),\ \delta(\fd)=:E_\bullet$. To unburden the notations we will denote the tangent space $T_{E_\bullet}\CQ^L_\gamma$ by $T$. Since $\QL\gamma$ is a smooth $(2|\gamma|+\dim\CB)$-dimensional variety it suffices to find a subspace $N\subset T$ of dimension $$2|\gamma|+\dim\CB-\sum_{1\le p\le n-1}d_{p,p}-\sum_{1\le q\le p\le n-1}d_{p,q}+1=
\sum_{1\le q<p\le n-1}(d_{p,q}+1)+1$$ such that $d_{E_\bullet}\pi|_{N}$ is injective.
{#section-9}
Let $N_0=\opll_{n-1\ge p>q\ge 1}\Hom(\CO(-d_{p,q}),\CO)$. We have $\dim N_0=\sum_{n-1\ge p>q\ge 1}(d_{p,q}+1)$.
Recall that we have canonically $T=\Hom_\Om(E_\bullet,Q_\bullet)$, where $$Q_p=\CV/E_p=\left(\opll_{q=1}^p\left(\dfrac\CO{\CO(-d_{p,q})}\right)v_q\right)
\opl\left(\opll_{q=p+1}^n\CO v_q\right).$$
{#section-10}
Let us define a map $\nu_0:N_0\to T$ assigning to an element $(f_{p,q})\in N_0$ a morphism $\nu_0(f_{p,q}):=
F\in\Hom_\bullet(E_\bullet,Q_\bullet)$ of graded coherent sheaves, where $F|_{E_{p,q}}=\opll_{r=p+1}^{n-1} F_{p,q}^r$, and $$F_{p,q}^r:E_{p,q}\to\CO v_r\subset Q_p\quad
\text{is defined as the composition}\quad
E_{p,q} \subset E_{r,q}=\CO(-d_{r,q}) \xrightarrow{\ f_{r,q}\ } \CO v_r$$
{#section-11}
[**Lemma.**]{} The map $F:E_\bullet\to Q_\bullet$ is a morphism of representations of the quiver $\Om$.
We need to check the commutativity of the following diagram $$\begin{CD}
E_p @>>> E_{p'} \\
@VFVV @VFVV \\
Q_p @>>> Q_{p'}
\end{CD}$$ Since $E_p$ and $Q_{p'}$ are canonically decomposed into the direct sum it suffices to note that for any $q\le p\le p'<r$ the following diagram $$\begin{CD}
E_{p,q} @>>> E_{p',q} @>>> E_{r,q} \\
@VF_{p,q}^rVV @VF_{p',q}^rVV @Vf_{r,q}VV \\
\CO v_r @= \CO v_r @= \CO v_r
\end{CD}$$ commutes and for any $q\le p<r\le p'$ the following diagram $$\begin{CD}
E_{p,q} @>>> E_{r,q} @>>> E_{p',q} \\
@VF_{p,q}^rVV @Vf_{r,q}VV @V0VV \\
\CO v_r @= \CO v_r @>>>
\left(\dfrac\CO{\CO(-d_{r,q})}\right)v_r
\end{CD}$$ commutes as well.
{#section-12}
Let $N_1={\Bbb C}$. Let $p_0=\min\{1\le p\le n-1\ |\ d_{p,p}>0\}$ and pick a non-zero element $\ff\in\Hom(\CO(-d_{p_0,p_0}),\frac\CO{\CO(-d_{p_0,p_0})})$. Define the map $\nu_1:N_1\to T$ by assigning to $1\in N_1$ the element $\fF\in\Hom_\Om(E_\bullet,Q_\bullet)$ defined on $E_{p_0,p_0}$ as the composition $$E_{p_0,p_0}=\CO(-d_{p_0,p_0}) \xrightarrow{\ \ff\ } \frac\CO{\CO(-d_{p_0,p_0})}v_{p_0}\subset Q_{p_0}$$ and with all other components equal to zero.
{#section-13}
Let $\CM(r,d;\CV)$ denote the space of rank $r$ and degree $d$ subsheaves in $\CV$.
### {#lem}
Let $E\subset\CV$ be a rank $k$ and degree $d$ subsheaf in the vector bundle $\CV$. Let $\CV/E=\CT\opl \CF$ be a decomposition of the quotient sheaf into the sum of the torsion $\CT$ and a locally free sheaf $\CF$. Consider the map $\det:\CM(k,d;\CV)\to\CM(1,d;\Lambda^k\CV)$ sending $E$ to $\Lambda^kE$. Then the restriction of its differential $d_E\det:T_E\CM(k,d;\CV)=\Hom(E,\CV/E)\to
\Hom(\Lambda^kE,\Lambda^k\CV/\Lambda^kE)=T_{\Lambda^kE}\CM(1,d;\Lambda^k\CV)$ to the subspace $\Hom(E,\CF)\subset\Hom(E,\CV/E)$ factors as $\Hom(E,\CF)\cong\Hom(\Lambda^kE,\Lambda^{k-1}E\ot \CF)\subset
\Hom(\Lambda^kE,\Lambda^k\CV/\Lambda^kE)$. Therefore it is injective.
### {#lem1}
Let $E=\CO^{\opl(k-1)}\opl\CO(-d)$ be a subsheaf in $\CV=\CO^{\opl n}$. Then the restriction of the differential $d_E\det$ to the subspace $\Hom(E,\CT)\subset\Hom(E,\CV/E)$ is injective.
This immediately follows from the following fact. Let $\TE=\CO^{\opl k}\subset\CV$ be the normalization of $E$ in $\CV$, that is, the maximal vector subbundle $\TE\subset\CV$ such that $\TE/E$ is torsion. Then $\CT=\TE/E\cong\Lambda^k\TE/\Lambda^kE\subset\Lambda^k\CV/\Lambda^kE$.
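For example (illustration only), for $k=2$, $\TE=\CO^{\opl2}$ and $E=\CO\opl\CO(-d)$ one has $\Lambda^2\TE=\CO$ and $\Lambda^2E=\CO(-d)$, and both $\TE/E$ and $\Lambda^2\TE/\Lambda^2E$ are identified with $\CO/\CO(-d)$, a torsion sheaf of length $d$.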
### {#rem}
Clearly, the subsheaves $\Lambda^{k-1}E\ot \CF\subset\Lambda^k\CV/\Lambda^kE$ and $\CT\subset\Lambda^k\CV/\Lambda^kE$ do not intersect.
### {#section-14}
It follows from \[lem\], \[lem1\] and \[rem\] that the composition $d_{E_\bullet}\pi\circ(\nu_0\opl\nu_1):N_0\opl N_1\to T_{\pi(E_\bullet)}\BP_\gamma$ is injective, hence $N:=(\nu_0\opl\nu_1)(N_0\opl N_1)\subset T_{E_\bullet}\QL\gamma$ enjoys the desired property. Namely, $d_{E_\bullet}\pi|_N$ is injective, and $\dim N=\sum_{1\leq q<p\leq n-1}(d_{p,q}+1)+1$.
This completes the proof of the Key Proposition \[key\] along with the Main Theorem \[main\]. $\Box$
Bressler P., Finkelberg M., Lunts V., Vanishing cycles on Grassmannians, [*Duke Mathematical Journal*]{}, [**61**]{}, 3 (1990), pp. 763-777.
Finkelberg M., Kuznetsov A., Global Intersection Cohomology of Quasimaps’ spaces, Preprint alg-geom/9702010, pp. 1-20, to appear in [*IMRN*]{}.
Finkelberg M., Mirković I., Semiinfinite flags. I. Transversal slices, to appear.
Kashiwara M., Schapira P., Sheaves on Manifolds, [*Grundlehren der mathematischen Wissenschaften*]{}, [**292**]{} (1994).
Kuznetsov A., Laumon’s resolution of Drinfeld’s compactification is small, Preprint alg-geom/9610019, pp. 1-15, to appear in [*MRL*]{}.
Laumon G., Faisceaux Automorphes Liés aux Séries d’Eisenstein, In: Automorphic Forms, Shimura Varietes, and $L$-functions, [*Perspectives in Mathematics*]{}, [**10**]{}, Academic Press, Boston, MA (1990), pp. 227–281.
[^1]: M.F. and A.K. were partially supported by CRDF grant RM1-265. M.F. was partially supported by INTAS-94-4720. I.M. was partially supported by NSF
---
abstract: 'IC 310 has recently been identified as a gamma-ray emitter based on observations at GeV energies with Fermi-LAT and at very high energies (VHE, $E > 100$ GeV) with the MAGIC telescopes. Despite IC 310 having been classified as a radio galaxy with the jet observed at an angle $> 10$ degrees, it exhibits a mixture of multiwavelength properties of a radio galaxy and a blazar, possibly making it a transitional object. On the night of 12/13$^\mathrm{th}$ of November 2012 the MAGIC telescopes observed a series of violent outbursts from the direction of IC 310 with flux-doubling time scales faster than 5 min and a peculiar spectrum spreading over 2 orders of magnitude. Such fast variability constrains the size of the emission region to be smaller than 20% of the gravitational radius of its central black hole, challenging the shock-acceleration models commonly used to explain the gamma-ray radiation from active galaxies. Here we show that this emission can be associated with pulsar-like particle acceleration by the electric field across a magnetospheric gap at the base of the jet.'
author:
- 'J. Sitarek'
- 'D. Eisenacher Glawion, K. Mannheim'
- 'P. Colin for the MAGIC Collaboration'
- 'M. Kadler'
- 'R. Schultz, F. Krauß'
- 'E. Ros'
- 'U. Bach'
- 'J. Wilms'
title: 'Insights into the particle acceleration of a peculiar gamma-ray radio galaxy IC 310'
---
INTRODUCTION
============
The nearby lenticular (S0, $z=0.0189$) galaxy IC310 located in the Perseus cluster exhibits an active galactic nucleus (AGN). This object has been detected at high energies (above 30GeV) with *Fermi*/LAT [@nsv10] as well as at TeV energies [@al10; @al13]. The jet of IC310, extending outward from the center of the cluster, led to the early classification of this object as a head-tail radio galaxy [@ryle68; @sijbring; @miley80]. However, using the Very-Long-Baseline Interferometry (VLBI) technique, a parsec-scale one-sided jet was found to follow the large-scale jet within about $10^\circ$ [@kadler12]. The alignment of the jet at different scales, without any hints of bending, put the above classification in doubt. Instead, the inner jet appears to be blazar-like, with a missing counter-jet due to relativistic boosting of the emission. Further indications for transitional behavior between a radio galaxy and a blazar were found in IC310 in various energy ranges [@rector]. The mass, $M$, of the black hole of IC310 can be inferred from its relation with the velocity dispersion, $\sigma$, of the host galaxy [@gultekin2009; @al14], namely $M=(3^{+4}_{-2}) \times10^{8}M_\odot$.
MAGIC (Major Atmospheric Gamma Imaging Cherenkov) is a system of two 17-m diameter Imaging Atmospheric Cherenkov telescopes located on La Palma, Canary Islands. It allows observations of gamma-ray sources with energies above 50GeV. During the observations of the Perseus cluster performed at the end of 2012, the MAGIC telescopes revealed an extreme gamma-ray flare from IC310 on the night of 12/13$^\mathrm{th}$ of November [@al14]. In addition, the source was observed in the radio band by the European VLBI Network (EVN) during October/November 2012.
In Section \[sec:results\] we report the data analysis and results of the MAGIC observations during the flare and the radio observations. In Section \[sec:interp\] we discuss possible interpretation of the ultrafast variability of the gamma-ray emission observed from IC310.
RESULTS {#sec:results}
=======
MAGIC
-----
The MAGIC telescopes observed the Perseus cluster on the night of 12/13$^\mathrm{th}$ of November for 3.7h. The observations consisted of 4 pointings, two of them with a standard offset of 0.4$^{\circ}$ with respect to IC310 and the remaining ones at a distance of $0.94^{\circ}$ from the object. The signal extraction and calibration of the data, the image parametrization, the direction and energy reconstruction, as well as the gamma-hadron separation were performed with the standard analysis software MARS, as described in [@zanin13]. On the night of the flare, a strong signal of 507 gamma-like events above 300GeV was observed in the region around IC310, in excess of an estimated background of 47 events. Due to the still limited event statistics and the very rapid variability, the classical approach for the calculation of light curves in gamma-ray astronomy, based on time bins of fixed width, is not optimal in this case. Instead, we used a method similar to the one commonly applied to X-ray observatory data for the computation of energy spectra. We first identify all periods in the data during which the telescopes were not operational (in particular the $\lesssim 1$min gaps every 20min when the telescope is slewing and reconfiguring for the next data run). Afterwards, we bin the remaining time periods based on a fixed number (in this case 9) of ON events per bin. We estimate the number of background events in each time bin from four off-source regions at the same distance from the camera center. Using toy MC simulations we validated that this method limits the bias in the flux value and its error [@al14]. As the signal-to-background ratio above 300GeV is much larger than 1, this ensures that the significance of the individual points in the light curve is close to $3\sigma$.
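A minimal sketch of the event-count-based binning described above is given below purely for illustration: the event-time arrays are hypothetical placeholders, the conversion of the excess counts to a flux (which requires the collection area and livetime) is omitted, and this is not the MAGIC analysis code.

```python
import numpy as np

def bin_by_on_counts(t_on, t_off, n_per_bin=9, n_off_regions=4):
    """Group ON-region event times into bins with a fixed number of ON events
    and return the excess counts and their errors per bin (toy sketch)."""
    t_on = np.sort(np.asarray(t_on))
    t_off = np.asarray(t_off)
    # bin edges at the times of every n_per_bin-th ON event
    edges = np.concatenate(([t_on[0]], t_on[n_per_bin - 1::n_per_bin]))
    alpha = 1.0 / n_off_regions          # normalisation of the background estimate
    centers, excess, err = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        n_on = np.count_nonzero((t_on >= lo) & (t_on <= hi))
        n_off = np.count_nonzero((t_off >= lo) & (t_off <= hi))
        centers.append(0.5 * (lo + hi))
        excess.append(n_on - alpha * n_off)
        err.append(np.sqrt(n_on + alpha**2 * n_off))
    return np.array(centers), np.array(excess), np.array(err)
```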
The resulting light curve is presented in Fig. \[Lightcurve\].
![ Light curve of IC 310 observed with the MAGIC telescopes in the night of November 12/13$^{\mathrm{th}}$, 2012, above 300GeV. As a flux reference, the two gray lines indicate levels of $1$ and $5$ times the flux level of the Crab Nebula, respectively. The precursor flare (MJD 56243.972–56243.994) has been fitted with a Gaussian distribution. The figure is reprinted from [@al14]. []{data-label="Lightcurve"}](lc.eps){width="49.00000%"}
The mean flux above 300GeV during this period is $\Phi_{\mathrm{mean}}=(6.1 \pm 0.3)\times10^{-11}$cm$^{-2}$s$^{-1}$. This is four times higher than the high state flux of $(1.60 \pm 0.17)\times10^{-11}$cm$^{-2}$s$^{-1}$ reported in [@al13]. The emission is highly variable: fitting the light curve in the full time range with a constant yields a $\chi^2/\mathrm{N.d.o.f}$ of 199/58, corresponding to a probability of $2.6\times10^{-17}$.
We use the rapidly rising part of the 1$^{\mathrm{st}}$ big flare (MJD 56244.0620–56244.0652) in order to compute the most conservative (i.e. slowest) doubling time, $\tau_\mathrm{D}$, that is still consistent with the MAGIC data. We fit the light curve with a set of exponential functions, each time fixing a given $\tau_\mathrm{D}$ value and computing the corresponding fit probability. We find that $4.9\,\mathrm{min}$ is the largest value of $\tau_\mathrm{D}$ that can still marginally fit the data with a probability $>5\%$ (see the blue line in Fig. \[figS4\]).
![Zoom of the first big flare seen in the light curve of IC310 above 300GeV. Black lines show exponential fits to the rising and decay edges to the substructures in the light curve. The blue line shows the slowest doubling time necessary to explain the raising part of the flare at C.L. of 95%. The figure is reprinted from [@al14]. []{data-label="figS4"}](FirstBigflare_ExpFits_SM_3.eps){width="49.00000%"}
Note that the corresponding time scale in the frame of reference of IC 310 will be slightly shorter: $4.9/(1+z)\,\mathrm{min} = 4.8\,\mathrm{min}$.
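The scan over fixed doubling times described above can be illustrated with the following short sketch; the time and flux arrays are hypothetical placeholders, and the actual MAGIC analysis is documented in [@al14].

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

def slowest_doubling_time(t, flux, flux_err, tau_grid_min):
    """Return the largest doubling time (minutes) whose exponential fit has p-value > 0.05."""
    slowest = None
    for tau_d in tau_grid_min:               # grid assumed sorted in increasing order
        # exponential rise with a fixed doubling time; only the normalisation
        # and the reference time are free parameters
        model = lambda tt, f0, t0: f0 * 2.0 ** ((tt - t0) / (tau_d * 60.0))
        popt, _ = curve_fit(model, t, flux, p0=[flux[0], t[0]],
                            sigma=flux_err, absolute_sigma=True)
        resid = (flux - model(t, *popt)) / flux_err
        p_value = chi2.sf(np.sum(resid**2), len(t) - 2)
        if p_value > 0.05:
            slowest = tau_d
    return slowest
```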
The observed spectrum can be described by a simple power law (see Fig. \[SED\]): $$\frac{\mathrm{d}F}{\mathrm{d}E}=f_0\times\left(\frac{E}{1\mathrm{TeV}}\right)^{-\Gamma}.$$
![ MAGIC measurement of average spectral energy distributions of IC 310 during the flare (red) . For comparison we show the results from the high (blue, open squares) and low (black, open markers) states reported in [@al13] and the average results (gray triangles) reported in [@al10]. The dashed lines show power-law fits to the measured spectra, and the solid line with filled circles depicts the spectrum corrected for absorption in the extragalactic background light according to [@dominguez11]. As a reference, the spectral power-law fit of the Crab Nebula observations from [@al12] is shown (gray, solid line). The figure is reprinted from [@al14]. []{data-label="SED"}](sed.eps){width="49.00000%"}
The flux normalization at 1TeV obtained from the fit is $f_0=(17.7\pm0.9_{\rm stat}\pm2.1_{\rm syst}) \times10^{-12} \mathrm{TeV}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. Even though the mean flux during the flaring night is $4-30$ times larger than in previous measurements, the spectral index, $\Gamma = 1.90\pm0.04_{\rm stat}\pm0.15_{\rm syst}$, is consistent with them within the statistical and systematic errors. No significant bend or cut-off is seen in the spectrum up to TeV energies. As part of the observation was carried out with a larger than usual offset angle from the camera center, the systematic error on the flux normalization is slightly larger (12%) than reported in [@al14b]. The error on the energy scale is 15%.
EVN
---
IC310 has been observed with the EVN at 1.7, 5.0, 8.4 and 22.2GHz between 2012-10-21 and 2012-11-07. The data were amplitude and phase calibrated using standard procedures with the Astronomical Image Processing System (<span style="font-variant:small-caps;">AIPS</span>, [@Greisen2003]) and imaged and self-calibrated using <span style="font-variant:small-caps;">DIFMAP</span> [@Shepherd1994].
In the inset panel of Fig. \[Skymap\] we present the image with the highest dynamic range, obtained from the observation at 5.0 GHz on 2012-10-29.
![ Significance map (color scale) of the Perseus cluster in gamma rays observed in the night of November 12/13$^{\mathrm{th}}$, 2012, with the MAGIC telescopes. The inset shows the radio jet image of IC310 at 5.0GHz obtained with the European VLBI Network (EVN) on October 29, 2012. Contour lines (and associated to them color scale) increase logarithmically by factors of 2 starting at three times the noise level. The ratio of the angular resolution between MAGIC and the EVN is 1:580000. The figure is reprinted from [@al14]. []{data-label="Skymap"}](skymap.eps){width="49.00000%"}
The image has a peak flux density of 77mJy/beam and a 1$\sigma$ noise level of 0.027mJy/beam. The restoring beam has a major and minor axis of $4.97\times1.24$mas$^{2}$ with the major axis at a position angle of $-8.5^{\circ}$. It contains a total flux density of $S_\mathrm{total}=109\mathrm{\,mJy}$, which we conservatively assume to be accurate to 10%. The dynamic range $DR$ of the image, defined as the ratio of the peak flux density to three times the noise level in the image, is $\approx 950$.
The angle $\theta$ of the radio jet to the line-of-sight can be determined from Doppler boosting arguments for a given jet speed $\beta$ and spectral index $\alpha$ by considering the ratio $R$ of the flux density in the jet and counter-jet: $$R=\left(\frac{1+\beta\cos\theta}{1-\beta\cos\theta}\right)^{2-\alpha}. \label{eq1}$$ Following [@kadler12] we use the $DR$ as an upper limit for the detection of a counter-jet. This gives us an upper limit of $\theta$: $$\theta < \mathrm{arc\,cos} \left(\frac{DR^{1/(2-\alpha)}-1}{DR^{1/(2-\alpha)}+1}\right).$$
Substituting $DR$ in Eq. \[eq1\], assuming a flat spectral index of $\alpha=0$ and $\beta\rightarrow1$, we obtain an upper limit for the angle between the jet and the line-of-sight of $\lesssim 20^{\circ}$.
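As a quick numerical check of this limit (a simple illustration, not part of the EVN analysis), one can evaluate the expression above directly:

```python
import numpy as np

DR = 950        # dynamic range of the 5.0 GHz EVN image
alpha = 0.0     # flat spectral index, as assumed in the text
x = DR ** (1.0 / (2.0 - alpha))
theta_max = np.degrees(np.arccos((x - 1.0) / (x + 1.0)))
print(f"upper limit on the viewing angle: {theta_max:.1f} deg")  # ~20 deg
```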
Additionally, the extension of the projected one-sided kpc-scale radio jet of $\sim350$kpc, measured at a wavelength of 49cm [@sijbring], yields an estimate of a lower limit for the angle. De-projecting the jet using the upper limit quoted above would result in a lower limit on the jet length of $\sim$1Mpc. Radio galaxies typically show jets extending up to 150kpc-300kpc [@neeser95]. The maximal length of radio jets has been measured to be a few Mpc, which corresponds to an angle of $\sim5-10^{\circ}$ in the case of IC310. Smaller angles would rapidly increase the de-projected length of the jet to values far above the maximum of the distribution of jet lengths.
INTERPRETATION {#sec:interp}
==============
GeV and TeV gamma-ray emission from blazars and radio galaxies is often explained in terms of shock-in-jet models. Charged particles are accelerated in an active region moving along the jet. The causality condition implies that the variability time scale of the observed emission can be used to constrain the size of the emission region.
A conservative estimate of the shortest variability time scale in the frame of reference of IC310 yields $\Delta t/(1+z)=4.8$ min. Using the best mass estimate of the IC310 black hole, this measurement corresponds to $20\%$ of the light travel time across the event horizon. Even allowing for the factor of 3 uncertainty in the mass, the fraction, $60\%$, is still below one. The ultrafast variability casts doubt on the current shock-in-jet paradigm. The motion of the shocked plasma leads to a shortening of the observed variability time scale $\Delta t$ compared with the variability time scale $\Delta t'$ in a frame comoving with the shock, given by $\Delta t=(1+z)\delta^{-1}\Delta t'$. This effect is often used to explain ultrafast variability from blazars [@albert07a; @aharonian07], in which $\delta$ can be nearly arbitrarily large, provided that the jet, moving with a large Lorentz factor, is observed at a very small angle. In the case of IC310, however, the estimate of the observation angle of $10^\circ-20^\circ$ obtained from the radio observations constrains the maximum Doppler factor to be $\lesssim 6$.
All of these attempts to explain the sub-horizon scale variability with relativistic projection effects alone encounter a fundamental problem [@np12]. If the perturbations giving rise to the blazar variability are injected at the jet base, the time scale of the flux variations in the frame comoving with the jet is affected by time dilation with Lorentz factor $\Gamma_{\rm j}$. In blazars where $\delta\sim\Gamma_{\rm j}$, the Lorentz factor cancels out, and the observed variability time scale is ultimately bounded below by $\Delta t_{\rm BH}$.
Additionally, a very high value of the Doppler factor is required to avoid the absorption of the TeV gamma rays due to interactions with low-energy synchrotron photons. Such synchrotron photons are inevitably produced together with the gamma rays in the shock-in-jet scenario. The optical depth to pair creation by the gamma rays can be approximated by $\tau_{\gamma\gamma}(10~\rm TeV) \sim 300 \left(\delta / 4\right)^{-6}\left (\Delta t / 1~min\right)^{-1} \left(L_{\rm syn} / 10^{42}~\rm erg~s^{-1}\right)$. Adopting, conservatively, a non-thermal infrared luminosity of $\sim 1\%$ of the gamma-ray luminosity during the flare, the emission region would be transparent to the emission of 10 TeV gamma rays only if $\delta\gtrsim10$.
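As a simple rearrangement of the scaling above: for $L_{\rm syn}\approx10^{42}~\rm erg~s^{-1}$ and $\Delta t\approx1$ min, requiring $\tau_{\gamma\gamma}(10~\rm TeV)<1$ gives $(\delta/4)^{6}>300$, i.e. $$\delta>4\times300^{1/6}\approx10,$$ which is the transparency condition quoted above.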
In summary, interpreting the IC310 flare in the framework of the shock-in-jet model meets difficulties. Alternative models can involve stars falling into the jet [@bednarek97; @barkov10], mini-jet structures within the jets [@giannios10], or magnetospheric models [@rieger00; @neronov07; @levinson11; @beskin92]. In the case of IC310, the star-in-jet model cannot provide sufficient luminosity to explain the TeV flare [@al14]. Jets-in-jet models also suffer from a rapidly dropping luminosity at larger observation angles [@al14]. Moreover, the magnetic reconnection which can lead to the production of such mini-jets is expected to occur in the main jet, i.e. at rather large distances from the black hole.
In magnetospheric models, particle acceleration is assumed to occur in electric fields parallel to the magnetic fields. This mechanism is common to the particle-starved magnetospheres of pulsars, but it could also operate in the magnetospheres anchored to the ergospheres of accreting black holes (see Fig. \[gap\]).
![ Scenario for the magnetospheric origin of the gamma-rays: A maximally rotating black hole with event horizon $r_{\rm g}$ (black sphere) accretes plasma from the center of the galaxy IC 310. In the apple-shaped ergosphere (blue) extending to $2 r_{\rm g}$ in the equatorial plane, Poynting flux is generated by the frame-dragging effect. The rotation of the black hole induces a charge-separated magnetosphere (red) with polar vacuum gap regions (yellow). In the gaps, the electric field of the magnetosphere has a component parallel to the magnetic field accelerating particles to ultra-relativistic energies. Inverse-Compton scattering and intense pair production due to interactions with low-energy thermal photons from the plasma accreted by the black hole leads to the observed gamma rays. The figure is reprinted from [@al14]. []{data-label="gap"}](gap.eps){width="49.00000%"}
Electric fields can exist in vacuum gaps when the density of charge carriers is too low to screen them, i.e. below the so-called Goldreich-Julian charge density. Electron-positron pairs in excess of the Goldreich-Julian charge density can be produced thermally by photon-photon collisions in a hot accretion torus or corona surrounding the black hole. It has also been suggested that particles can be injected by the reconnection of twisted magnetic loops in the accretion flow [@neronov09]. A depletion of charges from thermal pair production is expected to happen when the accretion rate becomes very low. In this late phase of their accretion history, supermassive black holes are expected to have spun up to maximal rotation. Black holes can sustain a Poynting-flux jet by virtue of the Blandford-Znajek mechanism [@blandfordznj77]. Jet collimation takes place rather far away from the black hole, i.e. at the scale of the light cylinder beyond $\sim 10r_{\rm g}$. Gaps could be located at various angles with respect to the jet axis, corresponding to the polar and outer gaps in pulsar magnetospheres, leading to fan beams at rather large angles with the jet axis. As the gap height and seed particle content depend sensitively on plasma turbulence and accretion rate, the gap emission is expected to be highly variable. For an accretion rate of $\sim 10^{-4}$ of the Eddington accretion rate and maximal black hole rotation, the gap height in IC 310 is expected to be $h\sim 0.2 r_{\rm g}$ [@levinson11], which is in line with the variability time scales seen in the observations. Depending on the electron temperature and geometry of the radiatively inefficient accretion flow, its thermal cyclotron luminosity can be low enough to warrant the absence of pair-creation attenuation in the spectrum of gamma rays. In this picture, the intermittent variability witnessed in IC 310 is due to a runaway effect. As particles accelerate to ultrahigh energies, electromagnetic cascades develop, multiplying the number of charge carriers until their current short-circuits the gap. The excess particles are then swept away with the jet flow, until the gap reappears.
CONCLUSIONS
===========
Radio galaxies and blazars with very low accretion rates allow us to obtain a glimpse of the jet formation process near supermassive black holes. Observations of IC310 performed with the MAGIC telescopes showed variability with a time scale below 5min, shorter than the light crossing time of the event horizon of its black hole. The shock-in-jet models commonly used for AGNs have trouble explaining such emission. A plausible explanation involves emission from vacuum gaps in the magnetosphere of IC310. Interestingly, such an explanation invites analogies with pulsars, where particle acceleration takes place in two stages: in the first stage, particle acceleration occurs in the gaps of a charge-separated magnetosphere anchored in the ergosphere of a rotating black hole, and in the second stage at shock waves in the force-free wind beyond the outer light cylinder.
We would like to thank the Instituto de Astrofísica de Canarias for the excellent working conditions at the Observatorio del Roque de los Muchachos in La Palma. The support of the German BMBF and MPG, the Italian INFN, the Swiss National Fund SNF, and the Spanish MICINN is gratefully acknowledged. This work was also supported by the CPAN CSD2007-00042 and MultiDark CSD2009-00064 projects of the Spanish Consolider-Ingenio 2010 programme, by grant 127740 of the Academy of Finland, by the DFG Cluster of Excellence “Origin and Structure of the Universe”, by the Croatian Science Foundation (HrZZ) Projects 09/176, by the University of Rijeka Project 13.12.1.3.02, by the DFG Collaborative Research Centers SFB823/C4 and SFB876/C3, and by the Polish MNiSzW grant 745/N-HESS-MAGIC/2010/0. We thank also the support by DFG WI 1860/10-1. J. S. was supported by ERDF and the Spanish MINECO through FPA2012-39502 and JCI-2011-10019 grants. E. R. was partially supported by the Spanish MINECO projects AYA2009-13036-C02-02 and AYA2012-38491-C02-01 and by the Generalitat Valenciana project PROMETEO/2009/104, as well as by the COST MP0905 action ’Black Holes in a Violent Universe’. The European VLBI Network is a joint facility of European, Chinese, South African and other radio astronomy institutes funded by their national research councils. The research leading to these results has received funding from the European Commission Seventh Framework Programme (FP/2007-2013) under grant agreement No. 283393 (RadioNet3).
F. Aharonian, *et al.*, *ApJ* **664**, L71 (2007).
J. Albert, *et al.*, *ApJ* **669**, 862 (2007).
J. Aleksić, *et al.*, *ApJ* **723**, L207 (2010).
J. Aleksić, *et al.*, *Astroparticle Physics* **35**, 435 (2012).
J. Aleksić, *et al.*, *A&A* **563**, A91 (2014a).
J. Aleksić, *et al.*, *Science* **346**, 1080 (2014b).
J. Aleksić, *et al.*, arXiv:1409.5594 (2014).
M. V. Barkov, F. A. Aharonian, V. Bosch-Ramon, *ApJ* **724**, 1517 (2010).
W. Bednarek, R. J. Protheroe, *MNRAS* **287**, L9 (1997).
V. S. Beskin, Y. N. Istomin, V. I. Parev, *SOVAST* **36**, 642 (1992).
R. D. Blandford, R. L. Znajek, *MNRAS* **179**, 433 (1977).
A. Dom[í]{}nguez, *et al.*, *MNRAS* **410**, 2556 (2011).
D. Giannios, D. A. Uzdensky, M. C. Begelman, *MNRAS* **402**, 1649 (2010).
E. W. Greisen, *Information Handling in Astronomy - Historical Vistas* **285**, 109 (2003).
K. Gültekin, *et al.*, *ApJ* **698**, 198 (2009).
M. Kadler, *et al.*, *A&A* **538**, L1 (2012).
A. Levinson, F. Rieger, *ApJ* **730**, 123 (2011).
M. L. Lister, *et al.*, *AJ* **146**, 120 (2013).
G. K. Miley, *Annu. Rev. Astron. Astrophys.* **18**, 165 (1980).
R. Narayan & T. Piran, *MNRAS* **420**, 604 (2012).
A. Neronov, F. A. Aharonian, *ApJ* **671**, 85 (2007).
A. Y. Neronov, D. V. Semikoz, I. I. Tkachev, *New Journal of Physics* **11**, 065015 (2009).
A. Neronov, D. Semikoz, I. Vovk, *A&A* **519**, L6 (2010).
M. J. Neeser, S. A. Eales, J. D. Law-Green, *ApJ* **451**, 76 (1995).
T. A. Rector, J. T. Stocke, E. S. Perlman, *ApJ* **516**, 145 (1999).
F. M. Rieger, K. Mannheim, *A&A* **353**, 473 (2000).
M. Ryle, M. D. Windram, *MNRAS* **138**, 1 (1968).
M. C. Shepherd, T. J. Pearson, G. B. Taylor (1994), vol. 26, pp. 987-989.
D. Sijbring, A. G. de Bruyn, *A&A* **331**, 901 (1998).
R. Zanin, *et al.*, *Proc. to the 33$^{rd}$ ICRC, Id. 0773, Rio de Janeiro, Brazil* (2013).
---
abstract: 'Discovered in 1996 by [[*BeppoSAX*]{}]{} during a single type-I burst event, [[SAX J1753.5$-$2349]{}]{} was classified as a “burst-only” source. Its persistent emission, either in outburst or in quiescence, had never been observed before October 2008, when [[SAX J1753.5$-$2349]{}]{} was observed for the first time in outburst. Based on [[*INTEGRAL*]{}]{} observations, we present here the first study of the high-energy emission (above 10 keV) of a so-called “burst-only” source. During the outburst the [[SAX J1753.5$-$2349]{}]{} flux decreased from 10 to 4 mCrab in the 18–40 keV band, while the source remained in a constant low/hard spectral state. The broad-band (0.3–100 keV) averaged spectrum obtained by combining [[*INTEGRAL*]{}]{}/IBIS and [[*Swift*]{}]{}/XRT data has been fitted with a thermal Comptonisation model, and an electron temperature $\gtrsim$24 keV has been inferred. However, the observed high column density does not allow the detection of the emission from the neutron star surface. Based on the whole set of observations of [[SAX J1753.5$-$2349]{}]{}, we are able to provide a rough estimate of the duty cycle of the system and of the time-averaged mass-accretion rate. We conclude that the low to very low luminosity of [[SAX J1753.5$-$2349]{}]{} during outburst may make it a good candidate to harbor a very compact binary system.'
author:
- |
M. Del Santo$^{1}$, L. Sidoli$^{2}$, P. Romano$^{3}$, A. Bazzano$^{1}$, R. Wijnands$^{4}$, N. Degenaar$^{4}$, S. Mereghetti$^{2}$\
$^{1}$INAF/Istituto di Astrofisica Spaziale e Fisica Cosmica di Roma, via Fosso del Cavaliere 100, 00133 Roma, Italy\
$^{2}$INAF/Istituto di Astrofisica Spaziale e Fisica Cosmica di Milano, via E. Bassini 15, 20133 Milano, Italy\
$^{3}$INAF/Istituto di Astrofisica Spaziale e Fisica Cosmica di Palermo, via U. La Malfa 153, 90146 Palermo, Italy\
$^{4}$Astronomical Institute, ’Anton Pannekoek’, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands
date: 'Accepted 2010 January 27. Received 2010 January 26; in original form 2010 January 18.'
title: 'Unveiling the hard X-ray spectrum from the “burst-only” source [[SAX J1753.5$-$2349]{}]{} in outburst[^1]'
---
\[firstpage\]
X-ray: binaries – X-ray: bursts – Stars: neutron – Accretion, accretion discs – Galaxy: bulge – Stars: Individual: [[SAX J1753.5$-$2349]{}]{}
Introduction
============
[[SAX J1753.5$-$2349]{}]{} is a neutron star Low Mass X-ray Binary (LMXB) discovered in 1996 by the [[*BeppoSAX*]{}]{}/Wide Field Camera (WFC) during a single type-I X-ray burst [@zand99]. However, no steady emission was detected from the source, leading to an upper limit of about 5 mCrab (2–8 keV) for a total exposure of 300 ks [@zand99]. Cornelisse et al. (2004) proposed that [[SAX J1753.5$-$2349]{}]{} is a member of a possibly non-homogeneous class of LMXBs, the so-called “burst-only” sources (see also Cocchi et al. 2001). These are a group of nine bursters discovered by [[*BeppoSAX*]{}]{}/WFC when exhibiting a type-I burst without any detectable persistent X-ray emission.
Recently, [[*INTEGRAL*]{}]{} identified two new members of this class. In fact, photospheric radius expansion (PRE) bursts have been caught in two previously unclassified sources, namely [@brandt06] and [@chelo07]. Afterwards, both were classified as “quasi-persistent” Very Faint X-ray Transients (VFXTs), since they undergo prolonged accretion episodes of many years at low luminosity (Del Santo et al. 2007, Bassa et al. 2008).
VFXTs are transients showing outbursts with low peak luminosity ($10^{34}$–10$^{36}$ erg s$^{-1}$ in 2–10 keV), mainly discovered with high-sensitivity instruments on-board [[*Chandra*]{}]{} and [[*XMM-Newton*]{}]{} during surveys of the Galactic Center region [@wij06]. They are believed to be the faintest known accretors, and are very likely a non-homogeneous class of sources. A significant fraction ($\sim 1/3$) of VFXTs are X-ray bursters (Degenaar & Wijnands 2009, Del Santo et al. 2007, Del Santo et al. 2008, Cornelisse et al. 2004); thus they can be identified with neutron stars accreting matter from a low-mass companion (M $\lesssim$ 1 M$_\odot$).
------ ---------- ---------- ------------ ----- ------------
Rev. Start End Total Exp. SCW Spec. Exp.
(MJD) (MJD) (ks) (ks)
724 54727.50 54728.23 58 17 -
725 54729.11 54731.52 198 56 -
726 54732.52 54734.46 160 45 -
729 54741.37 54741.86 42 12 -
731 54749.22 54749.55 20 8 -
732 54749.90 54750.85 83 32 26.2
733 54754.96 54755.46 38 11 10.8
734 54756.87 54758.54 128 48 36.5
735 54760.91 54761.53 43 13 23.2
736 54762.03 54763.63 38 49 30.0
------ ---------- ---------- ------------ ----- ------------
: Log of the [[*INTEGRAL*]{}]{} observations of the [[SAX J1753.5$-$2349]{}]{} region: orbit number (Rev.), start and end time of the observations, exposures time for each orbit taking into account the whole data-set, and number of pointings (SCW) are reported. Observations within a single orbit are not continuous. The first [[*INTEGRAL*]{}]{} detection of [[SAX J1753.5$-$2349]{}]{} occurred in rev. 732. A data sub-set from rev. 732 to 736 has been used to compute the averaged spectra. The last column reports the exposures of spectra in each orbit.
\
\[tab:log\]
In 2002, observations with [[*Chandra*]{}]{} and [[*XMM-Newton*]{}]{} made it possible to reveal the nature of four [[*BeppoSAX*]{}]{} “burst-only” sources: one persistent very-faint source, two faint transient systems (with 2–10 keV peak luminosity in the range $10^{36}$–10$^{37}$ erg s$^{-1}$), and one VFXT (see Wijnands et al. 2006 and references therein). For the other five bursters, including [[SAX J1753.5$-$2349]{}]{}, only the quiescent emission could be derived ($\sim$10$^{32}$ erg s$^{-1}$; Cornelisse et al. 2004). Wijnands et al. (2006) proposed these systems as good candidates to be classified as VFXTs (see also Campana 2009).
On 2008 October 11, [[*RXTE*]{}]{}/PCA, [[*Swift*]{}]{}/BAT [@mark08] and [[*INTEGRAL*]{}]{}/IBIS [@cadol08] detected an outburst from [[SAX J1753.5$-$2349]{}]{} at a 10 mCrab flux level. Then, [[*Swift*]{}]{}/XRT pointed at [[SAX J1753.5$-$2349]{}]{} on October 23 [@degewij08], during the decline phase of the outburst (Fig. \[fig:lc\]). An improved source position, R.A.(J2000)=$17^{h} 53^{m} 31.90^{s}$, Dec(J2000)=$-23^{\circ} 48' 16.7''$, has been provided [@starl08]. On 2009 March 13, it was re-pointed by [[*Swift*]{}]{} and a 3$\sigma$ upper limit was derived. This translates into a luminosity level of $5 \times 10^{32}$ erg s$^{-1}$ [@delsanto09].
In this paper we present the hard X-ray outburst of [[SAX J1753.5$-$2349]{}]{} observed by [[*INTEGRAL*]{}]{}/IBIS, as well as the first broad-band spectral analysis of the steady emission of a “burst-only” source. We estimate the long-term mass-accretion rate and discuss the nature of the transient system.
![[[SAX J1753.5$-$2349]{}]{} BAT (top) and IBIS/ISGRI (bottom) count rate evolution in the 15–50 keV and 18-40 keV energy ranges, respectively. The XRT detection time is also shown on the bottom plot. The public BAT light curve starts from 54754 MJD; after MJD=54764 [[SAX J1753.5$-$2349]{}]{} was no longer pointed by [[*INTEGRAL*]{}]{}. \[fig:lc\]](delsanto10_fig1.ps){height="6cm"}
Observation and data analysis
=============================
[[*INTEGRAL*]{}]{}
------------------
This paper is based on [[*INTEGRAL*]{}]{} observations of the Galactic Centre region carried out in the framework of the AO6 Key-Programme. Moreover, we used data from a public ToO on the source H 1743-322, at 8.6$^\circ$ from [[SAX J1753.5$-$2349]{}]{}, performed in 2008 October, for a total exposure time of 800 ks (see Tab. \[tab:log\]). We reduced the data of the IBIS [@ube03] low-energy detector ISGRI [@lebrun03] and of JEM-X [@lund03] using the [[*INTEGRAL*]{}]{} Off-Line Scientific Analysis, release 8.0. Due to the source weakness, no signal was found in the JEM-X data. On October 10, the first IBIS detection of [[SAX J1753.5$-$2349]{}]{} was obtained (rev. 732). We extracted the IBIS/ISGRI light curves from each revolution as reported in Tab. \[tab:log\] (with binning size given by the Total Exposure column) in the energy ranges 18–40 keV, 40–80 keV and 80–150 keV. For the spectral extraction, we used a sub-set of the data reported in Tab. \[tab:log\], selecting only pointings including [[SAX J1753.5$-$2349]{}]{} in the IBIS FOV up to 50% coding (15$^\circ \times$15$^\circ$). We obtained four averaged spectra from revolutions 732, 733, 734 and 735-736 (the latter two have been added together because of the poor statistics). Spectral fits were performed using the X-ray spectral analysis package XSPEC v. 11.3.1.
[[*Swift*]{}]{}
---------------
A [[*Swift*]{}]{} ToO was performed on October 23 (Degenaar & Wijnands 2008). The [*Swift*]{}/XRT data of observation 00035713002 were collected in photon counting (PC) mode between 2008-10-23 17:48:53 and 21:08:57 UT, for a total on-source net exposure of 1 ks.
They were processed with standard procedures ([xrtpipeline]{} v0.12.1) and standard filtering and screening criteria, using the [Heasoft]{} package (v.6.6.1). Moderate pile-up was present, so source events were extracted from an annular region (inner and outer radii of 3 and 20 pixels; 1 pixel $\sim 2\farcs36$), while background events were extracted from an annular region (radii of 80 and 120 pixels) away from background sources. An XRT spectrum was extracted and ancillary response files were generated with [xrtmkarf]{}, to account for the different extraction regions, vignetting and PSF corrections. We used the spectral redistribution matrices v011 in the Calibration Database maintained by HEASARC. All spectra were rebinned with a minimum of 20 counts per energy bin.
We retrieved the BAT daily light curves (15–50 keV) available starting from MJD=54754, from the [[*Swift*]{}]{}/BAT transient monitor (Krimm et al. 2006, 2008; http://heasarc.gsfc.nasa.gov/docs/swift/results/transients/) page.
Results
=======
The IBIS/ISGRI and BAT count rates of [[SAX J1753.5$-$2349]{}]{} are shown in Fig. \[fig:lc\]. Based on the IBIS data, the hard X-ray outburst started on October 10 at a flux level of 10 mCrab (18–40 keV) and lasted at least 14 days (last pointing at 4 mCrab). This outburst is hence characterised by a fast increase of the flux and a linear decay with a slope of $-$0.13$\pm$0.01.
An [[*INTEGRAL*]{}]{} pointing with no [[SAX J1753.5$-$2349]{}]{} detection was performed eight hours before the outburst started. We also averaged all our data (from rev 724 to 731) collected before the first source detection for a total of 500 ks, resulting in a 3$\sigma$ upper limit of 1 mCrab (Fig. \[fig:lc\]).
In order to look for any possible spectral variability, we fitted the four averaged IBIS spectra with a simple power law. We obtained a constant value (within the errors) of the photon index ([$\Gamma$]{}$\sim$ 2) which indicates, in spite of the flux variation, a steady spectral state.
The lack of spectral parameter variation led us to average the IBIS spectra of the different revolutions. The 18–100 keV averaged spectrum is well described by a simple power-law model with a slope of $2.2 \pm 0.3$. A mean 18–100 keV flux of $1.5\times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ can be derived.
The XRT spectrum can be fitted by an absorbed power-law model with a hydrogen column density of N${\rm _H}=1.8 (\pm 0.6) \times 10^{22}$ cm$^{-2}$. The photon index is $\Gamma = 2.0 \pm 0.5$, and the resulting 2–10 keV absorbed and unabsorbed fluxes are $\sim$4.4 and $\sim$5.2 $\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$, respectively.
We note that the derived N${\rm _H}$ is higher than the absorption column of $0.83 \times 10^{22}$ cm$^{-2}$ [@corne02] found by interpolating the HI maps of Dickey & Lockman (1990). In fact, the two values are perfectly consistent within the errors, given the large range of values (about $0.4-1.5 \times 10^{22}$ cm$^{-2}$) obtained in the box adopted to calculate the Weighted Average N${\rm _H}$ (with the nH Column Density Tool)[^2] from the HI maps.
The joint IBIS and XRT spectrum (0.3–100 keV) was then fitted with different models. First we used an empirical model, a simple power law (Fig. \[fig:model\], [*left*]{}), then the more physical Comptonisation model. Indeed, the 1–200 keV spectrum of X-ray bursters in the low/hard state is most likely produced by the upscattering of soft seed photons by a hot, optically thin electron plasma (e.g. Barret et al. 2000 and references therein). Moreover, black-body emission from the neutron star surface is also expected to be observed in the low/hard states of bursters (e.g. Natalucci et al. 2000 and references therein). We tried to add a `BB` component to the two models. The best-fit parameters and mean fluxes are reported in Tab. \[tab:fit\_sim\].
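For illustration only, a minimal sketch of such a joint fit is shown below using the modern PyXspec interface (the original fits were performed with XSPEC v. 11.3.1); the file names are hypothetical placeholders, and the choice of `phabs` for the absorption, the noticed energy ranges and the parameter numbering are our assumptions rather than a description of the exact setup behind Tab. \[tab:fit\_sim\].

```python
from xspec import AllData, Model, Fit

# hypothetical grouped spectra with responses and backgrounds already assigned
AllData("1:1 xrt_pc_grp.pha 2:2 ibis_isgri_avg.pha")
AllData(1).ignore("**-0.3,10.0-**")    # keep 0.3-10 keV for Swift/XRT
AllData(2).ignore("**-18.0,100.0-**")  # keep 18-100 keV for INTEGRAL/IBIS

m = Model("phabs*(bbody + comptt)")    # absorbed black body plus thermal Comptonisation
Fit.statMethod = "chi"
Fit.query = "yes"
Fit.perform()
Fit.error("2.706 6")                   # 90% error on the comptt kT (parameter index assumed)
```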
Thus, using a physical thermal Comptonisation model, `COMPTT` [@tita94] in XSPEC, the electron temperature is not constrained, while a lower limit of $\sim$24 keV (at the $90\%$ confidence level) can be inferred (see Tab. \[tab:fit\_sim\] and contour levels in Fig. \[fig:cont\]). This is consistent with the electron temperatures observed in burster systems, even those brighter than [[SAX J1753.5$-$2349]{}]{} [@barret00].
With the addition of the `BB` component to the thermal Comptonisation, a typical value of the black-body temperature (kT$_{\rm BB}$ $\sim$0.3 keV) is obtained (Fig. \[fig:model\], [*right*]{}), even though this component is not required according to the F-test probability ($7 \times 10^{-2}$). We may argue that the high absorption observed in [[SAX J1753.5$-$2349]{}]{} could be a strong obstacle to the firm detection of this component.
As a first approximation, the accretion luminosity L$_{\rm acc}$ coincides with the bolometric luminosity of the source (0.1–100 keV). Using the mean 0.1–100 keV flux obtained with the `COMPTT` model fit and assuming a distance of 8 kpc (Galactic Centre), a value of L$_{\rm acc}=4.3\times10^{36}$ erg s$^{-1}$ ($\sim$0.02 L$_{Edd}$) is derived. The averaged mass-accretion rate ($\langle \dot{M}_{\mathrm{ob}} \rangle=R L_{\mathrm{acc}}/GM$, where $G$ is the gravitational constant, $M=1.4~\mathrm{M_{\odot}}$ and $R=10$ km for a neutron star accretor) during the outburst is $6.7 \times 10^{-10}$ M$_\odot$ yr$^{-1}$.
----------- ----------------------------- --------------------- --------------- --------- ---------- --------------------- --------------------- ---------------------------
Model N$_{H}$ $kT_{BB}$ $\Gamma$ $E_{c}$ $kT_{e}$ $\tau$ $\chi^2_{\nu}$(dof) $F_{\rm bol}^{\mathrm a}$
$10^{22}$ ($\rm {cm}^{-2}$) (keV) (keV) (keV) (erg cm$^{-2}$ s$^{-1}$)
POW 2.2$^{+0.5}_{-0.4}$ - $2.3 \pm 0.3$ - - - 0.91(19) $1.3\times 10^{-9}$
BB+POW 2.8$^{+2.0}_{-1.0}$ $0.4^{+0.3}_{-0.1}$ $2.1 \pm 0.3$ - - - 0.82(17) $5.6\times 10^{-10}$
Comptt 1.9$\pm 0.4$ - - - $> 24$ $0.2^{+1.3}_{-0.1}$ 1.07(18) $1.1\times 10^{-9}$
BB+Comptt 2.7$^{+2.0}_{-1.0}$ $0.4^{+0.3}_{-0.2}$ - - $> 17$ $0.8^{+2.2}_{-0.6}$ 0.86(16) $6.3\times 10^{-10}$
----------- ----------------------------- --------------------- --------------- --------- ---------- --------------------- --------------------- ---------------------------
\[tab:fit\_sim\]
$^{\mathrm a}$ The bolometric flux of the unabsorbed best-fit model spectrum.
Discussion
==========
We report here for the first time the broad-band spectrum, from soft to hard X-rays, of the persistent emission from a so-called “burst-only” source. In particular, none of these sources have ever been studied above 10 keV during their persistent emission.
The outburst from [[SAX J1753.5$-$2349]{}]{} observed with [[*INTEGRAL*]{}]{}/IBIS has a duration of at least 14 days, without any evidence for type-I X-ray bursts throughout the [[*INTEGRAL*]{}]{} observations of the Galactic Centre region performed since 2003.
From the [[*RXTE*]{}]{}/PCA flux detection at 8 mCrab [@mark08] we can derive an absorbed 2–10 keV peak flux of about $1.7 \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$, which translates into an unabsorbed luminosity higher than $1.3 \times 10^{36}$ erg s$^{-1}$. This value seems to indicate that [[SAX J1753.5$-$2349]{}]{} is a hybrid system (such as AX J1745.6–2901 and GRS 1741.9–2853, see Degenaar & Wijnands 2009) which displays very-faint outbursts with 2–10 keV peak luminosity $L_{X} < 10^{36}$ erg s$^{-1}$ (as resulted from the WFC observations in 1996), as well as outbursts with luminosities in the range $10^{36-37}$ erg s$^{-1}$, which are classified as faint (FXT; Wijnands et al. 2006). However, it is worth noting that the $L_{X}$ boundary at $10^{36}$ erg s$^{-1}$ is somewhat arbitrary (as is the VFXT/FXT classification). Nevertheless, our result reinforces the hypothesis that the so-called “burst-only” sources belong to the class of the subluminous neutron star X-ray binaries.
A rough estimate of the duty cycle (as the ratio $t_{\mathrm{ob}}/t_{\mathrm{rec}}$) can be obtained. The time interval between the two measurements of the quiescence (February 2008 and March 2009) is about 13 months, while the outburst recurrence time ($t_{\mathrm{rec}}$) is about 12 years (from the burst event in 1996). However, it is possible that we missed other outbursts of [[SAX J1753.5$-$2349]{}]{} that occurred between 1996 and 2008 within periods not covered by Galactic Centre monitoring. The outburst duration ($t_{\mathrm{ob}}$) ranges from a minimum of 14 days (as observed) to a maximum of 13 months, since there are no other X-ray observations apart from those in October. In fact, we cannot exclude that the hard X-ray outburst may be part of a longer outburst that occurred at a lower luminosity level, only detectable by high-sensitivity X-ray telescopes.
This translates into a duty cycle ranging from a minimum of 0.3$\%$ to a maximum of 9$\%$ and into a long-term time-averaged accretion rate ($\langle \dot{M}_{\mathrm{long}} \rangle=\langle
\dot{M}_{\mathrm{ob}} \rangle \times t_{\mathrm{ob}} / t_{\mathrm{rec}}$) ranging from 2.2$\times$$10^{-12}$ to 6.0$\times$$10^{-11}$ M$_\odot$ yr$^{-1}$.
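The numbers above can be reproduced with a few lines; this is only an illustrative back-of-the-envelope check using the values quoted in the text.

```python
mdot_ob = 6.7e-10                       # <Mdot_ob> during outburst (M_sun / yr)
t_rec_days = 12.0 * 365.25              # recurrence time: ~12 yr since the 1996 burst
t_ob_days = {"minimum": 14.0,           # observed hard X-ray outburst duration
             "maximum": 13 * 30.44}     # ~13 months, if the outburst lasted until quiescence

for label, t_ob in t_ob_days.items():
    duty_cycle = t_ob / t_rec_days
    mdot_long = mdot_ob * duty_cycle
    print(f"{label}: duty cycle = {100 * duty_cycle:.1f}%, "
          f"<Mdot_long> = {mdot_long:.1e} M_sun/yr")
# -> roughly 0.3% and 9%, i.e. ~2e-12 and ~6e-11 M_sun/yr, as quoted above
```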
King & Wijnands (2006) suggested that neutron star transient LMXBs with a low time-averaged mass-accretion rate might pose difficulties in explaining their existence without invoking exotic scenarios such as accretion from a planetary donor. However, the regime of $\langle \dot{M}_{\mathrm{long}} \rangle$ estimated for [[SAX J1753.5$-$2349]{}]{} can be well explained within current LMXB evolution models.
In spite of the flux variability along the outburst, the spectral state of [[SAX J1753.5$-$2349]{}]{} remains steady, in the low/hard state. This is in agreement with the fact that a really low X-ray luminosity, $L_{X} \lesssim 0.01\,L_{Edd}$ or so, produces a hard state in most sources [@klis06].
Following in’t Zand et al. (2007), we have estimated the hardness ratio 40–100 keV/20–40 keV within each [[*INTEGRAL*]{}]{} revolution. We find a value consistent with 1, which confirms the hard nature of the system. This is also consistent with the low mass-accretion rate inferred (see also Paizis et al. 2006), i.e. [[SAX J1753.5$-$2349]{}]{} is not a fake faint system, and there would be no reason to assume that the system is obscured in order to explain the low luminosity.
Moreover, King (2000) argued that the faint low-mass X-ray transients are mainly neutron star X-ray binaries in very compact systems with orbital periods shorter than 80 min. We suggest that [[SAX J1753.5$-$2349]{}]{} is a good candidate to harbor an accreting neutron star in a very compact binary.
In conclusion, [[SAX J1753.5$-$2349]{}]{} joins the sample of low-luminosity transient LMXBs [@degewij09] which display different behaviour in terms of peak luminosity, outburst duration and recurrence time from year to year. Up to now, it is not understood whether these variations should be interpreted as being due to changes in the mass-transfer rate or as the result of instabilities in the accretion disc (Degenaar & Wijnands 2009 and references therein).
Acknowledgments {#acknowledgments .unnumbered}
===============
Data analysis is supported by the Italian Space Agency (ASI), via contracts ASI/INTEGRAL I/008/07/0, ASI I/088/06/0. MDS thanks Memmo Federici for the [[*INTEGRAL*]{}]{} data archival support at IASF-Roma. We thank the anonymous referee for his quick response and useful suggestions.
Barret D., Olive J. F., Boirin L., Done C., Skinner G. K., Grindlay J. E., 2000, ApJ, 533, 329
Bassa C., et al., 2008, ATel \#1575
Brandt, S., Budtz-Jorgensen, C., Chenevez, J., Lund N., Oxborrow C. A., Westergaard N. J., 2006, ATel \#970
Campana S., 2009, ApJ, 699, 1144
Chelovekov I. V., Grebenev S. A. 2007, ATel \#1094
Cadolle Bel M., Kuulkers E., Chenevez J., Beckmann V., Soldi S., 2008, ATel \#1910
Cocchi M., Bazzano A., Natalucci L., et al., 2001, A&A, 378, L37
Cornelisse R., et al., 2002, APS, 11006
Cornelisse R., et al., 2004, Nucl. Phys., 132, 518
Degenaar N., & Wijnands R., 2008, ATel \#1809
Degenaar N., & Wijnands R., 2009, A&A, 495, 547
Del Santo M., Sidoli L., Mereghetti S., Bazzano A., Tarana A., Ubertini P., 2007, A&A, 468, L17
Del Santo M., Sidoli L., Romano P., Bazzano A., Tarana A., Ubertini P., Federici M., Mereghetti S., 2008, AIPC, 1010, 162
Del Santo M., Romano P., Sidoli L., 2009, ATel \#1975
in’t Zand J. J. M., Rupen M., Heise J., Muller J. M., Bazzano A., Cocchi M., Natalucci L., Ubertini P. 1999, Nucl. Phys, 69, 228
in’t Zand J. J. M., Cornelisse R., Mendez M., 2005, A&A, 440, 287
in’t Zand J. J. M., Jonker P. G., Markwardt C. B., 2007, A&A, 465, 953
King A. R., 2000, MNRAS, 315, L33
King A. R., & Wijnands R., 2006, MNRAS, 366, L31
Krimm H. A., et al., 2006, ATel \#904
Krimm H. A., Barthelmy S. D., Cummings J. R., Markwardt C. B., Skinner G., Tueller J., Swift/BAT Team, 2008, in AAS/High Energy Astrophysics Division, Vol. 10, p.\#07.01
Lebrun F., et al. 2003, A&A, 411, L141
Lund N., et al. 2003, A&A, 411, L231
Markwardt C. B., Krimm H. A., Swank J. H., 2008, ATel \#1799
Natalucci L., Bazzano A., Cocchi M., Ubertini P., Heise J., Kuulkers E., in’t Zand J. J. M., Smith M. J. S., 2000, ApJ, 536, 891
Starling R. & Evans P., 2008, ATel \#1814
Paizis A., et al. 2006, A&A, 459, 187
Titarchuk L., 1994, ApJ, 434, 313
Ubertini P., et al. 2003, A&A, 411, L131
van der Klis M., 2006, in Compact Stellar X-Ray Sources, eds. W.H.G. Lewin and M. van der Klis, Cambridge University Press
Wijnands R., et al. 2006, A&A, 449, 1117
\[lastpage\]
[^1]: Based on observations with [*INTEGRAL*]{}, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), Czech Republic and Poland, and with participation of Russia and the USA.
[^2]: http://heasarc.gsfc.nasa.gov/docs/tools.html
**Operators of Harmonic Analysis**
**in Weighted Spaces with Non-standard Growth**
by
**V.M. Kokilashvili,**
*A.Razmadze Mathematical Institute and Black Sea University, Tbilisi, Georgia*
*[email protected]*
and
**S.G.Samko**
*Universidade do Algarve, Portugal*
*[email protected]*
**Abstract**
In recent years there has been increasing interest in the so-called function spaces with non-standard growth, also known as variable exponent Lebesgue spaces. For such weighted spaces on homogeneous spaces, we develop a certain variant of Rubio de Francia’s extrapolation theorem. This extrapolation theorem is then applied to obtain the boundedness of various operators of harmonic analysis, such as maximal and singular operators, potential operators, Fourier multipliers and majorants of partial sums of trigonometric Fourier series, in weighted Lebesgue spaces with variable exponent. Their vector-valued analogues are also given.
Introduction {#1}
============
During recent years significant progress has been made in the study of maximal, singular and potential type operators in the generalized Lebesgue spaces $L^{p(\cdot)}$ with variable exponent, also known as spaces with non-standard growth. A number of mathematical problems leading to such spaces arise in applications to partial differential equations, variational problems and continuum mechanics (in particular, in the theory of the so-called electrorheological fluids); see E. Acerbi and G. Mingione [@9b], [@9d], X. Fan and D. Zhao [@160zb], M. Ru$\check{z}$i$\check{c}$ka [@525], V. V. Zhikov [@730ab], [@730c]. These applications have stimulated significant interest in such spaces during the last decade.
The most significant advances in the study of the classical operators of harmonic analysis in the case of variable exponent have been made in the Euclidean setting, including weighted estimates. We refer in particular to the survey articles by L. Diening, P. H[ä]{}st[ö]{} and A. Nekvinda [@106b], V. Kokilashvili [@316b], S. Samko [@580bd], and to the papers by D. Cruz-Uribe, A. Fiorenza, J. M. Martell and C. Perez [@101zb], D. Cruz-Uribe, A. Fiorenza and C. J. Neugebauer [@101ab], L. Diening [@106], [@105a], [@106z], L. Diening and M. Ru$\check{z}$i$\check{c}$ka [@107a], V. Kokilashvili, N. Samko and S. Samko [@317c], V. Kokilashvili and S. Samko [@321j], [@321c], [@321a], [@321i], A. Nekvinda [@414b], S. Samko [@579], [@580b], [@580bc], S. Samko, E. Shargorodsky and B. Vakulov [@584a], and the references therein.
More recently, the investigation of these classical operators in the spaces with variable exponent has also begun in the setting of metric measure spaces; the case of constant $p$ in this setting has a long history, for which we refer, in particular, to the papers of A. P. Calder[ó]{}n [@72b], R. R. Coifman and G. Weiss [@97], [@97a], R. Macías and C. Segovia [@381a], the books by D. E. Edmunds, V. Kokilashvili and A. Meskhi [@145a], I. Genebashvili, A. Gogatishvili, V. Kokilashvili and M. Krbec [@187], J. Heinonen [@225a], and the references therein. The non-weighted boundedness of the maximal operator on homogeneous spaces was proved by P. Harjulehto, P. H[ä]{}st[ö]{} and M. Pere [@224b], and a Sobolev embedding theorem with variable exponents on homogeneous spaces with variable dimension was proved by P. Harjulehto, P. H[ä]{}st[ö]{} and V. Latvala [@224ab].
In the present paper we develop weighted estimates of various operators of harmonic analysis in Lebesgue spaces with variable exponent $p(x)$. We first give theorems on the weighted boundedness of the maximal operator on homogeneous spaces (Theorems \[th3.1.\] and \[th3.2.\]). Next, in Section \[subs4.\], we give a certain $p(\cdot)\to q(\cdot)$-version of Rubio de Francia’s extrapolation theorem [@522b] within the framework of weighted spaces $L_\varrho^{p(\cdot)}$ on metric measure spaces. In proving this version we develop some ideas and approaches of the papers [@101zb], [@101ac].
By means of this extrapolation theorem and known theorems on the boundedness with Muckenhoupt weights in the case of constant $p$, we obtain results on the weighted $p(\cdot)\to q(\cdot)$- or $p(\cdot)\to p(\cdot)$-boundedness, in the case of variable exponent $p(x)$, of the following operators:\
1) potential type operators,\
2) Fourier multipliers (weighted Mikhlin, Hörmander and Lizorkin-type theorems, Subsection \[subs5.1.\]),\
3) multipliers of trigonometric Fourier series (Subsection \[subs5.2.\]),\
4) majorants of partial sums of Fourier series (Subsection \[subs5.3.\]),\
5) singular integral operators on Carleson curves and in the Euclidean setting (Subsections \[subs5.4.\]-\[subs5.8.\]),\
6) the Fefferman-Stein maximal function (Subsection \[subs5.7.\]),\
7) some vector-valued operators (Subsection \[subs5.9.\]).
Definitions and preliminaries {#sec1}
=============================
On variable dimensions in metric measure spaces {#subskiki}
-----------------------------------------------
In the sequel, $(X,d,\mu)$ denotes a metric space with the (quasi)metric $d$ and non-negative measure $\mu$. We refer to [@145a], [@187], [@225a] for the basics on metric measure spaces. By $B(x,r)=\{y\in X: d(x,y)<r\}$ we denote a ball in $X$. The following standard conditions will be assumed to be satisfied:\
1) all the balls $B(x,r)=\{y\in X: d(x,y)<r\}$ are measurable,\
2) the space $C(X)$ of uniformly continuous functions on $X$ is dense in $L^1(\mu)$.
In most of the statements we also suppose that\
3) the measure $\mu$ satisfies the doubling condition: $$\mu B(x,2r)\le C \mu B(x,r),$$ where $C>0$ does not depend on $r>0$ and $x\in X.$ A measure satisfying this condition is called a doubling measure.
For a locally $\mu$-integrable function $f: X \to \mathbb{R}^1$ we consider the Hardy-Littlewood maximal function $$\mathcal{M} f(x)=\sup_{r>0} \frac{1}{\mu (B(x,r))}
\int\limits_{B(x,r)} |f(y)| \,d\mu(y).$$ By $A_s=A_s(X)$, where $1\le s <\infty$, we denote the class of weights (locally almost everywhere positive $\mu$-integrable functions) $w: X\to \mathbb{R}^1$ which satisfy the Muckenhoupt condition $$\sup\limits_{B}\left(\frac{1}{\mu
B}{\int\limits}_{B}w(y)d\mu(y)\right)\left(\frac{1}{\mu B}{\int\limits}_{B}
w^{-\frac{1}{s-1}}(y)d\mu(y)\right)^{s-1} <\infty$$ in the case $1<s<\infty$, and the condition $$\mathcal{M}w(x)\le Cw(x)$$ for almost all $x\in X$, with a constant $C>0$, not depending on $x\in X$, in the case $s=1$. Obviously, $A_1\subset A_s, \
1<s<\infty.$
As is known, see [@72b], [@381a], the weighted boundedness $${\int\limits}\limits_{X}(\mathcal{M}f(x))^s w(x) d\mu(x) \le C {\int\limits}\limits_{X}|f(x)|^s w(x) d\mu(x),$$ holds, if and only if $w\in A_s$.
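A standard example, recalled here for orientation, is the power weight on $X=\mathbb{R}^n$ with the Lebesgue measure: for a fixed point $x_0\in\mathbb{R}^n$ one has $$|x-x_0|^{\beta}\in A_s(\mathbb{R}^n) \quad \Longleftrightarrow\quad -n<{\beta}<n(s-1), \qquad 1<s<\infty,$$ which already indicates the role played by the local dimension of the space in the admissible range of the exponent ${\beta}$.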
Let ${\Omega}$ be an open set in $X$.
\[def1.1chch\] By $\mathcal{P}({\Omega})$ we denote the class of $\mu$-measurable functions on ${\Omega}$, such that $$\label{1.1}
1<p_-\le p_+<\infty,$$ where $ p_-=p_-({\Omega})={\operatornamewithlimits{ess\,inf}}\limits_{x\in{\Omega}} p(x) \quad \textrm{and} \quad p_+=p_+({\Omega})={\operatornamewithlimits{ess\,sup}}\limits_{x\in{\Omega}}
p(x). $
\[def1.1ch\] By $L_\varrho^{p(\cdot)}({\Omega})$ we denote the weighted Banach function space of $\mu$-measurable functions $f: {\Omega}\to
\mathbb{R}_1^+$, such that $$\label{1.2}
\|f\|_{L^{p(\cdot)}_\varrho}:=\|\varrho
f\|_{p(\cdot)}=\inf\left\{{\lambda}>0: {\int\limits}_{\Omega}\left|\frac{\varrho(x)f(x)}{{\lambda}}\right|^{p(x)} \;d\mu(x)\le
1\right\}<\infty .$$
\[def1.1\] *We say that a weight $\varrho$ belongs to the class $\mathfrak{A}_{p(\cdot)}({\Omega})$, if the maximal operator $\mathcal{M}$ is bounded in the space $L_\varrho^{p(\cdot)}({\Omega}).$*
\[def1.1\] *A function $p:{\Omega}\to \mathbb{R}^1$ is said to belong to the class $WL({\Omega})$ (weak Lipschitz), if* $$\label{1.3}
|p(x)-p(y)|\leq \frac{A}{\ln\frac{1}{d(x,y)}}\,, \;\; d(x,y)\leq
\frac{1}{2}, \;\; x,\,y\in {\Omega},$$ *where $A>0$ does not depend on $x$ and $y$.*
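We note, as a simple sufficient condition, that every exponent $p$ which is Hölder continuous on ${\Omega}$ belongs to $WL({\Omega})$: if $|p(x)-p(y)|\le L\,[d(x,y)]^{\lambda}$ with some ${\lambda}>0$, then for $d(x,y)\le\frac{1}{2}$ $$|p(x)-p(y)|\le L\,[d(x,y)]^{\lambda}\ln\frac{1}{d(x,y)}\cdot \frac{1}{\ln\frac{1}{d(x,y)}}\le \frac{A}{\ln\frac{1}{d(x,y)}},$$ since the function $t^{\lambda}\ln\frac{1}{t}$ is bounded on $\left(0,\frac{1}{2}\right]$.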
The notions of the lower and upper local dimension of $X$ at a point $x$, introduced as $$\underline{dim}\,X(x)= \lim\limits_{\overline{r\to 0}} \frac{\ln \mu B(x,r)}{\ln r}, \quad
\overline{dim}\, X(x)= \overline{\lim\limits_{r\to 0}}\, \frac{\ln \mu B(x,r)}{\ln r},$$ are known, see e.g. [@160zzzz]. We will use different notions of local lower and upper dimensions, inspired by the notion of the so-called index numbers $m(w), M(w)$ of almost monotonic functions $w$; see their definition in (\[mÌ\]). These indices, studied in [@539], [@539d], [@539e], are versions of the Matuszewska-Orlicz index numbers used in the theory of Orlicz spaces, see [@382a], [@382b]. The idea to introduce local dimensions in terms of these indices by the following definition was borrowed from the papers [@539j], [@539jnew].
\[defN\] The numbers $$\label{ibas}
\underline{\mathfrak{dim}}(X;x) =\sup_{r>1}\frac{\ln \
\left(\lim\limits_{\overline{h\to 0}} \frac{\mu B(x,rh)}{\mu
B(x,h)} \right)}{\ln \ r}\ , \quad
\overline{\mathfrak{dim}}(X;x) =\inf_{r>1}\frac{\ln \
\left(\overline{\lim\limits_{h\to 0}}\frac{\mu B(x,rh)}{\mu
B(x,h)}
\right)}{\ln \ r}$$ will be referred to as local lower and upper dimensions.
Observe that the “dimension” $\underline{\mathfrak{dim}}(X;x)$ may be also rewritten in terms of the upper limit as well: $$\label{ggbvcdshnew}
\underline{\mathfrak{dim}}(X;x) =\sup_{0<r<1}\frac{\ln \ \left(\overline{\lim\limits_{h\to 0}}
\frac{\mu B(x,rh)}{\mu B (x,h)} \right)}{\ln \ r}.$$ Since the function $$\label{dsgh}
\mu_0(x,r) = \overline{\lim\limits_{h\to 0}} \frac{\mu
B(x,rh)}{\mu B(x,h)}$$ is semimultiplicative in $r$, that is, $\mu_0(x,r_1r_2)\le
\mu_0(x,r_1)\mu_0(x,r_2)$, by properties of such functions ([@342], p. 75; [@342a]) we obtain that $\underline{\mathfrak{dim}}(X;x)\le
\overline{\mathfrak{dim}}(X;x)$ and we may rewrite the dimensions $\underline{\mathfrak{dim}}(X;x)$ and $\overline{\mathfrak{dim}}(X;x)$ also in the form $$\label{ksajlkJ}
\underline{\mathfrak{dim}}(X;x) = \lim\limits_{r\to
0}\frac{\ln \mu_0(x,r)}{\ln \ r}, \quad \overline{\mathfrak{dim}}(X;x) = \lim\limits_{r\to
\infty}\frac{\ln \mu_0(x,r)}{\ln \ r}.$$
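For instance, if $X=\mathbb{R}^n$ with the Euclidean metric and the Lebesgue measure, then $\mu B(x,r)=c_n r^n$, so that $\mu_0(x,r)=r^n$ and $$\underline{\mathfrak{dim}}(\mathbb{R}^n;x)=\overline{\mathfrak{dim}}(\mathbb{R}^n;x)=n$$ at every point $x$; the same value $n$ is obtained for the dimensions at infinity introduced below.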
\[rteyriu\] The introduction of the dimensions $\underline{\mathfrak{dim}}(X;x)$ and $\overline{\mathfrak{dim}}(X;x)$ precisely in the form (\[ggbvcdshnew\])-(\[ksajlkJ\]) is motivated by the fact that they arise naturally when dealing with the Muckenhoupt condition for radial type weights on metric measure spaces. It seems that, in general, they may not coincide with the dimensions $\underline{dim}\,X(x), \overline{dim}\,
X(x)$. Presumably, for different purposes, different notions of dimension may be useful.
We will mainly need the lower bound for the lower dimensions $\underline{\mathfrak{dim}}(X;x)$ on an open set ${\Omega}\subseteq X$: $$\underline{\mathfrak{dim}}({\Omega}):={\operatornamewithlimits{ess\,inf}}\limits_{x\in {\Omega}}\underline{\mathfrak{dim}}(X;x).$$
In the case where ${\Omega}$ is unbounded, we will also need similar dimensions connected, in a sense, with the influence of infinity. Let $$\label{dsghbuyt}
\mu_\infty(x,r) = \overline{\lim\limits_{h\to \infty}} \frac{\mu
B(x,rh)}{\mu B(x,h)}.$$ We introduce the numbers $$\label{ksajlkJmnxt}
\underline{\mathfrak{dim}}_\infty(X;x) = \lim\limits_{r\to
0}\frac{\ln \mu_\infty(x,r)}{\ln \ r}, \quad \overline{\mathfrak{dim}}_\infty(X;x) =
\lim\limits_{r\to \infty}\frac{\ln \mu_\infty(x,r)}{\ln \ r}$$ and their bounds $$\label{i}
\underline{\mathfrak{dim}}_\infty({\Omega})= {\operatornamewithlimits{ess\,inf}}\limits_{x\in {\Omega}} \underline{\mathfrak{dim}}_\infty (X;x), \quad
\overline{\mathfrak{dim}}_\infty({\Omega})= {\operatornamewithlimits{ess\,sup}}\limits_{x\in {\Omega}} \overline{\mathfrak{dim}}_\infty(X;x).$$
It is not hard to see that $ \underline{\mathfrak{dim}}({\Omega}),
\underline{\mathfrak{dim}}_\infty({\Omega}),$ and $
\overline{\mathfrak{dim}}_\infty({\Omega})$ are non-negative. In the sequel, when considering these bounds of dimensions we always assume that $\underline{\mathfrak{dim}}({\Omega}), $ $
\underline{\mathfrak{dim}}_\infty({\Omega}), \
\overline{\mathfrak{dim}}_\infty ({\Omega}) \in (0,\infty)$.
Classes of the weight functions {#subsec2.}
-------------------------------
We consider, in particular, the weights $$\label{2.1}
\varrho(x)= [1+d(x_0,x)]^{{\beta}_\infty}\prod\limits_{k=1}^N
[d(x,x_k)]^{{\beta}_k}, \ \ \ x_k\in X, \ k=0,1,\dots,N,$$ where ${\beta}_\infty=0$ in the case where $X$ is bounded. Let $\Pi=\{x_0,x_1,\dots, x_N\}$ be a given finite set of points in $X$. We take $d(x,y)=|x-y|$ in all the cases where $X=\mathbb{R}^n$.
\[deff2.1.\] *A weight function of form (\[2.1\]) is said to belong to the class $V_{p(\cdot)}({\Omega},\Pi)$, where $p(\cdot)\in C({\Omega})$, if* $$\label{2.2}
-\frac{\underline{\mathfrak{dim}}({\Omega})}{p(x_k)}<{\beta}_k<\frac{\underline{\mathfrak{dim}}({\Omega})}{p^\prime(x_k)}$$ *and, in the case ${\Omega}$ is infinite,* $$\label{2.3}
-\frac{\underline{\mathfrak{dim}}_\infty({\Omega})}{p_\infty}<{\beta}_\infty+
\sum\limits_{k=1}^N{\beta}_k <\underline{\mathfrak{dim}}_\infty({\Omega})
-\frac{\overline{\mathfrak{dim}}_\infty({\Omega})}{p_\infty}.$$
Note that when the metric space $X$ has a constant dimension $s$ in the sense that $$c_1 r^s\le \mu B(x,r)\le c_2 r^s$$ with the constants $c_1>0$ and $c_2>0$ not depending on $x\in X$ and $r>0$, the inequalities in (\[2.2\]), (\[2.3\]) and (\[2.6\]) turn into $$\label{2.2prime}
-\frac{s}{p(x_k)}<{\beta}_k<\frac{s}{p^\prime(x_k)},
\quad -\frac{s}{p_\infty}<{\beta}_\infty+\sum\limits_{k=1}^N{\beta}_k
<\frac{s}{p^\prime_\infty}$$ and $$\label{2.6prime}
-\frac{s}{p(x_k)}
< m(w)\le M(w) < \frac{s}{p^\prime(x_k)}\, \ , \ \ k=1,2,...,N,$$ respectively.
In fact, we may admit a more general class of weights $$\label{2.4}
\varrho(x)=w_0[1+d(x_0,x)]\prod_{k=1}^N w_k[d(x,x_k)]$$ with “radial” weights, where the functions $w_0$ and $w_k,
k=1,...,N,$ belong to a class of Zygmund-Bary-Stechkin type, which admits an oscillation between two power functions with different exponents.
By $U=U([0,\ell])$ we denote the class of functions $u\in C([0,\ell]), \ 0<\ell\le \infty,$ such that $ u(0)=0, u(t)>0$ for $t>0$ and $u$ is an almost increasing function on $[0,\ell]$. (We recall that a function $u$ is called *almost increasing* on $[0,\ell]$, if there exists a constant $C (\ge 1)$ such that $u(t_1)\le C u(t_2)$ for all $0\le t_1\le t_2 \le \ell$). By $\widetilde{U}$ we denote the class of functions $u$ such that $t^au(t)\in U$ for some $a\in\mathbb{R}^1$.
\[def2.2.\] ([@46]) *A function $v$ is said to belong to the Zygmund-Bary-Stechkin class $\Phi^0_{\delta}$, if $$\int_0^h\frac{v(t)}{t}dt
\le cv(h) \ \ \textit{and} \ \
\int_h^\ell\frac{v(t)}{t^{1+{\delta}}}dt \le c\frac{v(h)}{h^{\delta}},$$ where $c=c(v)>0$ does not depend on $h\in (0,\ell]$.*
It is known that $v\in \Phi^0_{\delta}$, if and only if $0<m(v)\le
M(v)<{\delta}$, where $$\label{mÌ}
m(w)=\sup_{t>1}\frac{\ln\left(\lim\limits_{\overline{h\to 0}} \frac{w(ht)}{w(h)}\right)}{\ln t} \ \ \ \
\textrm{and} \ \ \ \ M(w)=\inf_{t>1}\frac{\ln\left(\overline{\lim\limits_{h\to 0}} \frac{w(ht)}{w(h)}\right)}{\ln
t}$$ (see [@539], [@539d], [@270a]).
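In the simplest case of a power function $w(t)=t^{\beta}$ one has $\frac{w(ht)}{w(h)}=t^{\beta}$, so that $$m(w)=M(w)={\beta},$$ and accordingly $t^{\beta}\in \Phi^0_{\delta}$ if and only if $0<{\beta}<{\delta}$.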
For functions $ w$ defined in the neighborhood of infinity and such that $w\left(\frac{1}{r}\right)\in \widetilde{U}([0, {\delta}]) $ for some ${\delta}>0$, we introduce also $$\label{msusutxe}
m_\infty(w) =\sup_{x>1}\frac{\ln \
\left[\underline{\lim}_{h\to \infty} \frac{w(xh)}{w(h)}
\right]}{\ln \ x}\ , \ \ M_\infty(w) =\inf_{x>1}\frac{\ln \
\left[\overline{\lim}_{h\to \infty} \frac{w(xh)}{w(h)}
\right]}{\ln \ x}.$$
Generalizing Definition \[deff2.1.\], we introduce also the following notion.
\[def2.3.\] *A weight function $\varrho$ of form (\[2.4\]) is said to belong to the class $V^{osc}_{p(\cdot)}({\Omega},\Pi)$, where $p(\cdot)\in C({\Omega})$, if $$\label{2.6}
w_k(r)\in \widetilde{U}([0,\ell]), \ell={\mbox{\,\rm diam\,}}{\Omega}\quad
\textrm{and}\ \ -\frac{\underline{\mathfrak{dim}}({\Omega})}{p(x_k)} <
m(w_k)\le M(w_k) <
\frac{\underline{\mathfrak{dim}}({\Omega})}{p^\prime(x_k)} ,$$ $ k=1,2,...,N,$ and (in the case ${\Omega}$ is infinite) $$w_0\left(\frac{\ell^2}{r}\right)\in \widetilde{U}([0,\ell])$$ and $$\label{f27dco}
-\frac{\underline{\mathfrak{dim}}_\infty({\Omega})}{p_\infty}<\sum\limits_{k=0}^N m_\infty(w_k)\le
\sum\limits_{k=0}^N M_\infty(w_k)
<\frac{\underline{\mathfrak{dim}}_\infty({\Omega})}{p^\prime_\infty}-\Delta_{p_\infty},$$ where $\Delta_{p_\infty}=\frac{\overline{\mathfrak{dim}}_\infty({\Omega})-\underline{\mathfrak{dim}}_\infty({\Omega})}{p_\infty}.$*
Observe that in the case ${\Omega}=X=\mathbb{R}^n$ conditions (\[2.6\]) and (\[f27dco\]) take the form $$\label{2.6bvc}
w_k(r)\in \widetilde{U}(\mathbf{R}^1_+): =\left\{w: \ w\left(r\right), w\left(\frac{1}{r}\right)\in
\widetilde{U}([0,1])\right\}$$ and $$\label{f27dcobvc}
-\frac{n}{p(x_k)} < m(w_k)\le M(w_k) < \frac{n}{p^\prime(x_k)}
,\quad -\frac{n}{p_\infty}<\sum\limits_{k=0}^N m_\infty(w_k)\le \sum\limits_{k=0}^N M_\infty(w_k)
<\frac{n}{p^\prime_\infty}.$$
\[rem4.3.\] *For every $p_0\in (1,p_-)$ there hold the implications* $$\varrho\in V_{p(\cdot)}({\Omega},\Pi) \ \Longrightarrow \ \varrho^{-p_0}\in
V_{(\widetilde{p})^\prime(\cdot)}({\Omega},\Pi)$$ *and* $$\varrho\in V^{osc}_{p(\cdot)}({\Omega},\Pi) \ \Longrightarrow \ \varrho^{-p_0}\in
V^{osc}_{(\widetilde{p})^\prime(\cdot)}({\Omega},\Pi),$$ *where $\widetilde{p}(x)=\frac{p(x)}{p_0}$.*
The boundedness of the Hardy-Littlewood maximal operator on metric spaces with doubling measure, in weighted Lebesgue spaces with variable exponent {#subsec3.}
---------------------------------------------------------------------------------------------------------------------------------------------------
The following statements are valid.
\[th3.1.\] *Let $X$ be a metric space with doubling measure and let ${\Omega}$ be bounded. If $p\in \mathcal{P}({\Omega})\cap WL({\Omega})$ and $\varrho\in
V_{p(\cdot)}^{osc}({\Omega},\Pi)$, then $\mathcal{M}$ is bounded in the space $L_\varrho^{p(\cdot)}({\Omega})$.*
\[th3.2.\] *Let $X$ be a metric space with doubling measure and let ${\Omega}$ be unbounded. Let $p\in \mathcal{P}({\Omega})\cap WL({\Omega})$ and let there exist $R>0$ such that $p(x)\equiv p_\infty=const $ for $x\in {\Omega}\backslash{B(x_0,R)}$. If $\varrho\in
V^{osc}_{p(\cdot)}({\Omega},\Pi)$, then $\mathcal{M}$ is bounded in the space $L_\varrho^{p(\cdot)}({\Omega})$.*
The Euclidean version of Theorems \[th3.1.\] and \[th3.2.\] was proved in [@106] in the non-weighted case and in [@317c], [@JFSA] in the weighted case; in [@JFSA] the corresponding versions of Theorems \[th3.1.\] and \[th3.2.\] for the maximal operator on Carleson curves (a typical example of a metric measure space with constant dimension) were also proved. The proof of Theorems \[th3.1.\] and \[th3.2.\] in the general case is in the main similar, being based on the approaches used in the proofs for the case of Carleson curves.
\[erz\] Let ${\Omega}$ be a bounded open set in a doubling measure metric space $X$, let the exponent $p(x)$ satisfy conditions (\[1.1\]), (\[1.3\]). Then the operator $\mathcal{M}$ is bounded in $L^{p(\cdot)}_\varrho({\Omega})$, if $$[\varrho(x)]^{p(x)}\in
A_{p_-}({\Omega}).$$
We refer to [@newmetric] for Theorem \[th3.2.\]; its detailed proof for the case where $X$ is a Carleson curve is given in [@JFSA], the proof for a doubling measure metric space being in fact the same.
Extrapolation theorem on metric measure spaces {#subs4.}
==============================================
In the sequel $\mathcal{F}=\mathcal{F}({\Omega})$ denotes a family of ordered pairs $(f,g)$ of non-negative $\mu$-measurable functions $f,g$ defined on an open set ${\Omega}\subset X$. When saying that an inequality of type (\[new1\]) holds for all pairs $(f,g)\in \mathcal{F}$ and weights $w\in A_1$, we always mean that it is valid for all the pairs for which the left-hand side is finite, and that the constant $c$ depends only on $p_0,q_0$ and the $A_1$-constant of the weight.
In what follows, by $p_0$ and $q_0$ we denote positive numbers such that $$\label{vgschi}
0<p_0\le q_0<\infty, \quad p_0<p_- \quad \textrm{and} \quad
\frac{1}{p_0}- \frac{1}{p_+}<\frac{1}{q_0}$$ and use the notation $$\label{nfse3a}
\widetilde{p}(x)=\frac{p(x)}{p_0}, \quad
\widetilde{q}(x)=\frac{q(x)}{q_0}.$$
\[nond\] The extrapolation Theorem \[th4.1.\] with variable exponents in the non-weighted case $\varrho(x)\equiv 1$ and in the Euclidean setting was proved in [@101zb]. For extrapolation theorems in the case of constant exponents we refer to [@522b], [@HMS].
Observe that the measure $\mu$ in Theorem \[th4.1.\] is not assumed to be doubling.
\[th4.1.\] Let $X$ be a metric measure space and ${\Omega}$ an open set in $X$. Assume that for some $p_0$ and $q_0$ satisfying conditions (\[vgschi\]), and for every weight $w\in A_1({\Omega})$, there holds the inequality $$\label{new1}
\left({\int\limits}\limits_{{\Omega}}f^{q_0}(x)w(x)d\mu(x)\right)^\frac{1}{q_0}\le
c_0
\left({\int\limits}\limits_{{\Omega}}g^{p_0}(x)[w(x)]^\frac{p_0}{q_0}d\mu(x)\right)^\frac{1}{p_0}$$ for all $f,g$ in a given family $\mathcal{F}$. Let the variable exponent $q(x)$ be defined by $$\label{s9x34}
\frac{1}{q(x)}=
\frac{1}{p(x)}-\left(\frac{1}{p_0}-\frac{1}{q_0}\right),$$ let the exponent $p(x)$ and the weight $\varrho(x)$ satisfy the conditions $$\label{new1bcxc54esa}
p \in \mathcal{P}({\Omega}) \quad \textrm{and}\ \quad
\varrho^{-q_0} \in
\mathfrak{A}_{(\widetilde{q})^\prime}({\Omega}).$$ Then for all $(f,g)\in\mathcal{F}$ with $f\in
L_\varrho^{p(\cdot)}({\Omega})$ the inequality $$\label{new2}
\|f\|_{L_\varrho^{q(\cdot)}} \le C
\|g\|_{L_\varrho^{p(\cdot)}}$$ is valid with a constant $C>0$, not depending on $f$ and $g$.
By the Riesz theorem, valid for the spaces with variable exponent in the case $1<p_-\le p_+<\infty$ (see [@332], [@575a]), we have $$\|f\|_{L_\varrho^{q(\cdot)}}^{q_0}= \|f^{q_0}\varrho^{q_0}\|_{L^{\widetilde{q}(\cdot)}}\le
\sup {\int\limits}_{\Omega}f^{q_0}(x)h(x)d\mu(x),$$ where we assume that $f$ is non-negative and $\sup$ is taken with respect to all non-negative $h$ such that $\|h\varrho^{-q_0}\|_{L^{(\widetilde{q})^\prime(\cdot)}}\le 1$. We fix any such function $h$. Let us show that $$\label{4.2}
{\int\limits}\limits_{{\Omega}}f^{q_0}(x)h(x)d\mu(x)\le C
\|g\varrho\|^{q_0}_{L^{q(\cdot)}}$$ for an arbitrary pair $(f,g)$ from the given family $\mathcal{F}$ with a constant $C>0$, not depending on $h, f$ and $g$. By the assumption $\varrho^{-q_0} \in
\mathfrak{A}_{(\widetilde{q})^\prime}({\Omega})$ we have $$\label{4.3}
\|\varrho^{-q_0}\mathcal{M}{\varphi}\|_{L^{\widetilde{q}^\prime(\cdot)}({\Omega})}\le
C_0 \|\varrho^{-q_0}{\varphi}\|_{L^{\widetilde{q}^\prime(\cdot)}({\Omega})},$$ where the constant $C_0>0$ does not depend on ${\varphi}$.
We make use of the following construction which is due to Rubio de Francia [@522b] $$\label{4.4}
S{\varphi}(x)=\sum\limits_{k=0}^\infty (2C_0)^{-k}\mathcal{M}^k{\varphi}(x),$$ where $\mathcal{M}^k$ is the $k$-iterated maximal operator and $C_0$ is the constant from (\[4.3\]) (one may take $C_0\ge
1$). The following statements are obvious:
1\) ${\varphi}(x) \le S{\varphi}(x), \ \ x\in{\Omega}$ for any non-negative function ${\varphi}$; $$\label{4.5}
\ \ \ \ \ \ 2) \ \ \ \hspace{24mm}
\|\varrho^{-q_0}S{\varphi}\|_{L^{(\widetilde{q})^\prime}({\Omega})} \le 2
\|\varrho^{-q_0}{\varphi}\|_{L^{(\widetilde{q})^\prime}({\Omega})},
\hspace{36mm}$$
3\) $\mathcal{M}(S{\varphi})(x)\le 2C_0 S{\varphi}(x), \ \ \ \ x\in{\Omega},$
so that $S{\varphi}\in A_1({\Omega})$ with the $A_1$-constant not depending on ${\varphi}$. Therefore $S{\varphi}\in A_{q_0}({\Omega})$.
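For completeness we sketch the verification of properties 2) and 3), which uses only the sublinearity of $\mathcal{M}$ and the iteration of (\[4.3\]): $$\|\varrho^{-q_0}S{\varphi}\|_{L^{(\widetilde{q})^\prime}({\Omega})}\le \sum\limits_{k=0}^\infty (2C_0)^{-k}\|\varrho^{-q_0}\mathcal{M}^k{\varphi}\|_{L^{(\widetilde{q})^\prime}({\Omega})}\le \sum\limits_{k=0}^\infty 2^{-k}\|\varrho^{-q_0}{\varphi}\|_{L^{(\widetilde{q})^\prime}({\Omega})}= 2\|\varrho^{-q_0}{\varphi}\|_{L^{(\widetilde{q})^\prime}({\Omega})}$$ and $$\mathcal{M}(S{\varphi})(x)\le \sum\limits_{k=0}^\infty (2C_0)^{-k}\mathcal{M}^{k+1}{\varphi}(x)= 2C_0\sum\limits_{k=0}^\infty (2C_0)^{-(k+1)}\mathcal{M}^{k+1}{\varphi}(x)\le 2C_0\,S{\varphi}(x).$$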
By 1), for ${\varphi}=h$ we have $$\label{4.6}
{\int\limits}_{\Omega}f^{q_0}(x)h(x)d\mu(x) \le {\int\limits}_{\Omega}f^{q_0}(x)Sh(x)d\mu(x).$$ By the Hölder inequality for variable exponent, property 2) and the condition $f\in L_\varrho^{q(\cdot)}$, we have $${\int\limits}_{\Omega}f^{q_0}(x)Sh(x)d\mu(x) \le k
\|f^{q_0}\varrho^{q_0}\|_{L^{\widetilde{q}(\cdot)}} \cdot
\|\varrho^{-q_0}Sh\|_{L^{(\widetilde{q})^\prime(\cdot)}}$$ $$\le C \|f\varrho\|^{q_0}_{L^{q(\cdot)}} \cdot
\|h\varrho^{-q_0}\|_{L^{(\widetilde{q})^\prime(\cdot)}} \le C \|f\varrho\|_{L^{q(\cdot)}}^{q_0}
<\infty .$$ Consequently, the integral ${\int\limits}_{\Omega}f^{q_0}(x)Sh(x)d\mu(x)$ is finite, which enables us to make use of condition (\[new1\]) with respect to the right-hand side of (\[4.6\]). Since condition (\[new1\]) is assumed to be valid for an arbitrary weight $w\in A_1$, it is in particular valid for $w=Sh$. Therefore, $${\int\limits}_{\Omega}f^{q_0}(x)Sh(x)d\mu(x)\le C \left({\int\limits}_{\Omega}g^{p_0}(x)[Sh(x)]^\frac{p_0}{q_0}
d\mu(x)\right)^\frac{q_0}{p_0}.$$ Applying the Hölder inequality on the right-hand side, we get $${\int\limits}_{\Omega}f^{q_0}(x)Sh(x)d\mu(x)\le C \left(\|g^{p_0}\varrho^{p_0}\|_{L^\frac{p(\cdot)}{p_0}}
\left\|(Sh)^\frac{p_0}{q_0}\varrho^{-p_0}\right\|_{L^{(\widetilde{p})^\prime}}\right)^\frac{q_0}{p_0}.$$ Thus $$\label{4.7}
{\int\limits}_{\Omega}f^{q_0}(x)Sh(x)d\mu(x)\le C \left\|\varrho g\right\|_{L^{p(\cdot)}}^{q_0}
\left\|\varrho^{-p_0}
(Sh)^\frac{p_0}{q_0}\right\|^\frac{q_0}{p_0}_{L^{(\widetilde{p})^\prime(\cdot)}}.$$
From (\[s9x34\]) we easily obtain that $(\widetilde{p})^\prime(x)=\frac{q_0}{p_0}(\widetilde{q})^\prime(x)$ and then $$\left\|\varrho^{-p_0}
(Sh)^\frac{p_0}{q_0}\right\|^\frac{q_0}{p_0}_{L^({\widetilde{p})^\prime(\cdot)}}=
\left\|\varrho^{-q_0}
Sh\right\|_{L^{\widetilde{q}^\prime}(\cdot)}.$$ Consequently, $$\label{4f7}
{\int\limits}_{\Omega}f^{q_0}(x)Sh(x)d\mu(x)\le C \left\|\varrho
g\right\|_{L^{p(\cdot)}}^{q_0} \left\|\varrho^{-q_0}
Sh\right\|_{L^{\widetilde{q}^\prime}(\cdot)}.$$ To prove (\[4.2\]), in view of (\[4f7\]) it suffices to show that $\left\|\varrho^{-q_0}
Sh\right\|_{L^{\widetilde{q}^\prime}(\cdot)}$ may be estimated by a constant not depending on $h$. This follows from (\[4.5\]) and the condition $\|h\varrho^{-q_0}\|_{L^{(\widetilde{q})^\prime(\cdot)}}\le 1$ and proves the theorem.
\[rem1\] It is easy to check that in view of Theorem \[erz\] the condition $$\label{nasiu6f}
[\varrho(y)]^{q_1(y)} \in A_s, \quad \textrm{where} \quad q_1(y)= \frac{q(y)(q_+-q_0)}{q(y)-q_0} \ \textrm{and} \
s= \frac{q_+}{q_0},$$ is sufficient for the validity of the condition $\varrho^{-q_0} \in
\mathfrak{A}_{(\widetilde{q})^\prime}({\Omega})$ of Theorem \[th4.1.\].
By means of Theorems \[th3.1.\] and \[th3.2.\], we obtain the following statement as an immediate consequence of Theorem \[th4.1.\] in which we denote $${\gamma}= \frac{1}{p_0}-\frac{1}{q_0}.$$
\[th4.2.\] Let $X$ be a metric space with doubling measure and ${\Omega}$ an open set in $X$. Let also the following be satisfied\
1) $p\in
\mathcal{P}({\Omega})\cap WL({\Omega})$, and in the case ${\Omega}$ is an unbounded set, let $p(x)\equiv
p_\infty=const$ for $x\in
{\Omega}\backslash B(x_0,R)$ with some $x_0\in {\Omega}$ and $R>0$;\
2) inequality (\[new1\]) holds for some $p_0$ and $q_0$ satisfying the assumptions in (\[vgschi\]), for all pairs $(f,g)$ from some family $\mathcal{F}$ and for every weight $w\in A_1({\Omega})$. Then\
I) inequality (\[new2\]) holds for all pairs $(f,g)$ from the same family $\mathcal{F}$ such that $f\in
L_\varrho^{p(\cdot)}({\Omega})$, and for weights $\varrho$ of the form (\[2.4\]), where $$\label{jui7}
\left({\gamma}-\frac{1}{p(x_k)}\right)\underline{\mathfrak{dim}}({\Omega})
<m(w_k)\le M(w_k)<
\left(\frac{1}{p^\prime(x_k)}-\frac{1}{p^\prime_0}\right)\underline{\mathfrak{dim}}({\Omega})$$ and, in case ${\Omega}$ is unbounded, $$\label{jun70}
{\delta}+
\left({\gamma}-\frac{1}{p_\infty}\right)\underline{\mathfrak{dim}}({\Omega})<\sum\limits_{k=0}^N
m(w_k)\le \sum\limits_{k=0}^N M(w_k)<
\left(\frac{1}{p^\prime_\infty}-\frac{1}{p^\prime_0}\right)\underline{\mathfrak{dim}}({\Omega}),$$ where $${\delta}= \left[\overline{\mathfrak{dim}}_\infty({\Omega})-\underline{\mathfrak{dim}}_\infty({\Omega})\right]
\left(\frac{1}{p_0}- \frac{1}{p_\infty}\right);$$\
II) in case inequality (\[new1\]) holds for all $p_0\in
(1,p_-)$, the term $\frac{1}{p^\prime_0}$ in (\[jui7\]) and (\[jun70\]) may be omitted and ${\delta}$ may be taken in the form ${\delta}=\left[\overline{\mathfrak{dim}}_\infty({\Omega})-\underline{\mathfrak{dim}}_\infty({\Omega})\right]
\left(\frac{1}{p_-}- \frac{1}{p_\infty}\right).$
Application to problems of the boundedness in $ L_\varrho^{p(\cdot)}$ of classical operators of harmonic analysis {#subs5.}
==================================================================================================================
Potential operators and the fractional maximal function {#subspotetials}
----------------------------------------------------
We first apply Theorem \[th4.1.\] to potential operators $$\label{dochnet}
I^{\gamma}_X f(x)= {\int\limits}_X \frac{f(y)\,d\mu(y)}{\mu B (x,d(x,y))^{1-{\gamma}}}$$ where $0<{\gamma}<1$. We assume that $\mu X=\infty$ and the measure $\mu$ satisfies the doubling condition. We also additionally suppose the following conditions to be fulfilled: $$\label{condit1}
\textrm{there exists a point} \ \ x_0\in X \ \ \textrm{such that}\ \ \ \mu(x_0)=0$$ and $$\label{condit2}
\mu(B(x_0,R)\backslash B(x_0,r))>0 \ \ \ \textrm{for all} \ \ \ \ 0<r<R<\infty.$$
The following statement is valid, see for instance [@145a], p. 412.
\[thKokil\] Let $X$ be a metric measure space with doubling measure satisfying conditions (\[condit1\])-(\[condit2\]), $\mu X =\infty$, let $0<{\gamma}<1$, $1<p_0<\frac{1}{{\gamma}}$ and $
\frac{1}{q_0}=\frac{1}{p_0}-{\gamma}$. The operator $I^{\gamma}_X$ admits the estimate $$\label{admits}
\left({\int\limits}_X |v(x)I_X^{\gamma}f(x)|^{q_0}d\mu\right)^\frac{1}{q_0}\le C\left({\int\limits}_X |
v(x)f(x)|^{p_0}d\mu\right)^\frac{1}{p_0},$$ if the weight $v(x)$ satisfies the condition $$\label{muck-wheed}
\sup\limits_B\left(\frac{1}{\mu B}{\int\limits}_Bv^{q_0}(x)d\mu\right)^\frac{1}{q_0} \left(\frac{1}{\mu
B}{\int\limits}_B v^{-p^\prime_0}(x)d\mu\right)^\frac{1}{p^\prime_0}<\infty$$ where $B$ stands for a ball in $X$.
By means of Theorem \[thKokil\] and extrapolation Theorem \[th4.1.\] we arrive at the following statement.
\[thpoten\] Let $X$ satisfy the assumptions of Theorem \[thKokil\], let $p\in \mathcal{P}$, $0<{\gamma}<1$ and $p_+<\frac{1}{{\gamma}}$. The weighted estimate $$\label{Sobolev1}
\left\|I^{\gamma}_X f\right\|_{L^{q(\cdot)}_\rho} \le C \left\|f\right\|_{L^{p(\cdot)}_\rho}$$ with the limiting exponent $q(\cdot)$ defined by $\frac{1}{q(x)}=\frac{1}{p(x)}-{\gamma}$, holds if $$\label{cond}
\varrho^{-q_0} \in
\mathfrak{A}_{\left(\frac{q(\cdot)}{q_0}\right)^\prime}(X)$$ under any choice of $q_0>\frac{p_-}{1-{\gamma}p_-}$.
By Theorem \[thKokil\], inequality (\[admits\]) holds under condition (\[muck-wheed\]) for every $1<p_0<\frac{1}{{\gamma}}$ and $\frac{1}{q_0}=\frac{1}{p_0}-{\gamma}$. Condition (\[muck-wheed\]) is satisfied if $v^{q_0}\in A_1$. Consequently, inequality (\[new1\]) with $f=I^{\gamma}_X g$ holds for every $w\in A_1$ (taking $v=w^{1/q_0}$). Then (\[Sobolev1\]) follows from Theorem \[th4.1.\].
From Theorem \[thpoten\] we derive the following corollary for the Riesz potential operators $$\label{Riesz}
I^{\alpha}f(x)= {\int\limits}_{{\mathbb{R}^n}} \frac{f(y)\, dy}{|x-y|^{n-{\alpha}}}.$$
\[Riesz\] Let $p\in \mathcal{P}$, let $0<{\alpha}<n$ and $p_+<\frac{n}{{\alpha}}$. The weighted Sobolev theorem $$\label{Sobolev}
\left\|I^{\alpha}f\right\|_{L^{q(\cdot)}_\rho} \le C
\left\|f\right\|_{L^{p(\cdot)}_\rho}$$ with the limiting exponent $q(\cdot)$ defined by $\frac{1}{q(x)}=\frac{1}{p(x)}-\frac{{\alpha}}{n}$, holds if $$\label{cond}
\varrho^{-q_0} \in
\mathfrak{A}_{\left(\frac{q(\cdot)}{q_0}\right)^\prime}(\mathbb{R}^n)$$ under any choice of $q_0>\frac{np_-}{n-{\alpha}p_-}$.
\[remho\] Since Theorems \[th3.1.\] and \[th3.2.\] provide sufficient conditions for the weight $\varrho$ to satisfy assumption (\[cond\]), we could write down the corresponding statements on the validity of (\[Sobolev\]) in terms of the weights used in Theorems \[th3.1.\] and \[th3.2.\]. In the sequel we give results of such a kind for other operators. For potential operators in the case ${\Omega}={\mathbb{R}^n}$ we refer to [@584a] and [@539h], where for power weights of the class $V_{p(\cdot)}({\mathbb{R}^n},\Pi)$ and for radial oscillating weights of the class $V^{osc}_{p(\cdot)}({\mathbb{R}^n},\Pi)$, respectively, estimates (\[Sobolev\]) were obtained under assumptions more general than those that would be imposed by the use of Theorem \[th3.2.\].
Fourier multipliers {#subs5.1.}
-------------------
A measurable function $m: \mathbb{R}^n\to \mathbb{R}^1$ is said to be a Fourier multiplier in the space $L_\varrho^{p(\cdot)}(\mathbb{R}^n)$, if the operator $T_m$, defined on the Schwartz space $S(\mathbb{R}^n)$ by $$\widehat{T_m f} = m \widehat{f},$$ admits an extension to a bounded operator in $L_\varrho^{p(\cdot)}(\mathbb{R}^n)$.
We give below a generalization of the classical Mikhlin theorem ([@399a], see also [@400]) on Fourier multipliers to the case of Lebesgue spaces with variable exponent.
\[th5.1.\] Let a function $m(x)$ be continuous everywhere in $\mathbb{R}^n$, except possibly at the origin, have the mixed distributional derivative $\frac{\partial^n m}{\partial x_1x_2\cdots x_n}$ and the derivatives $D^{\alpha}m
=\frac{\partial^{|{\alpha}|} m}{\partial x_1^{{\alpha}_1}x_2^{{\alpha}_2}\cdots x_n^{{\alpha}_n}}$, ${\alpha}=({\alpha}_1,\dots,{\alpha}_n)$, of orders $|{\alpha}|={\alpha}_1+\cdots+{\alpha}_n\le n-1$ continuous outside the origin, and suppose that $$|x|^{|{\alpha}|} |D^{\alpha}m(x)|\le C, \quad |{\alpha}|\le n-1,$$ where the constant $C>0$ does not depend on $x$. Then under conditions (\[new1bcxc54esa\]) and (\[vgschi\]) with ${\Omega}=\mathbb{R}^n$, $m$ is a Fourier multiplier in $L^{p(\cdot)}_\varrho(\mathbb{R}^n)$.
Theorem \[th5.1.\] follows from Theorem \[th4.1.\] under the choice ${\Omega}=X=\mathbb{R}^n$ and $\mathcal{F}=\{(T_mg,g): g\in S(\mathbb{R}^n)\}$, if we take into account that in the case of constant $p_0 >1$ and weight $\varrho \in A_{p_0} (\supset A_1)$, a function $m$ satisfying the assumptions of Theorem \[th5.1.\] is a Fourier multiplier in $L_\varrho^{p_0}(\mathbb{R}^n)$. The latter was proved in [@349b], see also [@316zz].
\[cor\] Let $m$ satisfy the assumptions of Theorem \[th5.1.\] and let the exponent $p$ and the weight $\varrho$ satisfy the assumptions\
i) $p\in \mathcal{P}(\mathbb{R}^n)\cap WL(\mathbb{R}^n)$ and $p(x)=p_\infty=const$ for $|x|\ge R$ with some $R>0$,\
ii) $\varrho\in V^{osc}_{p(\cdot)}(\mathbb{R}^n,\Pi), \Pi
=\{x_1,...x_N\}\subset
\mathbb{R}^n$.\
Then $m$ is a Fourier multiplier in $L^{p(\cdot)}_\varrho(\mathbb{R}^n)$. In particular, assumption ii) holds for weights $\varrho$ of form $$\label{doch}
\varrho(x)= (1+|x|)^{{\beta}_\infty}\prod\limits_{k=1}^N
|x-x_k|^{{\beta}_k}, \ \ \ x_k\in \mathbb{R}^n,$$ where $$\label{5.1}
-\frac{n}{p(x_k)}<{\beta}_k< \frac{n}{p^\prime(x_k)}, \quad
k=1,2,...,N,$$ $$\label{5.2}
-\frac{n}{p_\infty}<{\beta}_\infty + \sum\limits_{k=1}^N{\beta}_k<
\frac{n}{p^\prime_\infty}.$$
It suffices to observe that the conditions on the weight $\varrho$ imposed in Theorem \[th5.1.\] are fulfilled for $\varrho\in
V^{osc}_{p(\cdot)}(\mathbb{R}^n,\Pi)$, which follows from Remark \[rem4.3.\] and Theorem \[th3.2.\]. In the case of power weights, the conditions defining the class $V^{osc}_{p(\cdot)}(\mathbb{R}^n,\Pi)$ turn into (\[5.1\])-(\[5.2\]).
The statement of Theorem \[th5.1.\] also holds in the following more general form of Mikhlin-Hörmander type.
\[th5.2.\] *Let a function $m:\mathbb{R}^n\to \mathbb{R}^1$ have distributional derivatives up to order $\ell
>\frac{n}{p_-}$ satisfying the condition* $$\sup\limits_{R>0}\left(R^{s|{\alpha}|-n}{\int\limits}_{{R<|x|<2R}}|D^{\alpha}m(x)|^sdx\right)^\frac{1}{s}<\infty$$ *for some $s, 1<s\le 2$ and all ${\alpha}$ with $|{\alpha}|\le
\ell.$ If conditions (\[new1bcxc54esa\]), (\[vgschi\]) with ${\Omega}=X=\mathbb{R}^n$ on $p$ and $\varrho$ are satisfied, then $m$ is a Fourier multiplier in $L^{p(\cdot)}_\varrho(\mathbb{R}^n)$.*
Theorem \[th5.2.\] is similarly derived from Theorem \[th4.1.\], if we take into account that in the case of constant $p_0$ the statement of the theorem for Muckenhoupt weights was proved in [@Kurtznew].
\[cor1\] Let a function $m:\mathbb{R}^n\to \mathbb{R}^1$ satisfy the assumptions of Theorem \[th5.2.\] and let $p$ and $\rho$ satisfy conditions $i)$ and $ii)$ of Corollary \[cor\]. Then $m$ is a Fourier multiplier in $L^{p(\cdot)}_\varrho(\mathbb{R}^n)$.
This follows from Theorem \[th5.2.\], since the conditions on the weight $\varrho$ imposed in Theorem \[th5.1.\] are fulfilled for $\varrho\in
V^{osc}_{p(\cdot)}(\mathbb{R}^n,\Pi)$ by Theorem \[th3.2.\] and Remark \[rem4.3.\].
In the next theorem by $\Delta_j$ we denote the interval of the form $\Delta_j=[2^j,2^{j+1}]$ or $\Delta_j=[-2^{j+1}, - 2^j], \ j\in \mathbb{Z}$.
\[th5.3.\] *Let a function $m:\mathbb{R}^1\to \mathbb{R}^1$ be representable in each interval $\Delta_j$ as* $$m({\lambda})={\int\limits}_{-\infty}^{\lambda}d\mu_{\Delta_j}, \ \quad {\lambda}\in \Delta_j,$$ *where $\mu_{\Delta_j}$ are finite measures such that $\sup\limits_{j} \mathrm{var} \
\mu_{\Delta_j}<\infty$. If conditions (\[new1bcxc54esa\]), (\[vgschi\]) with ${\Omega}=X=\mathbb{R}^1$ on $p$ and $\varrho$ are satisfied, then $m$ is a Fourier multiplier in $L^{p(\cdot)}_\varrho(\mathbb{R}^1)$.*
To derive Theorem \[th5.3.\] from Theorem \[th4.2.\], it suffices to refer to the boundedness of the maximal operator in the space $L^{p(\cdot)}_\varrho(\mathbb{R}^1)$ by Theorem \[th3.2.\] and the fact that in the case of constant $p$ the theorem was proved in [@368a] (for $\varrho\equiv 1$) and [@316zz], [@316zza] (for $\varrho\in A_p$).
\[cor2\] Let $m$ satisfy the assumptions of Theorem \[th5.3.\] and the exponent $p$ and weight $\varrho$ fulfill conditions *i)* and *ii)* of Corollary \[cor\] with $n=1$. Then $m$ is a Fourier multiplier in $L^{p(\cdot)}_\varrho(\mathbb{R}^1)$.
The “off-diagonal” $L^{p(\cdot)}_\varrho\to
L^{q(\cdot)}_\varrho$-version of Theorem \[th5.3.\] in the case $q(x)>p(x)$ is covered by the following theorem.
\[tg13.\] Let $p\in\mathcal{P}(\mathbb{R}^1)\cap WL(\mathbb{R}^1)$ and $p(x)\equiv p_\infty=const$ for large $|x|>R,$ and let a function $m:\mathbb{R}^1\to \mathbb{R}^1$ be representable in each interval $\Delta_j$ as $$m({\lambda})={\int\limits}_{-\infty}^{\lambda}\frac{d\mu_{\Delta_j}(t)}{({\lambda}-t)^{\alpha}}, \
\quad {\lambda}\in \Delta_j,$$ where $ 0<{\alpha}<\frac{1}{p_+}$ and $\mu_{\Delta_j}$ are the same as in Theorem \[th5.3.\]. Then $T_m$ is a bounded operator from $L^{p(\cdot)}_\varrho(\mathbb{R}^1)$ to $L^{q(\cdot)}_\varrho(\mathbb{R}^1)$, where $$\frac{1}{q(x)}=\frac{1}{p(x)}-{\alpha}$$ and $\varrho$ is a weight of form (\[doch\]) whose exponents satisfy the conditions $$\label{5tyrok}
{\alpha}-\frac{1}{p(x_k)}<{\beta}_k< \frac{1}{p^\prime(x_k)}, \quad
k=1,2,...,N, \quad \textrm{and} \quad {\alpha}-\frac{1}{p_\infty}<{\beta}_\infty + \sum\limits_{k=1}^N{\beta}_k<
\frac{1}{p^\prime_\infty}.$$
In [@316zzb] it was proved that the operator $T_m$ is bounded from $L^{p_0}_v({\mathbb{R}^1})$ into $L^{q_0}_v({\mathbb{R}^1})$ for every $p_0\in(1,\infty)$, $0<{\alpha}<\frac{1}{p_0}$, $\frac{1}{q_0}=\frac{1}{p_0}-{\alpha}$, and an arbitrary weight $v$ satisfying the condition $$\label{6tyrok}
\sup\limits_{I}\left(\frac{1}{|I|}{\int\limits}_{I}v^{q_0}(x)dx\right)^\frac{1}{q_0}
\left(\frac{1}{|I|}{\int\limits}_{I}v^{-p^\prime_0}(x)dx\right)^\frac{1}{p^\prime_0}<\infty,$$ where the supremum is taken with respect to all one-dimensional intervals $I$. Condition (\[6tyrok\]) is satisfied if $v^{q_0}\in
A_1$. Hence inequality (\[new1\]) with $f=T_m g$ holds for every $w\in A_1$. The statement of the theorem then follows immediately from Part *II* of Theorem \[th4.2.\], conditions (\[jui7\])-(\[jun70\]) turning into (\[5tyrok\]) since $\underline{\mathfrak{dim}}({\Omega})=\underline{\mathfrak{dim}}_\infty({\Omega})=1$, $m(w_k)=M(w_k)={\beta}_k, k=1,\dots, N$, and $m(w_0)=M(w_0)={\beta}_\infty.$
All the statements in the following subsections are also similar direct consequences of the general statement of Theorem \[th4.2.\] and Theorems \[th3.1.\] and \[th3.2.\] on the maximal operator in the spaces $L^{p(\cdot)}_\varrho$, so that in the sequel for the proofs we only make references to where these statements were proved in the case of constant $p$ and Muckenhoupt weights.
Multipliers of trigonometric Fourier series {#subs5.2.}
-------------------------------------------
With the help of Theorem \[th4.2.\] and known results for constant exponents, we are now able to give a generalization of theorems on Marcinkiewicz multipliers and Littlewood-Paley decompositions for trigonometric Fourier series to the case of weighted spaces with variable exponent.
Let $\mathbb{T}=[-\pi,\pi]$, let $f$ be a $2\pi$-periodic function and $$\label{5.3}
f(x) \sim \frac{a_0}{2} +\sum\limits_{k=1}^\infty (a_k \cos kx + b_k \sin kx).$$
\[th5.4.\] Let a sequence ${\lambda}_k$ satisfy the conditions $$\label{5vhtq}
|{\lambda}_k|\le A \ \quad \quad \textrm{and} \quad \quad
\sum_{k=2^{j-1}}^{{2^j}-1} |{\lambda}_k-{\lambda}_{k+1}|\le A,$$ where $A>0$ does not depend on $k$ and $j$. Suppose that $$\label{nef41qs}
p \in \mathcal{P}(\mathbb{T}) \quad \textrm{and}\ \quad
\varrho^{-p_0} \in \mathfrak{A}_{(\widetilde{p})^\prime}(\mathbb{T}), \quad \textrm{where}
\quad \widetilde{p}(\cdot)=\frac{p(\cdot)}{p_0}$$ with some $p_0\in\left(1,p_-(\mathbb{T})\right)$. Then there exists a function $F\in L^{p(\cdot)}_\varrho(\mathbb{T})$ such that the series $\frac{{\lambda}_0 a_0}{2} +\sum\limits_{k=1}^\infty
{\lambda}_k(a_k \cos kx + b_k \sin kx)$ is the Fourier series of $F$ and $$\|F\|_{L^{p(\cdot)}_\varrho}\le cA \|f\|_{L^{p(\cdot)}_\varrho},$$ where $c>0$ does not depend on $f\in
L^{p(\cdot)}_\varrho(\mathbb{T})$.
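A typical example of a sequence satisfying (\[5vhtq\]) is ${\lambda}_k=k^{i\tau}$ for $k\ge 1$ (with ${\lambda}_0=1$), $\tau\in\mathbb{R}^1$: here $|{\lambda}_k|=1$ and $$\sum_{k=2^{j-1}}^{{2^j}-1} |{\lambda}_k-{\lambda}_{k+1}|\le |\tau|\sum_{k=2^{j-1}}^{{2^j}-1}\ln\left(1+\frac{1}{k}\right)= |\tau|\ln 2,$$ so that one may take $A=\max(1,|\tau|)$.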
\[cor3\] The statement of Theorem \[th5.4.\] remains valid if condition (\[nef41qs\]) is replaced by the assumption, sufficient for (\[nef41qs\]), that $p\in \mathcal{P}(\mathbb{T})\cap WL(\mathbb{T})$ and the weight $\varrho$ has form $$\label{2.4asew}
\varrho(x)=\prod_{k=1}^N w_k(|x-x_k|), \quad x_k\in \mathbb{T}$$ where $$\label{2.6cce}
w_k\in \widetilde{U}([0,2\pi]) \quad \textrm{and}\ \
-\frac{1}{p(x_k)} < m(w_k)\le M(w_k) < \frac{1}{p^\prime(x_k)} .$$
\[th5.5.\] Let $$\label{2bfe}
A_k(x)=a_k \cos kx + b_k \sin kx, \quad k=0,1,2,... , \quad
A_{2^{-1}}=0.$$ Under conditions (\[nef41qs\]) there exist constants $c_1>0$ and $c_2>0$ such that $$\label{5.4ax}
c_1\|f\|_{L^{p(\cdot)}_\varrho}\le
\left\|\left(\sum\limits_{j=0}^\infty\left|\sum\limits_{k=2^{j-1}}^{2^j-1}
A_k(x)\right|^2\right)^\frac{1}{2}\right\|_{L^{p(\cdot)}_\varrho}\le
c_2\|f\|_{L^{p(\cdot)}_\varrho}$$ for all $f\in L^{p(\cdot)}_\varrho(\mathbb{T})$.
In the case of constant $p$ and $\varrho\in A_p$ this theorem was proved in [@349b].
\[cor4\] Inequalities (\[5.4ax\]) hold for $p\in \mathcal{P}(\mathbb{T})\cap WL(\mathbb{T})$ and weights $\varrho$ of form (\[2.4asew\])-(\[2.6cce\]).
Majorants of partial sums of Fourier series {#subs5.3.}
-------------------------------------------
Let $$S_\ast(f)=S_\ast(f,x)=\sup\limits_{k\ge 0} |S_k(f,x)|,$$ where $S_k(f,x)=\sum\limits_{j=0}^kA_j(x)$ is a partial sum of Fourier series (\[5.3\]).
\[th5.6.\] Under conditions (\[nef41qs\]) $$\label{5.4}
\|S_\ast(f)\|_{L^{p(\cdot)}_\varrho}\le c
\|f\|_{L^{p(\cdot)}_\varrho},$$ for all $f\in L^{p(\cdot)}_\varrho(\mathbb{T})$, where the constant $c>0$ does not depend on $f$.
In the case of constant $p$ and $\varrho\in A_p$, Theorem \[th5.6.\] was proved in [@236a].
\[cor5\] Inequality (\[5.4\]) is valid for $p\in \mathcal{P}(\mathbb{T})\cap WL(\mathbb{T})$ and weights $\varrho$ of form (\[2.4asew\])-(\[2.6cce\]).
Zygmund and Cesàro summability for trigonometric series in $L_\varrho^{p(\cdot)}(\mathbb{T})$ {#subs5.u4.}
---------------------------------------------------------------------------------------------
Under the notation of (\[5.3\]) and (\[2bfe\]), we introduce the Zygmund and Cesàro summability means $$Z_n^{(2)}(f,x)= \sum\limits_{k=0}^n \left[1-
\left(\frac{k}{n+1}\right)^2\right]A_k(x)$$ and $${\sigma}_n(f,x)=\frac{1}{n+1}\sum\limits_{k=0}^n S_k(f,x),$$ respectively. By $${\Omega}_{p,\varrho}(f,{\delta})=\sup\limits_{0<h<{\delta}}\|(I-\tau_h)f\|_{L_\varrho^{p(\cdot)}}$$ we denote the continuity modulus of a function $f$ in $L_\varrho^{p(\cdot)}(\mathbb{T})$ with respect to the generalized shift (Steklov mean) $$\tau_h f(x)=\frac{1}{2h}{\int\limits}_{x-h}^{x+h} f(t) dt.$$
\[thnew\] Under conditions (\[nef41qs\]) there hold the estimates $$\label{43kj}
\|f(\cdot)-Z_n^{(2)}(f,\cdot)\|_{L_\varrho^{p(\cdot)}} \le
C{\Omega}_{p,\varrho}\left(f,\frac{1}{n}\right)$$ and $$\label{43kjbc}
\|f(\cdot)-{\sigma}_n(f,\cdot)\|_{L_\varrho^{p(\cdot)}} \le
Cn{\Omega}_{p,\varrho}\left(f,\frac{1}{n}\right).$$
We make use of the estimate $$\label{4vr5}
\|f(\cdot)-S_n(f,\cdot)\|_{L_\varrho^{p(\cdot)}} \le
C{\Omega}_{p,\varrho}\left(f,\frac{1}{n}\right)$$ proved in [@IKS] under assumptions (\[nef41qs\]). For the difference $S_n(f,x)-Z_n^{(2)}(f,x)$ we have $$\label{4nevr5}
\|S_n(f,\cdot)-Z_n^{(2)}(f,\cdot)\|_{L_\varrho^{p(\cdot)}}=
\left\|\sum\limits_{k=1}^n
\left(\frac{k}{n+1}\right)^2A_k(\cdot)\right\|_{L_\varrho^{p(\cdot)}}.$$ Keeping in mind that $$\label{4bv9a}
f(x)-\tau_h f(x)\sim \sum\limits_{k=1}^\infty\left(1-\frac{\sin
kh}{kh}\right)A_k(x),$$ we transform (\[4nevr5\]) to $$\|S_n(f,\cdot)-Z_n^{(2)}(f,\cdot)\|_{L_\varrho^{p(\cdot)}}
= \left\|\sum\limits_{k=1}^n {\lambda}_{k,n} \left(1-\frac{\sin
\frac{k}{n}}{\frac{k}{n}}\right)A_k(\cdot)\right\|_{L_\varrho^{p(\cdot)}}$$ where $${\lambda}_{k,n}=\left\{\begin{array}{cc}\frac{\left(\frac{k}{n+1}\right)^2}
{1-\frac{\sin \frac{k}{n}}{\frac{k}{n}}}, & k\le n
\\
0, & k>n\end{array}\right.$$ It is easy to check that the multiplier ${\lambda}_{k,n}$ satisfies assumptions (\[5vhtq\]) of Theorem \[th5.4.\] with the constant $A$ in (\[5vhtq\]) not depending on $n$. Therefore, by Theorem \[th5.4.\] we get $$\|S_n(f,\cdot)-Z_n^{(2)}(f,\cdot)\|_{L_\varrho^{p(\cdot)}}
\le C\left\|\sum\limits_{k=1}^\infty \left(1-\frac{\sin
\frac{k}{n}}{\frac{k}{n}}\right)A_k(\cdot)\right\|_{L_\varrho^{p(\cdot)}} = C\left\|f-\tau_{1/n}
f\right\|_{L_\varrho^{p(\cdot)}}$$ by (\[4bv9a\]) with $h=\frac{1}{n}$. Then, in view of (\[4vr5\]), estimate (\[43kj\]) follows.
Estimate (\[43kjbc\]) is similarly obtained, with the multiplier ${\lambda}_{k,n}$ of the form $$\left\{\begin{array}{cc}\frac{\frac{k}{n+1}} {n\left(1-\frac{\sin
\frac{k}{n}}{\frac{k}{n}}\right)}, & k\le n
\\
0, & k>n\end{array}\right..$$
\[cor5vc\] Estimates (\[43kj\]),(\[43kjbc\]) are valid for $p\in \mathcal{P}(\mathbb{T})\cap WL(\mathbb{T})$ and weights $\varrho$ of form (\[2.4asew\])-(\[2.6cce\]).
\[643\] When $p>1$ is constant, estimates (\[43kj\]),(\[43kjbc\]) in the non-weighted case were obtained in [@Knew].
Cauchy singular integral {#subs5.4.}
------------------------
We consider the singular integral operator $$S_{\Gamma}f(t)=\frac{1}{\pi i} {\int\limits}_{\Gamma}\frac{f(\tau)\, d\nu(\tau)}{\tau-t},$$ where ${\Gamma}$ is a simple finite Carleson curve and $\nu$ is an arc length.
\[th5.7.\] Let $$\label{nef41}
p \in \mathcal{P}({\Gamma}) \quad \textrm{and}\ \quad
\varrho^{-p_0}\in
\mathfrak{A}_{(\widetilde{p})^\prime}({\Gamma})$$ for some $p_0\in (1,p_-)$, where $ \widetilde{p}(\cdot)=\frac{p(\cdot)}{p_0}$. Then the operator $S_{\Gamma}$ is bounded in the space $L^{p(\cdot)}_\varrho
({\Gamma})$ .
For the case of constant $p$ and $\varrho^p\in A_p({\Gamma})$, Theorem \[th5.7.\] was proved by different methods in [@310a] and [@63]. (As is known, $\varrho^{-p_0}\in
\mathfrak{A}_{(\widetilde{p})^\prime}({\Gamma})\Longleftrightarrow \varrho^p\in
A_{\frac{p}{p_0}}({\Gamma})$ for an arbitrary Carleson curve in the case of constant $p$, see [@310a] and [@63], so that the conditions $\varrho^{-p_0}\in
\mathfrak{A}_{(\widetilde{p})^\prime}({\Gamma})$ and $\varrho^p\in A_p({\Gamma})$ are equivalent in the sense that the former always yields the latter for every $p_0>1$ and the latter yields the former for some $p_0>1$.)
\[cor6\] The operator $S_{\Gamma}$ is bounded in the space $L^{p(\cdot)}_\varrho
({\Gamma})$, if $p\in \mathcal{P}({\Gamma})\cap WL({\Gamma})$ and the weight $\varrho$ has the form
$$\label{2.4axcsew}
\varrho(t)=\prod_{k=1}^N w_k(|t-t_k|), \quad t_k\in {\Gamma},$$
where $$\label{2.6ccece}
w_k\in \widetilde{U}([0,\nu({\Gamma})]) \quad \textrm{and}\ \
-\frac{1}{p(t_k)} < m(w_k)\le M(w_k) < \frac{1}{p^\prime(t_k)} .$$
In the case of power weights, the statement of Corollary \[cor6\] was proved in [@317b], where the case of an infinite Carleson curve was also dealt with.
Multidimensional singular operators {#subs5.5.}
-----------------------------------
We consider a multidimensional singular operator $$\label{5.40}
Tf(x)=\lim\limits_{{\varepsilon}\to 0}{\int\limits}\limits_{y\in{\Omega}: |x-y|>{\varepsilon}}
K(x,y) f(y)\,dy, \quad x\in {\Omega}\subseteq \mathbb{R}^n,$$ where we assume that the singular kernel $K(x,y)$ satisfies the assumptions: $$\label{5.4a}
|K(x,y)|\le C |x-y|^{-n},$$ $$\label{5.4b}
|K(x^\prime,y)-K(x,y)|\le C \frac{|x^\prime
-x|^{\alpha}}{|x-y|^{n+{\alpha}}}, \quad \quad |x^\prime
-x|<\frac{1}{2}|x-y|,$$ $$\label{5.4c}
|K(x,y^\prime)-K(x,y)|\le C \frac{|y^\prime
-y|^{\alpha}}{|x-y|^{n+{\alpha}}}, \quad \quad |y^\prime
-y|<\frac{1}{2}|x-y|,$$ where ${\alpha}$ is an arbitrary positive exponent, $$\label{5.4ca}
\textrm{there exists} \ \ \ \lim\limits_{{\varepsilon}\to 0}{\int\limits}_{y\in{\Omega}:
|x-y|>{\varepsilon}} K(x,y)\, dy,$$
$$\label{5.4d}
\textrm{operator (\ref{5.40}) is bounded in $L^2({\Omega})$. }$$
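A model example is provided, for instance, by the Riesz transform kernels $$K_j(x,y)=\frac{x_j-y_j}{|x-y|^{n+1}}, \qquad j=1,\dots,n,$$ considered on a bounded open set ${\Omega}\subset\mathbb{R}^n$: conditions (\[5.4a\])-(\[5.4c\]) hold with ${\alpha}=1$ by direct estimation of the gradient of the kernel, the limit in (\[5.4ca\]) exists at every $x\in{\Omega}$ because the integral over any symmetric shell ${\varepsilon}<|x-y|<{\delta}$ with $B(x,{\delta})\subset{\Omega}$ vanishes by the oddness of the kernel, and (\[5.4d\]) follows from the classical $L^2(\mathbb{R}^n)$-boundedness of the Riesz transforms.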
\[th5.8.\] Let the kernel $K(x,y)$ fulfill conditions (\[5.4a\])-(\[5.4d\]). Then under the conditions $$\label{nef41f4}
p \in \mathcal{P}({\Omega}) \quad \textrm{and}\ \quad
\varrho^{-p_0}\in
\mathfrak{A}_{(\widetilde{p})^\prime}({\Omega}) \quad \textrm{with}
\quad \widetilde{p}(\cdot)=\frac{p(\cdot)}{p_0}$$ the operator $T$ is bounded in the space $L^{p(\cdot)}_\varrho ({\Omega})$.
In the case of constant $p$ and $\varrho\in A_p(\mathbb{R}^n)$, Theorem \[th5.8.\] was proved in [@100z].
\[cor7\] Let $p\in
\mathcal{P}({\Omega})\cap WL({\Omega})$ and let $p(x)\equiv p_\infty=const $ outside some ball $|x|< R$ in case ${\Omega}$ is unbounded. The operator $T$ with the kernel satisfying conditions (\[5.4a\])-(\[5.4d\]) is bounded in the space $L^{p(\cdot)}_\varrho ({\Omega})$ with a weight $\varrho$ of the form $$\label{2.4agggsw}
\varrho(x)=\prod_{k=1}^N w_k(|x-x_k|), \quad x_k\in {\Omega},$$ where $w_k\in \widetilde{U}(\mathbb{R}_+^1)$ and $$-\frac{n}{p(x_k)} < m(w_k)\le M(w_k) < \frac{n}{p^\prime(x_k)}
\quad \textrm{and} \quad -\frac{n}{p_\infty}<\sum\limits_{k=1}^N m_\infty(w_k)\le \sum\limits_{k=1}^N
M_\infty(w_k) <\frac{n}{p^\prime_\infty}.$$
In the case of variable $p(\cdot)$, the statement of Corollary \[cor7\] was proved in [@107a] in the non-weighted case, and in [@317e] in the weighted case (\[2.4agggsw\]) for bounded sets ${\Omega}$.
Commutators {#subs5.6.}
-----------
Let us consider the commutators $$[b,T]f(x)=b(x)Tf(x)-T(bf)(x), \quad x\in\mathbb{R}^n$$ generated by the operator (\[5.40\]) with ${\Omega}=\mathbb{R}^n$ and a function $b\in BMO(\mathbb{R}^n)$.
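A typical unbounded symbol covered by this setting is, for instance, $$b(x)=\ln|x|, \qquad b\in BMO(\mathbb{R}^n)\setminus L^\infty(\mathbb{R}^n),$$ for which the boundedness of $[b,T]$ is not an immediate consequence of the boundedness of $T$ itself.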
\[th5.9.\] Let the kernel $K(x,y)$ fulfill assumptions (\[5.4a\])-(\[5.4d\]) and let $b\in
BMO(\mathbb{R}^n)$. Then under the conditions $$\label{nefgoa4}
p \in \mathcal{P}(\mathbb{R}^n) \quad \textrm{and}\ \quad
\varrho^{-p_0}\in
\mathfrak{A}_{(\widetilde{p})^\prime}(\mathbb{R}^n) \quad \textrm{with}
\quad \widetilde{p}(\cdot)=\frac{p(\cdot)}{p_0}$$ the commutator $[b,T]$ is bounded in the space $L^{p(\cdot)}_\varrho
(\mathbb{R}^n)$.
In the case of constant $p$ and $\varrho\in A_p(\mathbb{R}^n),
1<p<\infty$, Theorem \[th5.9.\] was proved in [@479zz]. In the case of variable $p(\cdot)$, the non-weighted case of Theorem \[th5.9.\] was proved in [@299c] under the assumption that $1\in \mathfrak{A}_{p(\cdot)}(\mathbb{R}^n)$.
\[cor8\] Let the kernel $K(x,y)$ fulfill conditions (\[5.4a\])-(\[5.4d\]) and let $b\in
BMO(\mathbb{R}^n)$. Then the commutator $[b,T]$ is bounded in the space $L^{p(\cdot)}_\varrho (\mathbb{R}^n)$ if\
i) $p\in \mathcal{P}(\mathbb{R}^n)\cap WL(\mathbb{R}^n)$ and $p(x)\equiv
p_\infty=const $ outside some ball $|x|< R$,\
ii) the weight $\varrho$ has the form $$\label{2.4agggsw}
\varrho(x)=w_0(1+|x|)\prod_{k=1}^N w_k(|x-x_k|), \quad x_k\in
\mathbb{R}^n,$$ with the factors $w_k, \ k=0,1,...,N,$ satisfying conditions (\[2.6bvc\])-(\[f27dcobvc\]).
Pseudo-differential operators {#subs5.8.}
-----------------------------
We consider a pseudo-differential operator ${\sigma}(x,D)$ defined by $${\sigma}(x,D) f(x)={\int\limits}_{\mathbb{R}^n}{\sigma}(x,\xi)e^{2\pi i(x,\xi)}\hat{f}(\xi)\,d\xi.$$
\[th5.11.\] Let the symbol ${\sigma}(x,\xi)$ satisfy the condition $$\left|\partial^{\alpha}_\xi\partial_x^{\beta}{\sigma}(x,\xi)\right|\le
c_{{\alpha}{\beta}}(1+|\xi|)^{-|{\alpha}|}$$ for all the multiindices ${\alpha}$ and ${\beta}$. Then under condition (\[nefgoa4\]) the operator ${\sigma}(x,D)$ admits a continuous extension to the space $L^{p(\cdot)}_\varrho (\mathbb{R}^n)$.
In the case of constant $p$ and $\varrho\in A_p$ Theorem \[th5.11.\] was proved in [@404a].
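A simple example of a symbol satisfying the above estimates is, say, $${\sigma}(x,\xi)=(1+|\xi|^2)^{i\tau}, \qquad \tau\in\mathbb{R}^1,$$ for which the $x$-derivatives vanish, while every $\xi$-derivative of order $|{\alpha}|$ is bounded by $c_{\alpha}(1+|\xi|)^{-|{\alpha}|}$.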
\[cor9\] Let $p\in \mathcal{P}(\mathbb{R}^n)\cap WL(\mathbb{R}^n)$, let $p(x)\equiv p_\infty=const $ outside some ball $|x|< R$, and let $\varrho \in
V^{osc}_{p(\cdot)}(\mathbb{R}^n,\Pi)$. Then the operator ${\sigma}(x,D)$ admits a continuous extension to the space $L^{p(\cdot)}_\varrho(\mathbb{R}^n)$.
For variable $p(\cdot)$, the statement of Corollary \[cor9\] was proved by a different method, in the non-weighted case, in [@503a].
Fefferman-Stein maximal function {#subs5.7.}
-------------------------
Let $f$ be a measurable locally integrable function on $\mathbb{R}^n$, $B$ an arbitrary ball in $\mathbb{R}^n$, $f_B=\frac{1}{|B|}{\int\limits}_{B}f(y)\,dy$ and let $$\mathcal{M}^\# f(x)=\sup\limits_{B\ni x}\frac{1}{|B|}{\int\limits}_{B}|f(y)-f_B|\,dy$$ be the Fefferman-Stein maximal function.
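Note that the converse pointwise estimate is elementary: since $|f(y)-f_B|\le |f(y)|+|f_B|$ and every ball $B\ni x$ of radius $r$ is contained in $B(x,2r)$, one has $$\mathcal{M}^\# f(x)\le 2\sup\limits_{B\ni x}\frac{1}{|B|}{\int\limits}_{B}|f(y)|\,dy\le 2^{n+1}\mathcal{M}f(x),$$ so that the substance of (\[5.5\]) lies in the opposite direction.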
\[th5.10.\] Under condition (\[nefgoa4\]), the inequality $$\label{5.5}
\|\mathcal{M}f\|_{L^{p(\cdot)}_\varrho (\mathbb{R}^n)}\le C
\|\mathcal{M}^\# f\|_{L^{p(\cdot)}_\varrho (\mathbb{R}^n)}$$ is valid, where $C>0$ does not depend on $f$.
In the case of constant $p$ and $\varrho\in A_p$ inequality (\[5.5\]) was proved in [@160c].
\[cor10\] Inequality (\[5.5\]) is valid under the conditions:\
i) $p\in \mathcal{P}(\mathbb{R}^n)\cap WL(\mathbb{R}^n)$ and $p(x)\equiv p_\infty=const $ outside some ball $|x|< R$,\
ii) $\varrho \in
V^{osc}_{p(\cdot)}(\mathbb{R}^n,\Pi)$.
Vector-valued operators {#subs5.9.}
-----------------------
Let $f=(f_1,\cdots,f_k, \cdots)$, where $f_i:\mathbb{R}^n\to\mathbb{R}^1$ are locally integrable functions.
\[th5.12.\] Let $0<\theta<\infty$. Under conditions (\[nefgoa4\]), the inequality $$\label{x}
\left\|\left(\sum\limits_{j=1}^\infty(\mathcal{M}
f_j)^\theta\right)^\frac{1}{\theta}\right\|_{L^{p(\cdot)}_\varrho(\mathbb{R}^n)}
\le C
\left\|\left(\sum\limits_{j=1}^\infty|f_j|^\theta\right)^\frac{1}{\theta}\right\|
_{L^{p(\cdot)}_\varrho(\mathbb{R}^n)}$$ is valid, where $C>0$ does not depend on $f$.
In the case of constant $p$ and $\varrho\in A_p$ weighted inequalities for vector-valued functions were proved in [@316zz], [@316zza], [@316zzb], see also [@20a].
\[cor11\] Inequality (\[x\]) is valid under the conditions\
i) $p\in \mathcal{P}(\mathbb{R}^n)\cap WL(\mathbb{R}^n)$ and $p(x)\equiv p_\infty=const $ outside some ball $|x|< R$,\
ii) $\varrho \in V^{osc}_{p(\cdot)}(\mathbb{R}^n,\Pi)$.
Corresponding statements for vector-valued versions of the other operators considered above (singular integrals, commutators, the Fefferman-Stein maximal function, Fourier multipliers, etc.) are derived from Theorem \[th4.2.\] in a similar way.
This work was carried out under the project “Variable Exponent Analysis”, supported by INTAS grant Nr. 06-1000017-8792. The first author was also supported by the Center CEMAT, Instituto Superior Técnico, Lisbon, Portugal, during his visit to Portugal, November 29 - December 2006.
[10]{}
E. Acerbi and G. Mingione. Regularity results for a class of functionals with non-standard growth. , 156(2):121–140, 2001.
E. Acerbi and G. Mingione. Regularity results for stationary electrorheological fluids. , 164(3):213–259, 2002.
K. F. Andersen and R. T. John. Weighted inequalities for vector-valued maximal functions and singular integrals. , 69(1):19–31, 1980/81.
N.K. Bary and S.B. Stechkin. Best approximations and differential properties of two conjugate functions (in [Russian]{}). , 5:483–522, 1956.
A. Böttcher and Yu. Karlovich. . Basel, [Boston]{}, [Berlin]{}: [Birkhäuser]{} Verlag, 1997. 397 pages.
A.-P. Calder[ó]{}n. Inequalities for the maximal function relative to a metric. , 57(3):297–306, 1976.
R.R. Coifman and G. Weiss. , volume 242. Lecture Notes Math., 1971. 160 pages.
R.R. Coifman and G. Weiss. Extensions of [H]{}ardy spaces and their use in analysis. , 83(4):569–645, 1977.
A. Cordoba and C. Fefferman. A weighted norm inequality for singular integrals. , 57(1):97–101, 1976.
D. Cruz-Uribe, A. Fiorenza, J.M. Martell, and C Perez. The boundedness of classical operators on variable [$L\sp p$]{} spaces. , 31(1):239–264, 2006.
D. Cruz-Uribe, A. Fiorenza, and C.J. Neugebauer. The maximal function on variable ${L}^p$-spaces. , 28:223–238, 2003.
D. Cruz-Uribe, J. M. Martell, and C. P[é]{}rez. Extrapolation from [$A\sb \infty$]{} weights and applications. , 213(2):412–439, 2004.
L. Diening. Maximal function on generalized [L]{}ebesgue spaces [$L\sp
{p(\cdot)}$]{}. , 7(2):245–253, 2004.
L. Diening. Riesz potential and [Sobolev]{} embeddings on generalized [Lebesgue]{} and [Sobolev]{} spaces ${L}^{p(\cdot)}$ and ${W}^{k, p(\cdot)}$. , 268:31–43, 2004.
L. Diening. Maximal function on [M]{}usielak-[O]{}rlicz spaces and generalized [L]{}ebesgue spaces. , 129(8):657–700, 2005.
L. Diening, P. H[ä]{}st[ö]{}, and A. Nekvinda. Open problems in variable exponent [Lebesgue]{} and [S]{}obolev spaces. In [*“Function Spaces, Differential Operators and Nonlinear Analysis”, Proceedings of the Conference held in Milovy, Bohemian-Moravian Uplands, May 28 - June 2, 2004*]{}. Math. Inst. Acad. Sci. Czech Republick, Praha.
L. Diening and M. Ru$\check{z}$i$\check{c}$ka. Calderon-[Z]{}ygmund operators on generalized [Lebesgue]{} spaces ${L}^{p(x)}$ and problems related to fluid dynamics. , 563:197–220, 2003.
D.E. Edmunds, V. Kokilashvili and A. Meskhi. , volume 543 of [ *Mathematics and its Applications*]{}. Kluwer Academic Publishers, Dordrecht, 2002.
K. Falconer. . John Wiley & Sons Ltd., Chichester, 1997.
X. Fan and D. Zhao. . , 36(3, Ser. A):295–318, 1999.
C. Fefferman and E. M. Stein. spaces of several variables. , 129(3-4):137–193, 1972.
I. Genebashvili, A. Gogatishvili, V. Kokilashvili, and M. Krbec. Pitman [Monographs]{} and [Surveys]{}, [Pure]{} and [Applied]{} mathematics: [Longman]{} [Scientific]{} and Technical, 1998. 422 pages.
E. Harboure, R.A, Macias and C. Segovia. Extrapolation [R]{}esults for [C]{}lasses of [W]{}eights. , 110(3): 383-397, 1988.
P. Harjulehto, P. H[ä]{}st[ö]{}, and V. Latvala. Sobolev embeddings in metric measure spaces with variable dimension. , 254(3):591–609, 2006.
P. Harjulehto, P. H[ä]{}st[ö]{}, and M. Pere. Variable [E]{}xponent [L]{}ebesgue [S]{}paces on [M]{}etric [S]{}paces: [T]{}he [H]{}ardy-[L]{}ittlewood [M]{}aximal [O]{}perator. , 30(1):87–104, 2004.
J. Heinonen. . Universitext. Springer-Verlag, New York, 2001.
R. A. Hunt and W.S. Young. A weighted norm inequality for [F]{}ourier series. , 80:274–277, 1974.
D.M. Israfilov, V. Kokilashvili and S. Samko. Approximation in weighted Lebesgue spaces and Smirnov spaces with variable exponents. , 143:25–35, 2007.
N.K. Karapetiants and N.G. Samko. Weighted theorems on fractional integrals in the generalized [H]{}ölder spaces ${H}_0^\omega(\rho)$ via the indices $m_\omega$ and ${M}_\omega$. , 7(4):437–458, 2004.
A. Yu. Karlovich and A.K. Lerner. Commutators of singular integrals on generalized [$L\sp p$]{} spaces with variable exponent. , 49(1):111–125, 2005.
G. Khuskivadze, V. Kokilashvili, and V. Paatashvili. Boundary value problems for analytic and harmonic functions in domains with nonsmooth boundaries. [A]{}pplications to conformal mappings. , 14:195, 1998.
V. Kokilashvili. On approximation of periodic functions (in [R]{}ussian). , 34:51–81, 1968.
V. Kokilashvili. On a progress in the theory of integral operators in weighted [B]{}anach function spaces. In [*“Function Spaces, Differential Operators and Nonlinear Analysis”, Proceedings of the Conference held in Milovy, Bohemian-Moravian Uplands, May 28 - June 2, 2004*]{}. Math. Inst. Acad. Sci. Czech Republick, Praha.
V. Kokilashvili. Maximal inequalities and multipliers in weighted [L]{}izorkin-[T]{}riebel spaces. , 239(1):42–45, 1978.
V. Kokilashvili. Maximal functions in weighted spaces. , 65:110–121, 1980.
V. Kokilashvili. Weighted [L]{}izorkin-[T]{}riebel spaces. [S]{}ingular integrals, multipliers, imbedding theorems. , 161:125–149, 1983. English Transl. in Proc. Steklov Inst. Math. 3(1984), 135-162.
V. Kokilashvili, V. Paatashvili, and Samko S. Boundedness in [L]{}ebesgue spaces with variable exponent of the [C]{}auchy singular operators on [C]{}arleson curves. In Ya. Erusalimsky, I. Gohberg, S. Grudsky, V. Rabinovich, and N. Vasilevski, editors, [*“Operator Theory: Advances and Applications”, dedicated to 70th birthday of Prof. I.B.Simonenko*]{}, pages 167–186. Birkhäuser Verlag, Basel, 2006.
V. Kokilashvili, N. Samko, and S. Samko. The maximal operator in variable spaces ${L}^{p(\cdot)}({\Omega},\rho)$. , 13(1):109–125, 2006.
V. Kokilashvili, N. Samko, and S. Samko. Singular operators in variable spaces ${L}^{p(\cdot)}({\Omega},\rho)$ with oscillating weights.
V. Kokilashvili, N. Samko, and S. Samko. The [M]{}aximal [O]{}perator in [W]{}eighted [V]{}ariable [S]{}paces ${L}^{p(\cdot)}$. 5(3): 299-317, 2007.
V. Kokilashvili and S. Samko. Singular [Integrals]{} in [Weighted]{} [Lebesgue]{} [Spaces]{} with [Variable]{} [Exponent]{}. , 10(1):145–156, 2003.
V. Kokilashvili and S. Samko. Maximal and fractional operators in weighted ${L}^{p(x)}$ spaces. , 20(2):495–517, 2004.
V. Kokilashvili and S. Samko. Boundedness in [L]{}ebesgue spaces with variable exponent of maximal, singular and potential operators. , pages 152–158, 2006.
V. Kokilashvili and S. Samko. , 144:137-144, 2007.
V. Kokilashvili and S. Samko. , , 24(1), 2008.
O. Kov$\acute{\textrm{a}}$c$\check{\textrm{i}}$k and J. R$\acute{\textrm{a}}$kosn$\check{\textrm{i}}$k. On spaces ${L}^{p(x)}$ and ${W}^{k,p(x)}$. , 41(116):592–618, 1991.
S.G. Krein, Yu.I. Petunin, and E.M. Semenov. . Moscow: Nauka, 1978. 499 pages.
S.G. Krein, Yu.I. Petunin, and E.M. Semenov. , volume 54 of [ *Translations of Mathematical Monographs*]{}. American Mathematical Society, Providence, R.I., 1982.
D. S. Kurtz. Littlewood-[P]{}aley and multiplier theorems on weighted [$L\sp{p}$]{} spaces. , 259(1):235–254, 1980.
D. S. Kurtz and R.L. Wheeden. Results on weighted norm inequalities for multipliers. , 255:343–562, 1979.
P.I. Lizorkin. Multipliers of [F]{}ourier integrals in the spaces [$L\sb{p,\,\theta
}$]{}. , 89:231–248, 1967. English Transl. in Proc. Steklov Inst. Math. 89 (1967), 269-290.
R. Mac$\grave{i}$as and C. Segovia. A well behaved quasidistance for spaces of homogeneous type. , 32:1–18, 1981.
L. Maligranda. Indices and Interpolation. , 234:49, 1985.
L. Maligranda. . Departamento de Matemática, Universidade Estadual de Campinas, 1989. Campinas SP Brazil.
S.G. Mikhlin. On multipliers of [F]{}ourier integrals (in [R]{}ussian). , 109:701–703, 1956.
S.G. Mikhlin. . Moscow: Fizmatgiz, 1962. 254 pages.
N. Miller. Weighted [S]{}obolev spaces and pseudodifferential operators with smooth symbols. , 269(1):91–109, 1982.
B. Muckenhoupt and R.L Wheeden Weighted norm inequalities for fractional integrals. , 192: 261-274, 1974.
A. Nekvinda. Hardy-[Littlewood]{} maximal operator on ${L}^{p(x)}(\mathbb{R}^n)$. , 7(2):255–265, 2004.
C. P[é]{}rez. Sharp estimates for commutators of singular integrals via iterations of the [H]{}ardy-[L]{}ittlewood maximal function. , 3(6):743–756, 1997.
V.S. Rabinovich and S.G Samko. Boundedness and [F]{}redholmness of pseudodifferential operators in variable exponent spaces. , (to appear).
J. L. Rubio de Francia. Factorization and extrapolation of weights. , 7(2):393–395, 1982.
M. Ru$\check{z}$i$\check{c}$ka. . Springer, [Lecture]{} [Notes]{} in [Math.]{}, 2000. vol. 1748, 176 pages.
N.G. Samko. Singular integral operators in weighted spaces with generalized [Hölder]{} condition. , 120:107–134, 1999.
N.G. Samko. On compactness of [I]{}ntegral [O]{}perators with a [G]{}eneralized [W]{}eak [S]{}ingularity in [W]{}eighted [S]{}paces of [C]{}ontinuous [F]{}unctions with a [G]{}iven [C]{}ontinuity [M]{}odulus. , 136:91, 2004.
N.G. Samko. On non-equilibrated almost monotonic functions of the [Z]{}ygmund-[B]{}ary-[S]{}techkin class. , 30(2):727–745, 2004/2005.
N. Samko. Parameter depending Bary-Stechkin classes and local dimensions of measure metric spaces. , 145 (2007), 122-129
N. Samko. Parameter depending almost monotonic functions and their applications to dimensions in metric measure spaces. , (2008), to appear
N.Samko, S. Samko and B.Vakulov, Weighted Sobolev theorem in Lebesgue spaces with variable exponent, , 335(1): 560–583, 2007.
S.G. Samko. Differentiation and integration of variable order and the spaces ${L}^{p(x)}$. Proceed. of Intern. Conference “Operator Theory and Complex and Hypercomplex Analysis”, 12–17 December 1994, Mexico City, Mexico, Contemp. Math., Vol. 212, 203-219, 1998.
S.G. Samko. Denseness of ${C_0^{\infty}({R}^N)}$ in the generalized [Sobolev]{} spaces ${W^{m,p(x)}({R}^N)}$. In [*Intern. [Soc]{}. for [Analysis]{}, [Applic]{}. and [Comput]{}., vol. 5, “Direct and Inverse [Problems]{} of [Math]{}. [Physics]{}”, Ed. by R.Gilbert, J. Kajiwara and Yongzhi S. Xu, 333-342*]{}. Kluwer [Acad]{}. [Publ]{}., 2000.
S.G. Samko. Hardy inequality in the generalized [L]{}ebesgue spaces. , 6(4): 355-362, 2003.
S.G. Samko. Hardy-[L]{}ittlewood-[S]{}tein-[W]{}eiss inequality in the [L]{}ebesgue spaces with variable exponent. , 6(4):421–440, 2003.
S.G. Samko. On a progress in the theory of [L]{}ebesgue spaces with variable exponent: maximal and singular operators. , 16(5-6):461–482, 2005.
S.G. Samko, E. Shargorodsky, and B. Vakulov. Weighted [S]{}obolev theorem with variable exponent for spatial and spherical potential operators, [I]{}[I]{}. , 325(1):745–751, 2007.
V.V.Zhikov. On [L]{}avrentiev’s phenomenon. , 3(2):249–269, 1995.
V.V.Zhikov. Meyer-type estimates for solving the non-linear [S]{}tokes system. , 33(1):108–115, 1997.
---
author:
- Davide Gaiotto
bibliography:
- 'swn-paper.bib'
title: Opers and TBA
---
Introduction
============
The moduli spaces $\CM$ of $\CN=2$ four-dimensional gauge theories compactified on a circle are a rich subject of investigation. They are endowed with a hyperkähler metric, which encodes the BPS spectrum of the four-dimensional theory. For theories of the class $\CS$, which arise from the compactification of 6d SCFTs on punctured Riemann surfaces, the moduli spaces $\CM$ coincide roughly with the moduli spaces of solutions of Hitchin’s equations, which play an important role in mathematical physics and mathematics.
The connection between the moduli spaces and the BPS spectrum was used in [@Gaiotto:2008cd] to set up a system of integral equations which compute the hyperkähler metric at any given point in moduli space. It is reasonable to hope that these integral equations may clarify other aspects, both physical and mathematical, of the moduli spaces $\CM$.
In this note we will take a careful limit of the integral equations, akin to the conformal limit in the Thermodynamic Bethe Ansatz literature, and interpret the result as a detailed description of a specific complex Lagrangian sub-manifold $\CL_\epsilon$ in $\CM$. We will argue that the manifold coincides with a manifold defined in [@Nekrasov:2010ka] by the compactification of the four-dimensional theory on a $\Omega_\epsilon$-deformed cigar.
For theories of the class $\CS$, the relevant sub-manifold $\CL_\epsilon$ is conjecturally the oper manifold. The ambient space $\CM$ can be interpreted as a space of flat connections and the oper manifold consists of connections which can be gauged into the form of a single Schrödinger-like differential operator on the Riemann surface, or a higher-rank generalization of that notion. Thus the TBA equations in the conformal limit characterize the space of opers. Our results thus include and extend previous efforts to use TBA-like equations to solve the Schrödinger equation with simple potentials [@Dorey:1998pt; @Bazhanov:1998wj; @Dorey:2007zx; @Dorey:2007wz]. Our method essentially reconstructs the solutions of a Schrödinger equation with rational potential from their analytic behavior in the $\hbar \equiv \epsilon$ plane.
The oper manifold also controls the semiclassical behaviour of conformal blocks for Virasoro or W-algebras. In particular, our method should allow the calculation of the semiclassical limit of conformal blocks which are not computable at this moment, such as the three-point function of non-degenerate vertex operators. It is natural to wonder if our TBA-like equations could somehow be “quantized”, and compute the full conformal blocks.
The generating function of the $\CL_\epsilon$ manifold in appropriate (Fenchel-Nielsen) coordinates should coincide with the effective superpotential of the two-dimensional gauge theory which emerges from the $\Omega$-deformation in a single plane of the four-dimensional gauge theory, as in [@Nekrasov:2009rc; @Nekrasov:2011bc]. Although TBA-like integral equations have appeared in that context as well [@Nekrasov:2009rc], they appear to be unrelated to the ones presented here.
Outline of our method
---------------------
The manifold $\CM$ is defined by a supersymmetric compactification of the four-dimensional gauge theory on a circle of radius $R$. It is a hyperkähler manifold, with a $CP^1$ worth of complex structures which we parameterize by a variable $\zeta$. At $\zeta=0$, the manifold is a complex integrable system, a torus fibration over a middle-dimensional base $\CB$, which coincides with the Coulomb branch of the four-dimensional theory. The torus fibre is parameterized by the choice of electric and magnetic Wilson lines on the circle. The metric on $\CM$ depends on $R$, on the four-dimensional gauge couplings and complex mass parameters $m$ and on the Wilson lines $m_3$ for the corresponding flavour symmetries. From this point on, the flavor Wilson lines will be turned off.
At $\zeta=0$, the manifold is endowed with a canonical complex Lagrangian submanifold $\CL$, a section of the torus fibration, which is intuitively defined as the locus where the gauge Wilson lines are turned off. Physically, this submanifold can be defined more precisely by the twisted compactification of the four-dimensional theory on a cigar geometry, to define a boundary condition $\CL$ for the three-dimensional sigma model on $\CM$ [@Nekrasov:2010ka] [^1]. With the help of the TBA-like integral equations from [@Gaiotto:2008cd], we can describe the manifold $\CL$ using complex coordinates for a generic complex structure $\zeta$. Of course, in a generic complex structure for $\CM$, $\CL$ is neither a complex, nor a Lagrangian submanifold.
The TBA equations have an interesting scaling limit, the “conformal limit”, where one sends $R$ and $\zeta$ to zero, keeping $\epsilon = \zeta/R$ fixed. If we only focus on the description of $\CM$ as a complex symplectic manifold in complex structure $\zeta$, the conformal limit is well defined. Indeed, the complex symplectic structure only depends on the radius and $\zeta$ through the mass parameters $$\mu = \exp\left(\frac{R m}{\zeta} + R \bar m \zeta\right),$$ which have a good scaling limit, $$\mu = \exp \frac{m }{\epsilon}.$$ We denote the resulting complex symplectic manifold as $\CM_\epsilon$.
We will show that something surprising happens to the image $\CL_\epsilon$ of $\CL$ in $\CM_\epsilon$: the scaling limit makes it into a complex Lagrangian submanifold. We conjecture that the complex Lagrangian submanifold $\CL_\epsilon$ is associated to the boundary condition defined by the compactification of the four-dimensional theory on an $\Omega_\epsilon$-deformed cigar. We do not offer a full proof of this conjecture, though it can be motivated in part by the analysis of [@Nekrasov:2010ka]. Instead, we will attempt to demonstrate directly that for theories in the class $\CS$, the manifold $\CL_\epsilon$ is the oper manifold.
This is possible thanks to a second set of integral equations [@Gaiotto:2011tf] which compute directly the solutions of Hitchin’s equations. We can show explicitly how the solutions associated to points in $\CL$ go to opers in the conformal limit.
The main assumption in this paper is that the solutions of the integral equations have a good conformal limit, which is reasonably well-behaved at large $\epsilon$. As systematic tests of this assumption would require extensive numerical work, we leave them to future publications. We will limit ourselves here to simple examples.
General considerations
======================
Our starting point are the TBA-like equations used to describe the metric on the moduli space $\CM$ of four-dimensional ${\cal N}=2$ gauge theories compactified on a circle. $$\label{eq:int-old}
\log X_{\gamma}(\zeta) = \frac{Z_\gamma}{\zeta} + {{\mathrm i}}\theta_\gamma + \bar Z_\gamma \zeta + \sum_{\gamma'} \omega(\gamma', \gamma) \frac{1}{4 \pi {{\mathrm i}}} \int_{\ell_{\gamma'}} \frac{{{\mathrm{d}}}\zeta'}{\zeta'} \frac{\zeta' + \zeta}{\zeta' - \zeta} \log(1 - \sigma(\gamma')X_{\gamma'}(\zeta')).$$ with reality condition $X_{\gamma}(\zeta)=\overline{X_{-\gamma}\left(-1/\bar \zeta\right)}$.
We have lightened the notation a bit compared to the reference [@Gaiotto:2008cd]. For the purpose of this note, we will not need to review the somewhat intricate geometric meaning of the symbols $\gamma$, $Z_\gamma$, $\theta_\gamma$, $ \omega(\gamma', \gamma)$, etc. Roughly, the charge $\gamma$ labels a certain choice of complex coordinates $X_\gamma$. The periods $Z_\gamma$ label a point on the base of the complex integrable system, and the angles $\theta_\gamma$ the fibre. The canonical integration contours $\ell_\gamma$ in the $\zeta$ plane are such that $\frac{Z_\gamma}{\zeta}$ is real and negative. We will discuss alternative choices of contours at the end of this section.
The hyperkähler metric on $\CM$ is computed by plugging the solutions into a complex symplectic form $\Omega(\zeta) = \langle d \log X, d \log X \rangle$. The asymptotic behaviour at small and large $\zeta$, together with the identity $\langle d Z, d Z \rangle=0$, implies $$\Omega(\zeta) = \frac{\omega^+}{\zeta} + \omega^3 + \omega^- \zeta,$$ and thus we arrive at the complex symplectic form $\omega^+$ and Kähler form $\omega^3$ in complex structure $\zeta=0$. The BPS degeneracies $\omega(\gamma', \gamma)$ are determined by the requirement that the metric should be smooth.
We are interested in a special section $\CL$ of the moduli space, which is defined by setting the angles $\theta_\gamma$ to zero. This statement requires a little clarification, as the formalism allows for sign redefinitions of the $X_\gamma$ functions, which lead to shifts of the $\theta_\gamma$ by multiples of $\pi$, and changes in the choice of the “quadratic refinement” $\sigma(\gamma)$. There is a canonical choice provided by the gauge theory [@Gaiotto:2008cd; @Gaiotto:2011tf] which is also mathematically natural. [^2]
Once we restrict $\theta_\gamma$ to zero, the equations and their solutions gain an extra $Z_2$ symmetry $X_{-\gamma}(- \zeta) = X_\gamma(\zeta)$ which allows one to combine together the contributions from $\pm \gamma'$ in the sum. $$\label{eq:int-old-half}
\log X_{\gamma}(\zeta) = \frac{Z_\gamma}{\zeta} + \bar Z_\gamma \zeta + \sum_{\gamma'>0} \omega(\gamma', \gamma) \frac{\zeta}{\pi {{\mathrm i}}} \int_{\ell_{\gamma'}} \frac{{{\mathrm{d}}}\zeta'}{(\zeta')^2 - (\zeta)^2} \log(1 - \sigma(\gamma')X_{\gamma'}(\zeta')).$$ The cancellation of the order $1$ terms in the $\zeta \to 0$ expansion has an important consequence: if we plug the expansion in the expression for the complex symplectic form $$\Omega(\zeta) = \langle d \log X, d \log X \rangle$$ we verify that we are describing a complex Lagrangian section $\CL$ in complex structure $\zeta=0$.
Next, we would like to take the “conformal limit” in the TBA: introduce a radius parameter by the rescaling $Z_\gamma \to R Z_\gamma$, and send $R$ to zero. Because the new integration kernel goes to zero if $\zeta/\zeta'$ is very different from $1$, we can self-consistently focus on the behaviour of the functions at small $\zeta$, by setting $\zeta = R \epsilon$, $\zeta' = R \epsilon'$ and keeping $\epsilon$ fixed as $R$ is sent to zero.
The resulting set of equations take the simpler form: $$\label{eq:int-new}
\log X_{\gamma}(\epsilon) = \frac{Z_\gamma}{\epsilon} + \sum_{\gamma'>0} \omega(\gamma', \gamma) \frac{\epsilon}{\pi {{\mathrm i}}} \int_{\ell_{\gamma'}} \frac{{{\mathrm{d}}}\epsilon'}{(\epsilon')^2 - (\epsilon)^2} \log(1 - \sigma(\gamma')X_{\gamma'}(\epsilon')).$$ The functions $X_{\gamma}(\epsilon)$ live in a complex manifold $\CM_\epsilon$ defined by the limiting value of the mass parameters $$\log \mu = \frac{m}{\epsilon}$$
Large $\epsilon$ behaviour
--------------------------
Clearly, we are making the assumption that the solutions of the integral equations have a good conformal limit, and remain somewhat well-behaved at large $\epsilon$. This assumption is crucial for our results to hold. We can gain some insight if we recall the detailed analysis of [@Gaiotto:2008cd].
For that purpose, it is useful to reintroduce the angles ${{\mathrm i}}\theta_\gamma$ in the conformal limit. More precisely, one can massage the integration kernels a little bit and introduce a complexified version $\theta^+_\gamma$ of the angles $$\label{eq:int-old2}
\log X_{\gamma}(\zeta) = \frac{R Z_\gamma}{\zeta} + {{\mathrm i}}\theta^+_\gamma + R \bar Z_\gamma \zeta + \zeta \sum_{\gamma'} \omega(\gamma', \gamma) \frac{1}{2 \pi {{\mathrm i}}} \int_{\ell_{\gamma'}} \frac{{{\mathrm{d}}}\zeta'}{\zeta'} \frac{1}{\zeta' - \zeta} \log(1 - \sigma(\gamma')X_{\gamma'}(\zeta')).$$ Then we can take the conformal limit $$\label{eq:int-new2}
\log X_{\gamma}(\epsilon) = \frac{Z_\gamma}{\epsilon} + {{\mathrm i}}\theta^+_\gamma +\epsilon \sum_{\gamma'} \omega(\gamma', \gamma) \frac{1}{2 \pi {{\mathrm i}}} \int_{\ell_{\gamma'}} \frac{{{\mathrm{d}}}\epsilon'}{\epsilon'} \frac{1}{\epsilon' - \epsilon} \log(1 - \sigma(\gamma')X_{\gamma'}(\epsilon')).$$
If we keep the angles, we can look at certain differential equations in $R$ and $\zeta$ satisfied by the solutions $X_{\gamma}(\zeta)$, which have an irregular singularity at both $\zeta=0$ and $\zeta = \infty$, see section $5.5$ of [@Gaiotto:2008cd]. These equations can be combined into a differential equation in $\epsilon$, which has an irregular singularity as $\epsilon \to 0$, but only a regular singularity as $\epsilon \to \infty$ $$\epsilon\, \partial_\epsilon X_{\gamma} = \left[- {{\mathrm i}}\frac{Z }{\epsilon} + c\right] \cdot \partial_{\theta^+}X_{\gamma}$$ for some $\epsilon$-independent functions $c_\gamma$.
The regular singularity suggests that the $X_{\gamma}(\epsilon)$ will have a power-law behaviour at large $\epsilon$. The monodromy at large $\epsilon$ should coincide with the monodromy around the origin, which is decomposed into a product of Stokes factors for the irregular singularity at $\epsilon \to 0$. The Stokes factors coincide with the discontinuities of the solutions across the $\ell_\gamma$ rays, the KS transformations $$X_\gamma \to X_\gamma(1-\sigma(\gamma) X_{\gamma'})^{\omega(\gamma', \gamma)}$$ Thus the large $\epsilon$ behaviour is constrained by the BPS spectrum of the theory.
As an example, suppose that the solutions $X_{\gamma}(\epsilon)$ go to a constant $X^\infty_\gamma$ as $\epsilon \to \infty$. We can evaluate the integral in the large $\epsilon$ limit by looking at large $\epsilon'$, and pulling the $\log$ out of the integral. We get $$\log X^\infty_{\gamma} = \sum_{\gamma'>0}\omega(\gamma', \gamma) \frac{1}{2} \sigma_{\gamma, \gamma'} \log(1 - \sigma(\gamma') X^\infty_{\gamma'}),$$ where the sign $\sigma_{\gamma,\gamma'}$ is $+$ if the ray $\ell_\gamma$ lies counterclockwise from the ray $\ell_{\gamma'}$. We can rewrite this as the algebraic equations $$(X^\infty)^2_{\gamma} = \prod_{\gamma'>0} (1 - \sigma(\gamma') X^\infty_{\gamma'})^{\omega(\gamma', \gamma) \sigma_{\gamma, \gamma'}}.$$
Depending on the model, these equations may have isolated solutions, or a moduli space of solutions. In the latter case, we will pick a specific choice as we vary the $Z_\gamma$. Thus, at least locally, the limiting values $X^\infty$ do not depend on the $Z_\gamma$. This has an important consequence: if we plug the $X_\gamma(\epsilon)$ in the complex symplectic form $\Omega(\epsilon)$ of $\CM_\epsilon$, the $\epsilon \to 0$ expansion and the limiting behaviour at $\epsilon \to \infty$ force $\Omega$ to vanish. In other words, we are describing a complex Lagrangian submanifold $\CL_\epsilon$ in $\CM_\epsilon$. If the $X_{\gamma}(\epsilon)$ do not go to constants, but rather grow polynomially in $\epsilon$ at large $\epsilon$, we can reach a similar conclusion, as long as the leading growth can be chosen to be $Z_\gamma$-independent.
Spectrum generator
------------------
Notice that the integration contours $\ell_\gamma$ can be deformed rather freely inside a half-plane centred on their original position, as long as their relative order is preserved. If several rays are collapsed together, we can still use similar integral equations, but we need to combine the discontinuities across the rays properly. In the most extreme case, all the rays inside a half-plane can be collapsed together, leaving a single integration contour.
If we define the “spectrum generator” as the composition of all the discontinuities [@Gaiotto:2009hg; @Caetano:2012ac], written as the coordinate transformation relating the $X_\gamma$ on the two sides of the cut, $$X^+_\gamma = X^-_\gamma F(X^-)$$ then the integral equations take the form (beyond the conformal limit, this form of the equations was used by [@Caetano:2012ac]) $$\label{eq:int-old-F}
\log X_{\gamma}(\epsilon) = \frac{Z_\gamma}{\epsilon} + \frac{\epsilon}{\pi {{\mathrm i}}} \int_{\ell_{\gamma'}} \frac{{{\mathrm{d}}}\epsilon'}{(\epsilon'- i 0)^2 - (\epsilon)^2} \log F(X(\epsilon')).$$ This can be very useful, as the spectrum generator for a theory is much simpler to obtain than the individual BPS degeneracies $\omega(\gamma', \gamma)$. The $i0$ prescription is needed because the discontinuity of a coordinate depends on the coordinate itself once rays are collapsed together.
In this form, the behaviour at large $\epsilon$ is simpler to understand. The equations reduce to $$(X^\infty)^2_{\gamma} F(X^\infty)=1$$ i.e. the spectrum generator must send $X^\infty_\gamma$ to $X^\infty_{-\gamma}$.
Simple examples
===============
All of our basic examples will be taken from $\CS[A_1]$ theories, so that the moduli space $\CM_\epsilon$ is a space of monodromy data for complex $SL(2)$ flat connections. We want to verify that $\CL_\epsilon$ coincides with the monodromy data of $SL(2)$ opers. More precisely, we anticipate opers of the form $$-\partial_z^2 + \frac{\phi(z)}{\epsilon^2} + t_0(z)$$ where $\phi(z)$ is a quadratic differential such that the $Z_\gamma$ are periods of $\sqrt{\phi(z)}$ and $t_0(z)$ is a classical stress tensor determined somehow by the large $\epsilon$ behaviour of the TBA equations.
As for the original $SL(2)$ Hitchin system, the $X_\gamma(\epsilon)$ variables coincide with “Fock coordinates”, cross-ratios of Wronskians of certain “small solutions” $s_a$, which are solutions of the Schrödinger differential equation with prescribed behaviour at singularities. We are only allowed to use Wronskians which can be estimated in the $\epsilon \to 0$ limit by a WKB approximation, which turn out to correspond to the edges of a “WKB triangulation”. Each triangle is centred around a turning point $\phi(z)=0$, and each edge $E$ is associated to a compact cycle $\gamma_E$ on the spectral curve $$x^2 = \phi(z),$$ which is the charge which labels the corresponding cross-ratio $X_E \equiv X_{\gamma_E}$. Indeed, by the WKB approximation, the small-$\epsilon$ asymptotic behaviour of $\log X_E$ is $Z_{\gamma_E}/\epsilon$, where $Z_{\gamma_E}$ is the period of $\lambda = x dz$ on $\gamma_E$.
We will also find it useful to look directly at the Wronskians themselves, $T_E \equiv X_{\hat \gamma_E}$, whose asymptotics are controlled by the periods $Z_{\hat \gamma_E}$ of $\lambda$ on non-compact cycles $\hat \gamma_E$, which coincide with the edges themselves. The non-compact cycles $\hat \gamma_E$ form a dual basis to the cycles $\gamma_E$, and we expect the Wronskians to be computed by the same integral equation, with an appropriate choice of $\omega(\gamma, \hat \gamma')$. The spectrum generator transformation is easily extended to the Wronskians.
In this section we will focus at first on a handful of “local” examples, where the Schrödinger equation can be exactly solved. The analytic calculations will use the following two definite integrals: for positive real part of $x$, $$\log \Gamma(\frac{1}{2} + x) = x (-1+\log x )+ \log \sqrt{2 \pi}-\frac{1}{\pi} \int_0^\infty \frac{d t}{t^2+1}\log \left(1+e^{- 2 \pi \frac{x}{t}} \right)$$ and $$\log \Gamma(x) = x (-1+\log x )+ \log \sqrt{\frac{2 \pi}{x}}- \frac{1}{\pi} \int_0^\infty \frac{d t}{t^2+1}\log \left(1-e^{- 2 \pi \frac{x}{t}} \right)$$
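These two representations are easy to test numerically. The following sketch is not from the paper and only uses standard SciPy routines; it checks both identities for a few positive values of $x$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def rhs_half(x):        # right-hand side of the first identity
    I, _ = quad(lambda t: np.log1p(np.exp(-2*np.pi*x/t))/(t**2 + 1), 0, np.inf)
    return x*(np.log(x) - 1) + 0.5*np.log(2*np.pi) - I/np.pi

def rhs_full(x):        # right-hand side of the second identity
    I, _ = quad(lambda t: np.log(-np.expm1(-2*np.pi*x/t))/(t**2 + 1), 0, np.inf)
    return x*(np.log(x) - 1) + 0.5*np.log(2*np.pi/x) - I/np.pi

for x in (0.3, 1.0, 2.5):
    print(gammaln(0.5 + x) - rhs_half(x), gammaln(x) - rhs_full(x))
# both differences are at the level of the quadrature error
```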
Later in the section, we will make numerical comparisons for more general choices of spectral curve and Schrödinger operator. For numerical calculations, and for comparison to the integral equations in this and the next section, it is very useful to use a modified form of the Schrödinger equation. Starting from $$\left[ -\partial_z^2 + \frac{\phi(z)}{\epsilon^2} + t_0(z) \right] \psi(z)=0$$ and writing the wave-function $\psi(z) = e^{\frac{1}{\epsilon}\int^z x dz} f(z)$ we get $$\label{eq:mod}
\left[ -\partial_z^2 - \frac{2 x}{\epsilon} \partial_z - \frac{\partial_z x}{\epsilon} + t_0(z) \right] f(z)=0$$
This differential equation can be easily integrated numerically along the paths $\hat \gamma$. The combination $\sqrt{x} f(z)$ has a finite limit as we go to infinity, and the ratio of $\sqrt{x} f(z)$ at the end-points of the path can be readily compared with the non-trivial part of the Wronskians, $e^{- \frac{Z_{\hat \gamma}}{\epsilon}} X_{\hat \gamma}$.
The harmonic oscillator
-----------------------
There is a simple setup, which works as a local model for the metric on $\CM$ near a singular locus of the Coulomb branch, where a single BPS hypermultiplet becomes massless. It is associated to the $A_1$ spectral curve with a rank $2$ irregular singularity at infinity, denoted as $AD_2$ in [@Gaiotto:2009hg]. $$x^2 = z^2+ 2 a$$
Physically, this is associated to the theory of a single BPS hypermultiplet, of mass $Z_e = 2 \pi i a$. The period $Z_e$ is the period of the differential $\lambda = x dz$ along the finite cycle $\gamma_e$ surrounding the origin at large $z$. The corresponding coordinate is uncorrected, $X_e = \exp \frac{2 \pi {{\mathrm i}}a}{\epsilon}$.
We can define a dual, non-compact cycle $\gamma_m = \hat \gamma_e$ lying on the real axis. To make this statement and subsequent formulae precise, it is useful to keep $a$ close to the positive real axis. Analytic continuation to other values of $a$ is straightforward. The corresponding (regularized) period is $$Z_m = \Lambda^2 + a \left(1 - \ln \frac{a}{2\Lambda^2} \right)$$ This controls the asymptotics of the T-function $T_e$ dual to $X_e$, which we will denote as $X_m$. We can compute $X_m$ right away from the integral equation, using $\omega(e, m) = 1$ and $\sigma(e)=-1$, $$\label{eq:Xm}
\log X_m = \frac{Z_m}{\epsilon} + \frac{\epsilon}{\pi {{\mathrm i}}} \int_{\ell_{-\gamma_e}} \frac{{{\mathrm{d}}}\epsilon'}{(\epsilon')^2 - (\epsilon)^2} \log(1 +e^{-\frac{2 \pi {{\mathrm i}}a}{\epsilon'}})$$ to obtain the analytic form $$\begin{aligned}
X_m = X_m^+ &= e^{\Lambda^2/\epsilon}\left(2\Lambda^2/\epsilon \right)^{a/\epsilon} \frac{\sqrt{2\pi } }{\Gamma\left(\frac{1}{2}+\frac{a}{\epsilon}\right)}\qquad &\mathrm{Re} a/\epsilon>0 \cr
X_m = X_m^- &=e^{\Lambda^2/\epsilon}\left(-2 \Lambda^2/\epsilon \right)^{a/\epsilon} \frac{ \Gamma\left(\frac{1}{2}-\frac{a}{\epsilon}\right)}{\sqrt{2 \pi }} \qquad &\mathrm{Re} a/\epsilon<0\end{aligned}$$ A basic check is to verify the discontinuity $X_m^+ = X_m^- (1 +e^{\pm \frac{2 \pi {{\mathrm i}}a}{\epsilon}})$ along the positive or negative imaginary $a/\epsilon$ axis.
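The discontinuity can be checked directly from the closed forms above. The sketch below is not from the paper: it sets $\Lambda=\epsilon=1$, picks a value of $a/\epsilon$ just off the positive imaginary axis, and uses principal branches throughout, in which case the ratio comes out as $1+e^{-2\pi i a/\epsilon}$.

```python
import numpy as np
from scipy.special import loggamma

eps, Lam = 1.0, 1.0
a = 0.1 + 0.6j                      # a/eps just off the positive imaginary axis

log_Xp = Lam**2/eps + (a/eps)*np.log(2*Lam**2/eps) \
         + 0.5*np.log(2*np.pi) - loggamma(0.5 + a/eps)
log_Xm = Lam**2/eps + (a/eps)*np.log(-2*Lam**2/eps + 0j) \
         + loggamma(0.5 - a/eps) - 0.5*np.log(2*np.pi)

print(np.exp(log_Xp - log_Xm))      # ~ 36.1 - 25.5i
print(1 + np.exp(-2j*np.pi*a/eps))  # the same number
```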
We can compare these functions with the analogous quantities computed from the harmonic oscillator Schrödinger operator $$- \epsilon^2 \partial_z^2 \psi+ (z^2 + 2 a) \psi=0$$ If we look at a solution which decreases along the positive real axis as $$\psi_R \sim e^{\frac{\Lambda^2 - z^2}{2 \epsilon}}
\left(L/z\right)^{a/\epsilon}\sqrt{\frac{\epsilon}{2 z}}$$ and take the Wronskian with a solution which decreases along the negative real axis as $$\psi_L \sim e^{\frac{\Lambda^2 - z^2}{2 \epsilon}}
\left(-L/z\right)^{a/\epsilon}\sqrt{-\frac{\epsilon}{2 x}}$$ we obtain $X^+_m$. Similarly $(X_m^-)^{-1}$ arises from the Wronskian of wavefunctions which decrease along the imaginary axis $\psi_U$ and $\psi_D$. This agrees with the definition given for the $AD_2$ theory in [@Gaiotto:2009hg].
The comparison can be made less tedious by using \[eq:mod\], and comparing directly the result of the integral in \[eq:Xm\] $$\label{eq:Xm-again}
\frac{\epsilon}{\pi {{\mathrm i}}} \int_{\ell_{-\gamma_e}} \frac{{{\mathrm{d}}}\epsilon'}{(\epsilon')^2 - (\epsilon)^2} \log(1 +e^{-\frac{2 \pi {{\mathrm i}}a}{\epsilon'}})$$ with the change in $\log \sqrt{x} f(z)$ along the open path $\gamma_m$.
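As a concrete illustration of this comparison, the sketch below integrates \[eq:mod\] for the harmonic oscillator itself, with $t_0=0$ and the branch $x=+\sqrt{z^2+2a}$, along a truncated real path, and compares the change of $\log \sqrt{x} f$ with the integral term of \[eq:Xm\], written as a real integral over the ray. The values $a=\epsilon=1$, the truncation at $|z|=50$, and the real parametrization of the ray are choices made here for illustration, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

a, eps, zmax = 1.0, 1.0, 50.0
x  = lambda z: np.sqrt(z**2 + 2*a)               # branch of sqrt(phi) used here
xp = lambda z: z/np.sqrt(z**2 + 2*a)

def rhs(z, y):                                   # eq. [eq:mod] with t_0 = 0
    f, fp = y
    return [fp, -(2*x(z)/eps)*fp - (xp(z)/eps)*f]

# start on the WKB branch f ~ x^{-1/2} at the far left and integrate to the right
y0 = [x(-zmax)**-0.5, -0.5*xp(-zmax)*x(-zmax)**-1.5]
sol = solve_ivp(rhs, [-zmax, zmax], y0, rtol=1e-10, atol=1e-12)
ode_side = np.log(np.sqrt(x(zmax))*sol.y[0, -1]) - np.log(np.sqrt(x(-zmax))*y0[0])

# integral term of [eq:Xm], written as a real integral over the ray
w = a/eps
tba_side, _ = quad(lambda u: np.log1p(np.exp(-2*np.pi*w/u))/(1 + u**2), 0, np.inf)
tba_side /= np.pi

print(ode_side, tba_side)    # agree up to the O(1/zmax^2) tails of the path
```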
A more intricate local model
----------------------------
Our next model controls the behavior of moduli spaces of $\CS[A_1]$ theories as the mass parameter for an $SU(2)$ flavor symmetry is turned off, near the locus in the Coulomb branch where a Higgs branch opens up.
The spectral curve for the model is $$x^2 = 1 + \frac{2 a}{z} + \frac{c^2}{z^2}$$ This curve has two finite cycles, corresponding to $$Z_1 = 2 \pi i (c+a) \qquad Z_2 = 2 \pi i (c-a).$$ We define $Z_{1+2}= 4 \pi i c$.
The corresponding basis of non-compact cycles runs on the positive real axis and on the negative real axis respectively (say with $a$ and $c$ real and positive), giving $$\begin{aligned}
Z_{\hat 1} = \Lambda + (c+a) \left(1 - \ln \frac{c+a}{2 \Lambda} \right)-2 c \left(1 - \ln \sqrt{\frac{\tilde \Lambda}{\Lambda}} c\right) \cr
Z_{\hat 2} = \Lambda + (c-a) \left(1 - \ln \frac{c-a}{2 \Lambda} \right)-2 c \left(1 - \ln \sqrt{\frac{\tilde \Lambda}{\Lambda}} c\right)\end{aligned}$$
The coefficients of the logarithmic singularities match the non-zero $$\omega(1, \hat 1) = - \omega(1+2, \hat 1)=1 \qquad \qquad \omega(2, \hat 2) = - \omega(1+2, \hat 2)=1,$$ and the corresponding T-functions can be computed right away in analytic form (we also need $\sigma(1) = \sigma(2) = -\sigma(1+2) =-1$). For example, $$\begin{aligned}
X_{\hat 1} = e^{\Lambda/\epsilon}\left(2\Lambda/\epsilon \right)^{a/\epsilon}\left(\tilde \Lambda\right)^{c/\epsilon} \frac{\sqrt{2c/\epsilon} \Gamma\left(\frac{2c}{\epsilon}\right)}{\Gamma\left(\frac{1}{2}+\frac{c+a}{\epsilon}\right)}\qquad &\mathrm{Re} (c+a)/\epsilon>0 \, \mathrm{Re} c/\epsilon>0\end{aligned}$$ This coincides with an appropriate Wronskian of sections for the oper $$-\epsilon^2 \partial_z^2 + 1 + \frac{2 a}{z} + \frac{c^2-\frac{\epsilon^2}{4}}{z^2}$$ with behaviour $$\sqrt{\epsilon z/(2c)} \left(\frac{2z}{\epsilon \tilde \Lambda}\right)^{c/\epsilon}$$ near the origin and $$\sqrt{\epsilon/2} e^{-\frac{x}{\epsilon}} (x/\Lambda)^{-a/\epsilon}$$ at positive infinity.
It is interesting to observe that the $-\frac{\epsilon^2}{4}$ correction at the regular singularity is just what one would expect from the semi-classical limit of the AGT dictionary [@Alday:2009aq]. There is a simple interpretation for the singular behavior of the sub-leading stress-tensor $$t_0(z) = -\frac{1}{4 z^2}$$ If we do a conformal transformation $z = e^s$ to make the regular puncture into a tube, $t_0$ disappears. This pattern will persist in other examples.
A simplified version of this local system with a rank $1/2$ irregular singularity at infinity $$x^2 = \frac{1}{z} + \frac{c^2}{z^2}$$ with a single $\omega(e, m)=-1$, $\sigma(e)=1$ state can be similarly matched to the oper $$-\epsilon^2 \partial_z^2 + \frac{1}{z} + \frac{c^2-\frac{\epsilon^2}{4}}{z^2}$$
In both problems, we can dispense with the need to carefully regulate the open periods if we base our comparisons on \[eq:mod\].
Three-punctured sphere
----------------------
The $A_1$ example with three regular punctures is more elaborate, but predictable. The spectral curve is $$x^2 = \frac{-{c_0}^2-{c_1}^2+{c_\infty}^2}{(z-1)
z}+\frac{{c_0}^2}{z^2}+\frac{{c_1}^2}{(z-1)^2}$$ There are three natural cycles of periods $$\begin{aligned}
Z_1 &= 2 \pi i (-c_0 + c_1 + c_\infty) \cr
Z_2 &= 2 \pi i (c_0 - c_1 + c_\infty) \cr
Z_3 &= 2 \pi i (c_0 + c_1 - c_\infty) \end{aligned}$$ and BPS degeneracies $$\begin{aligned}
\omega(1, \hat 1) = - \omega(1+2, \hat 1)=-\omega(1+3, \hat 1) = \omega(1+2+3, \hat 1) =1 \cr
\omega(2, \hat 2) = - \omega(1+2, \hat 2)=-\omega(2+3, \hat 2) = \omega(1+2+3, \hat 2) =1 \cr
\omega(3, \hat 3) = - \omega(2+3, \hat 3)=-\omega(1+3, \hat 3) = \omega(1+2+3, \hat 3) =1 \end{aligned}$$ and $$\sigma(1) = \sigma(2) = \sigma(3) = - \sigma(1+2) = - \sigma(1+3) = - \sigma(2 + 3) = \sigma(1+2+3) =-1.$$
A tedious but straightforward calculation produces T-functions which precisely match the Wronskians of the corresponding sections of $$-\epsilon^2 \partial_z^2 + \frac{-{c_0}^2-{c_1}^2+{c_\infty}^2+\epsilon^2/4}{(z-1)
z}+\frac{{c_0}^2-\epsilon^2/4}{z^2}+\frac{{c_1}^2-\epsilon^2/4}{(z-1)^2}$$
The cubic
---------
The first example where numerical calculations are needed is the $AD_3$ theory, $$x^2 = z^3 + \Lambda z + u$$
In order to carry out the calculation efficiently, we can first specialize to a particularly symmetric point, $u=0$. With no loss of generality, we can set the scale $\Lambda$ to $1$. $$x^2 = z^3 + z$$ The two basic periods $$Z_1 = \frac{8 \sqrt{2} \pi ^{3/2}}{5 \Gamma
\left(\frac{1}{4}\right)^2}e^{{{\mathrm i}}\pi/4} \qquad Z_2 = \frac{8 \sqrt{2} \pi ^{3/2}}{5 \Gamma
\left(\frac{1}{4}\right)^2}e^{3 {{\mathrm i}}\pi/4}$$ satisfy a relation $Z_2 = {{\mathrm i}}Z_1$, which implies an enhancement of the usual $Z_2$ symmetry to $Z_4$: $X_2({{\mathrm i}}\epsilon) = X_1(\epsilon)$. The non-zero degeneracies are $\omega(1,2) =-\omega(2,1) = 1$ with $\sigma(1)=\sigma(2)=-1$ and thus we can collapse the integral equations to $$\label{eq:int-cubic}
\log X_1(\epsilon) = \frac{Z_1}{\epsilon} + \frac{\epsilon}{\pi} \int_{\ell_1} \frac{{{\mathrm{d}}}\epsilon'}{(\epsilon')^2 +(\epsilon)^2} \log(1 +X_1(\epsilon')).$$ We can readily solve the equation numerically, by iterating it a few times. The solution goes to the golden ratio at infinity, as we have $$(X^\infty_1)^2 = 1+X^\infty_1$$
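The iteration is easy to set up. Evaluating \[eq:int-cubic\] along the ray $\ell_1$ itself, i.e. writing $\epsilon=-Z_1 v$ and $\epsilon'=-Z_1 u$ with $u,v>0$ (a parametrization and orientation chosen here so that the equation becomes real; this bookkeeping is mine, not spelled out in the text), gives $\log X_1(v) = -1/v + (v/\pi)\int_0^\infty du\,\log(1+X_1(u))/(u^2+v^2)$, which the following sketch iterates on a logarithmic grid.

```python
import numpy as np

u = np.logspace(-3, 6, 1800)                      # eps along the ray, in units of |Z_1|
w = np.empty_like(u)                              # trapezoid weights for the grid
w[1:-1] = 0.5*(u[2:] - u[:-2])
w[0], w[-1] = 0.5*(u[1] - u[0]), 0.5*(u[-1] - u[-2])
K = (u[:, None]/np.pi)/(u[None, :]**2 + u[:, None]**2)    # kernel K(v, u')

logX = -1.0/u                                     # seed: X_1 ~ exp(Z_1/eps) on the ray
for _ in range(40):
    logX = -1.0/u + K @ (w*np.log1p(np.exp(logX)))

i = np.searchsorted(u, 1.0e3)
print(np.exp(logX[i]), (1 + np.sqrt(5))/2)        # slowly approaches the golden ratio
```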
The functions $\log X_{1,2}(\epsilon)$, computed numerically on the real positive $\epsilon$ axis, agree with high numerical precision with the appropriate transport coefficients of the Schrödinger operator $$- \epsilon^2 \partial_z^2 + z^3 + z$$
Pure $SU(2)$
------------
The next example is associated to the spectral curve $$x^2 = \frac{1}{z} + \frac{u}{z^2} + \frac{1}{z^3}$$ If we set $u=0$, we gain the same sort of $Z_4$ symmetry as for the cubic example. The reduced integral equation is almost identical: it differs by a crucial factor of $2$: $$\label{eq:int-su}
\log X_1(\epsilon) = \frac{Z_1}{\epsilon} + 2 \frac{\epsilon}{\pi} \int_{\ell_1} \frac{{{\mathrm{d}}}\epsilon'}{(\epsilon')^2 +(\epsilon)^2} \log(1 +X_1(\epsilon')).$$
This factor of $2$ has deep consequences. The solutions cannot asymptote to a constant at infinity: $$X^\infty_1 = 1+X^\infty_1$$ does not make sense. Indeed, it is easy to argue that a logarithmic divergence $k \log \epsilon$ of $\log X_1$ is self-consistent for any $k$. The integral equations in the conformal limit do not fix $k$.
On the other hand, if we look at the full integral equations, and take the conformal limit of their solutions, the limit appears well-defined. The limiting solutions appear to diverge roughly as $\frac{1}{2} \log \epsilon$, i.e. $X_1$ appears to grow as $\sqrt{\epsilon}$.
Some educated guesswork produces a candidate Schrödinger operator whose transport coefficients appear to match numerically the conformal limit of the solutions of the full integral equations at $u=0$: $$- \epsilon^2 \partial_z^2 + \frac{1}{z} + \frac{u-\epsilon^2/4}{z^2} + \frac{1}{z^3}$$
Specialization to Hitchin moduli space
======================================
If we are dealing with a theory in class $\CS$, so that $\CM$ is roughly the Hitchin moduli space, we can use a further set of integral equations which produce directly the flat sections of the auxiliary flat connection $$\CA = \frac{\Phi}{\zeta} + A + \zeta \bar \Phi$$ for the Hitchin system on a punctured Riemann surface $C$, and thus the solution $A,\Phi$ of the Hitchin system labeled by a point in $\CM$. Then we can verify directly whether the equations provide flat sections of opers in the conformal limit.
We first introduce the auxiliary functions $x_{\gamma_{ij'}}$, labeled by open paths $\gamma_{ij'}$ on the spectral curve $$\label{eq:spec}
\det \left[ x dz - \Phi(z)\right] =0,$$ through the equation $$\label{eq:int-old-x}
\log x_{\gamma_{ij'}}(\zeta) = \frac{Z_{\gamma_{ij'}}}{\zeta} + {{\mathrm i}}\theta_{\gamma_{ij'}} + \bar Z_{\gamma_{ij'}} \zeta + \sum_{\gamma'} \omega(\gamma', \gamma_{ij'}) \frac{1}{4 \pi {{\mathrm i}}} \int_{\ell_{\gamma'}} \frac{{{\mathrm{d}}}\zeta'}{\zeta'} \frac{\zeta' + \zeta}{\zeta' - \zeta} \log(1 - X_{\gamma'}(\zeta')).$$ The coefficients $\omega(\gamma', \gamma_{ij'})$ are piecewise constant on $C$, thus the dependence on the initial and final points of $\gamma_{ij'}$ is locally captured by the periods $Z_{\gamma_{ij'}}$ of the canonical differential $\lambda = x dz$ on the spectral curve. The labels $i$ and $j'$ indicate on which sheets of the spectral curve the path $\gamma_{ij'}$ ends.
Then we define a matrix $$\label{eq:int-2}
g_k(\zeta) = g_k^0 + \sum_{\ell \neq k, \gamma_{\ell k}} \mu(\gamma_{\ell k})
\frac{1}{4 \pi {{\mathrm i}}} \int_{\ell_{\gamma_{\ell k }}} \frac{{{\mathrm{d}}}\zeta'}{\zeta'}
\frac{\zeta' + \zeta}{\zeta' - \zeta} g_\ell (\zeta') x_{\gamma_{\ell k}}(\zeta'),$$ and its inverse $$\label{eq:int-2-inv}
g_{-k}(\zeta) = g_{-k}^0 - \sum_{\ell \neq k, \gamma_{ k \ell}} \mu(\gamma_{ k \ell} )
\frac{1}{4 \pi {{\mathrm i}}} \int_{\ell_{\gamma_{ k \ell} }} \frac{{{\mathrm{d}}}\zeta'}{\zeta'}
\frac{\zeta' + \zeta}{\zeta' - \zeta} x_{\gamma_{ k \ell}}(\zeta') g_{-\ell} (\zeta') .$$
These matrices define a local gauge transformation which diagonalizes the Hitchin complex flat connection $\CA$ and reduces it to an abelian connection whose local holonomies are encoded in the $x_{\gamma_{ij'}}$. The source terms $g_k^0$ and $g_{-k}^0$ pick a specific choice of gauge for the solution of the Hitchin system. We will come back to them momentarily.
Notice that the abelian connection can be recovered simply as $a_{j'} = d \log x_{\gamma_{ij'}}$, where $d$ acts on the endpoint of the path $\gamma_{ij'}$. We can write directly $$\label{eq:dint-old}
a_i(\zeta) = \frac{\lambda_i}{\zeta} + {{\mathrm i}}d\theta_i + \bar \lambda_i \zeta + \sum_{\gamma'} d\omega(\gamma', i) \frac{1}{4 \pi {{\mathrm i}}} \int_{\ell_{\gamma'}} \frac{{{\mathrm{d}}}\zeta'}{\zeta'} \frac{\zeta' + \zeta}{\zeta' - \zeta} \log(1 - X_{\gamma'}(\zeta')).$$ The closed forms $d\omega(\gamma', i)$ will be supported on specific codimension $1$ loci on $C$.
The complex flat connection is recovered as $$\CA = \sum_i g_i a_i g_{-i} + g_i dg_{-i}$$ It is useful to expand everything around $\zeta=0$. We can pick a convenient complexified gauge choice by rewriting the $g_k$ integral equation as $$\label{eq:int-2-gauge}
g_k(\zeta) = g_k^+ + \sum_{\ell \neq k, \gamma_{\ell k}} \mu(\gamma_{\ell k})
\frac{\zeta}{2 \pi {{\mathrm i}}} \int_{\ell_{\gamma_{\ell k }}} \frac{{{\mathrm{d}}}\zeta'}{\zeta'}
\frac{1}{\zeta' - \zeta} g_\ell (\zeta') x_{\gamma_{\ell k}}(\zeta'),$$ and similarly for $g_{-i}$.
If we expand $$g_i = g_i^+ + \cdots \qquad g_{-i} = g^+_{-i} + \cdots$$ and $$a_i = \frac{\lambda_i}{\zeta} + {{\mathrm i}}d\theta_i + \rho_i +\cdots$$ The correction term $$\rho_i = \sum_{\gamma'} d\omega(\gamma', i) \frac{1}{4 \pi {{\mathrm i}}} \int_{\ell_{\gamma'}} \frac{{{\mathrm{d}}}\zeta'}{\zeta'} \log(1 - X_{\gamma'}(\zeta'))$$ is a real form, due to the reality conditions on $X_\gamma$ and $\omega(-\gamma', i) = - \omega(\gamma', i)$. Thus we find $$\Phi = \sum_i g^+_i \lambda_i g^+_{-i}$$ and (remember that the canonical one form $\lambda$ is holomorphic) $$A_{\bar z} = \sum_i g^+_i ({{\mathrm i}}d\theta_i + \rho_i )_{(0,1)} g^+_{-i} + g^+_i dg^+_{-i}$$ We can express these relations as $$\label{eq:eigen}
\Phi g^+_i = \lambda_i g^+_{i} \qquad \qquad D_{\bar z} g^+_i = (\rho_i +{{\mathrm i}}d\theta_i)_{(0,1)} g^+_i$$
We would like to match this with the standard parameterization of the Higgs bundle in terms of the spectral curve data. The construction is reviewed beautifully in section $4$ of [@2007arXiv0710.5939F], which explains several mathematical subtleties which are important in the following analysis. The spectral curve for an $SL(K)$ Hitchin system is the curve of eigenvalues of $\Phi$ \[eq:spec\]. The Higgs bundle defines a line bundle on the spectral curve, defined as the co-kernel of $x dz - \Phi$. The line bundle has a non-trivial Chern class. It can be made into a degree zero line bundle by combining it with the difference between the square roots of the canonical bundles of the base curve $C$ and the spectral curve $\Sigma$. Essentially, the point is that the eigenline bundle has curvature localized at the turning points, where two eigenvalues collide. Then the degree zero line bundle can be made into a flat $U(1)$ bundle, and used to parameterize the fibre of Hitchin fibration.
This is exactly what we see in \[eq:eigen\]! The $g^+_{i}$ intertwine the full bundle $V$ and the eigenline bundles $V_i$. A local calculation near the turning points shows that the $g^+_{i}$ matrix must have a precise singularity there in order to have a smooth solution of the integral equations. Roughly, $g_i$ diverges as $\prod_{j \neq i} (x_i - x_j)^{-1/2}$. The singularity is exactly such that we can reinterpret $g^+_{i}$ as a smooth intertwiner between $V \otimes K_C^{1/2}$ and $V_i \otimes K_\Sigma^{1/2}$.
We see thus that $\partial_{\bar z} -(\rho_i +{{\mathrm i}}d\theta_i)$ is the Abelian connection on the degree $0$ eigenline bundle. We can verify by hand that the forms $\rho_i$ are closed on the spectral curve: if we look in detail at the jumps in $\omega(\gamma', i)$ induced by $2d-4d$ wall-crossing across $\gamma'$ [@Gaiotto:2011tf], we see that the periods of $d\omega(\gamma', i)$ around turning points are zero. Then $\partial_{\bar z} -{{\mathrm i}}d\theta_i$ is the $U(1)$ connection which parameterizes the Higgs bundle, and $\theta_{\gamma}$ coordinates on the fibre.
Notice that there is a certain degree of ambiguity in picking the square roots of the canonical bundles of the base curve and the spectral curve. A clean, if unfamiliar, way to eliminate the ambiguity is to take the two square roots as twisted line bundles rather than line bundles [@2007arXiv0710.5939F]. Then the coordinates $\theta_{\gamma}$ are holonomies of a twisted $U(1)$ bundle on the spectral curve, and canonical coordinates on the moduli space of Hitchin’s equations for a twisted bundle.
This twisted perspective is not strictly necessary: one can work with ordinary, untwisted bundles, making extra non-canonical choices which propagate in the form of sign choices in many places, in particular in the choice of quadratic refinements $\sigma(\gamma)$. On the other hand, the degeneracies $\mu$ and $\omega$ have been computed in the twisted formalism [@Gaiotto:2011tf], where they have canonical, natural signs.
As the choice of square root of the canonical bundle is needed in order to define the Hitchin zero section and the oper manifold, the twisted formalism has the added benefit of making these completely canonical.
The section of Hitchin fibration
--------------------------------
It is natural to set the $\theta_\gamma$ (and $d \theta_i$) to zero in the TBA equations, i.e. look at the special section of Hitchin’s fibration associated to a trivial degree-zero line bundle on the spectral curve. The solutions of the TBA equations acquire an extra symmetry $X_{-\gamma}(- \zeta) = X_\gamma(\zeta)$ which removes the $\rho_i$ corrections as well.
The $g_i^+$ matrix can then be given in detail. The $k$-th element of $g_i^+$ is $$(g_i^+)^k = \prod_{j \neq i} (x_i - x_j)^{-1/2} x_i^k$$ Notice that the determinant of $(g_i^+)^k$ is $1$.
This form of $g_i^+$ corresponds to a very specific gauge choice. The zero section of the Hitchin fibration can be [*defined*]{} by requiring the existence of a non-trivial line-subbundle which generates the whole bundle when acted upon by $\Phi$. The first element of $g_i^+$ defines such a sub-bundle, and the other elements are produced by the action of powers of $\Phi$. Thus $\Phi$ takes the form $$\Phi = \begin{pmatrix} 0 & 0 &\cdots & 0 & \phi_K \cr 1 & 0 &\cdots & 0 & \phi_{K-1} \cr \cdots & \cdots & \cdots &\cdots& \cdots \cr 0 & 0 & \cdots & 1 & 0 \end{pmatrix}$$
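The statement that a cyclic line sub-bundle brings $\Phi$ to this form is easy to illustrate in finite dimensions. The sketch below is illustrative only and not from the paper: it conjugates a random traceless matrix into companion form using the basis generated by a cyclic vector.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3
Phi = rng.standard_normal((K, K))
Phi -= (np.trace(Phi)/K)*np.eye(K)                # a random traceless ("sl(K)") matrix

v = rng.standard_normal(K)                        # a cyclic vector
S = np.column_stack([np.linalg.matrix_power(Phi, j) @ v for j in range(K)])
print(np.round(np.linalg.solve(S, Phi @ S), 10))
# 1's below the diagonal, characteristic-polynomial coefficients in the last
# column, and a vanishing bottom-right entry because Phi is traceless.
```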
The integral equations simplify a bit because of the symmetry $X_{-\gamma}(- \zeta) = X_\gamma(\zeta)$ and the condition $d\theta=0$. The contribution from opposite BPS rays in the integral equations for the $x_{\gamma_{ij'}}$ can be combined together as before, to give an integral kernel which decays at small and large $\zeta'$. The integral equations for the $g_i$ do not change: the symmetry relates $g_{-i}(-\zeta)$ and $g_i(\zeta)$, and thus cannot be used to re-group terms.
The conformal limit
-------------------
The conformal limit is now obvious. We get $$\label{eq:int-1-conf}
x_{\gamma_i}(\epsilon) := Z_{\gamma_i}/\epsilon +
\sum_{\gamma'>0} \omega(\gamma', \gamma_i) \frac{1}{\pi i}
\int_{\ell_{\gamma'}}d \epsilon' \frac{\epsilon}{(\epsilon')^2- \epsilon^2}
\log(1 - X_{\gamma'}(\epsilon')).$$ and $$\label{eq:int-2-conf}
g_k(\epsilon) = g_k^+ + \sum_{\ell \neq k, \gamma_{\ell k}} \mu(\gamma_{\ell k})
\frac{\epsilon}{2 \pi {{\mathrm i}}} \int_{\ell_{\gamma_{\ell k }}} \frac{{{\mathrm{d}}}\epsilon'}{\epsilon'}
\frac{1}{\epsilon' - \epsilon} g_\ell (\epsilon') x_{\gamma_{\ell k}}(\epsilon'),$$ and similarly for $g_{-i}$. Now, we get to the crucial observation. The integral equations build sections of a flat connection $$\CA = \sum_i g_i a_i g_{-i} + g_i dg_{-i}$$ with a rather specific structure. The first element of $g_i^+$, and thus $g_i$, will define a line sub-bundle as long as the integral equations do not require a further gauge transformation for the first element of $g_i$ to be well-defined at turning points. This can be verified by a local analysis, which will be our first example in the next section.
The oper manifold can be defined by the existence of such a line sub-bundle, with the property that acting with powers of the $D_z$ component of the connection generates the whole bundle. The latter condition is an open condition, and $D_z$ is dominated by $\Phi$ for small $\epsilon$, thus we expect that the solutions of the integral equations lie in the oper manifold for sufficiently small $\epsilon$. This motivates our conjecture that $\CL_\epsilon$ is the oper manifold in $\CM_\epsilon$.
Notice that we can restrict the integral equations to the first element $f_i$ of $g_i$, which corresponds to writing the oper as a degree $K$ differential operator, acting on sections of the $(-K/2)$-th power of the canonical bundle on $C$.
Examples
--------
In this section we will restrict ourselves to $A_1$ examples with spectral curve $$x^2 = \phi_2(z).$$ The integral equations involve two functions $f_1$ and $f_2$. The integral equation has a symmetry which implies that $f_2(-\epsilon) = {{\mathrm i}}f_1(\epsilon)$. Thus we can reduce ourselves to a single integral equation for $f \equiv f_1$, $$\label{eq:int-22}
f(\epsilon) = \frac{1}{\sqrt{2 x}} - \sum_{\gamma_{+-}} \mu(-\gamma_{+-})
\frac{\epsilon}{2 \pi} \int_{\ell_{\gamma_{+-}}} \frac{{{\mathrm{d}}}\epsilon'}{\epsilon'}
\frac{1}{\epsilon' + \epsilon} f(\epsilon') x_{\gamma_{+-}}(\epsilon').$$
### Airy
The local behaviour of the $g_i$ integral equations near a turning point is captured by the spectral curve $$x^2 = z$$ There are no $\omega$ BPS degeneracies, and a single non-zero $\mu$. The integral of $\lambda$ along the path $p$ from $z$ to the turning point at the origin and back on the opposite sheet is $$Z_p=-\frac{4}{3} z^{\frac{3}{2}}$$ and the integral equation becomes $$\label{eq:int-22-Airy}
f(\epsilon) = \frac{1}{\sqrt{2 x}} -\frac{\epsilon}{2 \pi} \int_{\ell_{p}} \frac{{{\mathrm{d}}}\epsilon'}{\epsilon'}
\frac{1}{\epsilon' + \epsilon} f(\epsilon') e^{-\frac{4}{3 \epsilon'} z^{\frac{3}{2}}}.$$
The solution converges rapidly to $$f(\epsilon) = \frac{\sqrt{2 \pi}}{\epsilon^{1/6}} e^{\frac{2}{3 \epsilon} z^{\frac{3}{2}}} \mathrm{Ai}(\frac{z}{\epsilon^{2/3}})$$ which gives us the flat section $\frac{\sqrt{2 \pi}}{\epsilon^{1/6}} \mathrm{Ai}(\frac{z}{\epsilon^{2/3}})$ of the Airy oper $$-\epsilon^2 \partial_z^2 + z$$
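The convergence can be checked explicitly. The sketch below iterates \[eq:int-22-Airy\] at the point $z=1$, where the ray $\ell_p$ is the positive real $\epsilon$ axis and everything is real, and compares with the Airy-function expression above; the grid and its truncation are arbitrary choices made here, not part of the paper.

```python
import numpy as np
from scipy.special import airy

z, x = 1.0, 1.0
eps = np.logspace(-2, 2, 800)                     # grid on the ray ell_p (positive reals)
w = np.empty_like(eps)
w[1:-1] = 0.5*(eps[2:] - eps[:-2])
w[0], w[-1] = 0.5*(eps[1] - eps[0]), 0.5*(eps[-1] - eps[-2])
K = eps[:, None]/(eps[None, :] + eps[:, None])    # eps/(eps' + eps)
damp = np.exp(-4*z**1.5/(3*eps))/eps              # e^{-4 z^{3/2}/(3 eps')}/eps'

f = np.full_like(eps, 1/np.sqrt(2*x))             # seed: the WKB value 1/sqrt(2x)
for _ in range(20):
    f = 1/np.sqrt(2*x) - (K @ (w*damp*f))/(2*np.pi)

exact = np.sqrt(2*np.pi)*eps**(-1/6)*np.exp(2*z**1.5/(3*eps))*airy(z/eps**(2/3))[0]
i = np.searchsorted(eps, 1.0)
print(f[i], exact[i])                             # both close to 0.66
```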
An improved parameterization: solving inverse scattering problems
=================================================================
As it stands, the set of integral equations we described produces a parameterization of the oper manifold and the flat sections for the opers which is somewhat convoluted: everything is expressed in terms of the periods $Z_\gamma$.
A natural problem one may consider is to identify an oper, say a Schrödinger operator, with prescribed scattering data/cross-ratios, and find the corresponding flat sections. The current form of the integral equations is not quite convenient for the purpose. We can easily amend that, using a trick from [@Alday:2010ku]. Start from the integral equation in the conformal limit \[eq:int-new\] specialized to $\epsilon =1$, $$\label{eq:int-old1}
\log X_{\gamma}(1) = Z_\gamma + \sum_{\gamma'>0} \omega(\gamma', \gamma) \frac{1}{\pi {{\mathrm i}}} \int_{\ell_{\gamma'}} \frac{{{\mathrm{d}}}\epsilon'}{(\epsilon')^2 - 1} \log(1 - \sigma(\gamma')X_{\gamma'}(\epsilon')).$$ solve for $Z_\gamma$ and plug back into \[eq:int-new\]: $$\label{eq:int-old-new}
\log X_{\gamma}(\epsilon) = \frac{\log X_{\gamma}(1)}{\epsilon} + \sum_{\gamma'>0} \omega(\gamma', \gamma) \frac{\epsilon-\epsilon^{-1}}{\pi {{\mathrm i}}} \int_{\ell_{\gamma'}} \frac{{{\mathrm{d}}}\epsilon'}{(\epsilon')^2 - (\epsilon)^2} \frac{(\epsilon')^2}{(\epsilon')^2-1} \log(1 - \sigma(\gamma')X_{\gamma'}(\epsilon')).$$
For complete gauge theories, such as the $A_1$ examples associated with Schrödinger operators, there should be no problem with solving for $Z_\gamma$: by varying all the parameters in the quadratic differential, including the positions of the punctures, one can reach generic values of $Z_\gamma$. Thus one can solve the above integral equation to find the choice of periods $Z_\gamma$ which gives specific scattering data/cross-ratios $X_{\gamma}(1)$ at $\epsilon=1$.
Acknowledgements {#acknowledgements .unnumbered}
================
The research of DG was supported by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development and Innovation.
[^1]: The twist uses $SU(2)_R$. As a consequence, the setup is consistent if we define $\CM$ by the supersymmetric compactification on a circle which uses a $SU(2)_R$ generator $I_3$ to define the fermion number $(-1)^{I_3}$. Thus $\CM$ is the manifold denoted as $\tilde \CM$ in [@Gaiotto:2010be]
[^2]: The canonical quadratic refinement is the difference between the $SU(2)_R$ “fermion number” $(-1)^{I_3}$ of a BPS particle of charge $\gamma$ and the more standard fermion number $(-1)^{J_3}$ defined through the angular momentum generator.
---
abstract: |
Moving mirrors are submitted to reaction forces by vacuum fields. The motional force is known to vanish for a single mirror uniformly accelerating in vacuum. We show that inertial forces (proportional to accelerations) arise in the presence of a second scatterer, exhibiting properties expected for a relative inertia: the mass corrections depend upon the distance between the mirrors, and each mirror experiences a force proportional to the acceleration of the other one. When the two mirrors move with the same acceleration, the mass correction obtained for the cavity represents the contribution to inertia of Casimir energy. Accounting for the fact that the cavity moves as a stressed rigid body, it turns out that this contribution fits Einstein’s law of inertia of energy.
PACS: 12.20 - 04.90 - 42.50 -
address: |
(a) Laboratoire de Physique Théorique de l’ENS [^1], 24 rue Lhomond, F75231 Paris Cedex 05 France\
(b) Laboratoire de Spectroscopie Hertzienne [^2], 4 place Jussieu, case 74, F75252 Paris Cedex 05 France
author:
- 'Marc Thierry Jaekel $^{(a)}$ and Serge Reynaud $^{(b)}$'
date: '[Journal de Physique I]{} [**3**]{} (1993) 1093-1104'
title: Inertia of Casimir energy
---
Scatterers in vacuum are subjected to the radiation pressure of vacuum fields. In a configuration with two motionless mirrors, a mean force, the so-called Casimir force, results for each of them [@Inertia1; @Inertia2]. As known since Einstein [@Inertia3], the field energy stocked inside a box contributes to its inertia. This law of inertia of energy has to be valid for any kind of energy bound to the motion of any system [@Inertia4]. Casimir energy corresponds to the particular case of a Fabry-Perot cavity, a system formed by two mirrors which models Einstein’s “box”, immersed in vacuum fields; it can be considered as a small amount of vacuum energy stocked inside the cavity, actually a negative amount, since it is a binding energy, as for an atomic system. As a consequence of the law of inertia of energy, the inertial mass of the cavity has to vary with the Casimir energy, for example when the cavity length is varied.
However, because of the infiniteness of the vacuum energy density (when integrated over frequency), it has often been claimed that vacuum energy is not a real energy like that of photons. In particular, it seems to be admitted that it does not gravitate [@Inertia5], so that its contribution to inertial forces can also be questioned. Nevertheless, it has also been argued that Casimir energy, a finite energy difference between two vacuum configurations, has to contribute to gravitation and inertia [@Inertia6]. In the present paper, we demonstrate that Casimir energy does indeed contribute to the inertia of the Fabry-Perot cavity. For this demonstration, we use previously obtained results providing the motional forces, i.e. the mean forces experienced by mirrors moving in vacuum [@Inertia7; @Inertia8], which are associated with the quantum fluctuations of vacuum radiation pressure [@Inertia9].
For a perfectly reflecting mirror alone in the vacuum state of a scalar field in a two dimensional (2D) spacetime [@Inertia10], the motional force $\delta F$ can be written in a linear approximation in the displacement $\delta q$ as $$\delta F(t)=\frac{\hbar }{6\pi c^ 2 }\delta q^{\prime \prime \prime }(t)
\eqnum 1$$ This force vanishes for a uniform velocity, as well as for a uniform acceleration, which can be interpreted as a consequence of symmetries of the vacuum. Vacuum fields are invariant under the action of Lorentz boosts [@Inertia11], so that the motional force vanishes for uniform velocity. Vacuum fields appear to a uniformly accelerating observer just as thermal fields appear to a motionless observer [@Inertia12], and the motional force vanishes for uniform acceleration. These properties remain true for a partially transmitting mirror in vacuum [@Inertia7; @Inertia13].
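To get a feeling for the size of this effect, the following short numerical estimate (added for illustration, with assumed parameter values; recall that eq. (1) refers to a scalar field in a 2D spacetime, so only the scaling should be taken seriously) evaluates the force amplitude for an oscillating mirror, for which the third derivative does not vanish.

```python
# Order-of-magnitude illustration of eq. (1), with assumed numbers: the force
# vanishes whenever the third time derivative of the displacement vanishes
# (uniform velocity or uniform acceleration); for delta q = a sin(Omega t) the
# force amplitude is (hbar / 6 pi c^2) * a * Omega^3.
import numpy as np

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
a = 1e-9                 # m, oscillation amplitude (assumed)
Omega = 2*np.pi*1e10     # rad / s, a 10 GHz drive (assumed)

F_amp = hbar/(6*np.pi*c**2) * a * Omega**3
print(f"motional force amplitude ~ {F_amp:.1e} N")   # ~ 2e-29 N, i.e. tiny
```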
The motional forces have also been computed in the configuration of a Fabry-Perot cavity in vacuum [@Inertia10; @Inertia8], where the spatial symmetries previously discussed are broken. We show in the present paper that they contain inertial forces, proportional to the accelerations, which exhibit properties expected for a relative inertia: the mass corrections depend upon the distance, and forces are obtained for each mirror, which are proportional to the acceleration of the other one. We will also compute the mass correction for a global motion of the system. In the limiting case of perfect mirrors for instance, we obtain a mass correction which is twice the value of Casimir energy over $c^ 2$. Taking into consideration that the cavity moves as a stressed rigid body (same motion for the two mirrors), a situation elucidated by Einstein himself [@Inertia14], it turns out that this is exactly the prediction of the law of inertia of energy.
Perfect mirrors: qualitative derivation of inertia corrections {#perfect-mirrors-qualitative-derivation-of-inertia-corrections .unnumbered}
==============================================================
We first study the situation of two perfect point-like mirrors in the vacuum state of a 2D scalar field. All relations, including the definition of vacuum, will refer to an inertial frame.
A linear approximation of the expression obtained in this case by Fulling and Davies [@Inertia10] provides us with the force $\delta F_1$ exerted upon the mirror 1 as a function of the positions $\delta q_1$ and $\delta q_2$ of the two mirrors [@Inertia15] (more precisely, $\delta
F_1$ is the response of the mean force to classical displacements) $$\begin{aligned}
\delta F_1 (t) &=&\frac{\hbar }{6\pi c^ 2 }\left( \delta q_1 ^{\prime
\prime \prime }(t)-\delta q_2 ^{\prime \prime \prime }(t-\tau )+\delta
q_1 ^{\prime \prime \prime }(t-2\tau )-\delta q_2 ^{\prime \prime \prime
}(t-3\tau )+\ldots \right) \nonumber \\
&&+\frac{\hbar \pi }{6c^ 2 \tau ^ 2 }\left( \frac 1 2 \delta q_1 ^{\prime
}(t)-\delta q_2 ^\prime (t-\tau )+\delta q_1 ^\prime (t-2\tau )-\delta
q_2 ^\prime (t-3\tau )+\ldots \right) \eqnum 2 \end{aligned}$$ where $\tau $ is the propagation delay of light from one mirror to the other and $q$ the distance between the mirrors $$\tau =\frac{q}{c}$$
The terms proportional to third order derivatives in equation (2) have the same form as the damping force (1) for a single mirror, but the modification of the stress tensor generated by the mirrors’ motion now propagates from one mirror to the other and is reflected back by the mirrors. This is why the time of flight $\tau $ appears in the expression of the force. Although they have the same form as for a single mirror, these terms give rise to mass corrections. A qualitative demonstration goes as follows. Extracting the contribution of these terms to the motional force (first line of eq. 2) and considering the quasistatic limit where the positions vary slowly on a time scale $\tau $, one transforms the sum over discrete times into an integral $$\delta F_1 (t)\approx {\int^{t}}{\rm d}t^\prime \frac{\hbar }{12\pi
c^ 2 \tau }\left( \delta q_1 ^{\prime \prime \prime }(t^\prime )-\delta
q_2 ^{\prime \prime \prime }(t^\prime )\right)$$ and one gets a motional force depending upon the accelerations of the two mirrors $$\delta F_1 (t)\approx \frac{\hbar }{12\pi cq}\left( \delta q_1 ^{\prime
\prime }(t)-\delta q_2 ^{\prime \prime }(t)\right)$$
The other terms of equation (2), proportional to velocities, are not present in the one mirror problem. They are associated with the existence of a static Casimir force, since their contribution (second line of eq. 2) leads in the quasistatic approximation to $$\delta F_1 (t)\approx \frac{\hbar c\pi }{12q^{3}}\left( \delta
q_1 (t)-\delta q_2 (t)\right)$$ This is the variation with the distance $q=q_2 -q_1 $ of the mean Casimir force $F_1 $ $$F_1 =\frac{\hbar c\pi }{24q^ 2 }=\partial _{q}U\qquad U=-\frac{\hbar c\pi }
{24q} \eqnum{3}$$ where $U$ is the known expression for the Casimir energy [@Inertia2].
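As an elementary cross-check of equation (3), added here for illustration, the following short sympy snippet verifies that $F=\partial _{q}U$ and that $\frac{{\rm d}F}{{\rm d}q}$ reproduces the static stiffness that reappears in eq. (6a) below.

```python
# Added cross-check of eq. (3): F = hbar*c*pi/(24 q^2) is dU/dq for the Casimir
# energy U = -hbar*c*pi/(24 q), and dF/dq equals -hbar*c*pi/(12 q^3), cf. eq. (6a).
import sympy as sp

q, hbar, c = sp.symbols('q hbar c', positive=True)
U = -hbar*c*sp.pi/(24*q)
F = hbar*c*sp.pi/(24*q**2)

print(sp.simplify(sp.diff(U, q) - F))                        # 0
print(sp.simplify(sp.diff(F, q) + hbar*c*sp.pi/(12*q**3)))   # 0
```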
In the approximate expressions obtained in this section, the position or acceleration of one mirror is measured relative to the position or acceleration of the other one. In the more precise discussion which follows, this property will remain true for the positions (the static force only depends upon the distance between the two mirrors), but the accelerations will appear only partly as relative quantities; this feature will lead to a non-vanishing correction for the global mass of the cavity.
Perfect mirrors: quantitative evaluation of inertia corrections {#perfect-mirrors-quantitative-evaluation-of-inertia-corrections .unnumbered}
===============================================================
The motional forces $\delta F_{i}$ can be written in the temporal or spectral domains $$\begin{aligned}
&&\delta F_{i}(t)={\int }{\rm d}\tau \sum_{j}\chi _{ij}(\tau )\delta
q_{j}(t-\tau ) \\
&&\delta F_{i}[\omega ]=\sum_{j}\chi _{ij}[\omega ]\delta q_{j}[\omega ]\end{aligned}$$ where we denote for any function $f$ $$f(t)=\int \frac{{\rm d}\omega }{2\pi }f[\omega ]e^{-i\omega t}$$
The expressions (2) for the motional forces correspond to the following susceptibility functions where the real and imaginary parts are separated in the spectral domain $$\begin{aligned}
\chi _{ij}[\omega ] &=&\chi _{ji}[\omega ]=\widetilde{\xi }_{ij}[\omega
]+i\xi _{ij}[\omega ] \eqnum{4a} \\
\xi _{11}[\omega ] &=&\xi _{22}[\omega ]=\frac{\hbar }{12\pi c^ 2 }\omega
^{3} \eqnum{4b} \\
\xi _{12}[\omega ] &=&0 \eqnum{4c} \\
\widetilde{\xi }_{11}[\omega ] &=&\widetilde{\xi }_{22}[\omega ]=-\frac
\hbar {12\pi c^ 2 }\frac{\omega ^{3}-\omega \frac{\pi ^ 2 }{\tau ^ 2 }}
{\tan (\omega \tau )} \eqnum{4d} \\
\widetilde{\xi }_{12}[\omega ] &=&\frac{\hbar }{12\pi c^ 2 }\frac{\omega
^{3}-\omega \frac{\pi ^ 2 }{\tau ^ 2 }}{\sin (\omega \tau )} \eqnum{4e}\end{aligned}$$ It can be noted that the dissipative parts $\xi _{ij}$, imaginary parts of $\chi _{ij}$ and odd functions of $\omega $, coincide with the contributions of the outer space (for each mirror, only one half of the outer space contributes and $\xi _{ii}$ is half the value of $\xi $ for a single mirror) while the dispersive parts $\widetilde{\xi }_{ij}$, real parts of $\chi _{ij}
$ and even functions of $\omega $, are the contributions of the intracavity space. This fact has a clear interpretation: the outer fields constitute an open quantum system, corresponding to a continuous spectrum; in contrast, the intracavity fields are characterized by a discrete spectrum and are unable to contribute to dissipation. As a consequence, there is no dissipative part in the mutual susceptibility ($\xi _{12}=0$).
The dissipative parts $\xi _{ij}$ are the commutators of the force operators and can be deduced from the correlation function $C_{ij}$ computed for motionless mirrors [@Inertia7; @Inertia8] $$\xi _{ij}(t)=\frac{\left\langle \left[ F_{i}(t),F_{j}(0)\right]
\right\rangle }{2\hbar }=\frac{C_{ij}(t)-C_{ji}(-t)}{2\hbar }\qquad
C_{ij}(t)=\left\langle F_{i}(t)F_{j}(0)\right\rangle -\left\langle
F_{i}\right\rangle \left\langle F_{j}\right\rangle$$ Fluctuations can also be recovered from dissipation through the relation [@Inertia7; @Inertia8] $$C_{ij}[\omega ]=2\hbar \theta (\omega )\xi _{ij}[\omega ]$$ i.e. the fluctuation-dissipation relation [@Inertia16] at the limit of zero temperature.
The dispersive functions $\widetilde{\xi }_{ij}$ diverge at the zeros $\omega =m\frac{\pi }{\tau }$ of the denominators, except for $m=0$ or $m=1$ where the numerators vanish. These divergences result from a constructive interference between the different numbers of cavity roundtrips [@Inertia8]. According to causality (which is apparent in eqs 2), the susceptibility functions (4) are analytic in the upper half-plane of the frequency domain ($\Im \omega >0$), and the dispersive parts are related to the dissipative ones through dispersion relations, a property which is more easily checked for partially transmitting mirrors (see the next section).
As the susceptibility functions are regular around $\omega =0$, a quasistatic expansion of the force may be performed, in which coefficients are introduced for describing static, viscous and inertial forces (we will not be interested in higher order quasistatic coefficients) $$\begin{aligned}
&&\delta F_{i}(t)=-\sum_{j}\left( \kappa _{ij}\delta q_{j}(t)+\lambda
_{ij}\delta q_{j}^\prime (t)+\mu _{ij}\delta q_{j}^{\prime \prime
}(t)+\ldots \right) \eqnum{5a} \\
&&\chi _{ij}[\omega ]=-\kappa _{ij}+i\omega \lambda _{ij}+\omega ^ 2 \mu
_{ij}+\ldots \eqnum{5b} \\
&&\kappa _{ij}=-\chi _{ij}[0]\qquad \lambda _{ij}=-i\chi _{ij}^{\prime
}[0]\qquad \mu _{ij}=\frac{\chi _{ij}^{\prime \prime }[0]} 2 \eqnum{5c}\end{aligned}$$ The quasistatic coefficients are deduced from equations (4); the static coefficients $\kappa _{ij}$ describe the variation with the distance $q$ of the force $F=F_1 =-F_2 $ $$\kappa _{11}=\kappa _{22}=-\kappa _{12}=-\kappa _{21}=-\frac{\hbar c\pi }
{12q^{3}}=\frac{{\rm d}F}{{\rm d}q} \eqnum{6a}$$ The viscosity coefficients $\lambda _{ij}$ vanish while the inertia corrections $\mu _{ij}$ are given by $$\begin{aligned}
\mu _{11} &=&\mu _{22}=-\frac{\hbar }{12\pi cq}\left( 1+\frac{\pi ^ 2 }{3}
\right) \eqnum{6b} \\
\mu _{12} &=&\mu _{21}=-\frac{\hbar }{12\pi cq}\left( -1+\frac{\pi ^ 2 }{6}
\right) \eqnum{6c}\end{aligned}$$ The mass corrections differ from the approximate ones previously discussed by numerical factors: the terms in equations (2) which are proportional to velocities also contribute to the inertial corrections; $\mu _{11}$ and $\mu
_{12}$ no longer have opposite values and they actually have the same sign. This will allow the mass correction for a global motion of the cavity to differ from zero.
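The quasistatic coefficients (6) follow mechanically from the dispersive parts (4d) and (4e); the short sympy check below (an added illustration, not part of the original text) expands them around $\omega =0$ and compares the coefficients with eqs (6a-c). Since the dispersive parts are even functions of $\omega$, one has $\Re \chi _{ij}\approx -\kappa _{ij}+\omega ^ 2 \mu _{ij}$ near $\omega =0$ (see eqs 5).

```python
# Added cross-check: recover the quasistatic coefficients of eqs (6a-c) from the
# dispersive parts (4d) and (4e).
import sympy as sp

w, tau, hbar, c = sp.symbols('omega tau hbar c', positive=True)
q = c*tau  # distance between the mirrors

xi11 = -hbar/(12*sp.pi*c**2) * (w**3 - w*sp.pi**2/tau**2) / sp.tan(w*tau)
xi12 =  hbar/(12*sp.pi*c**2) * (w**3 - w*sp.pi**2/tau**2) / sp.sin(w*tau)

ser11 = sp.expand(sp.series(xi11, w, 0, 3).removeO())
ser12 = sp.expand(sp.series(xi12, w, 0, 3).removeO())

kappa11, mu11 = -ser11.coeff(w, 0), ser11.coeff(w, 2)
kappa12, mu12 = -ser12.coeff(w, 0), ser12.coeff(w, 2)

print(sp.simplify(kappa11 + hbar*c*sp.pi/(12*q**3)))              # 0, eq (6a)
print(sp.simplify(kappa12 - hbar*c*sp.pi/(12*q**3)))              # 0, eq (6a)
print(sp.simplify(mu11 + hbar/(12*sp.pi*c*q)*(1 + sp.pi**2/3)))   # 0, eq (6b)
print(sp.simplify(mu12 + hbar/(12*sp.pi*c*q)*(-1 + sp.pi**2/6)))  # 0, eq (6c)
```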
Partially transmitting mirrors {#partially-transmitting-mirrors .unnumbered}
==============================
In a more satisfactory treatment, perfect mirrors are replaced by partially transmitting mirrors, described by reflection and transmission amplitudes respectively denoted $r_{i}$ and $s_{i}$ for the mirror $i=1,2$ obeying unitarity, causality and high frequency transparency requirements [@Inertia2]. These mirrors are more easily shown to obey causality [@Inertia7], the divergences associated with the infiniteness of vacuum energy are regularized [@Inertia2; @Inertia8], and the stability problem arising for a perfect mirror may be solved [@Inertia17]. A resonant enhancement of the motional Casimir force subsists at the optical resonance frequencies of the Fabry-Perot cavity [@Inertia8; @Inertia18].
The susceptibility functions $\chi _{ij}$ may be written (see eqs 20 and 21 of ref. [@Inertia8]; $\varepsilon $ is the sign function) $$\chi _{ij}[\omega ]=\frac{i\hbar }{2c^ 2 }\int \frac{{\rm d}\omega ^\prime }
{2\pi }\omega ^\prime (\omega -\omega ^\prime )\varepsilon (\omega
^\prime )\gamma _{ij}^{R}[\omega ^\prime ,\omega -\omega ^\prime ]$$ where the coefficients $\gamma _{ij}^{R}$ are the sum of two parts $$\gamma _{ij}^{R}[\omega ,\omega ^\prime ]=\gamma _{ij}^{S}[\omega ,\omega
^\prime ]+\gamma _{ij}^{A}[-\omega ,\omega ^\prime ]$$ Both functions $\gamma _{ij}^{S}$ and $\gamma _{ij}^{A}$ are symmetrical in the exchange of their two parameters, so that one obtains $$\begin{aligned}
\chi _{ij}[\omega ] &=&\chi _{ij}^{S}[\omega ]+\chi _{ij}^{A}[\omega ] \\
\chi _{ij}^{S}[\omega ] &=&\frac{i\hbar }{2c^ 2 }\int_{0}^{\omega }\frac
{{\rm d}\omega ^\prime }{2\pi }\omega ^\prime (\omega -\omega ^\prime )
\gamma _{ij}^{S}[\omega ^\prime ,\omega -\omega ^\prime ] \\
\chi _{ij}^{A}[\omega ] &=&\frac{i\hbar }{2c^ 2 }\int_{0}^{\infty }\frac
{{\rm d}\omega ^\prime }{2\pi }\omega ^\prime \left( (\omega +\omega
^\prime )\gamma _{ij}^{A}[\omega ^\prime ,\omega +\omega ^{\prime
}]+(\omega -\omega ^\prime )\gamma _{ij}^{A}[-\omega ^\prime ,\omega
-\omega ^\prime ]\right) \end{aligned}$$ The functions $\chi _{ij}^{S}$ scale as $\omega ^{3}$ in the vicinity of zero frequency as the susceptibility function for a single mirror, and they do not contribute to the quasistatic coefficients $\kappa _{ij}$, $\lambda
_{ij}$ and $\mu _{ij}$, which we are interested in. We will not discuss them in more detail. The functions $\gamma _{ij}^{A}$ are given by $$\begin{aligned}
\gamma _{11}^{A}[\omega ,\omega ^\prime ] &=&\frac{\left( r_1 [\omega
]+r_1 [\omega ^\prime ]\right) \left( r_2 [\omega ]e^{2i\omega \tau
}+r_2 [\omega ^\prime ]e^{2i\omega ^\prime \tau }\right) }{d[\omega
]d[\omega ^\prime ]} \\
\gamma _{21}^{A}[\omega ,\omega ^\prime ] &=&-\frac{\left( r_1 [\omega
]+r_1 [\omega ^\prime ]\right) \left( r_2 [\omega ]+r_2 [\omega
^\prime ]\right) e^{i(\omega +\omega ^\prime )\tau }}{d[\omega ]d[\omega
^\prime ]} \\
d[\omega ] &=&1-r_1 [\omega ]r_2 [\omega ]e^{2i\omega \tau }\end{aligned}$$ $\gamma _{22}^{A}$ and $\gamma _{12}^{A}=\gamma _{21}^{A}$ are obtained by exchanging the roles of the two mirrors. Straightforward differentiations of the functions $\chi _{ij}^{A}$ then lead to the quasistatic coefficients (see eqs 5) $$\begin{aligned}
\kappa _{ij} &=&-\frac{i\hbar }{2c^ 2 }\int_{0}^{\infty }\frac{{\rm d}\omega
}{2\pi }\omega ^ 2 \left( \gamma _{ij}^{A}[\omega ,\omega ]-\gamma
_{ij}^{A}[-\omega ,-\omega ]\right) \\
\lambda _{ij} &=&\frac{\hbar }{2c^ 2 }\int_{0}^{\infty }\frac{{\rm d}\omega
}{2\pi }\omega \left( \gamma _{ij}^{A}[\omega ,\omega ]+\gamma
_{ij}^{A}[-\omega ,-\omega ]+\omega \gamma _{ij}^{A}{}^\prime [\omega
,\omega ]-\omega \gamma _{ij}^{A}{}^\prime [-\omega ,-\omega ]\right) \\
\mu _{ij} &=&\frac{i\hbar }{4c^ 2 }\int_{0}^{\infty }\frac{{\rm d}\omega }
{2\pi }\omega \left( 2\gamma _{ij}^{A}{}^\prime [\omega ,\omega ]+2\gamma
_{ij}^{A}{}^\prime [-\omega ,-\omega ]+\omega \gamma _{ij}^{A}{}^{\prime
\prime }[\omega ,\omega ]-\omega \gamma _{ij}^{A}{}^{\prime \prime }[-\omega
,-\omega ]\right) \end{aligned}$$ It is understood that differentiation only bears on one of the two frequency parameters $$\begin{aligned}
\gamma _{ij}^{A}{}^\prime [\omega ,\omega ] &=&\left( \partial _{\omega
}\gamma _{ij}^{A}[\omega ,\omega ^\prime ]\right) _{\omega ^{\prime
}=\omega }=\frac 1 2 \frac{{\rm d}\gamma _{ij}^{A}[\omega ,\omega ]}{{\rm d}
\omega } \\
\gamma _{ij}^{A}{}^{\prime \prime }[\omega ,\omega ] &=&\left( \partial
_{\omega }^ 2 \gamma _{ij}^{A}[\omega ,\omega ^\prime ]\right) _{\omega
^\prime =\omega }\end{aligned}$$
One checks [@Inertia8] that the static coefficients $\kappa _{ij}$ fit the variation of the mean Casimir force $F$ between the two partially transmitting mirrors $$\begin{aligned}
&&\kappa _{11}=\kappa _{22}=-\kappa _{12}=-\kappa _{21}=\frac{{\rm d}F}
{{\rm d}q} \eqnum{7a} \\
&&F=\frac{\hbar }{c}\int_{0}^{\infty }\frac{{\rm d}\omega }{2\pi }\omega
\left( 1-\frac 1 {d[\omega ]}+1-\frac 1 {d[-\omega ]}\right) \eqnum{7b}\end{aligned}$$ The viscosity coefficients $\lambda _{ij}$ remain equal to zero for partially transmitting mirrors $$\lambda _{ij}=0 \eqnum{8}$$ One eventually rewrites the inertia corrections $$\begin{aligned}
\mu _{ij} &=&\frac{i\hbar }{4c^ 2 }\int_{0}^{\infty }\frac{{\rm d}\omega }
{2\pi }\omega ^ 2 \left( \Gamma _{ij}[\omega ]-\Gamma _{ij}[-\omega ]\right)
\eqnum{9a} \\
\Gamma _{ij}[\omega ] &=&\gamma _{ij}^{A}{}^{\prime \prime }[\omega ,\omega
]-\frac{{\rm d}\gamma _{ij}^{A}{}^\prime [\omega ,\omega ]}{{\rm d}\omega }
=-\left( \partial _{\omega }\partial _{\omega ^\prime }\gamma
_{ij}^{A}[\omega ,\omega ^\prime ]\right) _{\omega ^\prime =\omega }
\nonumber \\
\Gamma _{11}[\omega ] &=&-2\frac{r_1 ^\prime [\omega ]e^{2i\omega \tau
}\left( 2i\tau r_2 [\omega ]+r_2 ^\prime [\omega ]\right) }{d[\omega
]^ 2 }-4\frac{d^\prime [\omega ]^ 2 }{d[\omega ]^{4}} \eqnum{9b} \\
\Gamma _{21}[\omega ] &=&-4i\tau \frac{d^\prime [\omega ]}{d[\omega ]^ 2 }
+4\tau ^ 2 \frac{1-d[\omega ]}{d[\omega ]^ 2 }+2\frac{r_1 ^\prime [\omega
]r_2 ^\prime [\omega ]e^{2i\omega \tau }}{d[\omega ]^ 2 }+4\frac
{d^\prime [\omega ]^ 2 }{d[\omega ]^{4}} \eqnum{9c}\end{aligned}$$ $\Gamma _{22}$ and $\Gamma _{12}=\Gamma _{21}$ are obtained by exchanging the roles of the two mirrors. At the limit of perfect reflection, the mass corrections (6) are recovered. They are proportional to $\frac{\hbar }{cq}$ in this limit, as it could have been guessed from a dimensional analysis since there are no other dimensioned parameters than $\hbar $, $c$ and $q$. For partially transmitting mirrors, the mass corrections are no longer homogeneous functions of the distance $q$ between the two mirrors, since they now depend upon the reflectivity functions and particularly upon the reflection cutoff frequencies.
Mass correction for the compound system {#mass-correction-for-the-compound-system .unnumbered}
=======================================
We come to the study of a global motion of the compound system, when the two mirrors move with the same acceleration $$\delta q_1 (t)=\delta q_2 (t)=\delta q(t)$$ In other words, their distance remains equal to its initial value $$q_2 (t)-q_1 (t)=q_2 (0)-q_1 (0)=q$$ We notice that the motional force is computed in a first order expansion in the mirrors’ displacement, performed in the vicinity of a static configuration (mirrors at rest). In particular, Lorentz contraction, which scales as $\frac{v^ 2 }{c^ 2 }$ with $v$ the velocity of cavity and $c$ the velocity of light, can be disregarded.
The global force exerted upon the cavity is the sum of the forces exerted upon the two mirrors $$\delta F(t)=\delta F_1 (t)+\delta F_2 (t)$$ and the motion of the system is described by the linear susceptibility $\chi
$ associated with the total force $F$ $$\delta F[\omega ]=\chi [\omega ]\delta q[\omega ]\qquad \chi [\omega
]=\sum_{i}\sum_{j}\chi _{ij}[\omega ]$$ The quasistatic expansion (5) now becomes $$\begin{aligned}
&&\delta F(t)=-\left( \kappa \delta q(t)+\lambda \delta q^\prime (t)+\mu
\delta q^{\prime \prime }(t)+\ldots \right) \\
&&\kappa =\sum_{i}\sum_{j}\kappa _{ij}\qquad \lambda
=\sum_{i}\sum_{j}\lambda _{ij}\qquad \mu =\sum_{i}\sum_{j}\mu _{ij}\end{aligned}$$ The static coefficient $\kappa $ vanishes (see eqs 7), as required by invariance in a global translation of the compound system, or equivalently, by the fact that the Casimir force only depends upon the distance between the two mirrors. The viscosity coefficient $\lambda $ also vanishes (see eqs 8), consistently with the Lorentz invariance of the vacuum. The force computed for a global motion of the cavity is eventually an inertial force at the quasistatic limit, where the higher order terms are negligible when compared to the second order one $$\delta F(t)=-\mu \delta q^{\prime \prime }(t)\qquad \mu =\sum_{i}\sum_{j}\mu _{ij}
\eqnum{10}$$ This relation is exact in the particular case of a uniform acceleration where the higher order terms do vanish.
In the limiting case of perfect mirrors, we obtain from equations (4) $$\begin{aligned}
\chi [\omega ] &=&\widetilde{\xi }[\omega ]+i\xi [\omega ] \eqnum{11a} \\
\xi [\omega ] &=&\frac{\hbar }{6\pi c^ 2 }\omega ^{3} \eqnum{11b} \\
\widetilde{\xi }[\omega ] &=&\frac{\hbar }{6\pi c^ 2 }\left( \omega
^{3}-\omega \frac{\pi ^ 2 }{\tau ^ 2 }\right) \tan \frac{\omega \tau } 2
\eqnum{11c}\end{aligned}$$ The dissipative part $\xi $ of the susceptibility is the same for the compound system as for a single perfect mirror. It follows from the fluctuation-dissipation relation that the fluctuations of the global force are also the same as for a single mirror. These properties mean that the compound system may, at least for dissipation and fluctuations, be considered as an individual object. They correspond to the fact that, in the case of perfect mirrors, the dissipative functions $\xi _{ij}$ coincide precisely with the contributions of outer space (see the discussion following eqs 4).
In contrast, the dispersive part $\widetilde{\xi }$ of the susceptibility differs from the single mirror case. In particular, it contains a mass correction, whereas such a correction was zero for a single mirror $$\mu =\frac{\chi ^{\prime \prime }[0]} 2 =-\frac{\hbar \pi }{12cq}$$ This means that the field energy stocked inside the cavity, and bound along its motion, contributes to its inertia. Furthermore, the mass correction appears to be directly connected to the Casimir energy $U$ given by equation (3) $$\mu c^ 2 =2U \eqnum{12}$$ This relation explains the negative sign of the mass correction, since Casimir energy is a binding energy. However, the factor 2 seems to prevent a simple explanation of the mass correction from the law of inertia of energy [@Inertia3]. A precise discussion requires a more detailed analysis of the law of inertia of energy, and is delayed to the next section.
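The following short sympy check (added for illustration) confirms both the value of $\mu $ obtained from the quasistatic expansion of the dispersive part (11c) and the relation (12).

```python
# Added cross-check: the quasistatic expansion of (11c) gives the global mass
# correction mu = chi''[0]/2 = -hbar*pi/(12*c*q), which equals 2U/c^2 for the
# Casimir energy U of eq. (3).
import sympy as sp

w, tau, hbar, c = sp.symbols('omega tau hbar c', positive=True)
q = c*tau

xi_tilde = hbar/(6*sp.pi*c**2) * (w**3 - w*sp.pi**2/tau**2) * sp.tan(w*tau/2)
mu = sp.expand(sp.series(xi_tilde, w, 0, 3).removeO()).coeff(w, 2)

U = -hbar*c*sp.pi/(24*q)
print(sp.simplify(mu + hbar*sp.pi/(12*c*q)))   # 0
print(sp.simplify(mu - 2*U/c**2))              # 0, i.e. eq. (12)
```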
It is worth recalling that the mass correction $\mu $ has been calculated at the quasistatic limit. The following relation, deduced from equations (11) $$\widetilde{\xi }[\omega ]=\mu \omega ^ 2 \left( 1-\left( \frac{\omega \tau }
{\pi }\right) ^ 2 \right) \frac{\tan \frac{\omega \tau } 2 }{\frac{\omega
\tau } 2 }$$ shows that the motional force behaves as an inertial force at low frequencies only ($\omega \tau \ll 1$ that is $\omega q\ll c$). The dispersive part $\widetilde{\xi }[\omega ]$ of the susceptibility presents divergences at the frequencies $\omega =m\frac{\pi }{\tau }$ ($m$ an odd integer except $m=\pm 1$) which result from a constructive interference between the different numbers of cavity roundtrips [@Inertia8] and may in principle be observable for an arbitrarily small magnitude of the frequency component $\delta q[\omega ]$, that is for a velocity of the system remaining much smaller than the velocity of light. In other words, it turns out that the internal optical modes of the Fabry-Perot cavity are coupled to its external mechanical motion, even for a global motion where the distance between the mirrors is constant. At the adiabatic limit, this coupling may be considered as responsible for an inertia correction. At higher frequencies, internal resonances of the Fabry-Perot cavity are efficiently excited and the effective inertia may become large.
For a cavity built with two partially transmitting mirrors, $\mu $ is deduced (see eq. 10) from equations (9) $$\begin{aligned}
&&\mu =\frac{i\hbar }{4c^ 2 }\int_{0}^{\infty }\frac{{\rm d}\omega }{2\pi }
\omega ^ 2 \left( \Gamma [\omega ]-\Gamma [-\omega ]\right) \\
&&\Gamma [\omega ]=\sum_{i}\sum_{j}\Gamma _{ij}[\omega ]=-4i\tau \frac
{d^\prime [\omega ]}{d[\omega ]^ 2 }\end{aligned}$$ This relation can be transformed by an integration by parts into an expression in terms of the mean Casimir force given by equations (7) $$\mu =-\frac{2Fq}{c^ 2 } \eqnum{13a}$$ Expression (12) is recovered at the limit of perfect mirrors, since $(-Fq)$ then coincides with the Casimir energy $U$ ($U$ scales as $\frac 1 {q}$ and $F=\frac{{\rm d}U}{{\rm d}q}$). In general, the force $F$ is the difference between the mean energy densities inside and outside the cavity, and the quantity $(-Fq)$ is the integral $E_{f}$ of the field energy density, measured as a difference with respect to the mean density in free space ($e_
{\rm inner}$ and $e_{\rm outer}$ are the energy densities inside and outside the cavity [@Inertia2]) $$-Fq=\left( e_{\rm inner}-e_{\rm outer}\right) q=E_{f} \eqnum{13b}$$ Causal reflectivity functions fulfilling the high frequency transparency requirements have to be frequency dependent. Then, the field experiences reflection delays upon each mirror, and the Casimir energy $U$ may be written as the sum of the integrated field energy $E_{f}$ and of an extra contribution, attributed to an apparent modification of the cavity length associated with reflection delays (see eq. 27 in ref. [@Inertia2]). This extra contribution is much smaller than $E_{f}$ at the limit of short delays, when $\tau $ is greater than the reflection delays.
For partially transmitting mirrors, the divergences of the motional susceptibility at the optical resonance frequencies $\omega =m\frac{\pi }
{\tau }$ ($m$ an odd integer except $m=\pm 1$) of the cavity are regularized and become dispersion-shaped resonances [@Inertia8].
Inertia of a stressed rigid body {#inertia-of-a-stressed-rigid-body .unnumbered}
================================
The mass corrections obtained in the foregoing section differ from the expression $\frac{E_{f}}{c^ 2 }$ which would naively be expected from the law of inertia of energy [@Inertia3]. It is worth relating this problem to the arguments given by Einstein in his first survey article on relativity [@Inertia14].
The point is that the cavity moves as a rigid body, the two mirrors having the same acceleration, while being submitted to a force. In this situation, the momentum $P$ of the cavity has to be written [@Inertia19] $$P=\left( m+\delta m\right) v \eqnum{14a}$$ where $v$ is the global velocity $v=q_1 ^\prime =q_2 ^\prime $, $m$ the genuine mass of the mirrors, and $\delta m$ a mass correction which depends upon the field energy $E_{f}$ and the force $F$ $$\delta m=\frac{E_{f}-Fq}{c^ 2 } \eqnum{14b}$$ The global force exerted upon the system is therefore $$\frac{{\rm d}P}{{\rm d}t}=\left( m+\delta m\right) a \eqnum{14c}$$ where $m$ and $\delta m$ are considered as constant and $a=v^\prime $ is the global acceleration. It thus follows from relativistic considerations (and not from the particular dependence of $F$ or $E_{f}$ versus $q$) that inertia of a stressed rigid body is not given by the simple expression $\frac{E_{f}}{c^ 2 }$, but by the more elaborate one (14) which involves the value of the stress. This refinement plays a role in the relativistic analysis of a thermodynamic system with a homogeneous normal pressure [@Inertia20; @Inertia21].
For the problem studied in the present paper, the stress is the Casimir force and the quantities $E_{f}$ and $(-Fq)$ coincide, so that the mass given by equations (13) and computed from the motional susceptibility fits expressions (14). This proves not only that Casimir energy does contribute to inertia of the cavity, but that its contribution is precisely what is expected from the law of inertia of energy.
It has however to be emphasized that the usual expression $\frac{E_{f}}{c^ 2}
$ effectively describes the inertia correction associated with Casimir energy, when the motion of the relativistic center of inertia of the whole system (mirrors and stocked fields) is considered. This follows directly from the first statement of the law of inertia of energy [@Inertia3], which is of course equivalent to the second statement corresponding to equations (14), as it can be checked by defining the energy $E$, the momentum $P$, and the relativistic center of inertia $Q$ of the compound system $$\begin{aligned}
E &=&e_1 +e_2 +E_{f}\qquad P=p_1 +p_2 +P_{f} \nonumber \\
&&EQ=e_1 q_1 +e_2 q_2 +E_{f}\frac{q_1 +q_2 } 2 \eqnum{15a}\end{aligned}$$ where $e_{i}$ and $p_{i}$ are the relativistic energy and momentum of the mirror $i=1,2$; $E_{f}$ and $P_{f}$ are the energy and momentum of the stocked field; we have used the fact that the stocked field energy is distributed homogeneously inside the cavity and has therefore its center of inertia at the middle point $\frac{q_1 +q_2 } 2 $. Computing explicitly the time derivative of the center of inertia $Q$ defined above and noting that $$p_{i}=e_{i}\frac{q_{i}^\prime }{c^ 2 }\qquad e_{i}^\prime =p_{i}^{\prime
}q_{i}^\prime =F_{i}q_{i}^\prime \qquad E^\prime =0$$ one shows that equations (14) are equivalent to $$c^ 2 P=e_1 q_1 ^\prime +e_2 q_2 ^\prime +\left( E_{f}-Fq\right)
\frac{q_1 ^\prime +q_2 ^\prime } 2 =EQ^\prime \eqnum{15b}$$ The center of inertia $Q$ thus behaves as the position of a particle of mass $\frac{E}{c^ 2 }$.
Discussion {#discussion .unnumbered}
==========
The main result obtained in the present paper is that Casimir energy, that is a change in vacuum energy, does contribute to the inertial mass of a Fabry-Perot cavity. The computed mass agrees with the prediction of the law of inertia of energy, when the fact that the cavity moves as a stressed rigid body is accounted for. The equivalence principle then tells us that Casimir energy has also to contribute to gravity.
The calculations have been performed in the simple case of a scalar field in a 2D spacetime, and they would have to be generalized to scatterers and vacuum fields in a 4D spacetime. In their present form however, they already meet interesting questions concerning the nature of inertia.
Einstein suggestively stated [@Inertia3] that [*“radiation conveys inertia between emitting and absorbing bodies”*]{}. In the context of the present paper, it must be understood that vacuum fields stocked inside the cavity convey inertia between the two mirrors. The stocked energy is bound to intracavity space, between the two mirrors, rather than to the mirrors themselves (this appears clearly in eqs 15). This is why its contribution to inertia gives rise to properties usually associated with the “principle of relativity of inertia”.
Following ideas expressed by Mach, Einstein [@Inertia22] attempted to define such a principle while he was developing his theory of general relativity. Later on [@Inertia23], he described as follows the properties associated with such a conception: (1) [*The inertia of a body must increase when ponderable masses are piled up in its neighborhood*]{}; (2) [*A body must experience an accelerating force when neighboring masses are accelerated;*]{} a third property, concerning rotation, can be disregarded when comparing with calculations in a 2D spacetime. Such effects are actually predicted by general relativity, although with a very small magnitude [@Inertia24].
Although they are derived from a theoretical study of vacuum fluctuations rather than from a study of gravity, the results of the present paper partly fit these requirements. Indeed, the mass of a scatterer in vacuum is modified by the presence of another scatterer, and the correction depends upon the relative distance $q$ of the two scatterers. Each scatterer experiences a force proportional to the acceleration of the other one. Note that a global acceleration of the whole system gives rise to a force, in agreement with the law of inertia of energy.
As a consequence, it appears fruitful to consider the vacuum, the quantum “empty space”, as a Lorentz-invariant realization of an inertial reference frame [@Inertia25], the inertial forces representing the reaction of vacuum fields to accelerated motion with respect to them. The results of the present paper make clear that this conception is pertinent for those inertial forces which are associated with a stocked field energy like the Casimir energy. An appealing feature of this conception is that it would make it plausible that gravity forces are actually equivalent to a modification of vacuum fields [@Inertia26]. It has nevertheless to be acknowledged that difficulties plague the possibility of explaining all inertia along these lines.
It appears that the effects discussed in the present paper depend on the scattering properties of neighboring scatterers, while no direct relation has been established between these scattering properties and the masses. Furthermore, the magnitude and sign of the mass corrections do not fit Einstein’s requirements. Negative mass corrections are obtained, as expected from the fact that Casimir energy is a binding energy. Then, the mass corrections obtained in the present paper scale as $\frac{\hbar }{cq}$ for a perfect mirror; they are negligible when compared to the mirror’s mass $m$, as soon as the distance $q$ is greater than the Compton wavelength $\frac
\hbar {mc}$. It can nevertheless be noted that, if the magnitude of the effect is the same in a 4D spacetime (this would be consistent with dimensional analysis), the mass corrections may become large and even diverge, when scatterers uniformly distributed in space are considered. The effect of the finite time of flight between bodies has to be kept in mind, since distant scatterers can only modify the inertial force with large time delays.
[**Acknowledgements**]{}
Thanks are due to J.-M. Courty, A. Heidmann and P.A. Maia Neto for discussions.
Casimir H.B.G., [*Proc. K. Ned. Akad. Wet.*]{} [**51**]{} 793 (1948); a recent review may be found in: Plunien G., Müller B. and Greiner W., [*Phys. Rep.*]{} [**134**]{} 87 (1986).
Jaekel M.T. and Reynaud S., [*J. Physique*]{} [**I 1**]{} 1395 (1991).
Einstein A., [*Ann. Physik*]{} [**18**]{} 639 (1905) \[reprinted in english in [*The Principle of Relativity*]{} (Dover Publications, 1952)\]; [*Ann. Physik*]{} [**20**]{} 627 (1906).
Pais’ biography of Einstein gives an historical account of the successive demonstrations of this law: Pais A. [*Subtle is the Lord...*]{} (Oxford University Press, 1982), ch.7.
See for instance: Feynman R.P. and Hibbs A.R., [ *Quantum Mechanics and Path Integrals*]{} (Mac Graw Hill, 1965), p.244; Enz C.P., in [*Physical Reality and Mathematical Description*]{} C.P.Enz and J.Mehra eds, (Dordrecht, 1974) p.124; a recent discussion containing references is given in: Wesson P.S., [*Astrophys. J.*]{} [**378**]{} 466 (1991).
Sciama D.W., in [*The Philosophy of Vacuum*]{} S.Saunders and H.R.Brown eds, (Clarendon, 1991), p.137.
Jaekel M.T. and Reynaud S., [*Quant. Opt.*]{} [**4**]{} 39 (1992).
Jaekel M.T. and Reynaud S., [*J. Physique*]{} [**I 2**]{} 149 (1992).
Barton G., [*J. Phys.*]{} [**A24**]{} 991 (1991); [*J. Phys.*]{} [**A24**]{} 5533 (1991); in [*Cavity Quantum Electrodynamics*]{} (Suppl. Adv. Atom. Mol. Opt. Phys.), P.Berman ed., (Academic Press, 1994).
Fulling S.A. and Davies P.C.W., [*Proc. R. Soc. London*]{} [**A348**]{} 393 (1976).
Boyer T.H., [*Sci. Am.*]{} [**253**]{} 56 (1985).
Hawking S.W., [*Commun. Math. Phys.*]{} [**43**]{} 199 (1975); Davies P.C.W., [*J. Phys.*]{} [**A8**]{} 609 (1975); Unruh W.G., [ *Phys. Rev.*]{} [**D14**]{} 870 (1976); Birrell N.D. and Davies P.C.W., [ *Quantum fields in curved space*]{} (Cambridge, 1982), and references therein.
Note however that friction and mass corrections appear for a mirror in thermal fields: Jaekel M.T. and Reynaud S., [*Phys. Lett.*]{} [**A172**]{} 319 (1993).
Einstein A., [*Jahrb. Radioakt. Elektron.*]{} [**4**]{} 411 (1907), [**5**]{} 98 (1908) \[translated in english and commented by Schwartz H.M., [*Am. J. Phys.*]{} [**45**]{} 512, 811, 899 (1977)\].
We have added the contributions of intracavity and outer fields, both contributions being derived from the results of Fulling and Davies [@Inertia10]; the same expression is obtained as the limit of perfect reflection in ref. [@Inertia7].
Landau L.D. and Lifshitz E.M. [*Cours de Physique Théorique: Physique Statistique*]{} (Mir, 1967) ch.12; Kubo R., [*Rep. Progr. Phys.*]{} [**29**]{} 255 (1966).
Jaekel M.T. and Reynaud S., [*Phys. Lett.*]{} [**A167**]{} 227 (1992).
Resonant enhancement of the interaction of atoms with vacuum has been studied for instance by: Kleppner D., [*Phys. Rev. Lett.*]{} [**47**]{} 233 (1981); Haroche S., in [*New Trends in Atomic Physics*]{} G.Grynberg and R.Stora eds (North Holland, Amsterdam, 1984), p.193; enhancement of vacuum radiation pressure in a cavity has also been studied by: Braginski V.B. and Khalili F.Ya., [*Phys. Lett.*]{} [**A161**]{} 197 (1991).
We rewrite eqs (18c) and (19) of ref. [@Inertia14] with the appropriate changes of notation: $v$ stands for the global velocity of the system (in place of $q$ in ref. [@Inertia14]); corrections of the order of $\frac{v^ 2 }{c^ 2 }$ are neglected; $E_{f}$ stands for the internal energy ($E_{0}$), $F=F_1 =-F_2 $ for the internal force ($K_{0}$), $q$ for the distance between the two points of application of the force ($\delta _{0}$).
In equation (14.b), $(E_{f}-Fq)$ is replaced by $(E_{f}+pV)$, where $p$ is the pressure and $V$ the volume (see ref. [@Inertia14]); the change of sign is due to the fact that a positive pressure $p$ represents a repulsion between the two points of application, while a positive force $F$ is defined here as an attraction.
The related case of mechanical systems containing stressed threads or rods, as well as controversies held on it since the birth of relativity up to recent times, are presented for instance in: Martins R. de A., [*Am. J. Phys.*]{} [**50**]{} 1008 (1982).
Einstein A., [*Vierteljahrsschrift f. Gerichtliche Medizin.*]{} [**44**]{} 37 (1912); this attempt is described for example by: Kastler A., [*Mémoires de la Classe des Sciences de l’Académie Royale de Belgique*]{} 2e série [**44**]{} (1) 13 (1981) \[reprinted in [ *Oeuvre Scientifique*]{} (Editions du CNRS, Paris 1988) p.1230\]; Pais A. [@Inertia4] ch.15e.
Einstein A., [*The Meaning of Relativity*]{} (Princeton University Press, 1946).
Rosen N., in [*To fulfill a vision*]{} Y.Ne’eman ed., (Addison Wesley, 1981), ch.5.
This idea has often been expressed; see as an example: De Witt B.S., in [*General relativity: An Einstein Centenary Survey*]{} S.W.Hawking and W.Israel eds, (Cambridge, 1979), ch.14.
See for example: Dicke R.H., [*Rev. Mod. Phys.*]{} [ **29**]{} 363 (1957); Sakharov A.D., [*Doklady Akad. Nauk*]{} [**177**]{} 70 (1967) \[[*Sov. Phys. Doklady*]{} [**12**]{} 1040 (1968)\]; see also a list of references in: Puthoff H.E., [*Phys. Rev.*]{} [**A39**]{} 2333 (1989).
[^1]: Research unit of the Centre National de la Recherche Scientifique, associated with the Ecole Normale Supérieure and the Université Paris-Sud
[^2]: Unit of the Ecole Normale Supérieure and the Université Pierre et Marie Curie, associated with the Centre National de la Recherche Scientifique
---
abstract: 'Using high resolution hydrodynamical cosmological simulations, we conduct a comprehensive study of how tidal stripping removes dark matter and stars from galaxies. We find that dark matter is always stripped far more significantly than the stars – galaxies that lose $\sim$80$\%$ of their dark matter, typically lose only 10$\%$ of their stars. This is because the dark matter halo is initially much more extended than the stars. As such, we find the stellar-to-halo size-ratio (measured using r$_{\rm{eff}}$/r$_{\rm{vir}}$) is a key parameter controlling the relative amounts of dark matter and stellar stripping. We use simple fitting formulae to measure the relation between the fraction of bound dark matter and fraction of bound stars. We measure a negligible dependence on cluster mass or galaxy mass. Therefore these formulae have general applicability in cosmological simulations, and are ideal to improve stellar stripping recipes in semi-analytical models, and/or to estimate the impact that tidal stripping would have on galaxies when only their halo mass evolution is known.'
author:
- 'Rory Smith, Hoseung Choi, Jaehyun Lee, Jinsu Rhee, Ruben Sanchez-Janssen,'
- 'Sukyoung K. Yi'
bibliography:
- 'bibfile.bib'
title: The Preferential Tidal Stripping of Dark Matter versus Stars in Galaxies
---
Introduction
============
Galaxy clusters are the largest gravitationally bound structures to form within the large scale structure of the Universe. The gradient of the potential well of a galaxy cluster gives rise to strong gravitational accelerations that drive a high velocity dispersion for galaxies residing within it. Nevertheless, it is not the net acceleration of a galaxy, but rather the difference in acceleration across the body of a galaxy, or tidal forces, that cause individual galaxies to suffer tidal mass loss. In [@Byrd1990], the strength of the perturbation that a galaxy experiences from the cluster potential scales as the inverse cube of its radius within the cluster. Thus the tidal forces are a smooth, decreasing function of radius, but the cluster potential is far more destructive near the cluster centre than in the cluster outskirts. The strength of the perturbation also scales as the cube of the physical size of the galaxy. Therefore, at a fixed clustocentric radius, extended galaxies are much more perturbed. Tidal stripping from the cluster potential tends to preferentially affect the outer galaxy first (i.e. ‘outside-in’ stripping). A simple approach to model this effect involves calculating the tidal radius of a galaxy ([@BT1987]). Beyond this tidal radius, it is assumed that all material will be removed by external tides.
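For orientation, the sketch below (an added illustration; the masses and radii are assumed, not taken from our simulations) evaluates the textbook point-mass Jacobi/tidal radius, $r_{\rm t} \simeq R\,[m_{\rm gal}/(3 M_{\rm cl}(<R))]^{1/3}$, at a few clustocentric radii, showing how strongly a satellite is truncated as it approaches the cluster centre.

```python
# Rough, textbook-style estimate of the point-mass Jacobi/tidal radius (cf.
# Binney & Tremaine); the satellite and cluster masses below are assumed.
def tidal_radius(R_mpc, m_gal, M_cl_enclosed):
    """Point-mass tidal (Jacobi) radius [Mpc] of a satellite of mass m_gal at
    clustocentric radius R_mpc, where M_cl_enclosed is the cluster mass within R."""
    return R_mpc * (m_gal / (3.0 * M_cl_enclosed))**(1.0/3.0)

m_gal = 1e11                                                # Msun (assumed satellite mass)
for R, M_enc in [(2.0, 8e14), (1.0, 5e14), (0.3, 2e14)]:    # assumed M_cl(<R) in Msun
    print(f"R = {R:.1f} Mpc  ->  r_t ~ {1e3*tidal_radius(R, m_gal, M_enc):.0f} kpc")
```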
However, the cluster potential is not the sole source of tidal mass loss in clusters. Clusters are filled with other cluster member galaxies, with which tidal encounters can arise. Because the cluster galaxies typically have a high velocity dispersion ($\sim$1000 km/s), any galaxy-galaxy encounters tend to occur with high relative velocities. However, an individual galaxy may be subject to multiple, short-lived impulsive encounters. This process is known as ‘harassment’ ([@Moore1996]). The effects of such interactions can be approximated by the impulse approximation ([@Gnedin1999], [@Gonzalez2005]). In the impulse approximation, the strength of the internal dynamical kicks that a galaxy receives from a high speed encounter depends on the impact parameter, encounter speed, and is linearly dependent on radius within the galaxy. Therefore, as with the potential of the cluster, tidal stripping from impulsive galaxy-galaxy encounters preferentially affects the outer galaxy first, resulting in outside-in stripping.
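A back-of-the-envelope sketch of the impulse approximation (added for illustration; all numbers are assumed) shows why a single high-speed encounter is rarely destructive on its own, so that the cumulative effect of repeated encounters is what matters.

```python
# Distant-tide impulse approximation, with assumed numbers: a perturber of mass
# M_p passing at speed V with impact parameter b imparts a differential velocity
# kick of order  dv ~ 2 G M_p r / (V b^2)  at radius r inside the victim galaxy.
G = 4.301e-6    # kpc (km/s)^2 / Msun

M_p   = 1e12    # Msun, perturber mass (assumed)
V     = 1500.0  # km/s, encounter speed typical of a cluster (assumed)
b     = 50.0    # kpc, impact parameter (assumed)
r     = 5.0     # kpc, radius within the victim galaxy (assumed)
sigma = 50.0    # km/s, internal velocity dispersion of the victim (assumed)

dv = 2.0 * G * M_p * r / (V * b**2)
print(f"dv ~ {dv:.1f} km/s, dv/sigma ~ {dv/sigma:.2f}")
# A single fly-by perturbs the victim only mildly, so repeated encounters
# ('harassment') are needed to do significant damage.
```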
The results of outside-in tidal stripping could potentially have important consequences for some galaxies. The halos and disks of galaxies may become truncated ([@Smith2015]). In fact, preferential stripping of the more extended stellar body of a nucleated dwarf, leaving the central nucleus remaining, is one evolutionary route by which Ultra-Compact dwarfs may form ([@Bekki2001]; [@Pfeffer2013]; [@Pfeffer2014]; [@Ferrarese2016]). Also, disk galaxies in clusters may become increasingly bulge-dominated because their more extended disk component is preferentially tidally stripped compared to their bulge ([@Aguerri2009]).
Observationally, it is often difficult to assess if a galaxy is clearly suffering harassment. The high speed nature of the tidal encounter means that the interacting galaxy may be long gone by the time we observe a harassed galaxy. Galaxies that undergo harassment often produce stellar streams, however the stellar streams are typically very low surface brightness ([@Moore1996]; [@Davies2005]; [@Mastropietro2005]; [@Smith2010a]). Therefore, understanding the effects of harassment on cluster galaxies has largely remained the realm of numerical simulations (e.g. [@Moore1998]; [@Gnedin2003b]). Simulations have revealed that a key parameter controlling the effectiveness of harassment is galaxy surface-brightness. Low surface brightness disk galaxies suffer much more significant disruption, morphological transformation, and stellar stripping than high surface brightness galaxies ([@Gnedin2003a]; [@Moore1999]). Simulations also find that the high speed tidal encounters can increase the mass loss beyond that from the main cluster potential alone by 10-50$\%$ ([@Gnedin2003b], [@Knebe2006], [@Smith2013]). The strength of mass loss from harassment is very dependent on a galaxy’s orbital parameters within the cluster ([@Mastropietro2005]; [@Smith2010a]; [@Smith2013]; [@Smith2015]). Galaxies with small pericentres, combined with low eccentricity suffer the highest mass loss as they spend the most time where the cluster tides are most harsh. However, even large eccentricity orbits can be destructive, if the pericentre is sufficiently small ([@Smith2015]).
Tidal mass loss can also arise in groups of galaxies. Simulations show group preprocessing can influence group members, and that the inclination of a galaxy’s disk to its orbital plane can influence the efficiency of stellar stripping ([@Villalobos2012]). This same dependence has since been noted in cluster harassment simulations as well ([@Bialas2015]). Indeed a significant fraction of galaxies, that may have suffered effects from the group environment, may be found in clusters by redshift zero ([@Mihos2004]; [@McGee2009]; [@DeLucia2012]). The presence of kinematically decoupled cores in some cluster dwarf ellipticals could be direct evidence for the influence of the group environment ([@Toloba2014a]), as it is very difficult to form such features by harassment ([@Gonzalez2005]). The high frequency of cluster galaxies with merger features may also provide evidence for preprocessing ([@Sheen2012]; [@Yi2013]).
The impact of stellar tidal stripping is not just important for galaxies themselves. The stars that are stripped in a cluster contribute to the build-up, and properties, of the Intra Cluster Light (ICL), and the Brightest Central Galaxy (BCG) ([@DeLucia2007]; [@Contini2014]). The N-body simulations of a cosmological cluster in [@Rudick2009] showed that as much as 40$\%$ of the ICL is formed from cold streams of stars that, themselves, are formed by tidal interactions with the BCG, or formed in galaxy interactions in groups before infalling into the cluster. In fact, the latter is more common at high redshift, before many clusters assemble, when the group environment was much more common.
The modelling of stellar tidal stripping may be very important in Semi-Analytical Models (SAMs). In SAMs, a dark matter only cosmological simulation is augmented with analytical recipes for how galaxies grow and evolve, including gas cooling, star formation, and stellar feedback. Among these analytical treatments for how the baryons should behave, it is necessary to consider how the stellar component of galaxies should respond to tidal stripping. A wide range of stellar tidal stripping recipes have previously been applied in the literature. In many cases, the stellar mass of a galaxy is not altered until the dark matter reaches some critical limit, and then all of the stars are instantaneously stripped. For example, in [@Somerville2008] this occurs when the halo is truncated to a single radial scalelength. In [@Guo2011], it is when the host halo density at pericentre surpasses the density of the baryons of the satellite. An alternative critical limit is when the halo mass is reduced to the same mass as the total baryonic mass (e.g., [@Guo2011]; [@Lee2013]) . In alternative recipes, the stellar mass is decreased more smoothly. [@Contini2016] assumes that the stellar mass reduces exponentially, once the galaxy enters a host halo. Alternatively some SAMs only strip stars beyond the tidal radius of that galaxy (e.g., [@Henriques2010]; [@Kimm2011]; [@Contini2014]).
As we will demonstrate in Section \[stellarmassfuncsect\], the choice of tidal stripping recipe can impact on the shape of the stellar mass function. The stripping recipe could also impact the growth rate of the ICL and BCG ([@DeLucia2007b]; [@Lidman2013]), and potentially alter the stellar metallicity radial gradients in massive galaxies ([@Contini2014]).
Numerical simulations that model both the halo and stellar mass of galaxies with live components can give some insights into how tidal stripping of stars occurs. [@Penarrubia2008] and [@Smith2013a] demonstrate that very high fractions of dark matter must be stripped before significant stellar stripping occurs. In [@Smith2013a], for a large number of harassed dwarf galaxy models, it was found that typically 80-90$\%$ of the dark matter must be tidally stripped, just to remove 10$\%$ of the stars. However, the exact fraction of dark matter that must be stripped is found to depend on the size of the stellar disk of the galaxy, such that a smaller disk requires even stronger dark matter losses to be equally affected. Thus, in [@Smith2013a], an attempt was made to link the efficiency of dark matter stripping to the efficiency of stellar stripping. In this study, we will attempt to extend on this previous analysis, by using fully cosmological simulations, and measuring the complete relation between bound dark matter fraction and bound stellar fraction, for galaxies suffering a full range of mass loss. In Section 2 we describe the numerical setup, in Section 3 we present our results, and in Section 4 there is a discussion and conclusion.
Setup
=====
The Hydrodynamic Cosmological Simulations {#hydrosims}
-----------------------------------------
We conduct zoom simulations of clusters of galaxies, using the adaptive mesh refinement code Ramses ([@Teyssier2002]). Clusters are initially selected from a low resolution, dark-matter only, 200 Mpc/h cubic volume, using initial conditions generated by ([@Prunet2008]). In this study we assume a flat $\Lambda$CDM universe with a Hubble constant $H_{0}$=70.4 km/s/Mpc, a baryon density $\Omega_{b}$=0.0456, a total matter density $\Omega_{m}$=0.272, a dark energy density $\Omega_{\Lambda}$=0.728, an rms fluctuation amplitude at 8 Mpc/h of $\sigma_{8}$=0.809, and a spectral index $n$=0.963, consistent with the Wilkinson Microwave Anisotropy Probe 7-year (WMAP7) cosmology [@Komatsu2011].
Then, each selected cluster is zoomed, by first tracking back all particles within 3 virial radii of the cluster, then adding an additional four levels of nested initial conditions. Each cluster is then resimulated, with a full baryonic physics treatment, reaching a maximum spatial resolution of 760 pc/h. When 8 or more dark matter particles (or the equivalent mass in baryons) are present in a cell, it refines to the next level.
The baryonic physics treatment is deliberately chosen to match that used in the Horizon-AGN simulations ([@Dubois2012]), and is also described in Choi et al. 2016 (in prep). In brief, we use the standard implementation of radiative cooling in Ramses. A look-up table of the metallicity- and temperature-dependent, collisional equilibrium, radiative cooling functions from [@Sutherland1993] is applied, for a mono-atomic gas of H and He, with a standard metal mixture. UV background heating is calculated using [@Haardt1996], assuming z$_{\rm{reion}}$=10.4, consistent with WMAP7. We note that we do not use more recent UV background models (e.g. [@Haardt2012]); however, we do not expect this to have a significant influence on the main conclusions of this study because it is focused on tidal stripping. Star formation occurs above a critical density of 0.1 H cm$^{-3}$, following a Kennicutt-Schmidt law, with a star formation efficiency of 0.02. Supernova feedback is modelled as in [@Dubois2008], where stars with mass greater than 10 solar masses explode as supernovae, 10 Myr after their formation, releasing 10$^{51}$ erg per 10 M$_\odot$ into ambient cells, in the form of kinetic and thermal energy. Gas cooling is suppressed for 10 Myr following a supernova, so that energy is efficiently deposited in the ambient gas. Formation of, and feedback from, supermassive black holes (SMBHs) are also considered, following prescriptions in [@Dubois2012]. SMBHs tend to form where the density is peaked, and are treated as sink particles, which can accrete mass and merge. SMBH feedback can occur in ‘quasar’ or ‘radio’ mode, depending on whether the accretion rate surpasses the Eddington limit.
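As an illustration of the type of star formation recipe described above, the sketch below implements a density-threshold Schmidt law of the usual Ramses form, $\dot\rho_\star=\epsilon_{\rm ff}\,\rho/t_{\rm ff}$; it is only a schematic, and the exact implementation used in this work may differ in detail.

```python
# Schematic density-threshold Schmidt law (a sketch, not necessarily the exact
# recipe of this work): gas above n_th forms stars at rho_sfr = eps_ff*rho/t_ff.
import numpy as np

G_cgs = 6.674e-8           # cm^3 g^-1 s^-2
m_H   = 1.673e-24          # g
eps_ff, n_th = 0.02, 0.1   # efficiency per free-fall time and threshold [H cm^-3]

def sfr_density(n_H):
    """Star formation rate density [g cm^-3 s^-1] for hydrogen number density n_H."""
    if n_H < n_th:
        return 0.0
    rho = n_H * m_H
    t_ff = np.sqrt(3.0*np.pi / (32.0*G_cgs*rho))   # local free-fall time [s]
    return eps_ff * rho / t_ff

for n in (0.05, 0.1, 1.0, 10.0):
    print(f"n_H = {n:5.2f} cm^-3  ->  rho_sfr = {sfr_density(n):.2e} g cm^-3 s^-1")
```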
The time step between successive snapshots is approximately 75 Myr. Halos in each snapshot are identified using the AdaptaHOP method ([@Aubert2004]), with a minimum number of dark matter particles of 64. We also rerun the halo finding code on the stellar particle distribution in order to identify galaxies. For the stellar particles, we use the most massive sub-node method ([@Tweed2009]) and, in practice, we detect galaxies down to $\sim$10$^8$ M$_\odot$/h. We limit our analysis to galaxies within the zoom region (within three virial radii of the cluster), so as to exclude poorly resolved galaxies. In each snapshot we match the stellar component of a galaxy to its halo by finding the closest galaxy to the centre of the halo. In cases where a satellite galaxy is mis-identified as the main galaxy, due to its stellar component temporarily passing close to the halo centre, we filter for rapid, short-lived changes in stellar mass. In order to track halos between snapshots, we follow the main progenitor galaxy up the merger tree. Galaxy merger trees are built with ConsistentTrees ([@Behroozi2013]). All of our sample galaxies have a complete tree from redshift three until redshift zero.
Our galaxy sample is extracted from the hydrodynamical zoom simulations of three cosmological clusters. At redshift zero, Cluster 1 is a massive cluster with virial mass of 9.2$\times$10$^{14}$ M$_\odot$/h, and virial radius of 2.5 Mpc/h. Cluster 2 is significantly less massive, with a virial mass of 2.3$\times$10$^{14}$ M$_\odot$/h, and a virial radius of 1.6 Mpc/h. Cluster 3 is the least massive cluster, with a virial mass of 1.7$\times$10$^{14}$ M$_\odot$/h, and a virial radius of 1.5 Mpc/h. Cluster 1, 2, and 3 contribute 60, 27, and 13$\%$ of our final galaxy sample, respectively.
Linking tidal mass loss of stars and dark matter in cosmological simulations
----------------------------------------------------------------------------
![Schematic of a toy galaxy undergoing tidal mass loss, and evolving along its dark matter-stellar mass fraction track. Position ‘A’ marks the starting point of the galaxy, where it has all of its dark matter (f$_{\rm{DM}}$=1), and all of its stars (f$_{\rm{str}}$=1). The galaxy evolves to position ‘B’, where it has lost more than half its dark matter, but none of the stars have yet been stripped. At position ‘C’, the dark matter halo has been heavily stripped, and the stars begin to be stripped too. At position ‘D’, the galaxy has been destroyed as all of the dark matter and stars have been stripped (f$_{\rm{DM}}$ and f$_{\rm{str}}$=0). The dashed line indicates the trajectory a galaxy would follow if it suffered equal dark matter and stellar mass loss.[]{data-label="sketchfig"}](sketch.eps){width="8.5cm"}
In [@Smith2013a], model early type dwarf galaxies were subjected to harassment. For each model galaxy, we measured the fraction of dark matter that had been stripped at the moment when exactly 10$\%$ of its stars were unbound. We found that this fraction was always very high, and a similar value was measured for all our model galaxies ($\sim$80-90$\%$). However, this previous approach only recorded the link between the amounts of dark matter and stars that are stripped at one specific moment, when exactly 10$\%$ of the stars have been stripped.
A more complete study can be made if we instead record the full relationship between the bound fraction of dark matter (f$_{\rm{DM}}$) and bound fraction of stars (f$_{\rm{str}}$), for galaxies that are undergoing tidal stripping. Note that, initially, before tidal stripping begins, a galaxy has a bound dark matter fraction, f$_{\rm{DM}}$=1. Then, as a galaxy suffers mass loss from tidal stripping of its dark matter halo, f$_{\rm{DM}}$ falls, approaching zero when the halo is almost entirely destroyed. We produce plots of f$_{\rm{str}}$ versus f$_{\rm{DM}}$, and each galaxy creates a track on such a plot, as it undergoes tidal mass loss. A schematic of such a plot is shown in Figure \[sketchfig\]. We note that in [@Smith2013a], our result represents a single point on such a curve. Therefore by considering the complete curve, we retain significantly more information on the galaxy’s evolution while undergoing tidal stripping.
Our toy galaxy is initially at position A, before it suffers any tidal mass loss, and so its bound dark matter fraction and bound stellar mass fraction are both unity. As tidal stripping proceeds, the toy galaxy evolves along the track towards position B. The initially horizontal motion of the track (between positions A and B) indicates that dark matter is preferentially stripped, while no stars are stripped. Between positions B and C, the dark matter halo has been heavily truncated and stellar stripping begins to occur, causing the turn-down in the track. Finally, the galaxy finishes at position D, where all of its dark matter and stars have been stripped, and the galaxy has been effectively destroyed. The dashed line is a one-to-one line: the trajectory a galaxy would follow if it suffered equal dark matter and stellar mass loss at all times. Therefore, the fact that the track is always above the dashed line demonstrates that dark matter is always preferentially stripped in our toy model galaxy.
We note that when we refer to the total bound fraction of dark matter f$_{\rm{DM}}$, this fraction is derived from the total mass of the halo, which includes dark matter within the baryonic component of the galaxy, and dark matter at larger radii, out to the virial radius of the halo. Therefore we caution that our dark matter fractions should not be directly compared to observationally derived dark matter fractions. In most cases, the dynamics of a galaxy’s baryons, such as an HI rotation curve or stellar velocity dispersion, are used to derive their observational dark matter fractions. As such, the observations only probe the dark matter which exists within the radial extent of the baryons, which is typically just the inner dark matter halo. However, if a galaxy’s outer halo were stripped but its inner halo was largely unaffected, then our total bound dark matter fraction would be reduced, unlike the observationally derived quantity. Therefore the total bound dark matter fractions will often be more sensitive to tidal mass loss. In any case, a primary goal of our study is to improve stellar stripping recipes in SAMs, where calculating the total bound mass fraction of a halo can be accomplished directly from the N-body cosmological simulations on which the SAM is based.
In [@Smith2013a], ‘idealised’ simulations were used that, although based on parameters from cosmological simulations, were not fully cosmological themselves. In this study we use fully cosmological, hydrodynamical simulations, which are summarised in the following section. Because galaxies grow hierarchically in cosmological simulations, we measure when each galaxy’s dark matter halo peaks in mass. At this instant, we assume each galaxy begins its journey along the f$_{\rm{DM}}$-f$_{\rm{str}}$ track, starting at position (1,1) (e.g. location A in Figure \[sketchfig\]). As we wish to clearly understand the tidal stripping process, we exclude galaxies that undergo major mergers (i.e. more major than a 1:5 mass ratio) after reaching their peak mass (occurring in about 20$\%$ of cases), as these can result in additional scatter in the motion of a galaxy along its track.
Previous studies ([@Penarrubia2008]; [@Smith2013a]) have shown that if a galaxy has a smaller disk, more dark matter must be stripped to cause stellar stripping. Therefore, we divide our sample into three categories based on the relative size of their stellar component (measured using the effective radius r$_{\rm{eff}}$), compared to their dark matter halo (measured using the halo virial radius r$_{\rm{vir}}$). We form the stellar-to-halo size-ratio r$_{\rm{eff}}$/r$_{\rm{vir}}$, which is measured at the moment when galaxies reach their peak halo mass. Galaxies with r$_{\rm{eff}}$/r$_{\rm{vir}}$$<$0.025 fall in our ‘concentrated’ category, and galaxies with r$_{\rm{eff}}$/r$_{\rm{vir}}$$>$0.04 fall in our ‘extended’ category. Galaxies that fall in between these limits are in the ‘intermediate’ category. We choose these limits because, as we will show in Section \[deponconcsect\], these choices lead to a clear deviation in the response of the galaxies to tidal stripping between the subsamples. The concentrated, intermediate, and extended categories make up 11, 50, and 39$\%$ of our final galaxy sample, respectively. In Figure \[concmstrfig\], we plot the stellar-to-halo size-ratio against the stellar mass of the galaxy, measured when the halo mass peaks. More massive galaxies tend to have slightly lower r$_{\rm{eff}}$/r$_{\rm{vir}}$, but the trend is not strong, and the spread is very broad at all stellar masses. Therefore the concentrated, intermediate, and extended galaxy subgroups each contain galaxies with a wide range of stellar masses.
In order to avoid undesirable numerical artifacts, we exclude all galaxies with r$_{\rm{eff}}$$<$2.5 kpc ($\sim$3 times the spatial resolution limit of our simulations). Additionally, the minimum detectable stellar mass of a galaxy is $\sim$1$\times$10$^8$ M$_\odot$/h. Therefore we exclude galaxies whose stellar mass is $<$2$\times$10$^9$ M$_\odot$/h when their halo mass peaks. This ensures we can measure f$_{\rm{str}}$ for all galaxies down to 0.05 or below. There is no imposed upper stellar mass limit. However the maximum stellar mass, measured when their halo mass peaks, is 1.1$\times$10$^{12}$ M$_\odot$/h in Cluster 1, 3.6$\times$10$^{11}$ M$_\odot$/h in Cluster 2, and 3.2$\times$10$^{11}$ M$_\odot$/h in Cluster 3.
![Stellar-to-halo size-ratio (r$_{\rm{eff}}$/r$_{\rm{vir}}$) plotted against stellar mass. Both parameters are measured when each galaxy’s halo reaches peak mass.[]{data-label="concmstrfig"}](conc_mstr.eps){width="8.5cm"}
In a final step, we separate our sample into two samples depending on the strength of their star formation. Galaxies whose f$_{\rm{str}}$ never rises above 1.15 (a maximum of a 15$\%$ increase in stellar mass) are placed in the ‘weakly star forming’ sample. The remaining galaxies are considered to be forming stars more significantly. For most of the analysis in this paper we focus only on the results of the weak star formation sample, which dominates the sample by number (82$\%$ of the sample). We exclude the strongly star forming galaxies from our final sample in order to understand the effects of tidal stripping of stars alone, while minimising the counter effect of new star formation. However, we expect that some galaxies may continue to form stars vigorously, even after reaching their peak halo mass. Therefore, an alternative recipe for tidal stripping of strongly star forming galaxies is presented in Section \[SFsection\]. After applying the previous cuts for major mergers, galaxy effective radius, minimum stellar mass, and now star formation, our final galaxy sample consists of 496 galaxies.
![A plot of the bound stellar fraction f$_{\rm{str}}$ versus the bound dark matter fraction f$_{\rm{DM}}$, for the final galaxy sample. The thick black central line is the fitted curve. Individual data points are shown as black dots. The grey shading indicates the first to the third quartile of the data points surrounding the fitted curve.[]{data-label="allinfig"}](allin2.eps){width="8.5cm"}
We produce f$_{\rm{str}}$-f$_{\rm{DM}}$ plots, where each galaxy contributes one data point for every snapshot of the simulation after it reached peak halo mass. Because we only consider galaxies with weak or non-existent star formation, we find that the compilation of data points from all galaxies can be fit well using a simple analytic form,
$$f_{\rm{str}}=1-\exp(-a_{\rm{strip}} f_{\rm{DM}}){\rm{,}}$$
where a$_{\rm{strip}}$ is the unique exponential fitting parameter required to match the trend of the data points. In order to assess the degree of scatter about this line of best fit, we calculate the first and third quartile of the data points about the line of best fit.
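As an illustration of how such a fit can be obtained, the following sketch (our own, not the actual analysis pipeline; the data arrays are placeholders for the simulation measurements) fits the functional form above with `scipy` and summarises the scatter by the residual quartiles:

```python
# Minimal sketch of fitting f_str = 1 - exp(-a_strip * f_DM) to compiled data.
# The arrays below are placeholders, one entry per galaxy per snapshot after
# peak halo mass; the real analysis uses the cosmological simulation output.
import numpy as np
from scipy.optimize import curve_fit

def f_str_model(f_dm, a_strip):
    """Bound stellar fraction as a function of bound dark matter fraction."""
    return 1.0 - np.exp(-a_strip * f_dm)

f_dm  = np.array([1.00, 0.80, 0.50, 0.30, 0.15, 0.05])   # placeholder data
f_str = np.array([1.00, 1.00, 0.99, 0.97, 0.88, 0.55])   # placeholder data

(a_strip_fit,), _ = curve_fit(f_str_model, f_dm, f_str, p0=[10.0])

# Scatter about the fit, summarised by the quartiles of the residuals
residuals = f_str - f_str_model(f_dm, a_strip_fit)
q1, q3 = np.percentile(residuals, [25, 75])
print(a_strip_fit, q1, q3)
```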
Results
=======
The f$_{\rm{str}}$-f$_{\rm{DM}}$ diagram - no subcategories
-----------------------------------------------------------
In Figure \[allinfig\], we show an f$_{\rm{str}}$-f$_{\rm{DM}}$ diagram that is based on our total final galaxy sample, without making any further subcategories for stellar-to-halo size-ratio. The bold line indicates the best fit to the data, and has the form $$\label{allineqn}
f_{\rm{str}}=1-\exp(-14.20 f_{\rm{DM}}){\rm{.}}$$
![The f$_{\rm{str}}$-f$_{\rm{DM}}$ plot for the galaxy sample, separated by how extended the galaxy stellar component is in comparison to the dark matter halo. The concentrated sample (top panel) has r$_{\rm{eff}}$/r$_{\rm{vir}}$$<$0.025. The intermediate sample (middle panel) has 0.025$<$r$_{\rm{eff}}$/r$_{\rm{vir}}$$<$0.04. The extended sample (lower panel) has r$_{\rm{eff}}$/r$_{\rm{vir}}$$>$0.04. The thick black central line is the fitted curve. The grey shading indicates the first to the third quartile of the data points surrounding the fitted curve.[]{data-label="starconcfig"}](concsep.eps){width="8.5cm"}
The best fit line is steeply curved at small f$_{\rm{DM}}$, and remains in the upper left of the figure, indicating that the dark matter halos of the galaxies in the ‘no-subcategories’ sample are always more susceptible to tidal stripping than the stellar component. Initially the dark matter fraction f$_{\rm{DM}}$ falls from 1 to $\sim$0.3, without any indication of stellar stripping. We find that when 10$\%$ of the stars are stripped, 84$\%$ of the dark matter has been stripped (compared to 85$\%$ for the standard model dwarf galaxy in [@Smith2013a]). The upper and lower thin lines are the first and third quartiles, respectively. The distance between the first and third quartile is generally quite small, indicating that, in the absence of additional information on a galaxy, the bound dark matter fraction alone can provide a good first-order estimate of the fraction of stars that have been stripped. This suggests that Equation \[allineqn\] could be useful for SAMs that have limited information on other galaxy properties, such as the size of the stellar component. However, as f$_{\rm{DM}}$ approaches zero (i.e. heavily tidally stripped), the spread about the best fit curve becomes increasingly large. In the next section, we will see that this is due to the range of sizes of the stellar component, relative to the halo, in our sample of galaxies.
Dependency on stellar-to-halo size-ratio {#deponconcsect}
----------------------------------------
We now separate our sample into three subsamples, depending on their stellar-to-halo size-ratio. In Figure \[starconcfig\], our ‘concentrated’ sample (r$_{\rm{eff}}$/r$_{\rm{vir}}$$<$0.025) is shown in the top panel, our ‘intermediate’ sample (0.025$<$r$_{\rm{eff}}$/r$_{\rm{vir}}$$<$0.04) in the middle panel, and our ‘extended’ sample (r$_{\rm{eff}}$/r$_{\rm{vir}}$$>$0.04) in the lower panel.
In all three panels, the best fit curves deviate considerably from a one-to-one line, bending towards the upper-left corner of each panel, indicating that dark matter is always preferentially stripped from the galaxies. This shows that the halo is significantly more extended than the stars, even in the ‘extended’ galaxy sample. However, as we move from the ‘concentrated’ to the ‘extended’ sample, the curves approach the one-to-one line more closely. This indicates that if a galaxy has a smaller stellar-to-halo size-ratio, it must lose more dark matter before the stars are affected. In other words, [*[the more embedded the stars are within the halo, the more difficult they are to strip]{}*]{} ([@Penarrubia2008]).
Comparing with Figure \[allinfig\], the spread about the curve is generally decreased, in particular at small f$_{\rm{DM}}$. This is because the different trends seen in Figure \[starconcfig\], which differ most at small f$_{\rm{DM}}$, were being combined together in Figure \[allinfig\]. This means that [*[the most accurate prediction for stellar stripping is achieved if there is knowledge of the bound dark matter fraction, and the size of the stellar component]{}*]{}.
The best fit line for the concentrated sample (r$_{\rm{eff}}$/r$_{\rm{vir}}$$<$0.025) is $$\label{conceqn}
f_{\rm{str}}=1-\exp(-23.94 f_{\rm{DM}}){\rm{,}}$$
for the intermediate sample (0.025$<$r$_{\rm{eff}}$/r$_{\rm{vir}}$$<$0.04) is
$$\label{intermedeqn}
f_{\rm{str}}=1-\exp(-11.87 f_{\rm{DM}}){\rm{,}}$$
and for the extended sample (r$_{\rm{eff}}$/r$_{\rm{vir}}$$>$0.04) is
$$\label{extendeqn}
f_{\rm{str}}=1-\exp(-8.60 f_{\rm{DM}}){\rm{.}}$$
Fortunately, most SAMs (e.g. [@Somerville1999]; [@Cole2000]; [@Hatton2003]; [@Croton2006]; [@DeLucia2007b]; [@Somerville2008]; [@Ricciardelli2010]; [@Benson2010]; [@Guo2011]; [@Lee2013]; [@Contini2014]; [@Croton2016], etc) already include prescriptions for the size of the stellar disk of their galaxies, based on the sharing of a fraction of a halo’s angular momentum with its baryonic component ([@Mo1998]). However, in the absence of information on the size of each galaxy’s stellar component, the fit given in Equation \[allineqn\] could be applied, albeit with less accurate results at small values of f$_{\rm{DM}}$.
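To make the intended use concrete, the following sketch (our own packaging, not taken from any particular SAM) shows how Equations \[allineqn\]–\[extendeqn\] could be implemented as a stellar stripping routine, using the size-ratio thresholds of 0.025 and 0.04 defined above:

```python
import numpy as np

# Fitted exponential coefficients quoted in the text
A_ALL          = 14.20   # no size information (Equation [allineqn])
A_CONCENTRATED = 23.94   # r_eff/r_vir < 0.025 (Equation [conceqn])
A_INTERMEDIATE = 11.87   # 0.025 < r_eff/r_vir < 0.04 (Equation [intermedeqn])
A_EXTENDED     =  8.60   # r_eff/r_vir > 0.04 (Equation [extendeqn])

def a_strip(size_ratio=None):
    """Pick the coefficient for a given stellar-to-halo size-ratio,
    falling back on the global fit if the size-ratio is unknown."""
    if size_ratio is None:
        return A_ALL
    if size_ratio < 0.025:
        return A_CONCENTRATED
    if size_ratio > 0.04:
        return A_EXTENDED
    return A_INTERMEDIATE

def bound_stellar_fraction(f_dm, size_ratio=None):
    """Predicted bound stellar fraction, given the bound dark matter fraction
    (both measured relative to the values at peak halo mass)."""
    return 1.0 - np.exp(-a_strip(size_ratio) * f_dm)

# Example: a heavily stripped satellite with an extended stellar component
print(bound_stellar_fraction(0.1, size_ratio=0.05))   # roughly 0.58
```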
Negligible dependence on cluster mass or galaxy mass
----------------------------------------------------
In the upper panel of Figure \[massfig\], we compare all of the galaxies in the ‘intermediate’ sample (solid, black curve) to a ‘high stellar mass only’ sample (containing only galaxies with a stellar mass $>$5$\times$10$^{10}$ M$_\odot$/h), and to a ‘low stellar mass only’ sample (containing only galaxies with a stellar mass $<$2$\times$10$^{10}$ M$_\odot$/h). Previously we imposed a mass limit of 2$\times$10$^{9}$ M$_\odot$/h, in order to limit numerical resolution effects. Thus the high stellar mass only sample has a cut at 25 times the value of the previous mass limit, and reduces the number of galaxies in our sample to 14$\%$ of its previous value. Despite the severity of the mass cut, the two curves are so close that it is difficult to separate them in the figure. Similarly, there is only a very minor difference between these curves and that of the ‘low stellar mass only’ sample. These results are important for two reasons. Firstly, they suggest that our results are not significantly affected by resolution effects, which would most likely have appeared in the lower mass galaxies. Secondly, the lack of a significant dependency on galaxy mass implies that the best fit lines (e.g. Equations \[allineqn\]–\[extendeqn\]) are applicable to galaxies over a wide range of masses, which is very useful if they are to be applied to SAMs.
In the lower panel of Figure \[massfig\], we compare all of the galaxies in Cluster 1 (a massive cluster with a virial mass of 9$\times$10$^{14}$ M$_\odot$/h), to all the galaxies in Cluster 2 and Cluster 3 combined (lower mass clusters with a virial mass range of (1.7-2.3)$\times$10$^{14}$ M$_\odot$/h). Once again, the two best fit curves are difficult to distinguish, even though the cluster mass has changed by a factor of roughly four between the samples. This suggests that the behaviour of galaxies in response to the stripping of their dark matter is rather universal, independent of the mass of the system in which they reside. In fact, physically this makes sense. Even if a more massive cluster could cause stronger tidal stripping of galaxies, we see no obvious reason why such galaxies should deviate from the f$_{\rm{str}}$-f$_{\rm{DM}}$ relations that we have measured. It is logical that the [*[relative]{}*]{} tidal stripping of stars to dark matter would depend more sensitively on a galaxy’s own properties, such as r$_{\rm{eff}}$/r$_{\rm{vir}}$, than the properties of the external potential. The lack of a dependency on cluster mass that we see once again supports the universal applicability of the best fit lines given in Equations \[allineqn\]–\[extendeqn\] to SAMs.
![Best fit curves for the dependency of the f$_{\rm{str}}$-f$_{\rm{DM}}$ curves on galaxy mass (top panel), and cluster mass (bottom panel). In both panels, the two separate curves lie closely on top of each other, illustrating the weak dependency of the curves on galaxy and cluster mass.[]{data-label="massfig"}](masscomp3.eps){width="8.5cm"}
A recipe for tidal stripping of star forming galaxies {#SFsection}
-----------------------------------------------------
We have so far only considered our main sample, which contains only ‘weakly star forming’ galaxies. Therefore we now consider how to treat galaxies that continue to star form vigorously, after their halos reach peak mass.
Previously, many SAMs assumed that a galaxy lost its gas content, and star formation was halted, as soon as it became a subhalo of another halo. It is likely that the moment at which a galaxy becomes a subhalo is similar to the time when its halo mass reaches its peak value. Indeed, in our galaxy sample, only 18$\%$ of the galaxies increase their stellar mass by more than 15$\%$ after their halo mass peaks. However, the assumption of a total halt in star formation has led to the ‘satellite over-quenching problem’, where too many low mass satellite galaxies become quiescent compared to observations ([@Kimm2009]).
![In the upper panel we show the best fit curve for a sample of star forming galaxies, which fall in the intermediate category, whose total stellar bound fraction peaks between 1.5 and 1.75 (red dashed line, labelled ‘old+new’). The black curve shows the evolution of the stars formed before the halo mass peaked (black curve, labelled ‘old’). In the lower panel, the two curves are subtracted (dot-dashed curve) to show the evolution of the fraction of new stars formed since the halo mass peaked.[]{data-label="SFfig"}](SFcomp2.eps){height="5.0cm" width="8.5cm"}
As a result, a number of authors have included recipes that permit galaxies to continue to star form, at least temporarily, after becoming satellites of a host galaxy. In some, a prescription for the removal of the hot gas content of a galaxy is employed ([@Font2008]; [@Kimm2011]; [@Croton2016]). [@Tecce2010] also consider the gradual removal of the cold, atomic disk gas. Therefore, for these types of SAMs, it is necessary to consider a prescription for galaxies that may be undergoing tidal stripping, while simultaneously forming stars.
In our star formation prescription, we split the total stellar mass of a galaxy into two components - ‘new’ and ‘old’. We label all the stars formed prior to the moment when the halo mass peaks as ‘old’ stars, and all of the stars formed since as ‘new’, and calculate a bound stellar fraction for each component individually (f$_{\rm{str}}$(old) and f$_{\rm{str}}$(new)).
We assume that the old stars are stripped in the same way as we have seen in the main/weakly star forming sample (i.e. f$_{\rm{str}}$(old) obeys Equations \[allineqn\]–\[extendeqn\]). We test this assumption, by measuring f$_{\rm{str}}$(old) directly from the cosmological simulation and find it is very reasonable.
To treat the new stars, we calculate f$_{\rm{str}}$(new) of a galaxy at each moment. This can increase if the galaxy continues to form stars. However, when the galaxy is heavily tidally stripped, and f$_{\rm{DM}}$ becomes small, we reduce the contribution of f$_{\rm{str}}$(new) to the galaxy’s total bound stellar fraction f$_{\rm{str}}$. We choose to reduce the contribution of the new stars by the same fractional decrease that f$_{\rm{str}}$(old) has suffered from unity. Mathematically, this can be expressed as
$$\label{nustreqn}
f_{\rm{str}}=f_{\rm{str}}{\rm{(old)}}+f_{\rm{str}}{\rm{(new)}} \times f_{\rm{str}}{\rm{(old)}}$$
In essence, we are assuming that the new stars are affected by tidal stripping in exactly the same way as the old stars are. In principle, this might not always be valid (for example, if new star formation should occur more centrally). However, our prescription could be easily modified to make the new stars more difficult to strip. For example, we could instead assume that new stars are not stripped until f$_{\rm{str}}$(old) reaches a critical value, f$_{\rm{crit}}$. Then Equation \[nustreqn\] could become
$$f_{\rm{str}}=\left\{
\begin{array}{@{}ll@{}}
f_{\rm{str}}{\rm{(old)}}+f_{\rm{str}}{\rm{(new)}}\text{,} & \text{if } f_{\rm{str}}{\rm{(old)}}>f_{\rm{crit}}\\
f_{\rm{str}}{\rm{(old)}}+f_{\rm{str}}{\rm{(new)}}\times \frac{f_{\rm{str}}{\rm{(old)}}}{f_{\rm{crit}}}\text{,} & \text{otherwise.}
\end{array}\right.$$
Nevertheless, in practice we find that the new stars are stripped at a similar rate to the old stars in our simulations. To test this, we first choose a sample of galaxies that were previously excluded from our main sample because they were forming stars too rapidly. We select galaxies with similar tracks on an f$_{\rm{str}}$-f$_{\rm{DM}}$ plot, by only choosing galaxies whose f$_{\rm{str}}$ has a peak value in the range 1.5 to 1.75 (we also test the range 1.25-1.5 but find the exact choice is not important to our conclusions). From this sample, we choose a subsample with 0.025$<$r$_{\rm{eff}}$/r$_{\rm{vir}}$$<$0.04 (i.e. they have a stellar-to-halo size-ratio that falls in the ‘intermediate’ category). The red dashed curve in the upper panel of Figure \[SFfig\] is a best fit line to these galaxies. The black curve in the upper panel is the best fit line for the ‘intermediate’ sample (Equation \[intermedeqn\]), and can be considered to trace the evolution of f$_{\rm{str}}$(old) of the sample. In the lower panel, we show the difference between the two curves in the upper panel, which is f$_{\rm{str}}$(new) for the sample. f$_{\rm{str}}$(new) initially grows as f$_{\rm{DM}}$ decreases from 1, because these galaxies are continuing to form stars. However, we note that f$_{\rm{str}}$(new) begins to decrease from its peak value at nearly the same moment as f$_{\rm{str}}$(old) begins to decrease. In fact, f$_{\rm{str}}$(old) and f$_{\rm{str}}$(new) decline at a very similar rate as f$_{\rm{DM}}$ becomes small. For example, f$_{\rm{str}}$(old) and f$_{\rm{str}}$(new) reach a value of 90$\%$ of their peak values, at roughly the same value of f$_{\rm{DM}}$ (f$_{\rm{DM}}$=0.19 and 0.16 respectively). Therefore, at least for the galaxies considered in our sample, the assumption that the new stars are equally affected by tidal stripping as the old stars appears valid, thereby supporting the use of Equation \[nustreqn\].
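For completeness, a minimal sketch of the star-forming recipe (Equation \[nustreqn\] together with its delayed-stripping variant; f$_{\rm{crit}}$ is the hypothetical threshold introduced above) could look as follows:

```python
def total_bound_stellar_fraction(f_str_old, f_str_new, f_crit=None):
    """Total bound stellar fraction of a star forming satellite.
    f_str_old: bound fraction of stars formed before peak halo mass
               (assumed to follow the fits of the previous sections).
    f_str_new: stars formed since peak halo mass, in units of the
               stellar mass at peak.
    f_crit:    optional threshold below which the new stars also start
               to be stripped (the delayed-stripping variant)."""
    if f_crit is None:
        # new stars are stripped in proportion to the old ones
        return f_str_old + f_str_new * f_str_old
    if f_str_old > f_crit:
        return f_str_old + f_str_new
    return f_str_old + f_str_new * f_str_old / f_crit

# Example: half the old stars stripped, 30% new stars formed since peak
print(total_bound_stellar_fraction(0.5, 0.3))              # 0.65
print(total_bound_stellar_fraction(0.5, 0.3, f_crit=0.8))  # 0.6875
```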
Comparison to high resolution dwarf galaxy models
-------------------------------------------------
By considering the complete evolution of a galaxy on a plot of f$_{\rm{str}}$ versus f$_{\rm{DM}}$, we can gather a more complete picture of how the galaxy responds to tidal stripping. Therefore we revisit the numerical simulations of [@Smith2013a]. We calculate how the standard early type dwarf model from our previous study evolves on an f$_{\rm{str}}$-f$_{\rm{DM}}$ plot. The results are shown in Figure \[2012compfig\]. The red filled circles are the results from the 2013 study, where the error bars are the 1-sigma errors in the mean value. The standard early type dwarf model has r$_{\rm{eff}}$/r$_{\rm{vir}}$=0.013. This means its stellar-to-halo size-ratio falls deep within our ‘concentrated’ category. Therefore we plot the ‘concentrated’ best fit curve (Equation \[conceqn\]) for comparison.
This comparison is useful for two reasons. First of all, it provides a test of whether our prescription can be applied to galaxies down to the dwarf mass regime. It is important to test this because, in order to avoid numerical artifacts, we excluded galaxies with r$_{\rm{eff}}$$<$2.5 kpc from our sample, which effectively excluded dwarf galaxies. Secondly, the gravitational resolution of the early type dwarf simulations in [@Smith2013a] was 100 pc – roughly ten times better than that of the cosmological simulations. Therefore, it could also potentially identify whether our new results were impacted by their more limited resolution.
As demonstrated in Figure \[2012compfig\], we find that both our new results and the early type dwarf model show similar behaviour. Both lose very large amounts of dark matter before stellar stripping becomes significant. In fact, there is excellent agreement between the two studies over a wide range of f$_{\rm{DM}}$, from 0.1 to 1.0. For f$_{\rm{DM}}$$<$0.1, however, the early type dwarf model systematically sits above the best fit curve. The offset is not substantial, and the early type dwarf models are found at approximately the position of the third quartile of the galaxies in this study. It is difficult to pin down the true origin of this offset, as it could arise for multiple reasons. The significantly higher gravitational resolution of the early type dwarf models could enable their self-gravity to be better resolved at their innermost radii, allowing them to better hold onto their stars. However, we also note that with r$_{\rm{eff}}$/r$_{\rm{vir}}$=0.013, the early type dwarf model is highly concentrated, and so might be expected to be slightly more robust to stellar stripping than the average galaxy in our ‘concentrated’ sample.
![The black curve is the ‘concentrated’ category galaxies from this study. For comparison, the red symbols show the curve for the high resolution, early-type dwarf galaxies from the harassment simulations of Smith et al. 2013b. Error bars show the standard deviation of the multiple individual dwarfs used in that study.[]{data-label="2012compfig"}](comp2harass2012.eps){width="8.5cm"}
Nevertheless, we conclude that the broad agreement between the curves shown in Figure \[2012compfig\] demonstrates that our recipes for stellar stripping are applicable in the dwarf regime, and suggest our results are not strongly altered by resolution effects.
### Application to a SAM: the stellar mass function {#stellarmassfuncsect}
We apply our new tidal stripping recipe (specifically Equation \[allineqn\]) to the semi-analytical model ySAM. This SAM was developed by @Lee2013, and is based on a cosmological N-body simulation of structure formation, run using the code of [@Springel2005]. The cosmological parameters used match those in our hydrodynamical cosmological simulations (see Section \[hydrosims\]). The periodic volume is 200/h Mpc on a side, with $1024^3$ collisionless particles. We generated a halo catalog by identifying substructures using SUBFIND [@Springel2001]. Halo merger trees were then constructed from the halo catalog using the tree building algorithm described in @Jung2014. These merger trees were used as an input to the semi-analytic model. Further details of the baryonic physics prescriptions can be found in @Lee2013.
In Figure \[SAMcompfig\], we compare the stellar mass function at redshift zero produced by ySAM, using the new tidal stripping recipe (solid line), compared to using the original tidal stripping recipe (dotted line). In the original tidal stripping recipe of ySAM there is no stellar stripping until the dark matter halo mass is equal to the baryonic mass of the galaxy (as also applied in [@Guo2011]). Then, all of the stars are stripped instantly from the galaxy, and are added to the halo stars of the host galaxy. The deviation between the two curves becomes clear above $\sim$10$^{11}$ M$_\odot$/h. The new recipe reduces the number of massive galaxies, and the offset increases with increasing stellar mass, thereby steepening the high mass end of the stellar mass function.
![Comparison of the stellar mass function of the galaxy population at redshift zero, produced by ySAM with the new tidal stripping recipe (solid line), compared to with the original tidal stripping recipe (dashed line).[]{data-label="SAMcompfig"}](starmassfunction.eps){width="9.0cm"}
This offset likely arises because the most massive galaxies tend to accrete most of their stellar mass through mergers ([@Lee2013]). Meanwhile these massive galaxies are preferentially found in a high density environment, such as a group or cluster, where tidal stripping of satellites is more likely to occur prior to merging. However, the tidal stripping of the satellites may be difficult to see in Figure \[SAMcompfig\], because the sample is drawn from a large cosmological volume, and so is dominated by low mass galaxies inhabiting low density environments. Therefore, we would expect even stronger effects, over a wider range of stellar mass, if we were to focus on the SAM results for high density environments only.
As is customary with SAMs, parameters that control baryonic physics recipes are tuned in order to match observed galaxy relations, such as the luminosity function. Therefore it is possible that some artificial over-tuning of parameters may have occurred, in order to compensate for inaccuracies in the earlier tidal stripping recipes that were employed. We briefly consider likely physical parameters that might have been affected in this way. As the original ySAM tidal stripping recipe creates a larger number of massive galaxies, the strength of AGN feedback may have previously been overestimated, compared to what is required with the new tidal stripping recipe. Altering the strength of the supernova feedback is a less desirable option, as this could impact on the stellar mass function at lower galaxy masses too. Alternatively, when a merger occurs, it is assumed that a fixed fraction (20$\%$) of the stellar mass of the galaxy is scattered into the stellar halo of the host galaxy ([@Monaco2006]; [@Murante2007]), instead of joining the main stellar mass of the galaxy. Therefore this fraction may have been set too high, compared to what is required with the new tidal stripping recipe. A number of other galaxy parameters, which are closely tied to a galaxy’s stellar mass, are also likely to be influenced, including mass weighted age, metallicity, and disk-to-bulge ratio. We will explore these, and other consequences of our new recipe for SAM galaxy populations, in a future paper.
Discussion and conclusions
==========================
Using high resolution hydrodynamical cosmological simulations of three galaxy clusters, we have studied how the bound fractions of dark matter and stars are related for galaxies undergoing tidal stripping. We find that, in all galaxies, substantial quantities of dark matter must first be stripped before stellar stripping can begin. Typically, galaxies that lose $\sim$80$\%$ of their total dark matter lose only 10$\%$ of their stars. We emphasise that we measure the total bound fraction of dark matter, which includes all the dark matter – both inside and beyond the radii of the baryons. Therefore, our dark matter fractions are not directly comparable to observationally derived dark matter fractions, which typically can only probe the inner halo, where baryons are present.
We find that the ease with which stars are stripped depends on a key parameter - the ratio of the effective radius of the stellar component, compared to the virial radius of the dark matter halo (r$_{\rm{eff}}$/r$_{\rm{vir}}$). We term this ratio the ‘stellar-to-halo size-ratio’. If the stellar component is more extended with respect to the halo, the stars are more easily stripped. With simple analytical fitting formulae (Equations \[allineqn\]–\[extendeqn\]), we quantify the link between bound dark matter fraction and stellar fraction.
These fitting formulae could be applied to improve stellar stripping recipes in Semi Analytical Models (SAMs), or other numerical models of galaxies where only a live dark matter component is considered. With knowledge of only the bound dark matter fraction, a first order estimate of the evolution of the stellar bound fraction can be made using Equation \[allineqn\]. However, if a galaxy’s stellar-to-halo size-ratio is known, a more accurate prediction of the bound stellar fraction can be made using Equations \[conceqn\]–\[extendeqn\]. The improvement in accuracy is greatest when tidal stripping of the dark matter halo is very strong. We find negligible dependence on galaxy mass, and/or cluster mass, which suggests these equations can be applied universally, making them ideal for application in SAMs. We also provide a suggested recipe for the treatment of galaxies that continue to form stars, while suffering tidal stripping, in Section \[SFsection\].
The small scatter seen in the trends suggests that accurate predictions for the stellar stripping of a galaxy can be made, based on knowledge of the bound dark matter fraction, combined with a galaxy’s stellar-to-halo size-ratio. It has been suggested that galaxy rotation could also be a factor dictating the efficiency of stellar stripping (e.g. [@Donghia2009]; [@Villalobos2012]; [@Bialas2015]). Our cosmological simulations contain galaxies with a wide range of rotational properties (Choi et al. 2016 in prep.). Of course, it is possible that rapidly rotating galaxies might also have more extended stellar components. However, if rotation were important to our results, then the range of orientations of the rotation vector with respect to the orbital plane should cause a wide range in the efficiency of stellar stripping. Given the small spread that we measure about the trends, we conclude that rotation does not play as significant a role as stellar-to-halo size-ratio, at least in our simulations.
For some galaxy morphologies the effective radius, alone, might poorly encapsulate the mass distribution of the stars. One example is the case of a galaxy with a massive compact bulge, and an equally massive extended disk. In this scenario, calculating a separate stellar fraction for each of the two stellar components of the galaxy might, in principle, improve the accuracy of predicted stellar stripping. However, given the tightness of the trends, we note that this occurrence does not seem to arise frequently in our cosmological simulations.
In summary, the equations provided in this study can be easily applied to SAMs, to improve existing stellar stripping recipes. In Section \[stellarmassfuncsect\] we applied our recipe to ySAM ([@Lee2013]), and found that it steepened the high mass end of the stellar mass function. We anticipate that the impact will be even stronger in higher density environments, such as groups and clusters. It will also have consequences for SAM predictions of the growth rate ([@Lidman2013]) and metallicity gradients of brightest central galaxies, and of the intracluster light ([@Contini2014]). We will fully explore the consequences of applying these improved recipes in SAMs in a future paper. In this paper we have focussed on the total dark matter fraction of the halos of galaxies, as this quantity can be easily measured in SAMs. However, in the future, it would also be interesting to calculate the fraction of dark matter contained within the baryons, as in this way the results could be compared more directly with the observations of real galaxies. As the dark matter halos are preferentially stripped from the outside inwards, we note that our bound dark matter fractions, which are measured for the total halo, represent lower limits for the equivalent quantity measured within the baryons.
RS acknowledges support from the Brain Korea 21 Plus Program (21A20131500002) and the Doyak Grant (2014003730). SKY acknowledges support from the National Research Foundation of Korea (Doyak grant 2014003730). SKY, the head of the group, acted as a corresponding author. Numerical simulations were performed using the KISTI supercomputer, under the program of KSC-2014-G2-003. We are grateful to the referee for their constructive input.
---
abstract: 'Bohr placed complementary bases at the mathematical centre point of his view of quantum mechanics. On the technical side then my question translates into that of classifying complex Hadamard matrices. Recent work (with Barros e Sá) shows that the answer depends heavily on the prime number decomposition of the dimension of the Hilbert space. By implication so does the geometry of quantum state space.'
author:
- Ingemar Bengtsson
bibliography:
- 'sample.bib'
title: 'How much complementarity?'
---
Fysikum, Stockholms Universitet, 106 91 Stockholm, Sweden
Reading Bohr
============
Reading what Bohr actually wrote about the foundations of quantum mechanics one is struck by the modesty of his aims [@Bohr]. To Bohr, the aim of the theory is to predict the outcomes of measurements performed on a suitably prepared system. In a possibly double-edged endorsement of Bohr’s position, Mermin stresses how suitable he finds this view when teaching quantum mechanics to students coming from computer science: they want input and output, and have no emotional attachment to what goes on in between [@Mermin]. This clearly represents a retreat from the natural position of the physicists, who used to think that the essence of the phenomena resides there—and were then explicitly told by Bohr not to try to disclose them. If Bohr solved the interpretational problem of quantum mechanics then—as Marcus Appleby told me one fine day in front of the Rosetta stone—the problem is to find a point of view from which this solution appears desirable.
It is striking too how little of the mathematical formalism Bohr brings up. The one mathematical point stressed by him is the occurrence, in quantum mechanics, of complementary pairs of measurements: if the system has been prepared to give a definite answer for one of them, nothing is known about the outcome should the complementary measurement be made [@Bohr]. Bohr’s choice here shows good judgment. It may not be an ideal starting point for axiomatic reconstructions of the theory, but certainly the whole structure can be made to flow naturally through there—as Schwinger so convincingly demonstrated [@Schwinger]. So Bohr’s vision cannot be dismissed lightly.
At this point the discussion can go in many directions, philosophical and technical. The former may be more urgent [@Stig], but my very modest aim here is to discuss how much freedom one has in choosing the complementary measurement. Because of the unitary symmetry the answer is independent of the choice of the first measurement, but it will turn out to depend in an interesting way on which Hilbert space we are in.
Complementary pairs of bases
============================
Let us assume that the dimension of our Hilbert space is $N$. We will have to come back to the question what this means. Meanwhile we associate measurements to orthonormal bases in the familiar way. If two such measurements are complementary it must be true that the pair of orthonormal bases $\{ |e_i\rangle \}_{i=0}^{N-1}$ and $\{ |f_i\rangle\}_{i=0}^{N-1}$ are related by
$$|\langle e_i|f_j\rangle |^2 = \frac{1}{N}$$
for all the basis vectors. The question now arises whether complementary pairs of bases exist in every dimension, and if so how many such pairs exist, counting them up to the natural equivalence under unitary transformations [@Kraus].
This problem is equivalent to another that has been studied for a long time. Let us form a matrix with elements
$$H_{ij} = \langle e_i|f_j\rangle \ .$$
If the two bases are complementary this is a complex Hadamard matrix, that is a unitary matrix all of whose elements have the same modulus. In this way the existence of a complementary pair is equivalent to the existence of a complex Hadamard matrix. It is natural to use one of the members of the pair as our computational basis, in which case the columns of the Hadamard matrix are given by the elements of the vectors in the second basis. Of course we are not interested in the order or the overall phases of these vectors, so we will regard two unitary matrices $H'$ and $H$ as equivalent if there exists a permutation $P$ and a diagonal unitary $D$ such that
$$H' = HDP \ .$$
But there is still some freedom in the choice of the coordinate system. Given a pair of bases represented by the unit matrix and an Hadamard matrix $H$, an overall unitary transformation (from the left) with a permutation and a diagonal unitary can be undone from the right when it acts on the unit matrix, while any Hadamard matrix $H$ becomes a new Hadamard matrix $H'$. So in classifying pairs of complementary bases up to unitary transformations we will regard two complex Hadamard matrices as equivalent if there exist diagonal unitaries $D_1, D_2$ and permutation matrices $P_1,P_2$ such that
$$H' = P_1D_1HD_2P_2 \ . \label{equivalence}$$
If this is so we say that $H$ and $H'$ are equivalent, written $H \approx
H'$ [@Haagerup]. The problem of classifying all complementary bases up to overall unitary transformations is equivalent to classifying all complex Hadamard matrices up to this equivalence. (Classifying all triples of mutually complementary bases is a more involved affair, since the freedom of multiplying from the left will be restricted.) We can remove some of the ambiguity by insisting that all Hadamard matrices should be presented in dephased form, meaning that all entries in the first row and the first column equal $1/\sqrt{N}$.
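In practice the dephased representative is easy to compute; a minimal numerical sketch (assuming the matrix is given as a NumPy array) is:

```python
import numpy as np

def dephase(H):
    """Multiply rows and columns of H by phases so that the first row and
    first column become real and positive (equal to 1/sqrt(N) when H is a
    complex Hadamard matrix). This picks a representative of the equivalence
    class without changing the pair of bases it describes."""
    D2 = np.diag(np.conj(H[0, :]) / np.abs(H[0, :]))   # fix the first row
    H = H @ D2
    D1 = np.diag(np.conj(H[:, 0]) / np.abs(H[:, 0]))   # fix the first column
    return D1 @ H
```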
A complex Hadamard matrix of any size exists. A solution is the Fourier matrix $F_N$, with entries that are roots of unity only:
$$F_{ij} = \frac{1}{\sqrt{N}}\omega^{ij} \ , \hspace{6mm} \omega = e^{\frac{2\pi i}{N}} \ , \hspace{6mm} 0 \leq i,j \leq N - 1 \ .$$
And indeed this is a matrix with many applications. But are there other solutions? It is known that a generic unitary matrix is determined by the moduli of its matrix elements up to the $2N-1$ phases that are removed by dephasing [@Karabegov], so if the answer is “yes” then Hadamard matrices are quite exceptional among the unitaries.
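As a quick numerical illustration (nothing more), one can check that $F_N$ is unitary with all moduli equal to $1/\sqrt{N}$, so that the computational and Fourier bases are indeed complementary:

```python
import numpy as np

def fourier_matrix(N):
    """The Fourier matrix F_N, with entries omega^(jk)/sqrt(N)."""
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

N = 6
F = fourier_matrix(N)
print(np.allclose(F @ F.conj().T, np.eye(N)))     # unitarity
print(np.allclose(np.abs(F)**2, 1.0 / N))         # |<e_i|f_j>|^2 = 1/N
```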
Our question has a long history. In 1867 the British mathematician Sylvester gave many examples of such matrices [@Sylvester]. Sylvester also proved uniqueness for $N = 2,3$. In 1893 the French mathematician Hadamard studied the case $N = 4$ [@Hadamard], and found that any Hadamard matrix of this size is equivalent to
$$H(z) = \frac{1}{2} \left( \begin{array}{rrrr} 1 & 1 & 1 & 1 \\
1 & z & -1 & - z \\ 1 & -1 & 1 & - 1 \\ 1 & - z & - 1 &
z \end{array} \right) \approx
\frac{1}{2} \left( \begin{array}{rrrr} 1 & 1 & 1 & 1 \\
1 & -1 & z & - z \\ 1 & 1 & - 1 & - 1 \\ 1 & - 1 & - z &
z \end{array} \right)\ . \label{Had}$$
This is a one parameter family of dephased Hadamard matrices, since the phase factor $z$ is arbitrary and invariant (apart from its sign) under the transformations introduced in eq. (\[equivalence\]). In 1997 the Danish mathematician Haagerup proved that for $N = 5$ the Fourier matrix is again unique up to the natural equivalence [@Haagerup]. The $N = 6$ case is still open. An elegant family of dephased $N = 6$ Hadamard matrices with three real parameters was found by Karlsson [@Karlsson], and there is strong evidence that a four parameter family should exist [@Skinner; @Szollosi]. Perhaps it has Karlsson’s 3-dimensional family as its boundary? An isolated example not belonging to any continuous family is also known [@Tao]. Finally there are many constructions available in higher dimensions, but we are not even close to a classification [@TZ].
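It is instructive (though of course not a proof) to verify numerically that eq. (\[Had\]) defines a complex Hadamard matrix for an arbitrary unimodular phase $z$; a short sketch:

```python
import numpy as np

def hadamard_family_N4(z):
    """Hadamard's one parameter family in dimension 4, eq. ([Had])."""
    return 0.5 * np.array([[1,  1,  1,  1],
                           [1,  z, -1, -z],
                           [1, -1,  1, -1],
                           [1, -z, -1,  z]], dtype=complex)

z = np.exp(1j * 0.7)                 # any phase factor will do
H = hadamard_family_N4(z)
print(np.allclose(H @ H.conj().T, np.eye(4)))   # unitary
print(np.allclose(np.abs(H), 0.5))              # all moduli equal to 1/sqrt(4)
```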
The motivations behind these works were various. Sylvester’s is a very joyful paper written when the notion of a matrix was new, and he was interested in all sorts of patterns he could observe in them. Haagerup’s motivations stemmed from operator algebra. Other motivations for the study of complex Hadamard matrices come from quantum groups [@Banica], and from various corners of quantum information theory [@Werner; @Berge]. The work on $N = 6$ is largely inspired by the problem of Mutually Unbiased Bases in quantum theory, to which we will return. It is worth mentioning that the action of a complex Hadamard matrix can be implemented in the laboratory by means of linear optics [@Marek], and that the $N = 6$ case is realistically within reach.
It is intriguing that the answer to the question seems to depend so intricately on the dimension of Hilbert space, but one also wonders if it is at all possible to say something in general about a classification problem that moves this slowly. I will argue that one can, but first we should see what the existence of complementary pairs means for the geometry of quantum state space.
The geometry of state space
===========================
Mathematically a quantum state is represented by a density matrix $\rho$, that is a Hermitian $N\times N$ matrix with non-negative eigenvalues and unit trace. The set of all quantum states is a compact body of $N^2-1$ dimensions, with the pure states $\rho = |\psi \rangle \langle \psi|$ lying at its boundary. The distance $D$ between two density matrices is conveniently defined by
$$D^2(\rho_1, \rho_2) = \frac{1}{2}\mbox{Tr}(\rho_1-\rho_2)^2
\ .$$
If we choose the maximally mixed state as the origin we can think of the container space as a real vector space, with the notion of distance coming from the scalar product
$$\rho_1\cdot \rho_2 = \frac{1}{2}\mbox{Tr} (\rho_1 - \frac{1}{N}
{\bf 1})(\rho_2 - \frac{1}{N}{\bf 1} ) \ .$$
All the pure states lie on a sphere centered at the maximally mixed state. This sphere is called the outsphere, and the maximally mixed state will be chosen as the origin. An arbitrary state is formed as a mixture of pure states, and it follows that the set of all quantum states forms a convex body with an intricate shape. The reason why the shape is intricate is that the symmetry group of the body is a small but continuous subgroup of the set of all rotations in $N^2-1$ dimensions—namely, if we ignore some discrete symmetries, the unitary group or more precisely the group $SU(N)/Z_N$. The pure states therefore form a small but continuous subset of the body’s outsphere. We define the insphere as the largest sphere one can inscribe in the body. It is concentric with the outsphere, and the radius of the outsphere is $N-1$ times the radius of the insphere. (And we learn that the case $N = 2$ is special.)
In order to get a feeling for what the shape is, one can ask what kind of regular polytopes, and similar understandable structures, one can inscribe in it. Indeed an orthonormal basis in Hilbert space corresponds to a regular simplex with $N$ vertices, inscribed in the body of all states, centred at the origin, and spanning a plane of dimension $N-1$ through the centre. Every state $\rho$ lies in a simplex of this type. A complementary pair of bases spans two planes oriented with respect to each other in a special way. In fact the two planes are totally orthogonal, meaning that any vector in one of them is orthogonal to any vector in the other. Note that the totally orthogonal planes already point to the use complementary measurements have in quantum state tomography; other things being equal, complementarity will minimize the uncertainties caused by the fact that, in a laboratory with access to a potentially infinite ensemble of identically prepared systems, only a finite number of measurements will actually be carried out [@Fields].
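The statement about totally orthogonal planes is easy to check numerically for a complementary pair: the traceless parts of the two sets of basis projectors have vanishing mutual overlaps (the factor 1/2 in the scalar product is irrelevant here). A small sketch, using the Fourier basis as the second basis:

```python
import numpy as np

N = 5
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)

def traceless_projector(v):
    """Projector onto the pure state v, with the maximally mixed state
    (the origin) subtracted."""
    return np.outer(v, v.conj()) - np.eye(len(v)) / len(v)

simplex_1 = [traceless_projector(np.eye(N)[:, i]) for i in range(N)]  # computational
simplex_2 = [traceless_projector(F[:, j]) for j in range(N)]          # Fourier

overlaps = [np.trace(a @ b).real for a in simplex_1 for b in simplex_2]
print(np.allclose(overlaps, 0.0))   # True: the two planes are totally orthogonal
```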
Once we have found a pair of simplices coming from a complementary pair, we can adjust the coordinates so that one basis is described by the computational basis, and then move the other simplex around by acting on the pair from the left with permutations and diagonal unitaries—operations whose action on the computational basis can be undone by irrelevant action from the right with the same type of unitaries. See eq. (\[equivalence\]). So there is some freedom, but there will be more freedom the larger the set of inequivalent Hadamard matrices is found to be. The bottom line is that the existence of complementary pairs of bases is very much a question of the shape of the body of density matrices.
We can go on in this way. Since $(N-1)(N+1) = N^2-1$ we can find $N+1$ totally orthogonal planes in our $(N^2-1)$-dimensional vector space. We can place a regular simplex of the appropriate size in each plane, but it is not at all clear that its corners correspond to pure states. By construction they lie on the outsphere, but they may well lie well outside the body of states, whose shape is so difficult to discern. But then again it may be possible to inscribe all these simplices in the body, in which case we say that we have a complete system of $N+1$ Mutually Unbiased Bases [@Fields]—and we have one more handle on the shape.
After many trials in six dimensions [@Bruzda; @BH; @BS; @Jam; @RE], most investigators are convinced that complete systems of MUBs exist if—this much is known [@Fields]—and only if—this is a conjecture only—the dimension of the Hilbert space is a power of a prime number. We will not be concerned with this problem here, but it does hang in the background. By the way the best known complete systems of MUBs [@Fields] can be obtained by choosing one special complex Hadamard matrix, and then multiplying it from the left by appropriate permutations and diagonal unitaries to construct the remaining $N-1$ complementary bases. I would be interested to know if this is true also for the more exotic examples that are known in some prime power dimensions [@Kantor], but I don’t.
Of course the shape of the body of states can be studied in many other ways. But we are focussing on an important aspect of it.
Families of Hadamard matrices
=============================
Now let us consider the family of inequivalent Hadamard matrices given in eq. (\[Had\]). By inspection we see that it includes the Fourier matrix (at $z = i$), but it also includes a real Hadamard matrix (at $z = 1$). The latter has an interesting form: it is the tensor product $F_2\otimes F_2$. On reflection we realise that whenever the dimension of Hilbert space is composite we can form Hadamard matrices from a pair of Hadamard matrices of size $N_1$ and $N_2$ in this way. But it will not always be true that $F_{N_1}\otimes F_{N_2}$ is inequivalent to $F_{N_1N_2}$. As a matter of fact they are equivalent if and only if $N_1$ and $N_2$ are relatively prime. This follows from some elementary group theory, because any Fourier matrix can be regarded as the character table of a cyclic group, and the cyclic group $Z_{N_1N_2}$ is isomorphic to the cyclic group $Z_{N_1}\times Z_{N_2}$ if and only if $N_1$ and $N_2$ are relatively prime. In prime power dimensions it is $F_p\otimes \dots \otimes F_p$, and not the inequivalent matrix $F_{p^k}$, that lays the golden eggs (i.e., a complete set of MUBs [@Fields]).
There exists a construction due, in its most general form, to Diţă [@Dita], allowing us to construct a continuous family in dimension $N = N_1N_2$ starting from one Hadamard matrix $H^{(0)}$ in dimension $N_1$ and $N_1$ possibly different Hadamard matrices $H^{(1)}, \dots , H^{(N_1)}$ in dimension $N_2$. It uses a warped tensor product. In dephased form
$$H = \left( \begin{array}{cccc} H^{(0)}_{0,0} H^{(1)} & H^{(0)}_{0,1}
D^{(1)}H^{(2)} & \dots & H^{(0)}_{0,N_1-1}D^{(N_1-1)}H^{(N_1)} \\ \vdots & \vdots & & \vdots \\
H^{(0)}_{N_1-1,0} H^{(1)} & H^{(0)}_{N_1-1,1}
D^{(1)}H^{(2)} & \dots & H^{(0)}_{N_1-1,N_1-1}D^{(N_1-1)}H^{(N_1)} \end{array}
\right)$$
where $D^{(1)}, \dots , D^{(N_1-1)}$ are diagonal unitary matrices (with their first entries equal to one in order to obtain $H$ in dephased form). In this way the example of $N = 4$ generalises. It can be shown that the family arising from the Diţă construction using Fourier matrices as seeds interpolates between the non-equivalent matrices $F_{n^k}$ and $F_n\otimes \cdots \otimes F_n$ for all values of $n$ [@Nuno]. This somehow provides the beginning of a rationale for the existence of this family.
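A small numerical sketch of the construction (illustrative only; the block layout follows the formula above, with the diagonal unitary of the first block set to the identity) might read:

```python
import numpy as np

def fourier(n):
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(2j * np.pi * j * k / n) / np.sqrt(n)

def dita(H0, H_list, D_list):
    """Warped tensor product: H0 is N1 x N1, H_list holds N1 Hadamard matrices
    of size N2, D_list holds N1 diagonal unitaries (D_list[0] = identity)."""
    N1 = H0.shape[0]
    blocks = [[H0[i, j] * (D_list[j] @ H_list[j]) for j in range(N1)]
              for i in range(N1)]
    return np.block(blocks)

# Example: N1 = 2, N2 = 3, giving a 6-dimensional matrix with free phases
N1, N2 = 2, 3
D1 = np.diag(np.exp(1j * np.array([0.0, 0.4, 1.3])))   # first entry fixed to 1
H = dita(fourier(N1), [fourier(N2)] * N1, [np.eye(N2), D1])
print(np.allclose(H @ H.conj().T, np.eye(N1 * N2)))    # unitary
print(np.allclose(np.abs(H), 1 / np.sqrt(N1 * N2)))    # constant modulus
```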
Assuming that the parameters that may be present in the individual $H^{(i)}$ do not complicate matters, the intrinsic topology of these families—if we ignore some discrete equivalences, whose action has been completely worked out only in special cases [@Bruzda]—is that of a higher dimensional torus. They are examples of the more general class of affine families [@TZ], in which all relations between the phases in the matrix are linear. But affine families are not the end of the story. For $N = 6$ we obtain affine families of at most 2 dimensions, while the set of all inequivalent Hadamard matrices has at least 3, and almost certainly 4, parameters. Moreover Karlsson’s 3-dimensional family, which is known in explicit form, has a much more interesting geometry than the tori. Before all the discrete equivalences are taken into account it looks much like a circle bundle over a sphere, but with special points over (some copies of) the Fourier matrix, where the circles are blown up to tori.
All Hadamard matrices connected to the Fourier matrix
=====================================================
To address the classification in general we first lower our aim a bit. Rather than ask for all complex Hadamard matrices, we ask for all smooth families of Hadamard matrices that include the Fourier matrix. This is really a question about the dimension of some algebraic variety. Following Fermi—“when in doubt, expand in a power series”—we attack it by multiplying the matrix elements in the Fourier matrix by arbitrary phase factors, which are then expanded in a series:
$$F_{ij} \rightarrow F_{ij}e^{i\phi_{ij}} = F_{ij}\left( 1 +
i\phi_{ij} - \frac{1}{2}\phi^2_{ij} + \dots \right) \ .$$
Then we try to solve the unitarity conditions order by order in the free phases $\phi_{ij}$, and count the number $d$ of free parameters that remain. To first order in the perturbation Tadej and Życzkowski [@TZ] made this calculation. For dimension $N$ they found the answer
$$D_1 = \sum_{n=0}^{N-1}\mbox{gcd}(n,N) \ , \label{defect}$$
where gcd$(n,N)$ denotes the greatest common divisor of $n$ and $N$. Subtracting $2N-1$, that is the number of trivial phases arising from eq. (\[equivalence\]), this gives an upper bound on the number of free parameters in a smooth family of dephased Hadamard matrices containing the Fourier matrix. If $N$ is a prime this upper bound equals zero, so that we know that the Fourier matrix is isolated in the set of all Hadamard matrices. If $N$ is a power of a prime the upper bound is equal to the dimension of the family that arises from the Diţă construction, so that this family is the largest possible such family in this case. We tried to see what happens in the remaining cases.
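The bound is trivial to evaluate; for instance, the short snippet below returns 0 for prime $N$ and 4 for $N = 6$:

```python
from math import gcd

def dephased_defect_bound(N):
    """First order upper bound on the number of free parameters in a smooth
    family of dephased Hadamard matrices through the Fourier matrix:
    D1 - (2N - 1), with D1 = sum_{n=0}^{N-1} gcd(n, N)."""
    D1 = sum(gcd(n, N) for n in range(N))
    return D1 - (2 * N - 1)

for N in [5, 6, 7, 8, 10, 12]:
    print(N, dephased_defect_bound(N))   # 0, 4, 0, 5, 8, 17
```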
By now my question has become very technical, and for the details I have to refer to a paper by Barros e Sá and myself [@Nuno]. In outline, our first step was to use the special properties of the Fourier matrix to write the equations in a more manageable form—in effect we calculate all bases complementary to the Fourier basis, rather than those complementary to the computational basis. To first order in the perturbation the equations are linear and homogeneous, and we recover eq. (\[defect\]) in a very transparent way. To higher orders we still have to solve a linear system, but now with a heterogeneous part given by the lower order solution. To a given order $s$ these systems have solutions if and only if the lower order solutions obey consistency conditions which take the form of a set of multivariate polynomial equations of order $s$. If these conditions are non-trivial the true dimension drops below the first order result (\[defect\]). Should this happen we have to solve the polynomial equations in order to determine by how much the dimension drops, and then we can proceed to higher orders ...
Using a mixture of numerical and symbolic calculations we were able to carry through this program to quite high orders, for 24 different choices of $N$ not equal to a prime power. One case then stands out as being very special: $N = 6$, for which the consistency conditions hold trivially up to order 25 in the perturbation. At order 26 Mathematica quite reasonably refused to continue the calculation. Still, this gives considerable support to the conjecture that a 4-dimensional family of dephased Hadamard matrices does exist in this case—and it warns us not to extrapolate from six to arbitrary composite dimensions. $N = 10$ also stands out as somewhat special (the consistency conditions break down at order 11). In all other cases the consistency conditions break down in a systematic manner: at order 3 if $N$ is a product of three different primes, at order 4 if $N$ is a product of two different primes, at least one repeated, at order 5 if $N$ is a product of two odd primes, and at order 7 if $N$ is twice an odd prime and larger than ten.
We looked at the comparatively manageable cases of $N = 12, 18, 20$ in more detail. We found solutions to the consistency conditions in symbolic form. For $N = 12$ we found the general solution to the consistency conditions at order 4, as well as an almost watertight argument saying that there does exist a two-sheeted solution such that no further breakdowns occur in higher orders. This was confirmed up to order 11 by an explicit calculation. Based on this information we conjecture that whenever $N = p_1p_2^2$, where $p_1, p_2$ are primes, there will be a non-linear family of dephased complex Hadamard matrices of dimension
$$d = 3p_1p_2^2 - 3p_1p_2 - 2p_2^2 + p_2 + 1 \ .
\label{conj}$$
This number comes from the requirement that there should be two families, related by transposition and intersecting in a family arising from the Diţă construction in such a way that the two sheets span the whole tangent space—with its dimension given by the linearised calculation—when they intersect. We feel quite confident that this is true, but more to the point we feel that the mere fact that we were at all able to put forward a concrete conjecture suggests that there is a pattern here—we are not very close to a full solution of the problem for general $N$, but we do feel that we have a right to expect that eventually a solution will be found, in reasonably compact form.[^1]
That is to say, however odd the conjecture (\[conj\]) may seem, we feel that it represents the beginning of a clear cut answer to the question posed in the title.
Conclusion
==========
A charge that has been raised against quantum mechanics is that of boring repetition: one might feel that the shape of the space of possible states should depend in an interesting way on the physical nature of the system, but in fact quantum mechanics uses the same old Hilbert space for everything [@Mielnik]. Perhaps what we have seen is a possible answer to this. The existence of pairs of complementary bases has an elegant interpretation in terms of the shape of the convex body of all possible states. And in this regard that shape does depend dramatically on the number theoretical properties of the dimension of the Hilbert space.
But the dimension of the Hilbert space of a physical system is a property that can be measured, or at least be bounded from below by measurements [@Brunner; @Wehner]. It has even been argued that the dimension of the Hilbert space is a candidate for the elusive role of something that goes on in between preparation and measurement [@Fuchs]. If we accept this, and if the results above have convinced us that the shape of the space of states does depend in an interesting way on the dimension of Hilbert space, then the charge against quantum mechanics falls. The shape and feel of the body of quantum states does depend on the physical nature of the system.
I thank Nuno Barros e Sá for a wonderful collaboration, and Andrei Khrennikov for arranging such lively conferences. They comfort me for the fact that I was not able to attend the Warsaw conference—which was very lively too, judging from the proceedings [@Bohr].
[99]{}
N. Bohr, [*The causality problem in atomic physics*]{}, in [*New Theories in Physics*]{}, Int. Inst. of Intellectual Co-operation, 1939.
N. D. Mermin, [*Copenhagen interpretation: How I learned to stop worrying and love Bohr*]{}, IBM J. Res. Dev. [**48**]{} (2004) 53.
J. Schwinger: [*Quantum Mechanics. Symbolism of Atomic Measurements*]{}, ed. by B.-G. Englert, Springer, Berlin 2001.
S. Stenholm: [*The Quest for Reality. Bohr and Wittgenstein, Two Complementary Views*]{}, Oxford UP, 2011.
K. Kraus, [*Complementary observables and uncertainty relations*]{}, Phys. Rev. [**D35**]{} (1987) 3070.
U. Haagerup, [*Orthogonal maximal abelian \*-subalgebras of the $n\times n$ matrices and cyclic $n$-roots*]{}, in *Operator Algebras and Quantum Field Theory, Rome (1996)*, Internat. Press, Cambridge, MA 1997.
A. V. Karabegov, [*A mapping from the unitary to the doubly stochastic matrices and symbols on a finite set*]{}, AIP Conf. Proc. [**1079**]{} (2008) 39.
J. J. Sylvester, [*Thoughts on inverse orthogonal matrices, simultaneous sign-successions, and tessellated pavements in two or more colours, with applications to Newton’s rule, ornamental tile-work, and the theory of numbers*]{}, Phil. Mag. [**34**]{} (1867) 461.
J. Hadamard, [*Résolution d’une question relative aux déterminants*]{}, Bull. Sci. Math. [**17**]{} (1893) 240.
B. R. Karlsson, [*Three-parameter complex Hadamard matrices of order 6*]{}, Lin. Alg. Appl. [**434**]{} (2011) 247.
A. J. Skinner, V. A. Newell, and R. Sanchez, [*Unbiased bases (Hadamards) for six-level systems: Four ways from Fourier*]{}, J. Math. Phys. [**50**]{} (2009) 012107.
F. Sz[ö]{}ll[ő]{}si, [*Complex Hadamard matrices of order 6: a four-parameter family*]{}, J. London Math. Soc., to appear.
T. Tao, [*Fuglede’s conjecture is false in 5 and higher dimensions*]{}, Math. Res. Lett. [**11**]{} (2004) 251.
W. Tadej and K. Życzkowski, [*A concise guide to complex Hadamard matrices*]{}, Open Sys. Inf. Dyn. [**13**]{} (2006) 133.
T. Banica and R. Nicoara, [*Quantum groups and Hadamard matrices*]{}, Panamer. Math. J. [**17**]{} (2007) 1.
R. F. Werner, [*All teleportation and dense coding schemes*]{}, J. Phys. [**A34**]{} (2001) 7081.
B.-G. Englert, D. Kaszlikowski, L. C. Kwek, and W. H. Chee, [*Wave-particle duality in multi-path interferometers: general concepts and three-path interferometers*]{}, Int. J. Quant. Inf. [**6**]{} (2008) 129.
M. Żukowski, A. Zeilinger, and M. A. Horne, [*Realizable higher-dimensional two-particle entanglements via multiport beam splitters*]{}, Phys. Rev. [**A55**]{} (1997) 2564.
W. K. Wootters and B. D. Fields, [*Optimal state-determination by mutually unbiased measurements*]{}, Ann. Phys. [**191**]{} (1989) 363.
I. Bengtsson, W. Bruzda, Å. Ericsson, J.-Å. Larsson, W. Tadej, and K. Życzkowski, [*Mutually unbiased bases and Hadamard matrices of order six*]{}, J. Math. Phys. [**48**]{} (2007) 052106.
P. Butterley and W. Hall, [*Numerical evidence for the maximum number of mutually unbiased bases in dimension six*]{}, Phys. Lett. [**A369**]{} (2007) 5.
S. Brierley and S. Weigert, [*Maximal sets of mutually unbiased quantum states in dimension six*]{}, Phys. Rev. [**A78**]{} (2008) 04312.
P. Jaming, M. Matolcsi, P. Móra, F. Sz[ö]{}ll[ő]{}si, and M. Weiner, [*A generalized Pauli problem and an infinite family of MUB-triplets in dimension 6*]{}, J. Phys. [**A42**]{} (2009) 245305.
P. Raynal, X. Lü, and B.-G. Englert, [*Mutually unbiased bases in dimension six: The four most distant bases*]{}, arXiv eprint 1103.1025.
W. M. Kantor, [*MUBs inequivalence and affine planes*]{}, arXiv eprint 1104.3370.
P. Diţă, [*Some results on the parametrization of complex Hadamard matrices*]{}, J. Phys. [**A37**]{} (2004) 5355.
N. Barros e Sá and I. Bengtsson, [*Families of complex Hadamard matrices*]{}, eprint arXiv:1202.1181.
B. Mielnik, [*Quantum theory without axioms*]{}, in C. J. Isham, R. Penrose, and D. W. Sciama (eds.): [*Quantum Gravity II*]{}, Oxford U.P., 1981.
N. Brunner, S. Pironio, A. Acin, and N. Gisin, [*Testing the Hilbert space dimension*]{}, Phys. Rev. Lett. [**100**]{} (2008) 210503.
S. Wehner, M. Christandl, and A. C. Doherty, [*A lower bound on the dimension of a quantum system given measured data*]{}, Phys. Rev. [**A78**]{} (2008) 062112.
C. Fuchs, [*QBism, the perimeter of quantum Bayesianism*]{}, arXiv eprint 1003.5209.
[^1]: Actually the argument in our published paper [@Nuno] is quite a bit stronger than the one I present here: here I tell the story as I knew it during the Växjö meeting.
---
abstract: 'We report results of quantum Monte Carlo simulations of the Bose-Hubbard model in three dimensions. Critical parameters for the superfluid-to-Mott-insulator transition are determined with significantly higher accuracy than it has been done in the past. In particular, the position of the critical point at filling factor $n=1$ is found to be at $(U/t)_{\rm c} = 29.34(2)$, and the insulating gap $\Delta$ is measured with accuracy of a few percent of the hopping amplitude $t$. We obtain the effective mass of particle and hole excitations in the insulating state—with explicit demonstration of the emerging particle-hole symmetry and relativistic dispersion law at the transition tip—along with the sound velocity in the strongly correlated superfluid phase. These parameters are the necessary ingredients to perform analytic estimates of the low temperature ($T\ll \Delta$) thermodynamics in macroscopic samples. We present accurate thermodynamic curves, including these for specific heat and entropy, for typical insulating ($U/t=40$) and superfluid ($t/U=0.0385$) phases. Our data can serve as a basis for accurate experimental thermometry, and a guide for appropriate initial conditions if one attempts to use interacting bosons in quantum information processing.'
author:
- 'B. Capogrosso-Sansone'
- 'N.V. Prokof’ev'
- 'B.V. Svistunov'
title: 'Phase diagram and thermodynamics of the three-dimensional Bose-Hubbard model'
---
Introduction
============
In the past decade, strongly correlated lattice quantum systems have been attracting a lot of interest and effort. Remarkably, simple yet nontrivial models which contain most of the important many-body physics, and which have been known in the theory community for many years, can now be realized and studied experimentally. For the first time, theoretical predictions and experimental data for strongly correlated states can be directly tested against each other in an ideal setup in which all model ingredients are known and controlled.
Experimentally, lattice systems are realized by trapping atoms in an optical lattice, a periodic array of potential wells resulting from the dipole coupling of the atoms to the electric field of the standing electromagnetic wave produced by a laser. Optical lattices are a very powerful and versatile tool. By changing the laser parameters and configuration, the properties and geometry of the optical lattice can be finely tuned [@Jaksch2]. Ultimately, this results in the possibility of controlling the Hamiltonian parameters and exploring various regimes of interest. In particular, ultra-cold Bose atoms trapped in an optical lattice are an experimental realization of the Bose-Hubbard model. The model has been studied in the seminal paper by Fisher, Weichman, Grinstein, and Fisher, Ref. [@Fisher], and its physical realization with ultra-cold atoms trapped in an optical lattice has been envisioned in Ref. [@Jaksch]. A few years later, the Bose-Hubbard system was produced in the laboratory [@Greiner]. Since then, the field has remained very active [@Batrouni; @Bloch_theor; @Isacsson; @Bloch_spatial; @Gerbier1; @Clark], not only because theoretical predictions and experimental techniques still have to be substantially improved to claim quantitative agreement, but also because of new physical applications.
At zero temperature, a system of bosons with commensurate filling factor undergoes a superfluid-to-Mott insulator (SF-MI) quantum phase transition. The ground state of MI can be used in quantum information processing to initialize a large set of qubits (the main remaining challenge is in addressing single atoms to build quantum gates, see Ref. [@Jaksch2] and references therein). Atomic systems in optical lattices have the advantage of being well isolated from the environment. This results in a relatively long decoherence time of the order of seconds [@Jaksch2] and therefore the possibility of building long-lived entangled many-body states. These properties make MI ground states good candidates for building blocks of a quantum computer. Another possible application is in interferometric measurements [@interferometry]. It has been argued [@Burnett1; @Burnett2; @Rodriguez] that using the superfluid-to-Mott-insulator phase transition to entangle and disentangle an atomic Bose-Einstein condensate one can go beyond Heisenberg-limited interferometry.
A system of bosons with short-range repulsive pair interaction trapped in an optical lattice is described by the Bose-Hubbard Hamiltonian: $$H\; =\; -t\sum_{<ij>} b^{\dag}_i\,b_j +\frac{U}{2}\sum_i
n_i(n_i-1) -\sum_i \mu_i n_i\; , \label{BH}$$ where $b^{\dag}_i$ and $b_i$ are the bosonic creation and annihilation operators on the site $i$, $ t$ is the hopping matrix element, $U$ is the on-site repulsion and $\mu_i=\mu-V(i)$ is the sum of the chemical potential $\mu$ and the confining potential $V(i)$. In what follows, we consider bosons on the simple cubic lattice. At zero temperature and integer filling factor, the competition between kinetic energy and on-site repulsion induces the MI-SF transition. When the on-site repulsion is dominating, ${t/U \ll 1}$, the atoms are tightly localized in the MI ground state which is well approximated by the product of local (on-site) Fock states. The Mott state is characterized by zero compressibility originating from an energy gap for particle and hole excitations. When the hopping amplitude is increased up to a certain critical value ${(t/U)_{\rm c}}$, particle delocalization becomes energetically more favorable and the system Bose condenses. In the chemical potential vs. hopping matrix element plane (energies are scaled by $U$), the $T=0$ phase diagram has a characteristic lobe shape [@Fisher], see also Fig. \[phase\_diagram\] below, with the MI phase being inside the lobe (there is one lobe for each integer filling factor). The most interesting region in the phase diagram is the vicinity of the lobe tip, $(\mu=\mu_{\rm c}, \, t=t_{\rm c})$, corresponding to the MI-SF transition in the commensurate system. For other values of $\mu$ or $t$, the SF-MI criticality is trivial and corresponds to the weakly interacting Bose gas at vanishing particle density [@Fisher]. It is straightforwardly described provided the particle (hole) effective mass is known. If, however, one crosses the MI-SF boundary at constant commensurate density (this is equivalent to going through the tip of the lobe at a fixed chemical potential) the long-wave action of the system becomes relativistic and particle-hole symmetric. Now the phase transition is in the four-dimensional U(1) universality class [@Fisher]. It is worth emphasizing that here we have a unique opportunity of a laboratory realization of the non-trivial relativistic vacuum, a sort of a “hydrogen atom” of strongly-interacting relativistic quantum fields. Approaching the critical point from the MI side, one deals with the vacuum that supports massive bosonic particles and anti-particles (particles and holes). On the other side of the transition, the SF vacuum supports massless bosons (phonons) that do not have an anti-particle analog. In principle, one can systematically study universal multiparticle scattering amplitudes of the relativistic quantum field theory in the ultra-cold “supercollider”!
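As a concrete (if crude) illustration of the model (\[BH\]), and not of the worm-algorithm QMC used in this work, one can exactly diagonalize the Hamiltonian for a tiny periodic chain with a truncated local occupation. All names and parameter values below are hypothetical choices of this sketch, and the truncation $n_{\max}$ is an approximation that has to be checked:

```python
import itertools
import numpy as np

def bose_hubbard_matrix(L, n_max, t, U, mu):
    """Dense H for a periodic chain of L sites (use L >= 3), with the local
    occupation truncated at n_max; basis states are occupation tuples."""
    basis = list(itertools.product(range(n_max + 1), repeat=L))
    index = {s: i for i, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for i, s in enumerate(basis):
        # on-site repulsion and chemical potential (diagonal part)
        H[i, i] = sum(0.5 * U * n * (n - 1) - mu * n for n in s)
        # hopping -t b^+_a b_b over both orientations of each bond
        for site in range(L):
            for a, b in ((site, (site + 1) % L), ((site + 1) % L, site)):
                if s[b] > 0 and s[a] < n_max:
                    new = list(s)
                    new[b] -= 1
                    new[a] += 1
                    H[index[tuple(new)], i] += -t * np.sqrt(s[b] * (s[a] + 1))
    return H

# grand-canonical thermal average of the energy at temperature T (units of t = 1)
E = np.linalg.eigvalsh(bose_hubbard_matrix(L=4, n_max=3, t=1.0, U=40.0, mu=20.0))
T = 2.0
w = np.exp(-(E - E[0]) / T)
print((E * w).sum() / w.sum())
```

Such a brute-force check is only qualitative; the quantitative results reported in this paper require the worm-algorithm QMC simulations described below.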
The present study is focused on the three-dimensional (3D) system. To the best of our knowledge, previous systematic studies of the 3D case were limited to the mean-field (MF) [@Fisher] and perturbative methods [@Monien]. In Ref. [@Monien], the authors utilized the strong-coupling expansion to establish boundaries of the phase diagram in the $(\mu /U,\; t/U)$ plane. This approach, based on the small ratio ${zt/U\ll 1}$, where $z=6$ is the coordination number for the simple cubic lattice, works well only in the MI phase in the region far from the tip of the lobe, where the insulating gap is larger than hopping, $\Delta/zt
> 1$. Close to the critical region, where $\Delta/zt \sim 1$, the strong-coupling expansion is no longer valid. We present the results of large-scale Monte Carlo (MC) simulations of the model (\[BH\]) by worm algorithm [@worm]. With precise data for the single-particle Green function, we are able to carefully trace the critical and close-to-critical behavior of the system, and, in particular, produce an accurate phase diagram in the region of small insulating gaps $\Delta \ll t$. Though the corresponding parameter range is quite narrow, it is crucial to clearly resolve it to reveal the emerging particle-hole symmetry and relativistic long-wave physics at the tip of the MI-SF transition. We also present data for the effective masses of particle and hole excitations inside the insulating phase. Close to the MI lobe tip, the data for the dispersion of the elementary excitations are fitted by the relativistic law, in agreement with the theory (this also allows us to extract the value of the sound velocity in the critical region). In the Mott state, the knowledge of gaps and effective masses is sufficient to calculate the partition function in the low temperature limit analytically and to make reliable predictions for the system entropy.
For such applications of the system as quantum information processing and interferometry, controlling the temperature is of crucial importance. Most applications are based on the key property of the good insulating state, which is small density fluctuations in the ground state. At zero temperature fluctuations are of quantum nature and can be efficiently controlled externally through the $t/U$ ratio. At finite temperature, fluctuations are enhanced by thermally activated particle-hole excitations. Only when the temperature is much smaller than the energy gap is the number of excitations exponentially small. To date, there are no available experimental techniques to measure the temperature of a strongly interacting system. For weakly interacting systems, the temperature can be extracted in a number of ways, e.g. from the interference pattern of matter waves [@interference] or the condensate fraction observed after the trap is released and the gas expands freely [@time_of_flight]—these properties are directly related to the momentum distribution function $n(\mathbf{k})$. For strongly interacting systems, both temperature and interaction are responsible for filling the higher momentum states, which makes it hard to extract temperature using absorption imaging techniques.
The results presented in this paper can be used to perform accurate thermometry. Typically, the initial temperature, $T^{\rm (in)}$, (before the optical lattice is adiabatically loaded) is known. By entropy matching one can easily deduce the final temperature of the MI state, $T=T^{\rm (fin)}$, provided the entropy of the MI phase is known. To this end we have calculated the energy, specific heat and entropy of the system in several important regimes which include MI and strongly correlated SF phases. These data can be used to suggest appropriate initial conditions which make the Bose-Hubbard system suitable for physical applications, such as the ones described above.
Another interesting question concerns the nature of inhomogeneous states in confined systems when the MI phase is formed in the trap center. The confining potential provides a scan in the chemical potential of the phase diagram at fixed $t/U$ [@Jaksch]. As one moves away from the trap center the system changes its local state. At zero temperature, the density profile of the system can be read (up to finite-size effects) from the ground state phase diagram. At finite temperature, this is no longer possible. In particular, the liquid regions outside of the MI lobes could be normal or superfluid, depending on temperature.
So far experimental results have been interpreted by assuming that liquid regions are superfluid, but there were no direct measurements or calculations to prove that this was the case. \[Part of the problem is that absorption imaging is sensitive only to $n(\bf k)$, which is the Fourier transform of the single-particle density matrix in the relative coordinate. All parts of the system contribute to $n(\bf k)$ and it is hard to discriminate where the dominant contribution comes from.\] It is almost certain that $T^{\rm (fin)}/T_{\rm c}^{\rm (fin)}$ of the strongly correlated system is higher than $T^{\rm (in)}/T_{\rm
c}^{\rm (in)}$. Indeed, since the entropy of MI at $\Delta \gg T$ is exponentially small, most entropy will be concentrated in the liquid regions. At this point we notice that the transition temperature in the liquid is suppressed relative to the non-interacting Bose gas value $T_{\rm c}^{(0)}\, \approx\,
3.313\, n^{2/3}/m$ by both (i) effective mass enhancement in the optical lattice, $m \to 1/2ta^2$ (here $a$ is the lattice constant), and (ii) strong repulsive interactions in the vicinity of the Mott phase, in fact, $T_{\rm c} \to 0$ at the SF-MI boundary. It seems plausible that the MI phase is always surrounded by a broad normal liquid (NL) region. It may also happen that superfluidity is completely eliminated in the entire sample in the final state. \[Strictly speaking, at $T\ne 0$ the MI and NL phases are identical in terms of their symmetries and are distinguished only [*quantitatively*]{} in the density of particle-hole excitations, i.e. in the Hamiltonian (\[BH\]) the finite-temperature MI is continuously connected without phase transition to NL, see Fig. \[critical\_T\]. For definiteness, we will call NL a normal finite-$T$ state which is superfluid at $T=0$ for the same set of the Hamiltonian parameters.\] Fig. \[critical\_T\] shows the finite-temperature phase diagram for filling factor $n=1$ (we will discuss how we determine the critical temperature in Sec. III). The critical temperature goes to zero sharply, while approaching the critical point. In the limit of $U\rightarrow 0$ the critical temperature is slightly above the ideal-gas prediction ($T=5.591t$ was calculated using the tight binding dispersion relation), as expected (see, e.g. Ref. [@critical_temp]).
![(Color online). Finite-temperature phase diagram at filling factor $n=1$. Solid circles are simulation results (the line is a guidance for the eye), error bars are plotted. $T=5.591t$ is the critical temperature of the ideal Bose gas with the tight binding dispersion relation. At finite, but low enough temperature, the MI domain is loosely defined as the part of the phase diagram to the right of the gray line. The rest of the non-superfluid domain is referred to as normal liquid (NL).[]{data-label="critical_T"}](critical_T.eps)
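The ideal-gas value quoted above ($T=5.591t$ at $n=1$ for the tight-binding band) can be reproduced with a short numerical check; the grid size and bracketing interval are arbitrary choices of this sketch, not parameters from the paper:

```python
import numpy as np
from scipy.optimize import brentq

def ideal_lattice_density(T, t=1.0, M=64):
    """Density of ideal bosons on the simple cubic lattice at mu -> 0^-,
    with eps(k) = 2t * sum_a (1 - cos k_a); a midpoint grid avoids k = 0."""
    k = 2.0 * np.pi * (np.arange(M) + 0.5) / M
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    eps = 2.0 * t * ((1 - np.cos(kx)) + (1 - np.cos(ky)) + (1 - np.cos(kz)))
    return np.mean(1.0 / np.expm1(eps / T))

# solve n(T_c) = 1; a modest grid already lands close to 5.591 t
T_c = brentq(lambda T: ideal_lattice_density(T) - 1.0, 3.0, 8.0)
print(T_c)
```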
The paper is organized as follows: in Sec. II we present results for the ground state phase diagram and effective mass of particle (hole) excitations, at integer filling factor $n=1$. In Sec. III we investigate the thermodynamic properties of the system. We present data for energy, specific heat and entropy and calculate the final temperature of the uniform and harmonically confined system in the limit of large gaps. For the case of trapped system, we also determine the state of the liquid at the perimeter of the trap. Brief conclusions are presented in Sec. IV.
Ground state properties
=======================
This section deals with the results of large-scale Monte Carlo simulations for the ground state phase diagram of the Bose-Hubbard system in three dimensions. Analytical approaches, e.g. the strong coupling expansion, work well in the region where $zt/U \ll 1$ and the system is deep in the MI phase. Under these conditions the kinetic energy term in the Hamiltonian can be treated perturbatively and the unperturbed ground state is a product of local Fock states. In Ref. [@Monien] the authors carried out an expansion, up to the third order in $zt/U$, for the SF-MI boundaries and estimated positions of critical points at the tips of the MI lobes (by extrapolating results to the infinite expansion order). Their results agree with the mean field solution calculated in Ref. [@Fisher], when the latter is expanded up to the third order in $zt/U$ and the dimension of space goes to infinity. As already mentioned, this approach starts failing when $\Delta \sim zt$. Using MC techniques we were able to calculate critical parameters and predict the position of the diagram tip with much higher accuracy: with the worm algorithm (WA) approach the energy gaps can be measured with precision of the order of $10^{-2} t$ [@chains99]. The simulation itself is based on the configuration space of the Matsubara Green function $$G(i,\tau) \; =\; \langle \: {\cal T}_{\tau} \:
b^{\dag}_{i}(\tau )\, b_0^{\:}(0) \: \rangle \; , \label{G}$$ which is thus directly available. We utilize the Green function to determine dispersion relations for particle and hole excitations at small momenta \[from the exponential decay of $G(\textbf{p},
\tau )$ with the imaginary time\] which directly give us the energy gap and effective masses.
Recall that in the momentum space the Green function of a finite size system $G(\textbf{p},\tau)$ is different from zero only for $\textbf{p}=\textbf{p}_{\bf m} = 2\pi(m_x/L_x,\: m_y/L_y,
\:m_z/L_z )$, where $L_{\alpha=x,y,z}$ is the linear system size in direction $\alpha$ (we performed all simulations in the cubic system with $L_{\alpha}=L$), and $~\mathbf{m}=(m_x,m_y,m_z)~$ is an integer vector. Using the Lehmann expansion and extrapolation to the $\tau \to \pm \infty$ limit one readily finds that
$$G(\mathbf{p},\tau)\; \to\; \left\{
\begin{array}{l}
Z_+e^{-\epsilon_+(\mathbf{p})\tau
}\; , ~~~~~~~\tau \, \to\, +\infty \; , \\
Z_-e^{\epsilon_-(\mathbf{p})\tau }\; , ~~~~~~~~~\tau \, \to\,
-\infty \; .\end{array} \right. \label{G2}$$
The two limits describe single-particle/hole excitations in the MI phase. Here $Z_{\pm}$ and $\epsilon_{\pm}$ are the particle/hole spectral weight (or $Z$-factors) and energy, respectively. In the grand canonical ensemble, excitation energies are measured relative to the chemical potential. With this in mind, calculating the phase diagram of the system is rather straightforward. At a fixed number of particles $N=L^3$ and $t/U$ ratio one determines chemical potentials $\mu_{\pm}$ for which the energy gap for creating the particle/hole excitation with $\mathbf{p}=0$ vanishes. The insulating gap is given then by $\Delta = \mu_+ -\mu_-$. For high precision simulations of the gap one has to choose the value of $\mu$ very close to $\mu_{\pm}$ and consider finite, but zero for all practical purposes, value of temperature so that the following two conditions are satisfied: $$|\mu-\mu_{\pm}| \ll t\;, \;\;\;\;\;\; |\mu-\mu_{\pm}| \gg T \;.
\label{conditions}$$ This is exactly how we proceed. By plotting $\ln
[G(\mathbf{p},\tau)]$ vs. $\tau$ we deduce $\epsilon_{\pm}(\mathbf{p})$ from the exponential decay of the Green function. A typical example is shown in Fig. \[green\_function\]. We use the values of the hopping amplitude $t$ and the lattice constant $a$ as units of energy and distance, respectively.
![(Color online). Zero-momentum Green function in the Mott phase with the chemical potential $\mu/U =0.809$, slightly below the upper phase boundary. Here we show data for the system with $N=10^3$ bosons at $U/t=70$ and $T/t=0.025$. In the inset we plot the energy gap $\Delta$ for linear system sizes L=10 and L=20. Finite-size errors are within the statistical error bars.[]{data-label="green_function"}](green_function.eps){width="3.3in" height="2.9in"}
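The fitting step just described reduces to a straight-line fit of $\ln [G(\mathbf{p},\tau)]$ over a window of large $\tau$; a minimal sketch with synthetic data (the decay rate, noise level and fit window are made up for illustration; in practice one uses the measured Green function and checks stability against the window choice):

```python
import numpy as np

def excitation_from_green(tau, G, window):
    """Fit G(p, tau) ~ Z * exp(-eps * tau) on tau in [window[0], window[1]];
    returns (eps, Z)."""
    m = (tau >= window[0]) & (tau <= window[1])
    slope, intercept = np.polyfit(tau[m], np.log(G[m]), 1)
    return -slope, np.exp(intercept)

tau = np.linspace(0.0, 20.0, 201)
G = 0.9 * np.exp(-0.35 * tau) + 1e-4 * np.random.rand(tau.size)   # fake data
eps, Z = excitation_from_green(tau, G, window=(5.0, 15.0))
```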
In Fig. \[phase\_diagram\] we present accurate results for the boundaries of the first Mott insulator lobe. We have done calculations for systems with linear sizes $L= 5,\: 10,\: 15,\: 20$. Up to values of $t/U\sim 0.031$ no size effects were detected within the error bars. \[Here and throughout the paper error bars are of two standard deviations\]. In the critical region the finite-size effects were eliminated using standard scaling techniques (see below). The dashed lines are the prediction of Ref. [@Monien] based on the third-order expansion in $t/U$. It becomes inaccurate quite far from the tip when the insulating gap is about $\sim 6t$. On the other hand, the value of the tip position extrapolated to the infinite order is right on target, within the error bar of order $3t$ for the chemical potential and on-site repulsion. In all simulations (performed at $t/T=40$) the finite-temperature effects are negligible—the system is essentially in its ground state.
To eliminate finite-size effects in the critical region and pinpoint the position of the lobe tip, we employed standard scaling techniques based on the universality considerations.
First, let us briefly review the universal properties of the insulator-to-superfluid transition (see Ref. [@Fisher] for more details). There exist two types of transitions: the “generic" transition, when the phase boundary is crossed at fixed *t/U*, and a special transition at fixed integer density, when the SF-MI boundary is crossed at fixed ${\mu/U}$. The generic transition is driven by the addition/subtraction of a small number of particles, and is fully characterized by the physics of the weakly-interacting Bose gas formed by the small incommensurate density component $n-n_0$, where $n_0$ is the nearest integer to $n$. In particular, if $\delta $ is the deviation from the generic critical point in the chemical potential or $t/U$ ratio then $|n-n_0|\sim \delta$ and $T_{\rm c}(\delta ) \sim \delta ^{2/3}$ in the SF phase.
The special transition at the tip of the lobe happens at fixed integer density. It is driven by delocalizing quantum fluctuations which for large values of [*t*]{}/U enable bosons to overcome the on-site repulsion and hop within the lattice. As explained in Ref. [@Fisher], the effective action for the special transition belongs to the $(d+1)$-dimensional XY universality class which implies emergent relativistic invariance (rotational invariance in the imaginary-time–space, which is equivalent to the Lorentz invariance in real-time–space), and, in particular, an emergent particle-hole symmetry. The upper critical dimension for this transition is $(d+1)=4$, so that for $d\geq 3$ the critical exponents for the order parameter, $\beta $, and the correlation length, $\nu$, are of mean-field character: $\beta=\nu=1/2$ (with logarithmic corrections for $d=3$). In this study, we were not able to resolve logarithmic renormalizations for realistic 3D systems and proceed below with the analysis which assumes mean-field scaling laws.
Denoting the distance from the critical point as $\gamma=[(t/U)_{\rm c}-t/U]$, for a system of linear size L one can write $$\Delta (\gamma ,L)\; =\; \xi^{-1} f(\xi/L) \; =\; L^{-1}g(\gamma
L^2) \; , \label{delta}$$ where $\Delta$ is the particle-hole excitation gap, $\xi$ is the correlation length, and $f(x)$ and $g(x)$ are the universal scaling functions. In the last expression we have used the relation $\xi \propto \gamma^{-1/2}$. At the critical point, the product $L\Delta $ does not depend on the system size. Therefore, by plotting $L\Delta $ as a function of $t/U$ one determines the critical point from the intersection of curves referring to different values of L, as shown in Fig. \[critical\_point\]. This analysis yields (Fig. \[critical\_point\_extrap\] explains how finite-size effect in the position of the crossing point originating from corrections to scaling was eliminated) $$(t/U)_{c}\; =\; 0.03408(2)~~~~~~~~~~~~(n\; =\; 1) \; . \label{}$$
![(Color online). Finite size scaling of the energy gap at the tip of the lobe. $\Delta L/t$ vs. $t/U$ for system size $L$=5 (solid squares), $L$=10 (open circles), $L$=15 (solid circles), $L$=20 (open squares). Lines represent linear fits used to extract the critical point.[]{data-label="critical_point"}](critical_point.eps){width="3.0in" height="2.3in"}
![ Extrapolation to the thermodynamic limit. We show the intersections (triangles) of the curves (L=5, L=10), (L=10, L=15), (L=15, L=20), vs. $L_{\rm max}^{-2}$. The fit (solid line) yields $(t/U)_{\rm c}=0.03408(2)$.[]{data-label="critical_point_extrap"}](critical_point_extrap.eps){width="3.0in" height="2.3in"}
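Operationally, the analysis of the last two figures amounts to intersecting nearly linear curves of $\Delta L/t$ versus $t/U$ for pairs of sizes, and then extrapolating the crossings linearly in $L_{\rm max}^{-2}$. A sketch (assuming the curves are well described by straight lines over the fitted window, as in the figures; function names are mine):

```python
import numpy as np

def crossing(x, y1, y2):
    """Intersection of two nearly linear data sets y1(x), y2(x),
    e.g. L*Delta/t for two system sizes plotted against t/U."""
    a1, b1 = np.polyfit(x, y1, 1)
    a2, b2 = np.polyfit(x, y2, 1)
    return (b2 - b1) / (a1 - a2)

def extrapolate_critical_point(L_max, x_cross):
    """Linear extrapolation of the pairwise crossings in 1/L_max^2,
    as done in the extrapolation figure above."""
    slope, intercept = np.polyfit(1.0 / np.asarray(L_max, dtype=float) ** 2, x_cross, 1)
    return intercept   # estimate of (t/U)_c in the thermodynamic limit
```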
Our final results are summarized in Fig. \[phase\_diagram\]. We find that the size of the critical region where 4D XY scaling laws apply is narrow and restricted to small gaps of the order of $\Delta \le t$ (inside the vertical error bar on the strong-coupling expansion result in Fig. \[phase\_diagram\]). It appears that resolving this limit experimentally would be very demanding.
To perform analytic estimates of the MI state energy (and entropy) at low temperature $T \ll \Delta $ one has to know effective masses of particle and hole excitations, $m_{\pm}$. For example, the particle/hole contributions to energy in the grand canonical ensemble are given by the sums $$E_\pm\; =\; \sum_{\mathbf k} \epsilon_\pm ({\mathbf k})\:
n_{\epsilon} \approx \left( {L \over 2\pi } \right)^{3} \int
d\mathbf{k}\: \epsilon_\pm ({\mathbf k}) \:e^{-\epsilon_\pm
({\mathbf k})/T}\; , \label{E1}$$ where $n_{\epsilon}$ is the Bose function and $$\epsilon_\pm({\mathbf k})\; \approx \; \pm(\mu_\pm-\mu)
+k^2/2m_\pm \label{dispersion}$$ For large gaps the tight binding approximation $$\epsilon_\pm({\mathbf k})\; \approx\; \pm \, (\mu_\pm-\mu)\, +\,
\sum_{\alpha=x,y,z}\,{1-\cos k_\alpha\over m_\pm}
\label{dispersion2}$$ is a reasonable approximation for all values of ${\bf k}$ in the Brillouin zone. Note that, if one is to use the local density approximation (LDA) for the energy/entropy estimates of trapped systems, then calculations have to be performed in the grand canonical ensemble.
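In practice, eq. (\[E1\]) with the tight-binding dispersion (\[dispersion2\]) is just a Brillouin-zone sum. A sketch of the energy per lattice site in the Boltzmann approximation used in the text (the grid size and argument names are mine; energies in units of $t$, masses in units of $1/t$):

```python
import numpy as np

def quasiparticle_energy_per_site(T, gap_shift, mass, M=64):
    """Boltzmann-gas estimate of E_+/L^3 (or E_-/L^3) from eqs. (E1), (dispersion2);
    gap_shift is +-(mu_pm - mu) in units of t, mass is m_pm in units of 1/t."""
    k = 2.0 * np.pi * np.arange(M) / M
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    eps = gap_shift + ((1 - np.cos(kx)) + (1 - np.cos(ky)) + (1 - np.cos(kz))) / mass
    return np.mean(eps * np.exp(-eps / T))
```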
To determine effective masses we computed $G(\mathbf p , \tau)$ in the insulating state and deduced $\epsilon_\pm ({\mathbf p})$ for several lowest momenta from the exponential decay of the Green function on large time scales. Dispersion laws were then fitted by a parabola, with the exception for the diagram tip, where the dispersion relation is relativistic. The result for $m_\pm$ is shown in Fig. \[eff\_mass\]. When $t/U \rightarrow 0$ one can calculate effective masses perturbatively in $t/U$ to get $$t\, m_{+} \; =\; 0.25 - 3t/U \; ,~~~~t\, m_{-}\; =\; 0.5 -
12t/U \; . \label{masses}$$ Clearly, our data are converging to the analytical result as $\emph{t/U}\rightarrow 0$. On approach to the critical point the effective mass curves become identical for particles and holes indicating that there is an emergent particle-hole symmetry at the diagram tip. In agreement with the theoretical prediction, the data taken at $\emph{t/U}=0.034$ are fitted best with the relativistic dispersion relation $\varepsilon(p)=c\sqrt{m_*^2 \, c^2 +p^{2}}$, where $c$ is the sound velocity and the effective mass is defined as $m_*=\Delta/2c^2$. At this value of $t/U$ we have found that $c/t\; =\; 6.3\pm 0.4$ and $ t\, m_*\; =\; 0.010\pm 0.004$.
![(Color online). Effective mass for hole (solid circles) and particle (open circles) excitations as a function of $t/U$. The exact results at $t/U=0$ are $m_{+} =0.25/t$ and $m_{-}
=0.5/t$. By dashed lines we show the lowest order in $t/U$ correction to the effective masses. Close to the critical point the two curves overlap, directly demonstrating the emergence of the particle-hole symmetry. At $t/U=0.034$, the sound velocity is $c/t=6.3\pm 0.4$.[]{data-label="eff_mass"}](eff_mass.eps){width="3.4in"}
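The fit quoted above is an ordinary nonlinear least-squares fit of $\varepsilon(p)=c\sqrt{m_*^2 \, c^2 +p^{2}}$ to the lowest momenta. A sketch with synthetic data (the noise level and initial guess are arbitrary; the latter is chosen near the quoted values only for convenience):

```python
import numpy as np
from scipy.optimize import curve_fit

def relativistic(p, c, m_star):
    """Dispersion expected at the tip of the Mott lobe."""
    return c * np.sqrt((m_star * c) ** 2 + p ** 2)

p = 2.0 * np.pi * np.arange(4) / 20.0                                       # lowest momenta, L = 20
eps = relativistic(p, 6.3, 0.010) * (1 + 0.01 * np.random.randn(p.size))    # fake data
(c_fit, m_fit), cov = curve_fit(relativistic, p, eps, p0=(6.0, 0.01))
```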
Finite temperature analysis
===========================
Controlling the temperature is an important experimental issue, crucial for many applications of cold atomic systems and studies of quantum phase transitions. In this section, we discuss thermodynamic properties of the Bose-Hubbard model. We present data for energy, specific heat and entropy, for some specific cases. In particular, we focus on the most important $\langle
n\rangle \, \approx\, 1$ situation. Our data can be used in two ways: (i) to understand limits of applicability of the semi-analytic approach (with calculated effective parameters) discussed above, and (ii) to have reference first-principle curves for more refined numerical analysis. Unfortunately, a direct simulation of a realistic case in the trap, i.e. with a number of particles similar to that in experiments, is still a challenging problem, though simulations of about $10^5$ particles or more at low temperature seem feasible in the near future.
The results are organized as follows: in Subsection A we probe the limits of applicability of semi-analytic predictions in the Mott state. In Subsection B we calculate the entropy of the Bose-Hubbard model, compare it to the initial entropy (i.e. before the optical lattice is turned on) and estimate the final temperature. We consider both a homogeneous system in the MI and SF states and a system of $N\sim 3\cdot10^4$ particles in a trap.
Comparison with low T semi-analytic predictions
-----------------------------------------------
Away from the tip of the lobe, in the MI state, semi-analytic predictions are reliable provided the temperature is low enough, i.e. $T\ll\Delta$. In other words, there exists a range of temperatures, defined as $T\lesssim const\cdot \Delta$, where the quasi-particle excitations can be successfully described as a non-interacting classical gas (see Eq. (\[E1\])). The value of the constant depends on $\Delta$, as the following two examples demonstrate.
![(Color online). Energy per particle at $t/U=0.005$, unity filling factor and linear system size $L=20$. Solid circles are prime data (error bars within symbol size), the solid line is the analytical prediction from Eq. \[E1\], where, at each temperature, the chemical potential has been fixed by imposing equal number of particle and hole excitations. The inset shows the total number of excitations present in the system.[]{data-label="energy_U100_uniform"}](energy_U100_uniform.eps){width="1.1\columnwidth"}
Let us first consider larger gaps, e.g. $\Delta \sim 200 t$, for which we have found that the low temperature analytic predictions reproduce numerical data very well. In Fig. \[energy\_U100\_uniform\] we plot the energy per particle in the low temperature regime for $\Delta = 181.6 t$. The analytic prediction from Eq. (\[E1\]), where, at any given temperature, the chemical potential has been chosen by setting equal the total number of particle and hole excitations (as it is done for intrinsic semiconductors), is reliable up to temperatures $T\lesssim 35t$, i.e. $T\lesssim 0.15\Delta$. In the inset we plot the average number of particle-hole excitations. This number increases rapidly with temperature justifying the grand-canonical calculation for the quasi-particle gas (at fixed total number of particles). The quasi-particle number density is $\sim 5\%$ at $T\sim35t$. Apparently, for higher temperatures the ideal gas picture is no longer valid as it crosses over to that of the strongly correlated normal liquid. We conclude that for large enough gaps and $T\lesssim 0.15\Delta$, one can rely on low temperature analytical predictions to do thermometry.
For smaller gaps, in contrast, we do not find any interesting region (i.e., one where temperature effects are visible) in which the classical description is valid. In Fig. \[entropy40\] we show the energy per particle as a function of temperature for $t/U=0.025$ (the groundstate is MI with the energy gap $\Delta =
18.35 t$). To get the specific heat and entropy, we first use spline interpolation of the energy data points to obtain a smooth curve $E(T)$. The specific heat is then obtained by differentiating the spline. The maximum in the specific heat is reached when temperature is about half the energy gap. The entropy has been calculated by numerical integration of $c_{V}/T$. In order to see any temperature effect one has to go as high as $T\sim 2.5t$; at these temperatures the classical description is already no longer applicable and one has to rely on numerical data to do thermometry.
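The spline-differentiate-integrate procedure just described can be written in a few lines; a sketch (assuming the tabulated $E(T)$ already starts deep in the regime where $c_V$ is negligible, so the missing $0<T<T_{\min}$ piece of the entropy integral can be dropped; the smoothing parameter is a user choice):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def cv_and_entropy(T, E, smoothing=0.0):
    """c_V = dE/dT from a smoothing spline of the energy-per-particle data,
    then S(T) = integral of c_V/T' dT' by the trapezoidal rule."""
    cV = UnivariateSpline(T, E, s=smoothing).derivative()(T)
    integrand = cV / T
    S = np.concatenate(([0.0],
                        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))))
    return cV, S
```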
Loading the optical lattice: estimate of $T^{(\rm fin)}$ from entropy matching
------------------------------------------------------------------------------
The standard approach to convert results obtained for a homogeneous system into predictions for systems in external fields is the so-called local density approximation (LDA), which is actually a local chemical potential approximation when the density at the site $i$ is identified with the density of the homogeneous system with the chemical potential $$\mu_{i}^{\rm (eff)}\; =\; \mu \, - \, V(i)\; . \label{effective
mu}$$ In strongly interacting regimes with a short healing and correlation length, the LDA approach can be easily justified in most cases (critical regions of phase transitions excluded). In Ref. [@Wessel], the authors directly compare simulation results for 1D and 2D harmonically trapped systems with LDA predictions based on the known homogeneous-system phase diagram. As expected, the density profiles differ only at the MI-SF interface, and we believe that the same will be true for the 3D case, which is more “mean-field-like”.
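A schematic of the LDA mapping, Eq. (\[effective mu\]): given a homogeneous equation of state $n(\mu)$ (in practice an interpolation of QMC data), the density profile in the trap follows from the local chemical potential. Everything here, including the placeholder callable `n_homogeneous`, is illustrative:

```python
import numpy as np

def lda_profile(r, mu, trap_curvature, n_homogeneous):
    """Density profile n(r) = n_hom(mu - V(r)) with V(r) = trap_curvature * r**2 / 2;
    n_homogeneous is any callable returning the homogeneous density at a given mu."""
    mu_eff = mu - 0.5 * trap_curvature * np.asarray(r) ** 2
    return n_homogeneous(mu_eff)
```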
When the semi-analytic predictions are reliable (see Fig. \[energy\_U100\_uniform\]), one can use numerical results for the effective masses and gaps to calculate the entropy of the homogeneous quasi-particle gas with the tight-binding dispersion relation. The entropy is given by: $$S\; =\; -\frac{V}{(2\pi)^{3}}\int d^{3}\mathbf{k}\,
\frac{\partial[\Omega_{+}(\mathbf k)+\Omega_{-}(\mathbf
k)]}{\partial T}\; , \label{entropy_Mott}$$ where $$\Omega_{\pm}(\mathbf k)\; =\; T\, \ln \left(
1-\exp\left(-\frac{\epsilon_{\pm}(\mathbf k)}{T}\right)\right) \; .
\label{Gibbs_energy}$$
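Per lattice site, eqs. (\[entropy_Mott\])–(\[Gibbs_energy\]) again reduce to a Brillouin-zone sum. A sketch using the standard per-mode identity $s=(\epsilon/T)\,n_B(\epsilon)-\ln(1-e^{-\epsilon/T})$ and the tight-binding dispersion (\[dispersion2\]); it is valid only while both gaps $\mu_+-\mu$ and $\mu-\mu_-$ are positive, i.e. inside the Mott lobe, and the grid size is an arbitrary choice:

```python
import numpy as np

def mott_entropy_per_site(T, gap_p, gap_h, mass_p, mass_h, M=64):
    """Entropy per site of the ideal particle/hole gas with dispersion (dispersion2);
    gap_p = mu_+ - mu, gap_h = mu - mu_-, masses in units of 1/t."""
    k = 2.0 * np.pi * np.arange(M) / M
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    band = (1 - np.cos(kx)) + (1 - np.cos(ky)) + (1 - np.cos(kz))
    s = 0.0
    for gap, mass in ((gap_p, mass_p), (gap_h, mass_h)):
        eps = gap + band / mass
        n = 1.0 / np.expm1(eps / T)                  # Bose occupation
        s += np.mean((eps / T) * n - np.log1p(-np.exp(-eps / T)))
    return s
```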
As an example, consider a uniform weakly interacting Bose gas (WIBG) of $^{87}$Rb with the gas parameter $na_{s}^{3}\sim
10^{-6}$, which is loaded into an optical lattice with $\lambda=840$nm and $t/U=0.005$. At low enough temperature, $T\lesssim
0.3T_{\rm c}$, one can calculate the initial (prior to imposing the lattice) entropy of the system using the Bogoliubov spectrum. In Fig. \[entropy\_uniform\] we plot the entropy per unit volume before and after the optical lattice is imposed. The bottom *x* axis is temperature in units of the critical temperature of the WIBG, the top *x* axis is temperature in units of $\emph{t}$, the hopping matrix element. The dashed line is the entropy of the WIBG, the solid and dashed-dotted lines represent the entropy of the Bose-Hubbard model calculated starting from the numerical results of Fig. \[energy\_U100\_uniform\] and analytical predictions of Eq. (\[entropy\_Mott\]), respectively. If the system was prepared at $T\sim 0.25T_{\rm c}$, the final temperature would be $T\sim22t$. Fig. \[energy\_U100\_uniform\] shows that at this temperature the system is quite far from its ground state and the number density of thermally activated particle-hole excitations is $\sim1\%$. The circumstances of this kind become crucial if one is to use the system in quantum information processing. This example is also illustrative of how numerical data can be used to suggest appropriate initial conditions.
![(Color online). Entropy per particle at $t/U=0.005$, unity filling factor and linear system size $L=20$. The dashed line is the entropy of the uniform WIBG. The solid line is the result of analytical derivation/integration of numerical data for energy and the dashed-dotted line (barely visible) is the analytical prediction, Eq. (\[entropy\_Mott\]), where, at each temperature, the chemical potential has been fixed by the condition of having equal number of particle and hole excitations. If the system was initially cooled down to $T^{\rm in}=0.25T_{\rm c}$, the final temperature is $T^{\rm fin}=22t$ and nearly a hundred of thermally activated particle-hole excitations are present in the final state (see inset in Fig. \[energy\_U100\_uniform\]).[]{data-label="entropy_uniform"}](entropy_U100_uniform.eps){width="1.15\columnwidth"}
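Once $S(T)$ of the lattice system is tabulated (from the numerical data or from the expressions above), the entropy-matching step itself is a one-dimensional root-finding problem. A sketch (it assumes $S$ is monotonic on the tabulated range and that the initial entropy lies inside that range):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.interpolate import interp1d

def final_temperature(S_initial, T_grid, S_grid):
    """Solve S_lattice(T_fin) = S_initial for the temperature reached after
    adiabatically loading the optical lattice."""
    S_of_T = interp1d(T_grid, S_grid)
    return brentq(lambda T: float(S_of_T(T)) - S_initial, T_grid[0], T_grid[-1])
```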
Now we turn to a more realistic case of a confined system and use LDA to convert results for the uniform system into predictions for the inhomogeneous one. In what follows, we consider a gas of $N\sim3\cdot10^{4}$ $^{87}$Rb atoms, magnetically trapped in an isotropic harmonic potential of frequency $2\pi\times 60$ Hz. Experiments with such a number of particles were recently performed [@Gerbier1]. With this geometry, the parameter $\eta =1.57(N^{1/6}a_{s}/a_{ho})^{2/5}$ (see Ref. [@Giorgini]) is $\sim 0.33$, which is a typical value in current experiments. For temperatures in the range $\mu<T<T_{\rm
c}$, where $T_{\rm c}$ is the critical temperature of the harmonically trapped ideal gas, one can accurately calculate energy using the Hartree-Fock [@Hartee-Fock] mean field approach [@Stringari]: $$\frac{E}{NT_{c}}=\frac{3\zeta(4)}{\zeta(3)}t^{4}+\frac{1}{7}\eta(1-t^{3})^{2/5}(5+16t^{3}),
\label{energy_trap}$$ where *t* is the reduced temperature $T/T_{\rm c}$. At very low T ($T<\mu$), Eq. (\[energy\_trap\]) misses the contribution coming from collective excitations. We are interested in initial temperatures $T\sim0.2-0.3 T_{\rm c}$, which are feasible in current experiments [@Gerbier]. Starting from Eq. (\[energy\_trap\]), we calculate the entropy of the BEC initially prepared in the magnetic trap. After the optical lattice is adiabatically turned on, the magnetic potential provides a scan over the chemical potential of the homogeneous system \[see Eq. (\[effective mu\])\].
A direct comparison with experiments at fixed number of particles would require to calculate $\mu(T)$ from the normalization condition. At low temperatures, one expects the dependence of the chemical potential on temperature to be weak (this will be confirmed by direct simulations, see below). For simplicity, we fix $\mu$ at a value corresponding to $N=30^{3}$ trapped atoms in the first Mott lobe, at zero temperature. From this point we proceed in two directions. On one hand, we analytically calculate the low temperature contribution to energy and entropy arising from particle and hole excitations in the trapped MI state. On the other hand, we directly simulate the thermodynamics of the inhomogeneous system at a fixed chemical potential. The results are shown in Fig. [\[energy\_harmonic\]]{}, where we plot the energy per particle, counted from the ground state. The solid circles are data from the simulations (error bars are plotted), the solid line is the (analytically calculated) contribution of the particle and hole excitations. The inset shows the low temperature region. A large mismatch between the two results indicates that the main contribution to energy is given by the liquid at the perimeter of the trap. At zero temperature, there are about $N\sim29000$ particles in the trap, $7\%$ of which are not in the MI state (recall that $\mu$ has been determined by placing $30^{3}$ particles in the MI state). Simulation results show that, in the range of temperatures considered, the total number of particles increases by $0.7\%$ at most, which confirms the weak temperature dependence of the chemical potential. In addition, for $T=8t$, we performed a simulation with $N$ fixed at the [*groundstate*]{} value. The energies per particle in the canonical and grand canonical simulations differ by $0.3\%$ only, therefore we proceed calculating the entropy in the canonical ensemble and compare it with the initial entropy of the system, at a fixed particle number.
We are in a position to address the question of what is the final temperature of the system after the optical lattice is turned on and the final state is MI with the exception of a small shell at the trap perimeter. In Fig. \[entropy\_trap\] we plot the entropy of the trapped WIBG with (solid line) and without (dashed line) the optical potential. If the initial system is cooled down to temperatures $T^{(\rm in)}\sim 0.25T_{\rm c}$, see, e.g., Ref. [@Gerbier], the final temperature will be $T^{(\rm
fin)}=(2.35\pm0.30)t$.
With the initial conditions considered in this example, what is the final state of the liquid at the perimeter of the trap? Before answering this question, we would like to recall that, along the MI-SF transition lines, the critical temperature for the normal-to-superfluid transition is zero. The transition temperature increases as one moves away from the border of the Mott lobes (lowering $\mu$ at fixed $t/U$ in our case) and the quasi-particle density increases until it reaches its maximum at about $n\approx 1/2$ and then decreases. The maximum $T_{\rm c}$ can be estimated from the ideal Bose gas relation $T^{(0)}_{\rm
c}=3.313n^{2/3}/m^{*}=4.174t$, with $n=0.5$ and $m^{*}=1/2t$, but interaction effects are likely to reduce this value. We have performed simulations at half filling factor and fixed $t/U=0.005$, and found the critical temperature to be $T_{\rm
c}(n=1/2)=2.09(1)t$. As a consequence, for the chosen initial conditions, we can conclude that the final state of the liquid at the perimeter of the trap is normal and it gives the main contribution to the entropy. For such low final temperature, the contribution to the entropy per particle due to thermally activated excitations in the MI state is only 10%. The largest chemical potential is, in fact, deep in the first Mott lobe, and the energy required to introduce an extra particle or hole is much larger than $T$. Most excitations are located at the perimeter of the Mott state in a narrow shell of radius $R$ and width $\sim
0.05R$.
![(Color online). Energy per particle at $t/U=0.005$, $\mu=116.5t$ and trap frequency $\omega=2\pi 60 Hz$. Solid circles are numerical data, the solid line is the energy of particle and hole excitations in MI deduced from Eq. (\[E1\]). The inset shows a zoom of the low temperature range. At $T\sim2.3t$ the contribution of MI excitations to energy is $\sim 10\%$ only.[]{data-label="energy_harmonic"}](energy_harmonic.eps){width="1.0\columnwidth"}
![(Color online). Entropy per particle for the same system as in Fig.\[energy\_harmonic\]. The dashed line is the entropy of the trapped WIBG, before the optical lattice is loaded; the solid line is the final entropy. If the system was initially cooled down to $T^{(\rm in)}=0.25T_{\rm c}$, the final temperature is $T^{(\rm
fin)}=(2.35\pm0.3)t$ and the liquid at the perimeter of the trap is normal (see text). []{data-label="entropy_trap"}](entropy_U100_trap.eps){width="1.0\columnwidth"}
Retrieving the same information for experiments using a larger number of particles, e.g. $10^{5}$–$10^{6}$, by direct simulation is still computationally challenging. In order to use LDA, one should study the uniform system, scanning through the chemical potential. As our last example, we consider a uniform system which is in the correlated SF ground state. Fig. \[entropy26\] shows data for $E$, $c_{\rm V}$, and $S$ for $t/U=0.0385$ and unity filling factor, close to the MI-SF transition. The system stays in its ground state for $T \ll 2t$ (in finite systems, the energy of the lowest mode is finite: $E_{\rm min}=cp_{\rm min}$, with $c\thickapprox 6t$ and $p_{\rm
min}=2\pi/L$). The specific heat and entropy are calculated as described for Fig. \[entropy40\]. We were not able to resolve the SF-NL transition temperature from this set of data alone: numerical data corresponding to system sizes $L=10$ and $L=20$ overlap within error bars and we did not see any feature at the critical temperature. This is not surprising since the specific heat critical exponent $\alpha$ is very small, and it is thus very difficult to resolve the singular contribution and finite size effects in energy and specific heat. For a system of linear size $L$, the singular part of the specific heat, $c_{\rm V}^{(\rm
s)}$, can be written as $$c_{V}^{(s)} (t ,L)\; =\; \xi^{\alpha/\nu} f_c(\xi/L)\; =\;
L^{\alpha/\nu}g_c(t L^{1/\nu}) \; , \label{spec_heat scaling}$$ where $t=(T-T_{c})/T_c$, $\alpha\approx-0.01$, $\nu=(2-\alpha)/3$, $\xi$ is the correlation length and $f_c(x)$ and $g_c(x)$ are the universal scaling functions. At the critical point, finite size effects for the two system sizes considered are $\sim 1\%$, within error bars.
![(Color online). Finite size scaling of the superfluid stiffness. $n_s L$ vs. $T/t$ for system size $L$=5 (circles), $L$=10 (squares), $L$=20 (triangles). We estimate the critical temperature to be $T_{\rm c}=3.25(1)t$, already nearly half the non-interacting gas value.[]{data-label="SF_density"}](SF_density.eps){width="0.99\columnwidth"}
The critical temperature was extracted from data for the superfluid stiffness. The scaling of the superfluid stiffness at the critical temperature is $n_s\varpropto|t|^{\nu}$. This allows one to accurately estimate the critical temperature from $$n_{s} (t ,L) = \xi^{-1}f_s(\xi/L)\; =\; L^{-1}g_s(t L^{1/\nu}) \;
, \label{SF_density scaling}$$ by plotting $n_sL$ vs. $T$ as shown in Fig. \[SF\_density\]. From the data taken for system sizes $L=5$, $L=10$, $L=20$, we estimate the critical temperature to be $T_c=3.25(1)t$. At this temperature, the entropy per particle is $\sim0.195$, or, translating to entropy density in physical units, $3.6\cdot10^{-5}\,{\rm J\,K^{-1}m^{-3}}$, which corresponds to an initial temperature $\sim0.35T_{\rm c}^{(\rm in)}$. Therefore it seems plausible to reach $T_{\rm c}$ experimentally.
Conclusions
===========
We have performed quantum Monte Carlo simulations of the three dimensional homogeneous Bose-Hubbard model. We were able to establish the phase diagram of the MI-SF transition with the record accuracy $\sim 0.1{\%}$ and determine the size of the fluctuation region in the vicinity of the diagram tip where universal properties of the relativistic effective theory can be seen. Comparison with the strong-coupling expansion shows that the latter works well only for sufficiently large insulating gaps $\Delta > 6t$ outside of the fluctuation region. We have studied the effective masses of particle and hole excitations along the MI-SF boundary. Our results directly demonstrate the emergence of the particle-hole symmetry at the diagram tip, and provide base for accurate theoretical estimates of the MI thermodynamics at low and intermediate temperatures.
We have studied thermodynamic properties of the superfluid and insulating phases at fixed particle number for the uniform case. These data can be used to make predictions for the inhomogeneous system using the local density approximation. We have shown that for large enough gaps the low temperature analytical predictions agree with numerical data. By entropy matching, we have calculated the final temperature of the system (after the optical lattice is adiabatically loaded) for both the uniform and the magnetically trapped system, at $t/U=0.005$. We have performed direct simulations of a trapped system, using typical experimental values for the magnetic potential and number of particles. For the initial conditions considered, we found the final temperature and demonstrated that the main contribution to the entropy comes from the liquid at the perimeter of the trap. We have calculated the normal-to-superfluid transition temperature at half filling and concluded that the liquid at the perimeter is normal.
Acknowledgements
================
We are grateful to Matthias Troyer for useful discussions. This work was supported by the National Science Foundation Grant No. PHY-0426881.
---
abstract: 'We generalize the thermal pure quantum (TPQ) formulation of statistical mechanics, in such a way that it is applicable to systems whose Hilbert space is infinite dimensional. Assuming particle systems, we construct the grand-canonical TPQ (gTPQ) state, which is the counterpart of the grand-canonical Gibbs state of the ensemble formulation. A single realization of the gTPQ state gives all quantities of statistical-mechanical interest, with exponentially small probability of error. This formulation not only sheds new light on quantum statistical mechanics but also is useful for practical computations. As an illustration, we apply it to the Hubbard model, on a one-dimensional ($1d$) chain and on a two-dimensional ($2d$) triangular lattice. For the $1d$ chain, our results agree well with the exact solutions over wide ranges of temperature, chemical potential and the on-site interaction. For the $2d$ triangular lattice, for which exact results are unknown, we obtain reliable results over a wide range of temperature. We also find that finite-size effects are much smaller in the gTPQ state than in the canonical TPQ (cTPQ) state. This also shows that in the ensemble formulation the grand-canonical Gibbs state of a finite-size system simulates an infinite system much better than the canonical Gibbs state.'
author:
- Masahiko Hyuga
- Sho Sugiura
- Kazumitsu Sakai
- Akira Shimizu
title: 'Thermal Pure Quantum States of Many-Particle Systems'
---
Quantum statistical mechanics has conventionally been formulated as the ensemble formulation, in which an equilibrium state is given by a mixed quantum state (Gibbs state) that is represented by a density operator $\hat{\rho}^{\rm ens}$. Recently, another formulation, called the TPQ formulation, has been developed by two of the authors [@SS2012; @SS2013], by generalizing theories of typicality [@Popescu; @Lebowitz; @SugitaJ; @SugitaE; @Reimann2007; @Reimann2008]. In this formulation, an equilibrium state is given by a pure quantum state, which is called a TPQ state. Since the TPQ state is not a purification [@NC] of $\hat{\rho}^{\rm ens}$, it is totally different from $\hat{\rho}^{\rm ens}$. In fact, the magnitudes of their entanglement are almost maximally different [@SS2005; @Kindai2013]. Nevertheless, one can correctly obtain all quantities of statistical-mechanical interest, including thermodynamic functions, from a single state vector of a TPQ state [@SS2012; @SS2013]. Because of this striking property, the TPQ formulation is very useful in practical applications [@SS2012; @SS2013]. In fact, it has solved problems that are hard with conventional methods, such as the specific heat of a $2d$ frustrated spin system [@SS2013].
However, it was formulated only for systems whose Hilbert space $\mathcal{H}$ is finite dimensional. Since $\dim \mathcal{H} = \infty$ for many physical systems, such as particles in continuous space, generalization of the TPQ formulation is necessary. Furthermore, only the microcanonical TPQ (mTPQ) and cTPQ states were constructed, and their validity was confirmed separately [@SS2012; @SS2013]. Although all TPQ states give the same results in the thermodynamic limit [@SS2013], they will give different results for finite-size systems because of finite-size effects. To study infinite systems, it is desirable to develop other TPQ states (such as the gTPQ state) and to clarify which TPQ state of finite size gives results closest to those of infinite systems.
In this Rapid Communication, we generalize the TPQ formulation so that it is applicable to the case where $\dim \mathcal{H}$ and the norm of operators (such as the momentum) are infinite. Assuming particle systems as a concrete example, we construct the gTPQ state, which is specified by the inverse temperature $\beta =1/T$, chemical potential $\mu$, volume $V$, magnetic field $\bm{h}$, and so on. \[In the following, we abbreviate $\beta, \mu, V, \bm{h}, \ldots$ simply as $\beta, \mu, V$.\] We show that a single realization of the gTPQ state gives all quantities of statistical-mechanical interest, including thermodynamic functions. This striking property is not only interesting from a fundamental point of view, but also useful for practical computations, because it enables one to solve problems that are hard to solve by other methods. As an illustration, we apply the TPQ formulation to numerical studies of the Hubbard model, on a $1d$ chain and on a $2d$ triangular lattice. We obtain reliable results over wide ranges of $T, \mu$ and the on-site interaction $U$. Moreover, we show that, compared with the cTPQ state of finite $V$, the gTPQ state of the same $V$ gives results much closer to the exact results for an infinite system. The same can be said for the canonical and grand-canonical Gibbs states of the ensemble formulation.
[*Mechanical variables –* ]{} Statistical mechanics treats ‘mechanical variables’, such as energy, and ‘genuine thermodynamic variables’, such as entropy. Unfortunately, the general definition of mechanical variables in the previous formulation [@SS2012; @SS2013] breaks down when $\| \hat{A} \| = \infty$. Therefore, we here define them more physically as follows [@degreeA]. Let $\hat{A}$ be a low-degree polynomial (i.e., its degree is $\Theta(1)$) of local observables. \[For the order symbols, see, e.g., Ref. [@NC].\] We make it dimensionless. For example, we denote by $\hat{H}$ the original Hamiltonian divided by an appropriate energy (such as the transfer energy). We call $\hat{A}$ a [*mechanical variable*]{} if there exist a function $K(\beta, \mu)$ and a constant $m$, both being positive and independent of $\hat{A}$ and $V$, such that $$\langle{ \hat{A}^2 }\rangle^{\rm ens}_{\beta \mu V} \leq K(\beta, \mu) V^{2m} \mbox{ for all } \beta, \mu, V. \label{bound_<A>}$$ This means that in an equilibrium state $\hat{A}$ should have a finite expectation value and fluctuation even if $\| \hat{A} \| = \infty$. For example, $n$-point correlation functions with $n \leq \Theta(m \ln V)$ (such as the spin-spin correlation function), and their sums (such as $\hat{H}$), are mechanical variables.

[*gTPQ state –* ]{} We consider many particles confined in a box of arbitrary spatial dimensions. We assume that the grand canonical Gibbs state $\hat{\rho}^{\rm ens}_{\beta \mu V}$ gives the correct results, which are consistent with thermodynamics [@TD]. This implies, for example, that the specific heat is positive.
Let $\{ | \nu \rangle \}_\nu$ be an arbitrary orthonormal basis of $\mathcal{H}$. Many equations (such as the main result Eq. (7) of Ref. [@SS2013]) of the previous formulation [@SS2012; @SS2013] become ill defined and/or meaningless when $\dim \mathcal{H}=\infty$. To overcome this difficulty, we first cut off ‘far-from-equilibrium parts’ of $| \nu \rangle$ as $$| \nu; \beta, \mu, V \rangle \equiv \exp[ -\beta (\hat{H} - \mu \hat{N})/2] \, | \nu \rangle,$$ where $\hat{N}$ is the number operator. We then superpose the $| \nu; \beta, \mu, V \rangle$’s as $$|\beta \mu V \rangle \equiv \sum_\nu z_\nu | \nu; \beta, \mu, V \rangle. \label{gTPQ}$$ Here, $z_\nu \equiv (x_\nu + i y_\nu)/\sqrt{2}$, where $x_1, x_2, \ldots$ and $y_1, y_2, \ldots$ are real random variables, each obeying the unit normal distribution. We first show that this vector is well defined, i.e., its norm is finite for finite $V$ even when $\dim \mathcal{H} = \infty$, with probability that approaches one with increasing $V$. (By contrast, the norm of another random vector $\sum_\nu z_\nu | \nu \rangle$ diverges with $\dim \mathcal{H}$.) To show this, we invoke a Markov-type inequality: let $x$ be a real random variable and $y$ a real number; then, for arbitrary ${\epsilon}>0$, $${\rm P} \left( | x - y | \geq \epsilon \right) \leq \overline{(x-y)^2}/ \epsilon^2, \label{Markov}$$
where the overbar denotes the random average. Taking $x = \langle \beta \mu V | \beta \mu V \rangle / \Xi$ and $y=1$, where $\Xi(\beta, \mu, V)$ is the grand-partition function, we evaluate $B_V^2 \equiv \overline{ (\langle \beta \mu V | \beta \mu V \rangle / \Xi -1)^2 }$ as $$B_V^2 \leq 1/ \exp [2V \beta \{ j(T/2, \mu ;V)-j(T, \mu ;V) \}]. \label{Var<>}$$ Here, $j(T, \mu; V) \equiv - (T/V) \ln \Xi(\beta, \mu, V)$ is a thermodynamic function, which approaches the $V$-independent one, $j(T, \mu)$, as $V \to \infty$, i.e., $j(T, \mu ;V) = j(T, \mu) + o(1)$. At finite $T$, since the entropy density $s = - \partial j/\partial T = \Theta(1)$, we have $$2 V \beta \{ j(T/2, \mu ;V)-j(T, \mu ;V) \} \simeq V s(T, \mu) = \Theta(V). \label{delta_j}$$ Therefore, $B_V^2 \leq 1/e^{V s(T, \mu)} = 1/e^{\Theta(V)}$. Inserting this result into inequality (\[Markov\]), we find that $\langle \beta \mu V | \beta \mu V \rangle \stackrel{P}{\to} \Xi(\beta, \mu, V)$, where ‘$\stackrel{P}{\to}$’ denotes convergence in probability. Since $\Xi$ is finite for finite $V$, $| \beta \mu V \rangle$ is well-defined. This argument also shows that a single realization of $|\beta \mu V \rangle$ gives $j$ by $$- V \beta j(T, \mu ;V) = \ln \langle \beta \mu V |\beta \mu V \rangle, \label{ln<>=j}$$ with exponentially small probability of error. All genuine thermodynamic variables, such as entropy, can be calculated from $j$.
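As an illustration, the construction above can be made concrete for a Hilbert space small enough to treat with dense matrices. The sketch below (plain NumPy/SciPy; the matrices `H` and `N_op` are generic placeholders, not the Hamiltonian used later in the paper) draws the Gaussian amplitudes $z_\nu$, applies $e^{-\beta(\hat{H}-\mu\hat{N})/2}$, and evaluates $j$ from formula (\[ln<>=j\]). For such a tiny system a single realization is of course very noisy; the code is meant only to make the formulas explicit.

```python
import numpy as np
from scipy.linalg import expm

def gtpq_state(H, N_op, beta, mu, rng):
    """One realization of the gTPQ state for dense matrices H and N_op."""
    dim = H.shape[0]
    z = (rng.standard_normal(dim) + 1j * rng.standard_normal(dim)) / np.sqrt(2.0)
    return expm(-0.5 * beta * (H - mu * N_op)) @ z

def grand_potential_density(H, N_op, beta, mu, V, rng):
    """Estimate j(T, mu; V) from the squared norm of a single gTPQ realization."""
    psi = gtpq_state(H, N_op, beta, mu, rng)
    norm2 = np.vdot(psi, psi).real          # estimates the grand partition function
    return -np.log(norm2) / (beta * V)

# Placeholder example: H and N_op would come from the model of interest.
rng = np.random.default_rng(0)
H = np.diag([0.0, 1.0, 1.0, 2.0])
N_op = np.diag([0.0, 1.0, 1.0, 2.0])
print(grand_potential_density(H, N_op, beta=1.0, mu=0.5, V=2, rng=rng))
```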
We then show that $|\beta \mu V \rangle$ is a gTPQ state, i.e., $\langle \hat{A} \rangle^{\rm TPQ}_{\beta \mu V} \stackrel{P}{\to} \langle \hat{A} \rangle^{\rm ens}_{\beta \mu V}$ uniformly for every mechanical variable $\hat{A}$ as $V \to \infty$, where $\langle \hat{A} \rangle^{\rm TPQ}_{\beta \mu V} \equiv \langle \beta \mu V |\hat{A}| \beta \mu V \rangle / \langle \beta \mu V | \beta \mu V \rangle$. To see this, we take $x = \langle \hat{A} \rangle^{\rm TPQ}_{\beta \mu V}$ and $y = \langle \hat{A} \rangle^{\rm ens}_{\beta \mu V}$ in inequality (\[Markov\]), and evaluate $D_V(A)^2 \equiv \overline{ ( \langle \hat{A} \rangle^{\rm TPQ}_{\beta \mu V} - \langle \hat{A} \rangle^{\rm ens}_{\beta \mu V} )^2 }$. Dropping smaller-order terms, we find $$D_V(A)^2 \leq { \langle (\Delta \hat{A})^2 \rangle^{\rm ens}_{2\beta \mu V} + (\langle A \rangle^{\rm ens}_{2\beta \mu V} - \langle A \rangle^{\rm ens}_{\beta \mu V} )^2 \over \exp [2V\beta \{ j(T/2, \mu ;V)-j(T, \mu ;V) \}]}, \label{Var<M>}$$ where $\langle (\Delta \hat{A})^2 \rangle^{\rm ens}_{\beta \mu V} \equiv \langle (\hat{A}-\langle{A}\rangle^{\rm ens}_{\beta \mu V})^2 \rangle^{\rm ens}_{\beta \mu V}$, and so on. The denominator of the r.h.s. is $e^{\Theta(V)}$ from Eq. (\[delta\_j\]), whereas the numerator is $\leq \Theta(V^{2m})$ from (\[bound\_<A>\]). Hence, $D_V(A)^2 \leq V^{2m}/e^{\Theta(V)}$, which vanishes exponentially fast with increasing $V$, for every mechanical variable $\hat{A}$. Therefore, $\langle \hat{A} \rangle^{\rm TPQ}_{\beta \mu V} \stackrel{P}{\to} \langle \hat{A} \rangle^{\rm ens}_{\beta \mu V}$ uniformly, which shows that $|\beta \mu V \rangle$ is a gTPQ state. A single realization of the gTPQ state gives equilibrium values of mechanical variables, with exponentially small probability of error, by $\langle \hat{A} \rangle^{\rm TPQ}_{\beta \mu V}$.
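Continuing the toy setting sketched above, the estimator $\langle \hat{A} \rangle^{\rm TPQ}_{\beta \mu V}$ and its ensemble counterpart can be compared directly for a small dense system (again an illustrative sketch, not the production code used for the results below):

```python
import numpy as np
from scipy.linalg import expm

def tpq_expectation(A, psi):
    """<A>_TPQ = <psi|A|psi> / <psi|psi> for a single gTPQ realization psi."""
    return (np.vdot(psi, A @ psi) / np.vdot(psi, psi)).real

def ensemble_expectation(A, H, N_op, beta, mu):
    """Grand-canonical average Tr[A e^{-beta(H - mu N)}] / Xi for dense matrices."""
    w = expm(-beta * (H - mu * N_op))
    return (np.trace(A @ w) / np.trace(w)).real

# e.g., with psi = gtpq_state(H, N_op, beta, mu, rng) from the previous sketch
# and A = N_op, the two numbers agree up to the probabilistic error D_V(A).
```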
Note that one can use any convenient basis as $\{ | \nu {\rangle}\}_\nu$, because the above construction of $|\beta \mu V {\rangle}$ is independent of the choice of the basis. Moreover, using $j$ obtained from formula (\[ln<>=j\]), one can estimate the upper bounds of errors from formulas (\[Var<>\]) and (\[Var<M>\]), without resorting to results of other methods. This self-validating property is particularly useful in practical applications.
Similarly to the above construction of the gTPQ state, we can also generalize the cTPQ state proposed in Ref. [@SS2013] so as to be applicable to systems with $\dim \mathcal{H} = \infty$.
[*Practical computational method –* ]{} The TPQ formulation sheds new light on quantum statistical mechanics because it is much different from the ensemble formulation [@Kindai2013]. For example, the von Neumann entropy, which coincides with the thermodynamic entropy in the ensemble formulation, vanishes for TPQ states. Because of this great difference, the TPQ formulation will also be useful for practical computations. To make this visible, we have developed practical formulas that are particularly useful for numerical computations. They are presented in Ref. [@SM]. Using them, one can obtain $|\beta \mu V \rangle$ simply by multiplying $[\mbox{constant} - (\hat{H} - \mu \hat{N})]$ with a random vector repeatedly $\Theta(N)$ times. This is a powerful numerical method, as evidenced below.
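The exact iteration is given in the Supplemental Material [@SM]; as a rough illustration of the idea only (not the authors' formulas), the factor $e^{-\beta(\hat{H}-\mu\hat{N})/2}$ can be built up by repeatedly multiplying a shifted operator onto the random vector, so that only sparse matrix-vector products are ever needed:

```python
def apply_gibbs_factor(apply_K, psi0, beta, n_steps):
    """Approximate exp(-beta*K/2)|psi0> by the product form (1 - beta*K/(2n))^n.

    apply_K: callable returning K @ v for K = H - mu*N (hypothetical interface).
    Each step is one multiplication of [constant - (H - mu*N)] onto the vector,
    and the product converges to the exponential as n_steps grows.
    """
    psi = psi0.copy()
    for _ in range(n_steps):
        psi = psi - (0.5 * beta / n_steps) * apply_K(psi)
    return psi
```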
[*Application to the Hubbard model –* ]{} We now apply the present formulation to strongly interacting electrons. We take the Hubbard model $\hat{H} = - \sum_{\langle \bm{r}, \bm{r}' \rangle, \sigma} (\hat{c}_{\bm{r} \sigma}^\dagger \hat{c}_{\bm{r}' \sigma} + h.c.) + U \sum_{\bm{r}} (\hat{n}_{\bm{r} \uparrow} -1/2) (\hat{n}_{\bm{r} \downarrow} -1/2)$ with periodic boundary conditions, where $\langle \bm{r}, \bm{r}' \rangle$ denotes a nearest-neighbor pair of sites. We consider a $1d$ chain and a $2d$ triangular lattice. The number of sites $V$ is taken as $V = 14, 15$ because of the size of the memory of our computers. Although this is larger than the $V$ of any numerical diagonalization (ND) of the full spectrum ever performed to compute finite-temperature properties, the factor of Eq. (\[delta\_j\]), which appears in the r.h.s. of (\[Var<>\]) and (\[Var<M>\]), is not large enough. In such a case, one can reduce errors by averaging the denominators and numerators of these formulas, separately, over many realizations of the gTPQ states. Averaging over $M$ realizations reduces the error by a factor of $1/\sqrt{M}$. \[By contrast, averaging was not necessary for the spin system of Ref. [@SS2013] because $V$ ($=27,30$) was large enough.\] We here take $10 \leq M \leq 26$.
We first study the $1d$ chain of length $L$ ($=V$) as a benchmark, because some of its physical quantities have been obtained exactly for $L=\infty$ [@exact]. Since the results for $U<0$ can be obtained from those for $U>0$ (see, e.g., Refs. [@U-symmetry1; @U-symmetry2]), we can assume $U>0$ without loss of generality. We here take two values: $U=1$, where the wave-particle duality plays an essential role, and $U=8$, where the particle nature is stronger. Regarding $\mu$, it can be controlled in experiments by an external voltage [@FET1; @FET2; @FET3], in which case $\mu$ is the electro-chemical potential. Hence, we take several values: $\mu=0$ (half-filled), $0.5, 2, 3$. $T$ is taken as $0.1 \leq T \leq 3$ (Figs. \[fig1\]-\[fig4\]) and $0.03 \leq T \leq 3$ ($L=14$ by the gTPQ state in Fig. \[fig5\]). To the authors’ knowledge, no other numerical methods have ever succeeded in analyzing the Hubbard chain over such wide ranges of $T, \mu, U$ (see, e.g., Ref. [@tohyama]). One can go down to even lower $T$ by increasing the computational parameters $k_{\rm term}$ (defined in Ref. [@SM]) and $M$.
The particle density $n \equiv N/L$, obtained using the gTPQ states with $L=15$, is plotted in Fig. \[fig1\]. \[We take $\mu \neq 0$ because $\mu=0$ gives the trivial result $n=1$.\] The results agree well with the exact results for the infinite system $L \to \infty$ (broken lines) [@exact].
![ $n$ versus $T$, obtained by the gTPQ states with $L=15$, for $(U,\mu,M)=(8,2,18)$, $(8,3,20)$, $(1,0.5,18)$, and $(1,2,22)$. Error bars show estimated errors, which can be made smaller by increasing $M$. Exact results for $L=\infty$ are also plotted. []{data-label="fig1"}](fig1.eps){width="0.9\linewidth"}
We also calculate the specific heat at constant $\mu$, defined by $c \equiv (T/L) (\partial S/\partial T)_{\mu,L}$. Generally, $c$ is much harder to compute than $n$ because $c$ is a higher (second) derivative of $j$. As shown in Fig. \[fig2\], the results of the gTPQ states with $L=15$ agree fairly well with the exact results for $L \to \infty$ (broken lines) [@exact]. Small deviations are due to finite-size effects, as will be discussed later.
![$c$ versus $T$, obtained by the gTPQ states with $L=15$, for $(U,\mu,M) = (8,0,14)$, $(8,3,20)$, $(1,0,12)$, and $(1,2,22)$. Error bars show estimated errors, which can be made smaller by increasing $M$. Exact results for $L=\infty$ are also plotted. []{data-label="fig2"}](fig2.eps){width="0.9\linewidth"}
Furthermore, we calculate correlation functions, for which exact solutions are unknown. We calculate the charge and the staggered spin correlation functions $\phi_+$ and $\phi_-$, respectively, which are defined by $$\phi_{\pm}(i) \equiv {(\pm 1)^i \over L} \! \sum_{j} \langle (\hat{n}_{j \uparrow} \pm \hat{n}_{j \downarrow}) (\hat{n}_{j+i \uparrow} \pm \hat{n}_{j+i \downarrow}) \rangle_{\beta \mu L}.$$ As shown in Fig. \[fig3\], $\phi_+$ has a dip at $i=1$, whereas $\phi_-$ decreases monotonically with increasing $i$. These behaviors are manifestations of the wave-particle duality. $\phi_-$ was previously computed numerically in Ref. [@tohyama], where $T$ was limited to $T \leq 0.2$ and $\phi_+$ was not computed. Our results agree well with theirs.
![ $\phi_\pm$ versus $i$ for $U=8, \mu=0$, obtained by the gTPQ states with $L=14$ and $M=21$, at $T=0.1, 0.2, 1.0$. []{data-label="fig3"}](fig3.eps){width="0.9\linewidth"}
We then study the $2d$ triangular lattice, for which exact results are unknown. We analyze a weakly doped case ($0 < \mu \ll$ band width), which will be most interesting experimentally, over a wide range of $T$. Such a case is hard to analyze with most numerical methods because of the sign problem and so on. We first solve a small system with $V=8$, for which ND of the full spectrum is possible. In Fig. \[fig4\], the results for the specific heat $c$, obtained with ND and the gTPQ states, are plotted as a function of $T$. The agreement is very good. We then solve a larger system with $V=15$, for which ND of the full spectrum is impossible. The result obtained with the gTPQ states is plotted in Fig. \[fig4\]. Since we have rigorously proved that the gTPQ states give correct results (for each finite $V$) with high probability, our result is reliable within the error bars, which can be made arbitrarily small by increasing $M$ (the number of realizations). That is, we have successfully obtained reliable results for $V=15$ over a wide range of $T$.
![$c$ versus $T$, for the $2d$ triangular lattice with $U=3, \mu=1$, obtained by the gTPQ states with $V=8$ ($M=1024$) and with $V=15$ ($M=10$), and by ND for $V=8$. Error bars show estimated errors, which can be made smaller by increasing $M$. []{data-label="fig4"}](fig4.eps){width="0.9\linewidth"}
[*Superiority of the gTPQ state –* ]{} We have rigorously proved that the results of a TPQ state of size $V$ agree with those of the corresponding Gibbs state [*of the same size $V$*]{}, within exponentially small error. However, generally, these results for a finite-size system deviate from those for an infinite system. Typically, this finite-size effect is inversely proportional to a power of $V$, and hence is not so small in general. Then a question arises: Which TPQ state has a smaller finite-size effect, the gTPQ state or the cTPQ state?
To answer this question, we compute $c$ of the $1d$ chain for $L=8$ and $14$, using both TPQ states. We take $\mu=0$ (half-filled), for which $n$ is independent of $T$ ($n=1$) and hence $c$(at constant $\mu$) $=$ $c$(at constant $N$) for $L = \infty$. The results are plotted in Fig. \[fig5\]. We find that the finite-size effect is much smaller in the gTPQ states than in the cTPQ states. Even for $L=8$, the result of the gTPQ state is surprisingly close to the exact result for $L=\infty$. By contrast, the cTPQ states have very large finite-size effects even for $L=14$. That is, the gTPQ state simulates a finite subsystem in an infinite system much better than the cTPQ state. This seems reasonable because the gTPQ state contains information about all values of $N$ whereas the cTPQ state contains information only about a specific value of $N$. Moreover, the gTPQ state also has another advantage that one can study an arbitrary value of the filling factor $N/2L$ for any $L$. For example, one can calculate the quarter-filled case, where $N=L/2$, even for odd $L$. This makes wider the available ranges of parameters in numerical computations. For these reasons, the gTPQ state would be far superior for practical purposes. If one has to use the cTPQ state (e.g., to save computer resources), it is better to convert the results using, for example, the relation $
\sum_N \langle \beta N V | \beta N V \rangle e^{\beta \mu N} \stackrel{P}{\to} \Xi(\beta, \mu, V). $
![$c$ versus $T$, obtained by the gTPQ states with $L=8$ ($M=1024$) and $L=14$ ($M=26$), and by cTPQ states with $L=8$ ($M=1024$) and $L=14$ ($M=22$). Error bars show estimated errors, which can be made smaller by increasing $M$. Exact results for $L=\infty$ are also plotted. []{data-label="fig5"}](fig5.eps){width="0.9\linewidth"}
These conclusions also apply to comparison between the canonical and grand-canonical Gibbs states in the ensemble formulation, because their results are identical to those of the cTPQ and gTPQ states, respectively (with exponentially small errors). To the authors’ knowledge, systematic studies on such comparison were not reported previously because $V$ of ND of the full spectra is severely upper bounded.
We thank C. Hotta and H. Tasaki for helpful discussions. This work was supported by KAKENHI Nos. 22540407, 24540393 and 26287085. SS is supported by JSPS Research Fellowship No. 245328. Although all the numerical results in this Rapid Communication have been obtained using workstations, preliminary computation has been done using the facilities of the Supercomputer Center, the Institute for Solid State Physics, the University of Tokyo.
[99]{} S. Sugiura and A. Shimizu, Phys. Rev. Lett. [**108**]{}, 240401 (2012).
S. Sugiura and A. Shimizu, Phys. Rev. Lett. [**111**]{}, 010401 (2013).
S. Popescu, A.J. Short, and A. Winter, Nature Phys. [**2**]{}, 754 (2006).
S. Goldstein, J. L. Lebowitz, R. Tumulka, and N. Zanghi, Phys. Rev. Lett. [**96**]{}, 050403 (2006).
A. Sugita, RIMS Kokyuroku (Kyoto) [**1507**]{}, 147 (2006). A. Sugita, Nonlinear Phenom. Complex Syst. [**10**]{}, 192 (2007).
P. Reimann, Phys. Rev. Lett. [**99**]{}, 160404 (2007).
P. Reimann, J. Stat. Phys. [**132**]{}, 921 (2008).
M. A. Nielsen and I. L. Chuang: [*Quantum Computation and Quantum Information*]{} (Cambridge University Press, Cambridge, 2000).
A. Sugita and A. Shimizu, J. Phys. Soc. Jpn. [**74**]{}, 1883 (2005).
S. Sugiura and A. Shimizu, Kinki University Series on Quantum Computing [**9**]{}, 245 (2014). \[arXiv:1312.5145.\]
Actually, our formulas for the probabilistic errors show that the TPQ states give the correct results for more general $\hat{A}$, such as operators satisfying $| \langle{ \hat{A}^2 }\rangle^{\rm ens}_{\beta \mu V} | \leq Ke^{o(V)}$ and $\hat{A} = e^{i \xi \hat{a}}$, where $\xi \in \mathbb{R}$ and $\hat{a}$ is an arbitrary observable.
A. Shimizu, [*Netsurikigaku no Kiso*]{} (Principles of Thermodynamics) (University of Tokyo Press, Tokyo, 2007), ISBN 978-4-13-062609-5.
Supplemental Material at <http://as2.c.u-tokyo.ac.jp/archive/hsss2014lettersma.pdf>, in which practical formulas for numerical computation using the gTPQ states are presented.
G. Jüttner, A. Klümper and J. Suzuki, Nucl. Phys. B [**522**]{}, 471 (1998).
M. Takahashi, [*Thermodynamics of One-Dimensional Solvable Models*]{} (Cambridge University Press, Cambridge, 1999)
F. H. L. Essler, H. Frahm, F. Göhmann, A. Klümper and V. E. Korepin, [*The One-Dimensional Hubbard model*]{} (Cambridge University Press, Cambridge, 2005).
R. E. Glover, M. D. Sherrill, Phys. Rev. Lett. [**5**]{}, 248 (1960).
C. H. Ahn, S. Gariglio, P. Paruch, T. Tybell, L. Antognazza and J.-M. Triscone, Science [**284**]{}, 1152 (1999).
K. Ueno, S. Nakamura, H. Shimotani, H. T. Yuan, N. Kimura, T. Nojima, H. Aoki, Y. Iwasa, M. Kawasaki, Nature Nanotechnology [**6**]{}, 408 (2011).
S. Sota and T. Tohyama, Phys. Rev. B [**78**]{}, 113101 (2008).
---
abstract: |
The source-count distribution as a function of their flux, [$\mathrm{d}N/\mathrm{d}S$]{}, is one of the main quantities characterizing gamma-ray source populations. We employ statistical properties of the *Fermi* Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes ($|b|
\geq 30^\circ$) between 1GeV and 10GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6-year *Fermi*-LAT data set (`P7REP`), we show that the [$\mathrm{d}N/\mathrm{d}S$]{} distribution in the regime of so far undetected point sources can be consistently described with a power law of index between 1.9 and 2.0. We measure [$\mathrm{d}N/\mathrm{d}S$]{} down to an integral flux of $\sim\! 2\times
10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall [$\mathrm{d}N/\mathrm{d}S$]{} distribution is consistent with a broken power law, with a break at $2.1^{+1.0}_{-1.3} \times
10^{-8}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. The power-law index $n_1
= 3.1^{+0.7}_{-0.5}$ for bright sources above the break hardens to $n_2 = 1.97\pm 0.03$ for fainter sources below the break. A possible second break of the [$\mathrm{d}N/\mathrm{d}S$]{} distribution is constrained to be at fluxes below $6.4\times 10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ at 95% confidence level. The high-latitude gamma-ray sky between 1GeV and 10GeV is shown to be composed of $\sim$25% point sources, $\sim$69.3% diffuse Galactic foreground emission, and $\sim$6% isotropic diffuse background.
author:
- |
Hannes-S. Zechlin, Alessandro Cuoco,\
Fiorenza Donato, Nicolao Fornengo, and Andrea Vittino
bibliography:
- 'msdnds.bib'
title: 'UNVEILING THE GAMMA-RAY SOURCE COUNT DISTRIBUTION BELOW THE *FERMI* DETECTION LIMIT WITH PHOTON STATISTICS'
---
INTRODUCTION {#sec:intro}
============
The decomposition of the extragalactic gamma-ray background (EGB; see @2015PhR...598....1F for a recent review) is pivotal for unveiling the origin of the nonthermal cosmic radiation field. The EGB comprises the emission from all individual and diffuse gamma-ray sources of extragalactic origin, and thus it originates from different mechanisms of gamma-ray production in the Universe. The EGB can be dissected by resolving the various point-source contributions, characterized by their differential source-count distribution [$\mathrm{d}N/\mathrm{d}S$]{} as a function of the integral source flux $S$ [see, e.g., @2010ApJ...720..435A; @2015MNRAS.454..115S]. Conventionally, the EGB emission that is left after subtracting the resolved gamma-ray sources is referred to as the isotropic diffuse gamma-ray background [IGRB; @2015ApJ...799...86A]. The Large Area Telescope (LAT) on board the *Fermi* satellite [@2012ApJS..203....4A] has allowed the discovery of more than 3,000 gamma-ray point sources, collected in the 3FGL catalog [@2015ApJS..218...23A]. Resolved sources amount to about 30% of the EGB [@2015ApJ...799...86A] below $\sim$100 GeV (while above $\sim$100 GeV this percentage can rise to about 50%).
For resolved point sources listed in catalogs, the [$\mathrm{d}N/\mathrm{d}S$]{} distributions of different source classes can be characterized. Among these, blazars represent the brightest and most numerous population, and, consequently, their [$\mathrm{d}N/\mathrm{d}S$]{} is the best-determined one. Blazars exhibit two different subclasses: flat-spectrum radio quasars (FSRQs), with a typically soft gamma-ray spectrum characterized by an average power-law photon index of $\sim$2.4, and BL Lacertae (BL Lac) objects, with a harder photon index of $\sim$2.1. The [$\mathrm{d}N/\mathrm{d}S$]{} distribution of blazars has been studied in detail in several works [@Ajello:2011zi; @Ajello:2013lka; @Chang:2013yia; @DiMauro:2013zfa; @Harding:2012gk; @Inoue:2008pk; @Stecker:2010di; @Stecker:1996ma]. Besides blazars, the EGB includes fainter sources like misaligned active galactic nuclei [mAGN; @DiMauro:2013xta; @Inoue:2011bm] and star-forming galaxies [SFGs; @Ackermann:2012vca; @Fields:2010bw; @Lacki:2012si; @Tamborra:2014xia; @Thompson:2006qd]. A contribution from Galactic sources such as millisecond pulsars (MSPs) located at high Galactic latitude is possible, although it has been constrained to be subdominant [@Calore:2014oga; @Gregoire:2013yta]. Finally, pure diffuse (not point-like) components can contribute, for instance caused by pair halo emission from AGN, clusters of galaxies, or cascades of ultra-high-energy cosmic rays (UHECRs) on the CMB (see @2015PhR...598....1F and references therein).
In the usual approach, the [$\mathrm{d}N/\mathrm{d}S$]{} distributions of different populations (inferred from resolved sources) are extrapolated to the unresolved regime and used to investigate the composition of the IGRB (i.e., the unresolved EGB). This approach has revealed that the above-mentioned three main components well explain the observed IGRB spectrum, constraining further contributions to be subdominant, including a possible exotic contribution from dark matter (DM) annihilation or decay [@Ajello:2015mfa; @Cholis:2013ena; @DiMauro:2015tfa]. While the above-mentioned approach is very useful, a clear drawback is caused by the fact that it relies on the extrapolation of [$\mathrm{d}N/\mathrm{d}S$]{} distributions. In this work, we will focus on a method to overcome this problem by conducting a direct measurement of the [$\mathrm{d}N/\mathrm{d}S$]{} in the unresolved regime.
Detection capabilities for individual point sources are intrinsically limited by detector angular resolution and backgrounds. This makes the IGRB, in particular, a quantity that depends on the actual observation [@2015ApJ...799...86A]. The common approach of detecting individual sources [@2015ApJS..218...23A; @2016ApJS..222....5A] can be complemented by decomposing gamma-ray skymaps by statistical means, using photon-count or intensity maps. One of the simplest ways of defining such a statistic is to consider the probability distribution function (PDF) of photon counts or fluxes in pixels, commonly known as the $P(D)$ distribution in the radio [e.g., @1974ApJ...188..279C; @1957PCPS...53..764S; @2014MNRAS.440.2791V; @2015MNRAS.447.2243V and references therein] and X-ray bands. Recently, this technique has been adapted to photon-count measurements in the gamma-ray band; see @2011ApJ...738..181M, henceforth MH11, for details. Various theoretical studies have also been performed [@2010PhRvD..82l3511B; @Dodelson:2009ih; @Feyereisen:2015cea; @Lee:2008fm]. In addition, this method has been used to probe unresolved gamma-ray sources in the region of the Galactic Center [@2016PhRvL.116e1102B; @2015JCAP...05..056L; @2016PhRvL.116e1103L], as well as to constrain the source-count distribution above 50GeV [@2015arXiv151100693T].
As argued above, this method has the advantage of directly measuring the [$\mathrm{d}N/\mathrm{d}S$]{} in the unresolved regime, thus not relying on any extrapolation. A difference with respect to the use of resolved sources is that in the PDF approach only the global [$\mathrm{d}N/\mathrm{d}S$]{}, i.e., the sum of all components, can be directly measured: since no individual source can be identified with this method, counterpart association and the separation of [$\mathrm{d}N/\mathrm{d}S$]{} into different source components become impossible. The PDF approach nonetheless offers another important advantage with respect to the standard method: the use of the [$\mathrm{d}N/\mathrm{d}S$]{} built from cataloged sources close to the detection threshold of the catalog is hampered by the fact that the threshold is not sharp but rather characterized by a detection efficiency as a function of flux [@2010ApJ...720..435A; @2015arXiv151100693T]. The [$\mathrm{d}N/\mathrm{d}S$]{} thus needs to be corrected for the catalog detection efficiency, which, in turn, is a nontrivial quantity to determine [@2010ApJ...720..435A]. On the contrary, the PDF approach treats all the sources in the same way, resolved and unresolved, and can thus determine the [$\mathrm{d}N/\mathrm{d}S$]{} in a significantly larger flux range, without requiring the use of any efficiency function.
In the following, we will measure the high-latitude [$\mathrm{d}N/\mathrm{d}S$]{} with the PDF methodology using 6 years of gamma-ray data collected with the [*Fermi*]{}-LAT. We will show that for the 1GeV to 10GeV energy band we can measure the [$\mathrm{d}N/\mathrm{d}S$]{} down to an integral flux of $\sim
10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, which is a factor of $\sim 20$ lower than the nominal threshold of the 3FGL catalog.
This article is structured as follows: In Section \[sec:theory\] we introduce the mathematical framework of the analysis method, supplemented by a detailed description of our extensions to previous approaches, the modeling of source and background components, and the fitting procedure. The gamma-ray data analysis is addressed in Section \[sec:Fermi\_data\]. Section \[sec:analysis\_routine\] is dedicated to details of the statistical analysis approach and the fitting technique. The resulting global source-count distribution and the composition of the gamma-ray sky are considered in Section \[sec:application\]. Section \[sec:anisotropy\] addresses the angular power of unresolved sources detected with this analysis. Possible systematic and modeling uncertainties are discussed in Section \[sec:systematics\]. Eventually, final results are summarized in Section \[sec:conclusions\].
THE STATISTICS OF GAMMA-RAY PHOTON COUNTS {#sec:theory}
=========================================
In the present analysis, we assume the gamma-ray sky at high Galactic latitudes to be composed of three different contributions:
- A population of gamma-ray point sources. Given that the analysis is restricted to high Galactic latitudes, this source population is considered to be dominantly of extragalactic origin. Sources can thus be assumed to be distributed homogeneously across the sky.
- Diffuse gamma-ray emission from our Galaxy, mostly bright along the Galactic plane but extending also to the highest Galactic latitudes. We will refer to this component as Galactic foreground emission. The photon flux in map pixel $p$ from this component will be denoted as $F^{{(p)}}_\mathrm{gal}$.
- Gamma-ray emission from all contributions that are indistinguishable from diffuse isotropic emission, such as extremely faint sources. We will include in this component possible truly diffuse emission of extragalactic or Galactic origin, such as gamma rays from cosmological cascades from UHECRs, or possible isotropic subcomponents of the Galactic foreground emission. In addition, the component comprises the residual cosmic-ray background. Altogether, this emission will be denoted as $F_\mathrm{iso}$.
A more detailed account of the individual components is given in Section \[sec:intro\] and later in this section.
Following the method of MH11, we considered the celestial region of interest (ROI) to be partitioned into $N_\mathrm{pix}$ pixels of equal area $\Omega_\mathrm{pix} = 4\pi f_\mathrm{ROI}/N_\mathrm{pix}$, where $f_\mathrm{ROI}$ is the fraction of sky covered by the ROI. The probability $p_k$ of finding $k$ photons in a given pixel is by definition the 1-point PDF ([1pPDF]{}). In the simplest scenario of purely isotropic emission, $p_k$ follows a Poisson distribution with an expectation value equal to the mean photon rate. The imprints of more complex diffuse components and a distribution of point sources alter the shape of the [1pPDF]{}, in turn allowing us to investigate these components by measuring the [1pPDF]{} of the data.
The usual way in which the [1pPDF]{} is used requires us to bin the photon counts of each pixel into a histogram of the number of pixels, $n_k$, containing $k$ photon counts, and to compare the $p_k$ predicted by the model with the estimator $n_k/N_\mathrm{pix}$. This method is the one adopted by . By definition, this technique does not preserve any spatial information of the measurement or its components (for example, the uneven morphology of the Galactic foreground emission), resulting in an undesired loss of information. We will instead use the [1pPDF]{} in a more general form, including pixel-dependent variations in order to fully exploit all the available information.
Generating Functions {#ssec:gen_funcs}
--------------------
An elegant way of deriving the [1pPDF]{} including all the desired components exploits the framework of probability generating functions (see and references therein for details). The generating function $\mathcal{P}^{{(p)}}(t)$ of a discrete probability distribution $p^{{(p)}}_k$, which may depend on the pixel $p$ and where $k=0,1,2,\dots$ is a discrete random variable, is defined as a power series in an auxiliary variable $t$ by $$\label{eq:gf}
\mathcal{P}^{{(p)}}(t) = \sum_{k=0}^{\infty} p^{{(p)}}_k t^k .$$ The series coefficients $p^{{(p)}}_k$ can be derived from a given $\mathcal{P}^{{(p)}}(t)$ by differentiating with respect to $t$ and evaluating them at $t=0$, $$\label{eq:pkcalc}
p^{{(p)}}_k = \frac{1}{k!} \left. \frac{\mathrm{d}^k \mathcal{P}^{{(p)}}(t)}
{\mathrm{d}t^k}\right|_{t=0} .$$ The method of combining individual components into a single $\mathcal{P}^{{(p)}}(t)$ makes use of the summation property of generating functions, i.e., the fact that the generating function for the sum of two independent random variables is given by the product of the generating functions for each random variable itself.
In our case, the general representation of $\mathcal{P}^{{(p)}}(t)$ for photon-count maps can be derived from considering a superposition of Poisson processes; see Appendix \[app:genfunc\_poisson\] and for a more detailed explanation. The generating function is therefore given by $$\label{eq:gfgen1}
\mathcal{P}^{{(p)}}(t) = \exp \left[ \sum_{m=1}^{\infty} x^{{(p)}}_m
\left( t^m -1 \right) \right] ,$$ where the coefficients $x^{{(p)}}_m$ are the expected number of point sources per pixel $p$ that contribute exactly $m$ photons to the total photon count of the pixel, and $m$ is a positive integer. In the derivation of Equation , it has been assumed that the $x^{{(p)}}_m$ are mean values of underlying Poisson PDFs. The quantities $x^{{(p)}}_m$ are related to the differential source-count distribution [$\mathrm{d}N/\mathrm{d}S$]{}, where $S$ denotes the integral photon flux of a source in a given energy range $[E_\mathrm{min}, E_\mathrm{max}]$, by $$\label{eq:xm}
x^{{(p)}}_m = \Omega_\mathrm{pix} \int_0^\infty \mathrm{d}S
\frac{\mathrm{d}N}{\mathrm{d}S} \frac{(\mathcal{C}^{{(p)}}\!(S) )^m}{m!}
e^{-\mathcal{C}^{{(p)}}\!(S)} .$$ The number of counts $\mathcal{C}^{{(p)}}\!(S)$ expected in pixel $p$ is given as a function of $S$ by $$\label{eq:counts}
\mathcal{C}^{{(p)}}\!(S) = S\,\frac{ \int_{E_\mathrm{min}}^{E_\mathrm{max}}
\mathrm{d}E\,E^{-\Gamma} \mathcal{E}^{{(p)}}(E) }
{ \int_{E_\mathrm{min}}^{E_\mathrm{max}} \mathrm{d}E\,E^{-\Gamma} }$$ for sources with a power-law-type energy spectrum $\propto
E^{-\Gamma}$, where $\Gamma$ denotes the photon index and the pixel-dependent exposure[^1] as a function of energy is denoted by $\mathcal{E}^{{(p)}}(E)$. In Equation [(\[eq:xm\])]{}, we have assumed that the PDF for a source to contribute $m$ photons to a pixel $p$ follows a Poisson distribution with mean $\mathcal{C}^{{(p)}}\!(S)$. Gamma-ray sources have been assumed to be isotropically distributed across the sky, i.e., [$\mathrm{d}N/\mathrm{d}S$]{} is pixel independent, while, in principle, Equation [(\[eq:xm\])]{} allows for an extension of the method to spatially dependent [$\mathrm{d}N/\mathrm{d}S$]{} distributions.
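Equation [(\[eq:counts\])]{} is a ratio of two energy integrals and is straightforward to evaluate numerically. The sketch below (Python; the exposure function is a made-up placeholder standing in for the true pixel-dependent LAT exposure) illustrates the conversion from integral flux to expected counts.

```python
import numpy as np

def expected_counts(S, gamma, exposure, e_min=1.0, e_max=10.0, n_grid=200):
    """Expected counts C(S) in a pixel for a power-law source of index gamma.

    exposure: callable E -> exposure in cm^2 s (placeholder for the true
    pixel-dependent exposure); S in photons cm^-2 s^-1, E in GeV.
    """
    E = np.logspace(np.log10(e_min), np.log10(e_max), n_grid)
    weights = E ** (-gamma)
    num = np.trapz(weights * exposure(E), E)
    den = np.trapz(weights, E)
    return S * num / den

# Toy example: a flat exposure of 1e11 cm^2 s gives C(S) = S * 1e11.
print(expected_counts(2e-10, gamma=2.4, exposure=lambda E: 1e11 * np.ones_like(E)))
```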
The generating functions for diffuse background components correspond to $1$-photon source terms, with $x^{{(p)}}_m = 0$ for all $m$ except $m=1$: $$\label{eq:Dgen}
\mathcal{D}^{{(p)}}(t) = \exp \left[ x_\mathrm{diff}^{{(p)}}\,(t-1) \right] ,$$ where $x^{{(p)}}_\mathrm{diff}$ denotes the number of diffuse photon counts expected in pixel $p$ for a given observation.[^2] This quantity is given by $$\label{eq:xdiff}
x^{{(p)}}_\mathrm{diff} = \int_{\Omega_\mathrm{pix}} \,\mathrm{d}\Omega
\int_{E_\mathrm{min}}^{E_\mathrm{max}}
\mathrm{d}E\,f^{{(p)}}_\mathrm{diff}(E)\,\mathcal{E}^{{(p)}}(E)\, ,$$ with $f^{{(p)}}_\mathrm{diff}(E)$ being the differential flux of the diffuse component as a function of energy.
The relation in Equation [(\[eq:xm\])]{} allows measuring the source-count distribution [$\mathrm{d}N/\mathrm{d}S$]{} from pixel-count statistics. Furthermore, we can observe that the [1pPDF]{} approach may allow the detection of point-source populations below catalog detection thresholds: if the source-count distribution implies a large number of faint emitters, pixels containing photon counts originating from these sources will be stacked in an $n_k$-histogram, increasing the statistical significance of corresponding $k$-bins. The average number of photons required from individual sources for the statistical detection of the entire population will therefore be significantly smaller than the photon contribution required for individual source detection.
The simple [1pPDF]{} approach refers to a measurement of $p_k$ which is averaged over the considered ROI. The generating function for the [1pPDF]{} measurement therefore reduces to a pixel average, $$\label{eq:gf1ppdf}
\mathcal{P}(t) = \frac{1}{N_\mathrm{pix}} \sum_{p=1}^{N_\mathrm{pix}}
\mathcal{P}_\mathrm{S}^{{(p)}}(t) \, \mathcal{D}^{{(p)}}(t),$$ where we made use of the fact that the total generating function factorizes in the point-source component and the diffuse component, $\mathcal{P}^{{(p)}}(t) = \mathcal{P}_\mathrm{S}^{{(p)}}(t)\,\mathcal{D}^{{(p)}}(t)$ (see Equations [(\[eq:gfgen1\])]{} and [(\[eq:Dgen\])]{}).
The numerical implementation of Equation [(\[eq:gf1ppdf\])]{} in its most general form is computationally complex [see MH11; @2015JCAP...05..056L]. In the ideal situation of an isotropic point-source distribution and homogeneous exposure, $\mathcal{P}_\mathrm{S} (t) \equiv \mathcal{P}_\mathrm{S}^{{(p)}}(t)$ factorizes out of the sum, reducing the pixel-dependent part of Equation [(\[eq:gf1ppdf\])]{} to the diffuse component, which is easy to handle. The exposure of *Fermi*-LAT data is, however, not uniformly distributed in the ROI (see Section \[sec:Fermi\_data\]) and requires appropriate consideration.
To correct the point-source component for exposure inhomogeneities, we divided the exposure map into $N_\mathrm{exp}$ regions, separated by contours of constant exposure such that the entire exposure range is subdivided into $N_\mathrm{exp}$ equally spaced bins. In each region, the exposure values were replaced with the region averages, yielding $N_\mathrm{exp}$ regions of constant exposure. The approximation accuracy is thus related to the choice of $N_\mathrm{exp}$. In this case, Equation [(\[eq:gf1ppdf\])]{} reads $$\label{eq:Pexpinhomo}
\mathcal{P}(t) = \frac{1}{N_\mathrm{pix}} \sum_{i=1}^{N_\mathrm{exp}}
\sum_{\mathrm{P}_i} \mathcal{P}_\mathrm{S}^{{(p)}}(t) \, \mathcal{D}^{{(p)}}(t),$$ where $\mathrm{P}_i = \{ p | p \in \mathrm{R}_i\}$ denotes the subset of pixels belonging to region $\mathrm{R}_i$. In this way, $\mathcal{P}_\mathrm{S}^{{(p)}}(t)$ becomes independent of the inner sum and factorizes, significantly reducing the required amount of computation time.
The probability distributions $p_k$ or $p^{{(p)}}_k$ can eventually be calculated from $\mathcal{P}(t)$ or $\mathcal{P}^{{(p)}}(t)$, respectively, by using Equation [(\[eq:pkcalc\])]{}.
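In practice, the coefficients can be generated without symbolic differentiation: expanding $\mathcal{P}'(t) = \mathcal{P}(t)\sum_m m\,x_m t^{m-1}$ in powers of $t$ gives $p_0 = e^{-\sum_m x_m}$ and $k\,p_k = \sum_{m=1}^{k} m\,x_m\,p_{k-m}$, which is equivalent to Equation [(\[eq:pkcalc\])]{}. A minimal sketch (Python, assuming the $x_m$ of a single pixel or exposure region have already been computed):

```python
import numpy as np

def pk_from_xm(x_m, k_max):
    """Probability p_k of observing k photons in a pixel, from the x_m coefficients.

    x_m: array with x_m[m-1] = expected number of m-photon sources in the pixel
         (the diffuse terms enter as an additional 1-photon contribution).
    """
    x = np.asarray(x_m, dtype=float)
    m_max = len(x)
    p = np.zeros(k_max + 1)
    p[0] = np.exp(-x.sum())
    for k in range(1, k_max + 1):
        m_top = min(k, m_max)
        m = np.arange(1, m_top + 1)
        p[k] = np.dot(m * x[:m_top], p[k - m]) / k
    return p

# Sanity check: a single 1-photon term x_1 reproduces a Poisson distribution.
print(pk_from_xm([2.0], k_max=5))   # ~ exp(-2) * 2^k / k!
```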
Model Description
-----------------
### Source-count Distribution {#sssec:dnds_model}
The source-count distribution [$\mathrm{d}N/\mathrm{d}S$]{} characterizes the number of point sources $N$ in the flux interval $(S, S+dS)$, where $S$ is the integral flux of a source in a given energy range. The quantity $N$ actually denotes the areal source density per solid angle element $\mathrm{d}\Omega$, which is omitted in our notation for simplicity. In this analysis, we parameterized the source-count distribution with a power law with *multiple breaks*, referred to as multiply broken power law (MBPL) in the remainder. An MBPL with $N_\mathrm{b}$ breaks located at $S_{\mathrm{b}j}$, $j=1,2,\dots,N_\mathrm{b}$, is defined as $$\label{eq:mbpl}
\frac{\mathrm{d}N}{\mathrm{d}S} \propto
\begin{cases}
\left( \frac{S}{S_0} \right)^{-n_1} \qquad\qquad\qquad\qquad\qquad ,
S > S_{\mathrm{b}1} & \\
\left( \frac{S_{\mathrm{b}1}}{S_0} \right)^{-n_1+n_2}
\left( \frac{S}{S_0} \right)^{-n_2} \qquad\qquad ,
S_{\mathrm{b}2} < S \leq S_{\mathrm{b}1} \\
\vdotswithin{\left(\frac{S_{\mathrm{b}1}}{S_0}\right)}
\qquad\qquad\qquad\qquad\qquad \vdotswithin{S_{\mathrm{b}2} < S} & \\
\left( \frac{S_{\mathrm{b}1}}{S_0} \right)^{-n_1+n_2}
\left( \frac{S_{\mathrm{b}2}}{S_0} \right)^{-n_2+n_3}
\cdots \ \left( \frac{S}{S_0} \right)^{-n_{N_\mathrm{b}+1}} & \\
\hspace{14em} , S \leq S_{\mathrm{b}N_\mathrm{b}} & \\
\end{cases}$$ where $S_0$ is a normalization constant. The $n_j$ denote the indices of the power-law components. The [$\mathrm{d}N/\mathrm{d}S$]{} distribution is normalized with an overall factor $A_\mathrm{S}$, which is given by $A_\mathrm{S}
= \mathrm{d}N/\mathrm{d}S\,(S_0)$ if $S_0 > S_{\mathrm{b}1}$. We required a finite total flux, i.e., we imposed $n_1 > 2$ and $n_{N_\mathrm{b}+1} < 2$.
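A direct transcription of Equation [(\[eq:mbpl\])]{} into code (Python; the normalization, break positions, and indices are free inputs) may read as follows.

```python
import numpy as np

def dnds_mbpl(S, A_S, S0, breaks, indices):
    """Multiply broken power law dN/dS.

    breaks:  [S_b1, S_b2, ...] in descending order;
    indices: [n_1, ..., n_{N_b+1}], one more entry than breaks.
    """
    S = np.atleast_1d(np.asarray(S, dtype=float))
    out = np.empty_like(S)
    for i, s in enumerate(S):
        norm, n = 1.0, indices[0]
        for j, Sb in enumerate(breaks):
            if s > Sb:
                break
            # continuity factor accumulated when crossing the break S_bj
            norm *= (Sb / S0) ** (-indices[j] + indices[j + 1])
            n = indices[j + 1]
        out[i] = A_S * norm * (s / S0) ** (-n)
    return out

# Example with a single break, mimicking the best-fit values quoted in the abstract:
print(dnds_mbpl([1e-7, 1e-9], A_S=1.0, S0=1e-8, breaks=[2.1e-8], indices=[3.1, 1.97]))
```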
### Source Spectra {#sec:source_spectra}
The whole population of gamma-ray sources is disseminated by a variety of different source classes (see Section \[sec:intro\] for details). In particular, FSRQs and BL Lac objects contribute to the overall [$\mathrm{d}N/\mathrm{d}S$]{} at high Galactic latitudes. The spectral index distribution of all resolved sources in the energy band between 100MeV and 100GeV (assuming power-law spectra) is compatible with a Gaussian centered on $\Gamma = 2.40\pm 0.02$, with a half-width of $\sigma_\Gamma = 0.24
\pm 0.02$ [@2010ApJ...720..435A]. We thus used an index of $\Gamma=2.4$ in Equation [(\[eq:counts\])]{}.
### Galactic Foreground and Isotropic Background {#ssec:bckgs}
The Galactic foreground and the diffuse isotropic background were implemented as described in Equation [(\[eq:Dgen\])]{}. The total diffuse contribution was modeled by $$x^{{(p)}}_\mathrm{diff} = A_\mathrm{gal}\,x^{{(p)}}_\mathrm{gal} +
\frac{x^{{(p)}}_\mathrm{iso} }{F_\mathrm{iso}}\,F_\mathrm{iso} \,,$$ with $A_\mathrm{gal}$ being a normalization parameter of the Galactic foreground component $x^{{(p)}}_\mathrm{gal}$. For the isotropic component $x^{{(p)}}_\mathrm{iso}$ the integral flux $F_\mathrm{iso}$ was directly used as a sampling parameter, in order to have physical units of flux.
#### Galactic Foreground
The Galactic foreground was modeled using a template (`gll_iem_v05_rev1.fit`) developed by the *Fermi*-LAT Collaboration to compile the 3FGL catalog [@2015ApJS..218...23A][^3]. The Galactic foreground model is based on a fit of multiple templates to the gamma-ray data. The templates used are radio-derived gas maps split into various galactocentric annuli, a further dust-derived gas map, an inverse Compton emission template derived with the GALPROP code,[^4] and some patches designed to describe observed residual emission not well represented by the previous templates, such as the Fermi bubbles and Galactic Loop I.
The Galactic foreground template comprises predictions of the differential intensity at 30 logarithmically spaced energies in the interval between 50MeV and 600GeV. The spatial map resolution is $0.125^\circ$, which was resampled to match the pixelization scheme and spatial resolutions used in our analysis. The predicted number of counts per pixel $x^{{(p)}}_\mathrm{gal}$ was obtained from integration in the energy range $[E_\mathrm{min}, E_\mathrm{max}]$ as described in Section \[ssec:gen\_funcs\].
In order to include the effects caused by the point spread function (PSF) of the detector, we smoothed the final template map with a Gaussian kernel of $0.5^\circ$. We checked that systematics of this coarse PSF approximation (see Section \[sec:Fermi\_data\]) were negligible, by comparing kernels with half-widths between $0^\circ$ and $1^\circ$.
Figure \[fig:f1\] shows the model prediction for the diffuse Galactic foreground flux between $1\,\mathrm{GeV}$ and $10\,\mathrm{GeV}$ and Galactic latitudes $|b| \geq 30^\circ$. The complex spatial morphology of the Galactic foreground emission is evident. The intensity of Galactic foreground emission significantly decreases with increasing latitude. The integral flux predicted by the model in the energy range $\Delta E$ between 1GeV and 10GeV is $F_\mathrm{gal}(\Delta E) \simeq 4.69\times
10^{-5}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ for the full sky and $F_\mathrm{gal}(\Delta E; |b| \geq 30^\circ) \simeq 6.42\times
10^{-6}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ for high Galactic latitudes $|b| \geq 30^\circ$.
Since the model reported in `gll_iem_v05_rev1.fit` was originally normalized to best reproduce the whole gamma-ray sky, we allowed for an overall different normalization parameter $A_\mathrm{gal}$ in our analysis, given that we explored different ROIs. Nonetheless, $A_\mathrm{gal}$ is expected to be of order unity when considered a free fit parameter.
#### Isotropic Background
The expected counts for the diffuse isotropic background component $F_\mathrm{iso}$ were derived assuming a power-law spectrum with spectral index $\Gamma_\mathrm{iso} = 2.3$ [@2015ApJ...799...86A]. We verified that using the specific energy spectrum template provided by the *Fermi*-LAT Collaboration (`iso_clean_front_v05.txt`) had no impact on our results.
PSF Smearing {#sec:psf}
------------
The detected photon flux from point sources is distributed over a certain area of the sky as caused by the finite PSF of the instrument. Photon contributions from individual point sources are therefore spread over several adjacent pixels, each containing a fraction $f$ of the total photon flux from the source. Apart from being a function of the pixel position, the fractions $f$ depend on the location of a source within its central pixel. A smaller pixel size, i.e., a higher-resolution map, decreases the values of $f$, corresponding to a relatively larger PSF smoothing.
Equation [(\[eq:xm\])]{} must therefore be corrected for PSF effects. Following MH11, the PSF correction was incorporated by statistical means, considering the average distribution of fractions $\rho(f)$ among pixels for a given pixel size. To determine $\rho(f)$, we used Monte Carlo simulations distributing a number of $N$ fiducial point sources at random positions on the sky. The sources were convolved with the detector PSF, and the fractions $f_i$, $i=1,\dots,N_\mathrm{pix}$, were evaluated for each source. The sums of the fractions $f_i$ were normalized to 1. We used the effective detector PSF derived from the data set analyzed below, corresponding to the specific event selection cuts used in our analysis. The effective detector PSF was obtained by averaging the detector PSF over energy and spectral index distribution. This is further explained in Section \[sec:Fermi\_data\].
The average distribution function $\rho(f)$ is then given by $$\label{eq:rho_f}
\rho(f) = \left. \frac{\Delta N(f)}{N \Delta f}
\right|_{\Delta f \rightarrow 0,\, N \rightarrow \infty}\,,$$ where $\Delta N(f)$ denotes the number of fractions in the interval $(f, f+\Delta f)$. The distribution obeys the normalization condition $$\label{eq:rho_f_norm}
\int \mathrm{d}f\,f \rho(f) = 1\,.$$ The expected number of $m$-photon sources in a given pixel corrected for PSF effects is given by $$\label{eq:xm_corr}
x^{{(p)}}_m = \Omega_\mathrm{pix} \int_0^\infty \mathrm{d}S
\frac{\mathrm{d}N}{\mathrm{d}S} \int \mathrm{d}f \rho(f)
\frac{(f\,\mathcal{C}^{{(p)}}\!(S) )^m}{m!} e^{-f\,\mathcal{C}^{{(p)}}\!(S)}\,.$$
Figure \[fig:psf\_corr\] depicts the distribution function $\rho(f)$ derived for the effective PSF of the data set for two different pixel sizes. The function $\rho(f)$ is also shown assuming a Gaussian PSF with a 68% containment radius resembling the one of the actual PSF. Compared to the Gaussian case, the more pronounced peak of the detector PSF reflects in a strongly peaked $\rho(f)$ at large flux fractions. Reducing the pixel size, i.e., effectively increasing PSF smoothing (in the sense of this analysis), shifts the peak of $\rho(f)$ to smaller $f$. The impact of the large tails of the detector PSF becomes evident at small fractions.
Data Fitting
------------
To fit the model ($H$) to a given data set (D), we used the method of maximum likelihood (see, e.g., @2014ChPhC..38i0001O for a review). We defined the likelihood $\mathcal{L}({\bf \Theta}) \equiv
P(\mathrm{D}|{\bf \Theta}, H)$ in two different ways, which we refer to as L1 and L2 in the following. The likelihood function describes the probability distribution function $P$ of obtaining the data set $\mathrm{D}$, under the assumption of the model (hypothesis) $H$ with a given parameter set ${\bf \Theta}$.
For a source-count distribution following an MBPL with $N_\mathrm{b}$ breaks and the previously defined background contributions, the parameter vector is given by $${\bf \Theta} = (A_\mathrm{S},S_{\mathrm{b}1},\dots,S_{\mathrm{b}N_\mathrm{b}},
n_1,\dots,n_{N_\mathrm{b}+1},A_\mathrm{gal},F_\mathrm{iso}),$$ containing $N_{\bf \Theta} = 2N_\mathrm{b} + 4$ free parameters.
### Likelihood L1
The L1 approach resembles the method of the simple [1pPDF]{}(see MH11). Given the probability distribution $p_k$ for a given ${\bf \Theta}$, the expected number of pixels containing $k$ photons is $\nu_k({\bf
\Theta}) = N_\mathrm{pix}\,p_k({\bf \Theta})$. The probability of finding $n_k$ pixels with $k$ photons follows a Poissonian (if pixels are considered statistically independent), resulting in the total likelihood function $$\label{eq:L1}
\mathcal{L}_1({\bf \Theta}) = \prod_{k=0}^{k_\mathrm{max}}
\frac{\nu_k({\bf \Theta})^{n_k}}{n_k!} e^{-\nu_k({\bf \Theta})}\,,$$ where $k_\mathrm{max}$ denotes the maximum value of $k$ considered in the analysis.
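A minimal sketch of Equation [(\[eq:L1\])]{} in Python, assuming the pixel probabilities $p_k({\bf \Theta})$ have already been computed by the generating-function machinery and are passed in as a plain array; `counts_map` is a placeholder integer array with the measured counts per pixel.

```python
import numpy as np
from scipy.special import gammaln


def log_like_L1(p_k, counts_map):
    """ln L1: Poisson likelihood of the n_k histogram given nu_k = N_pix * p_k."""
    n_pix = counts_map.size
    # Data histogram n_k; k_max (= len(p_k) - 1) is assumed to exceed the
    # maximum count per pixel, so the slice below is effectively a no-op.
    n_k = np.bincount(counts_map, minlength=len(p_k))[:len(p_k)]
    nu_k = n_pix * np.asarray(p_k, dtype=float)
    # Per-k Poisson term: n_k ln(nu_k) - nu_k - ln(n_k!)
    return np.sum(n_k * np.log(np.clip(nu_k, 1e-300, None))
                  - nu_k - gammaln(n_k + 1.0))
```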
### Likelihood L2
The simple [1pPDF]{} approach can be improved by including morphological information provided by templates. The L2 approach defines a likelihood function that depends on the location of the pixel. The probability of finding $k$ photons in a pixel $p$ is given by $p_k^{{(p)}}$ for a given parameter vector ${\bf \Theta}$. We emphasize that now the data set comprises the measured number of photons $k_p$ in each pixel $p$, instead of the $n_k$-histogram considered in L1. For clarity, the function $p_k^{{(p)}}$ is therefore denoted by $P(k_p) \equiv p_k^{{(p)}}$ in the following. The likelihood function for the entire ROI is then given by $$\mathcal{L}_2 ({\bf \Theta}) = \prod_{p=1}^{N_\mathrm{pix}} P(k_p)\,.$$
It should be noted that the L2 approach is a direct generalization of the L1 approach. The [1pPDF]{} approach already provides the PDF for each pixel, and it is thus natural to use the appropriate PDF for each pixel instead of using the average one and comparing it with the $n_k$-histogram. The L2 approach can then be seen as building a separate $n_k$-histogram for each pixel, comparing it with the appropriate $p_k$ distribution, and joining the likelihoods of all pixels into the global L2 likelihood. The fact that for each pixel the $n_k$-histogram reduces to a single count does not pose a problem of principle.
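The corresponding sketch for L2, assuming a placeholder array `p_k_per_pixel` of shape $(N_\mathrm{pix}, k_\mathrm{max}+1)$ that holds the pixel-dependent probabilities $P(k_p)$:

```python
import numpy as np


def log_like_L2(p_k_per_pixel, counts_map):
    """ln L2 = sum_p ln P(k_p), with k_p the measured counts in pixel p.

    counts_map must be an integer array not exceeding the k range covered
    by p_k_per_pixel.
    """
    n_pix = counts_map.size
    probs = p_k_per_pixel[np.arange(n_pix), counts_map]
    return np.sum(np.log(np.clip(probs, 1e-300, None)))
```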
### Bayesian Parameter Estimation {#sssec:par_est}
The sampling of the likelihood functions $\mathcal{L}_1({\bf \Theta})$ and $\mathcal{L}_2({\bf \Theta})$ is numerically demanding and requires advanced Markov Chain Monte Carlo (MCMC) methods to account for multimodal behavior and multiparameter degeneracies. We used the multimodal nested sampling algorithm `MultiNest`[^5] [@2008MNRAS.384..449F; @2009MNRAS.398.1601F; @2013arXiv1306.2144F] to sample the posterior distribution $P({\bf \Theta}|\mathrm{D},H)$. The posterior is defined by Bayes’s theorem as $P({\bf
\Theta}|\mathrm{D},H) = \mathcal{L}({\bf \Theta}) \pi({\bf \Theta})
/ \mathcal{Z}$, where $\mathcal{Z} \equiv P(\mathrm{D}|H)$ is the Bayesian evidence given by $$\mathcal{Z} = \int \mathcal{L}({\bf \Theta})
\pi({\bf \Theta})\,\mathrm{d}^{N_{\bf \Theta}} {\bf \Theta}\,,$$ and $\pi ({\bf \Theta})$ is the prior. `MultiNest` was used in its recommended configuration regarding sampling efficiency. For our analysis setups, we checked that sufficient sampling accuracy was reached using 1,500 live points with a tolerance setting of 0.2. Final acceptance rates were typically between 5% and 10%, and the final samples of approximately equal-weight parameter-space points comprised about $10^4$ points.
From the marginalized one-dimensional posterior distributions we quote, for each parameter, the median; the lower and upper statistical uncertainties were derived from the 15.85% and 84.15% quantiles, respectively. In the case of log-flat priors (see below), we assumed the marginalized posterior distribution to be Gaussian for deriving single-parameter uncertainty estimates in linear space. The derivation of uncertainty bands of the [$\mathrm{d}N/\mathrm{d}S$]{} fit exploited the same method but used the full posterior.
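A sketch of how such point estimates can be extracted from (approximately equal-weight) posterior samples; as a simplification it takes quantiles directly in $\log_{10}$ space for log-flat priors, rather than applying the Gaussian assumption described above. The array `samples` is a placeholder of shape `(n_samples, n_params)`.

```python
import numpy as np


def posterior_summary(samples, log_prior=False):
    """Return (median, lower error, upper error) per parameter column."""
    x = np.log10(samples) if log_prior else np.asarray(samples, dtype=float)
    # 15.85% / 50% / 84.15% quantiles of the marginalized distributions.
    q_lo, med, q_hi = np.percentile(x, [15.85, 50.0, 84.15], axis=0)
    if log_prior:
        q_lo, med, q_hi = 10.0 ** q_lo, 10.0 ** med, 10.0 ** q_hi
    return med, med - q_lo, q_hi - med
```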
Priors were chosen to be flat or log flat, depending on the numerical range required for a parameter. Details are discussed in Section \[sssec:priors\].
### Frequentist Parameter Estimation {#sssec:par_est_freq}
Bayesian parameter estimates from the posterior distributions are compared to parameter estimates employing the frequentist approach. The sampling intrinsically provides samples of a posterior distribution that depends on the prior. Nonetheless, if the number of samples is high enough that the tails of the posterior are also well explored, the final sample can be assumed to map the likelihood function reasonably well. Profile likelihood functions [see, e.g., @2005NIMPA.551..493R] can then be built from the posterior sample. In particular, we built the profile likelihood of the [$\mathrm{d}N/\mathrm{d}S$]{} fit and one-dimensional profile likelihoods for each parameter. We quote the maximum likelihood parameter values and 68% confidence level (CL) intervals, derived under the assumption that the profiled $-2\ln \mathcal{L}$ follows a chi-squared distribution with one degree of freedom, i.e., we quote the values of the parameters for which $-2\Delta \ln \mathcal{L}=1$.[^6] The advantage of profile likelihood parameter estimates is that they are prior independent.
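A sketch of a one-dimensional profile likelihood built from the posterior sample, under the assumption that the sample densely covers the relevant parameter range; `param_samples` and `lnL` are placeholder arrays of parameter values and associated log-likelihoods.

```python
import numpy as np


def profile_1d(param_samples, lnL, n_bins=40):
    """Profile likelihood: keep the maximum ln L per parameter bin.

    Returns bin centers, the profiled ln L, and the 68% CL interval from
    the condition -2 Delta lnL <= 1.
    """
    edges = np.linspace(param_samples.min(), param_samples.max(), n_bins + 1)
    idx = np.clip(np.digitize(param_samples, edges) - 1, 0, n_bins - 1)
    prof = np.full(n_bins, -np.inf)
    for i in range(n_bins):
        sel = idx == i
        if np.any(sel):
            prof[i] = lnL[sel].max()
    centers = 0.5 * (edges[1:] + edges[:-1])
    inside = centers[prof >= prof.max() - 0.5]
    return centers, prof, (inside.min(), inside.max())
```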
*FERMI*-LAT DATA {#sec:Fermi_data}
================
The analysis is based on all-sky gamma-ray data that were recorded with the *Fermi*-LAT[^7] within the first 6 years of the mission.[^8] Event selection and processing were performed with the public version of the Fermi Science Tools (v9r33p0, release date 2014 May 20).[^9] We used `Pass 7 Reprocessed` (`P7REP`) data along with `P7REP_V15` instrument response functions.
The application of the analysis method presented here is restricted to the energy bin between $E_\mathrm{min}=1\,\mathrm{GeV}$ and $E_\mathrm{max}=10\,\mathrm{GeV}$. The lower bound in energy was motivated by the size of the PSF, which increases significantly to values larger than $1^\circ$ for energies below 1GeV [@2012ApJS..203....4A]. The significant smoothing of point sources caused by a larger PSF may lead to large uncertainties in this analysis (see Section \[sec:psf\]). Effects of a possible energy dependence of [$\mathrm{d}N/\mathrm{d}S$]{} are mitigated by selecting an upper bound of 10GeV.
Data selection was restricted to events passing `CLEAN` event classification, as recommended for diffuse gamma-ray analyses. We furthermore required `FRONT`-converting events, in order to select events with a better PSF and to avoid a significant broadening of the effective PSF. Contamination from the Earth’s limb was suppressed by allowing a maximum zenith angle of $90^\circ$. We used standard quality selection criteria, i.e., `DATA_QUAL==1` and `LAT_CONFIG==1`, and the rocking angle of the satellite was constrained to values smaller than $52^\circ$. The data selection tasks were carried out with the tools `gtselect` and `gtmktime`.
The resulting counts map was pixelized with `gtbin` using the equal-area HEALPix pixelization scheme [@2005ApJ...622..759G]. The resolution of the discretized map is given by the pixel size, $\theta_\mathrm{pix} =
\sqrt{\Omega_\mathrm{pix}}$. For the statistical analysis employed here, the optimum resolution is expected to be of the order of the PSF: while undersampling the PSF leads to information loss on small-scale structures such as faint point sources, oversampling increases the statistical uncertainty on the number of counts per pixel. We thus compared two choices for the map resolution, where $\kappa$ denotes the HEALPix resolution parameter: $\kappa=6$ ($N_\mathrm{side} = 64$),[^10] corresponding to a resolution of approximately $0.92^\circ$, and $\kappa=7$ ($N_\mathrm{side} = 128$), corresponding to a resolution of approximately $0.46^\circ$. These choices slightly undersample or oversample the actual PSF, respectively.
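As an illustration of this pixelization step (a stand-in for `gtbin`, assuming `healpy` and an event list with Galactic coordinates in degrees):

```python
import numpy as np
import healpy as hp


def counts_map(lon_deg, lat_deg, order=6, b_cut=30.0):
    """Bin photon events into a HEALPix counts map and mask |b| < b_cut."""
    nside = 2 ** order                        # kappa = 6  ->  N_side = 64
    npix = hp.nside2npix(nside)
    pix = hp.ang2pix(nside, lon_deg, lat_deg, lonlat=True)
    cmap = np.bincount(pix, minlength=npix).astype(float)
    # Flag pixels outside the ROI (here only for display purposes).
    _, b_pix = hp.pix2ang(nside, np.arange(npix), lonlat=True)
    cmap[np.abs(b_pix) < b_cut] = hp.UNSEEN
    return cmap
```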
We used `gtltcube` and `gtexpcube2` to derive the exposure map as a function of energy. The livetime cube was calculated on a spatial grid with a $1^\circ$ spacing. The exposure map was computed on a spatial grid of $0.125^\circ$ (in Cartesian projection), with the same energy binning as used in the Galactic foreground template, and was projected into HEALPix afterwards.
The statistical analysis requires a careful correction for effects imposed by the PSF; see Section \[sec:psf\] for details. The PSF of the data set was calculated with `gtpsf` for a fiducial Galactic position $(l,b)=(45^\circ,45^\circ)$ as a function of the displacement angle $\theta$ and the energy $E$. We checked that changes of the PSF at other celestial positions were negligible. Given that the PSF strongly depends on energy, analyzing data in a single energy bin requires appropriate averaging. The effective PSF of the data set was calculated by weighting with the energy-dependent exposure and power-law type energy spectra $\propto E^{-\Gamma}$, $$\mathrm{psf} (\theta,\Delta E) = \frac{\int_{E_\mathrm{min}}^{E_\mathrm{max}}
\mathrm{d}E\,E^{-\Gamma}\,\mathcal{E}(E)\,\mathrm{psf} (\theta,E) }
{ \int_{E_\mathrm{min}}^{E_\mathrm{max}} \mathrm{d}E\,E^{-\Gamma}\,\mathcal{E}(E) } ,$$ where $\mathcal{E}(E) = \langle \mathcal{E}^{{(p)}}(E)
\rangle_\mathrm{ROI}$ denotes the exposure averaged over the ROI. An average spectral index $\Gamma = 2.4$ was assumed.
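A sketch of this energy averaging, with `psf_of_E` (shape `(n_E, n_theta)`) and `exposure_of_E` (shape `(n_E,)`) as placeholders for the `gtpsf` and ROI-averaged exposure outputs:

```python
import numpy as np


def effective_psf(energies, psf_of_E, exposure_of_E, gamma=2.4):
    """Average the PSF over energy, weighted by E^-gamma times the exposure."""
    weights = energies ** (-gamma) * exposure_of_E
    # Trapezoidal integration over energy of numerator and denominator.
    num = np.trapz(weights[:, None] * psf_of_E, energies, axis=0)
    den = np.trapz(weights, energies)
    return num / den
```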
The analysis presented in this article was carried out for high Galactic latitudes $|b|\geq 30^\circ$, aiming at measuring the source-count distribution and composition of the extragalactic gamma-ray sky. For $|b|\geq 30^\circ$ (corresponding to $f_\mathrm{ROI}=0.5$), the photon counts map comprises 862,459 events distributed in 24,576 pixels ($\kappa = 6$). The counts map, with a minimum of 5 events per pixel and a maximum of 4,101 events, is shown in Figure \[fig:cnts\].
The energy-averaged exposure map of the data set is shown in Figure \[fig:exp\] for the full sky, divided into 20 equal-exposure regions (see Equation [(\[eq:Pexpinhomo\])]{}). The full-sky (unbinned) exposure varies from $8.22\times 10^{10}\,\mathrm{cm}^2\,\mathrm{s}$ to $1.27\times 10^{11}\,\mathrm{cm}^2\,\mathrm{s}$. The mean of the energy-averaged exposure is $9.18\times
10^{10}\,\mathrm{cm}^2\,\mathrm{s}$ for $|b|\geq 30^\circ$.
The effective PSF width (68% containment radius) is $\sigma_\mathrm{psf} = 0.43^\circ$.
ANALYSIS ROUTINE {#sec:analysis_routine}
================
The following section is dedicated to details of the analysis method and to the analysis strategy developed in this article. The analysis aims at measuring (i) the contribution from resolved and unresolved gamma-ray point sources to the EGB, (ii) the shape of their source-count distribution [$\mathrm{d}N/\mathrm{d}S$]{}, and (iii) the resulting total composition of the gamma-ray sky, in the energy band between 1GeV and 10GeV. The restriction to Galactic latitudes $|b| \geq
30^\circ$ provides a reasonable choice for ensuring that the dominant source contributions are of extragalactic origin.
Expected Sensitivity {#ssec:sensitivity}
--------------------
The source-population sensitivity of the method can be estimated from the theoretical framework discussed in Section \[sec:theory\]. By definition, the total PDF incorporates background components as populations of $1$-photon sources (see Equation \[eq:Dgen\]). Sources contributing on average two photons per pixel should therefore be clearly distinguishable from background contributions. The limiting sensitivity on the point-source flux thus corresponds to about two photons over the average exposure, yielding a value of $S_\mathrm{sens} \simeq 2.31\times 10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ for a pixel size corresponding to resolution $\kappa=6$. This value gives a back-of-the-envelope estimate of the sensitivity to the point-source population, while the actual sensitivity additionally depends on quantities such as the unknown shape of the source-count distribution, the relative contribution from foreground and background components, and the number of evaluated pixels $N_\mathrm{pix}$ (i.e., the Galactic latitude cut). The actual sensitivity will be determined from a data-driven approach in Section \[sec:application\], as well as from simulations in Appendix \[app:sims\].
In comparison, the sensitivity of the 3FGL catalog drops at a flux of $\sim\! 2.2 \times 10^{-10}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ for the energy band between 1GeV and 10GeV.[^11] The catalog sensitivity can be extended to lower fluxes by correcting for the point-source detection efficiency. Determining the detection efficiency is, however, nontrivial: the catalog detection procedure needs to be accurately reproduced with Monte Carlo simulations, and the method is not completely free from assumptions regarding the properties of the unresolved sources. A clear advantage of the method employed here is, instead, that no detection efficiency is involved. As indicated by the value of $S_\mathrm{sens}$, we will see that this analysis increases the sensitivity to faint point-source populations by about one order of magnitude with respect to the 3FGL catalog.
Analysis Setup
--------------
The L2 approach turned out to provide significantly higher sensitivity than the L1 approach, as a consequence of the inclusion of spatial information. We therefore use the second method $\mathcal{L}_2({\bf \Theta})$ as our reference analysis in the remainder of this work. We nonetheless present in the main text a comparison of the two approaches, showing that they lead to consistent results.
All pixels in the ROI were considered in the calculation of the likelihood. The upper bound on the number of photon counts per pixel, $k_\mathrm{max}$, as used in Equation [(\[eq:L1\])]{} was always chosen to be slightly larger than the maximum number of counts per map pixel.
Source-count Distribution Fit
-----------------------------
The source-count distribution [$\mathrm{d}N/\mathrm{d}S$]{} was parameterized with the MBPL defined in Equation [(\[eq:mbpl\])]{}. For readability, the following terminology will be used in the remainder: the source-count distribution is subdivided into three different regimes, defined by splitting the covered flux range $S$ into three disjoint intervals, $$\begin{aligned}
\left[\,0, 10^{-10} \right) \,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}:
& \quad \text{faint-source region,} \\
\left[ 10^{-10}, 10^{-8} \right) \,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}:
& \quad \text{intermediate region,} \\
\left[ 10^{-8}, S_\mathrm{cut} \right]\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}:
& \quad \text{bright-source region.} \\
\end{aligned}$$ The quantity $S_\mathrm{cut}$ corresponds to a high cutoff flux of the source-count distribution. The observational determination of $S_\mathrm{cut}$ is limited by cosmic variance, and a precise value is therefore lacking. Unless stated otherwise, we chose a cutoff value $S_\mathrm{cut} = 10^{-6}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, which is almost one order of magnitude higher than the flux of the brightest source listed in the 3FGL catalog within the ROIs considered in this work (see Section \[ssec:data\_mbpl\]). The stability of this choice was checked by comparing with $S_\mathrm{cut} =
10^{-5}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$.
In the following, we describe our strategy to fit the [$\mathrm{d}N/\mathrm{d}S$]{} distribution to the data. A validation of the analysis method with Monte Carlo simulations is described in Appendix \[app:sims\].
### Parameters of [$\mathrm{d}N/\mathrm{d}S$]{} {#sssec:dnds_params}
#### Normalization
The reference normalization flux $S_0$ was kept fixed during the fit. A natural choice for $S_0$ would be the flux where the uncertainty band of the [$\mathrm{d}N/\mathrm{d}S$]{} reaches its minimum (pivot flux). In this way, undesired correlations among the fit parameters are minimized. We refrained from a systematic determination of the pivot point, but we instead fixed $S_0$ to a value of $S_0 = 3\times
10^{-8}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ after optimization checks. We checked for robustness by varying $S_0$ within the range $[0.1\,S_0, S_0]$, obtaining stable results.[^12] Remaining parameter degeneracies were handled well by the sampling.
#### Number of Breaks
Previous works investigating the gamma-ray [$\mathrm{d}N/\mathrm{d}S$]{} distribution with cataloged sources concluded that the [$\mathrm{d}N/\mathrm{d}S$]{} distribution above $|b|>10^\circ$ is well described by a broken power law down to a flux of $\sim 5\times 10^{-10}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, with a break at $(2.3\pm 0.6)\times
10^{-9}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ [@2010ApJ...720..435A]. The following analysis increases the sensitivity to resolving point sources with a flux above $\sim 2\times
10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ and provides a significantly smaller statistical uncertainty. We therefore parameterized [$\mathrm{d}N/\mathrm{d}S$]{} with up to three free breaks ($N_\mathrm{b} \leq
3$), in order to find the minimum number of breaks required to properly fit the data. In the case of $N_\mathrm{b} = 3$, one break was placed in the bright-source region, a second in the intermediate region, and the last one in the faint-source region; see Section \[sssec:priors\] for details. We compared these results with setups reducing $N_\mathrm{b}$ to one or two free breaks, to investigate stability and potential shortcomings in the different approaches.
### Fitting Techniques {#sssec:fit_approach}
We employed three different techniques of fitting the [$\mathrm{d}N/\mathrm{d}S$]{} distribution to the data, in order to investigate the stability of the analysis and to study the sensitivity limit. The third technique, which we refer to as the *hybrid approach*, is a combination of the two other techniques. This hybrid approach proved to provide the most robust results.
#### MBPL Approach
The MBPL approach comprises fitting a pure MBPL with a number of $N_\mathrm{b}$ free break positions. The total number of free parameters is given by $N_{\bf \Theta} = 2N_\mathrm{b} + 4$ (including free parameters of the background components). The parameters of the MBPL are sampled directly.
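For illustration, a continuous multiply broken power law can be evaluated as in the sketch below; this follows one common convention (normalization $A_\mathrm{S}$ attached at the reference flux $S_0$, breaks and indices ordered from bright to faint) and is not necessarily identical in every detail to Equation [(\[eq:mbpl\])]{}.

```python
import numpy as np


def mbpl(S, A_S, S0, breaks, indices):
    """Continuous multiply broken power law dN/dS (one possible convention).

    breaks  : break fluxes in decreasing order, S_b1 > S_b2 > ...
    indices : n_1, ..., n_{Nb+1}; dN/dS ~ S^{-n_j} within segment j.
    """
    S = np.atleast_1d(np.asarray(S, dtype=float))
    breaks = np.asarray(breaks, dtype=float)
    indices = np.asarray(indices, dtype=float)
    edges = np.concatenate(([np.inf], breaks, [0.0]))
    # Segment index of every flux value and of the reference flux S0.
    seg = np.searchsorted(-edges, -S, side='right') - 1
    seg0 = np.searchsorted(-edges, -np.atleast_1d(float(S0)), side='right')[0] - 1
    # Continuity factors accumulated across the breaks.
    c = np.ones(len(indices))
    for j in range(1, len(indices)):
        c[j] = c[j - 1] * breaks[j - 1] ** (indices[j] - indices[j - 1])
    norm = A_S / (c[seg0] * S0 ** (-indices[seg0]))
    return norm * c[seg] * S ** (-indices[seg])


# Example call with illustrative values inspired by the two-break fit
# (A_S = 1.0 is a placeholder normalization):
# mbpl(1e-9, 1.0, 3e-8, breaks=[2.1e-8, 5.6e-12], indices=[3.1, 1.97, -0.6])
```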
#### Node-based Approach
The complexity of the parameter space, including degeneracies between breaks and power-law indices, can be reduced by imposing a grid of $N_\mathrm{nd}$ fixed flux positions, which we refer to as nodes $S_{\mathrm{nd}j}$, where $j=0,1,\dots,N_\mathrm{nd}-1$. Nodes are counted starting from the one with the highest flux, in order to maintain compatibility with the numbering of breaks in the MBPL described in Equation [(\[eq:mbpl\])]{}. The free parameters of the source-count distribution correspond to the values of [$\mathrm{d}N/\mathrm{d}S$]{} at the positions of the nodes, i.e., $A_{\mathrm{nd} j } =
\mathrm{d}N/\mathrm{d}S\,(S_{\mathrm{nd}j})$. The index of the power-law component below the last node, $n_\mathrm{f}$, is kept fixed in this approach.
The parameter set $\{ A_{\mathrm{nd} j }, S_{\mathrm{nd} j },
n_\mathrm{f} \}$ can then be mapped to the MBPL parameters using Equation [(\[eq:mbpl\])]{}, i.e., the [$\mathrm{d}N/\mathrm{d}S$]{} distribution between adjacent nodes is assumed to follow power laws. Technically, it should be noted that $S_\mathrm{cut} \equiv S_{\mathrm{nd} 0 }$ in this case. A choice of $N_\mathrm{nd}$ nodes therefore corresponds to choosing an MBPL with $N_\mathrm{nd} - 1$ fixed breaks. The quantity $A_\mathrm{S}$ is to be calculated at a value close to the decorrelation flux to ensure a stable fit. The total number of free parameters is given by $N_{\bf \Theta} = N_\mathrm{nd} + 2$.
While this technique comes with the advantage of reducing the complexity of the parameter space, the choice of the node positions is arbitrary. This can introduce biases between nodes and can thus bias the overall [$\mathrm{d}N/\mathrm{d}S$]{} fit. The node-based approach is further considered in Appendix \[app:node\_based\].
We note that a similar approach has been recently used by [@2014MNRAS.440.2791V] for measuring the source-count distribution of radio sources.
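A sketch of the node-to-MBPL mapping described above: the node amplitudes define power-law indices between adjacent nodes, the nodes below the first one act as fixed breaks, and the fixed index $n_\mathrm{f}$ applies below the last node.

```python
import numpy as np


def nodes_to_mbpl(S_nodes, A_nodes, n_f):
    """Map node amplitudes A_nd_j = dN/dS(S_nd_j) onto MBPL segments.

    S_nodes : fixed node fluxes in decreasing order (S_nd0 = S_cut).
    A_nodes : dN/dS values at the nodes (the free parameters).
    n_f     : fixed power-law index below the last node.
    """
    S = np.asarray(S_nodes, dtype=float)
    A = np.asarray(A_nodes, dtype=float)
    # dN/dS ~ S^{-n} between adjacent nodes.
    n_between = -np.diff(np.log(A)) / np.diff(np.log(S))
    indices = np.concatenate((n_between, [n_f]))
    breaks = S[1:]        # N_nd nodes correspond to N_nd - 1 fixed breaks
    return breaks, indices
```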
#### Hybrid Approach
The hybrid approach combines the MBPL approach and the node-based approach. Free break positions as used in the MBPL approach are required to robustly fit the [$\mathrm{d}N/\mathrm{d}S$]{} distribution and to determine the sensitivity; see Section \[sec:application\] for details. Fitting a pure MBPL, however, was found to underestimate the uncertainty band of the fit at the lower end of the faint-source region. In addition, the fit obtained from the Bayesian posterior can be biased for very faint sources, as demonstrated by Monte Carlo simulations in Appendix \[app:sims\]. We therefore chose to incorporate a number of nodes around the sensitivity threshold of the analysis, resolving these issues of the MBPL approach.
The hybrid approach is characterized by choosing a number $N^{\mathrm{h}}_\mathrm{b}$ of free breaks, a number $N^{\mathrm{h}}_\mathrm{nd}$ of nodes, and the index of the power-law component below the last node, $n_\mathrm{f}$. We note that the lower limit of the prior of the last free break $S_{\mathrm{b}N^{\mathrm{h}}_\mathrm{b}}$ technically imposes a fixed node $S_\mathrm{nd0}$, given that the first free node $S_\mathrm{nd1}$ is continuously connected with a power law to the MBPL component at higher fluxes. The setup corresponds to choosing an MBPL with $N^{\mathrm{h}}_\mathrm{b} + N^{\mathrm{h}}_\mathrm{nd} + 1$ breaks, with the last ones at fixed positions. The total number of free parameters in the hybrid approach is $N_{\bf \Theta} = 2N^{\mathrm{h}}_\mathrm{b} +
N^{\mathrm{h}}_\mathrm{nd} +4$.
[lllccc]{} Approach & Parameter & Prior Type & 1 break & 2 breaks & 3 breaks\
Common & $A_\mathrm{S}$ & log-flat & \[1,30\] & \[1,30\] & \[1,30\]\
& $A_\mathrm{gal}$ & flat & \[0.95,1.1\] & \[0.95,1.1\] & \[0.95,1.1\]\
& $F_\mathrm{iso}$ & log-flat & \[0.5,5\] & \[0.5,5\] & \[0.5,5\]\
MBPL & $S_\mathrm{b1}$ & log-flat & \[3E-13,5E-8\] & \[3E-9,5E-8\] & \[3E-9,5E-8\]\
& $S_\mathrm{b2}$ & log-flat & & \[3E-13,3E-9\] & \[2E-11,3E-9\]\
& $S_\mathrm{b3}$ & log-flat & & & \[3E-13,2E-11\]\
& $n_1$ & flat & \[1.0,4.3\] & \[2.05,4.3\] & \[2.05,4.3\]\
& $n_2$ & flat & \[-2.0,2.0\] & \[1.4,2.3\] & \[1.7,2.2\]\
& $n_3$ & flat & & \[-2.0,2.0\] & \[1.4,2.3\]\
& $n_4$ & flat & & & \[-2.0,2.0\]\
Hybrid & $S_\mathrm{b1}$ & log-flat & \[1E-11,5E-8\] & \[3E-9,5E-8\] & \[3E-9,5E-8\]\
& $S_\mathrm{b2}$ & log-flat & & \[1E-11,3E-9\] & \[2E-10,3E-9\]\
& $S_\mathrm{b3}$ & log-flat & & & \[1E-11,2E-10\]\
& $n_1$ & flat & \[2.05,4.3\] & \[2.05,4.3\] & \[2.05,4.3\]\
& $n_2$ & flat & \[1.4,2.3\] & \[1.7,2.3\] & \[1.7,2.3\]\
& $n_3$ & flat & & \[1.3,3.0\] & \[1.4,2.2\]\
& $n_4$ & flat & & & \[1.3,3.0\]\
Hybrid (node) & $A_\mathrm{nd1}$ & log-flat & \[1,300\] & \[1,300\] & \[1,300\]\
& $S_\mathrm{nd1}$ & fixed & 5E-12 & 5E-12 & 5E-12\
& $n_\mathrm{f}$ & fixed & -10 & -10 & -10
### Priors {#sssec:priors}
We used log-flat priors for the normalization $A_\mathrm{S}$, the nodes $A_{\mathrm{nd} j }$, the breaks $S_{\mathrm{b} j }$, and the isotropic diffuse background flux $F_\mathrm{iso}$, while the indices $n_j$ and the normalization of the Galactic foreground map $A_\mathrm{gal}$ were sampled with flat priors. Prior types and prior ranges are listed in Table \[tab:priors\] for the MBPL and hybrid approaches. In general, priors were limited to physically reasonable ranges. Prior ranges were chosen to cover the posterior distributions well.
In particular, data from the 3FGL catalog motivate that $S^2
\mathrm{d}N/\mathrm{d}S \simeq
10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{deg}^{-2}$ in the intermediate region; see Section \[sec:application\]. The range of the prior for $A_\mathrm{S}$ was therefore adjusted to cover the corresponding interval between $3\times
10^{-12}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{deg}^{-2}$ and $8\times
10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{deg}^{-2}$ at least (assuming an index of 2). The ranges of the priors for the node normalizations were chosen similarly, but reducing the lower bound to a value of $\sim
10^{-12}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{deg}^{-2}$.
The ranges of the priors for the breaks were chosen to connect continuously and not to overlap, preserving a well-defined order of the break points. For both the MBPL and the hybrid approach, the upper bound of the first break $S_\mathrm{b1}$ approximately matched the bright end of the 3FGL data points (excluding the brightest source). It is advantageous to keep the prior range for the first break sufficiently small, in order to reduce a possible bias of the intermediate region by bright sources (mediated through the index $n_2$). For the MBPL approach, the lower bound of the last break was chosen almost two orders of magnitude below the sensitivity estimate of $S_\mathrm{sens} \simeq 2\times
10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, to fully explore the sensitivity range. In the case of three breaks, the lower bound of the intermediate break was selected to match the sensitivity estimate. For the hybrid approach, the lower bound of the last free break was set to $\sim\! S_\mathrm{sens}/2$. We comment on the choice of the nodes in Section \[ssec:data\_hybrid\].
Index ranges were selected according to expectations, allowing enough freedom to explore the parameter space. The stability of these choices was checked iteratively. For the MBPL approach, the lower bound of the last index allowed for a sharp cutoff of the [$\mathrm{d}N/\mathrm{d}S$]{} distribution. For the hybrid approach, the index $n_\mathrm{f}$ was fixed to a value of $-10$, introducing a sharp cutoff manually. This choice will be motivated in Section \[ssec:data\_mbpl\].
As discussed in Section \[ssec:bckgs\], $A_\mathrm{gal}$ is expected to be of order unity. The selection of the prior boundaries for $F_\mathrm{iso}$ was based on previous measurements [see @2015ApJ...799...86A] and was further motivated iteratively.
Prior ranges reported in Table \[tab:priors\] are further discussed in Section \[sec:application\].
### Exposure Correction: $N_\mathrm{exp}$
The results were checked for robustness with respect to variations of $N_\mathrm{exp}$ (see Section \[ssec:gen\_funcs\]). We found that the choice of this parameter is critical for a correct recovery of the final result, and it is closely related to the sensitivity of the analysis. In particular, small values $\lesssim 5$ were found insufficient. Results were stabilized by using at least $N_\mathrm{exp}=15$ contours, and we tested that increasing up to $N_\mathrm{exp}=40$ did not have further impact. Insufficient sampling of the exposure (i.e., small values of $N_\mathrm{exp}$) was seen to affect the faint end of the [$\mathrm{d}N/\mathrm{d}S$]{} by introducing an early cutoff and attributing a larger flux to the isotropic component. At the same time, the best-fit likelihood using small $N_\mathrm{exp}$ values was significantly smaller than the one obtained choosing larger values, indicating that indeed the sampling was insufficient. As a final reference value we chose $N_\mathrm{exp}=20$.
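A sketch of one way to construct such equal-exposure regions (quantile-based contours are an assumption here, not necessarily the exact prescription used); `expo` is a placeholder array with the per-pixel exposure inside the ROI.

```python
import numpy as np


def exposure_regions(expo, n_exp=20):
    """Split ROI pixels into n_exp regions of approximately constant exposure."""
    # Quantile-based contours: each region spans a narrow exposure range,
    # so its mean exposure is representative of all its pixels.
    edges = np.quantile(expo, np.linspace(0.0, 1.0, n_exp + 1))
    region = np.clip(np.digitize(expo, edges[1:-1]), 0, n_exp - 1)
    mean_expo = np.array([expo[region == r].mean() for r in range(n_exp)])
    return region, mean_expo
```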
### 3FGL Catalog Data {#ssec:3FGLpoints}
The results are compared to the differential ([$\mathrm{d}N/\mathrm{d}S$]{}) and integral ($N(>S)$) source-count distributions derived from the 3FGL catalog for the same energy band and ROI. The method of deriving the source-count distribution from catalog data is described in Appendix \[app:dnds\_cat\].
APPLICATION TO THE DATA {#sec:application}
=======================
In this section, a detailed description and discussion of the data analysis and all setups chosen in this article are given. Final results are summarized in Section \[sec:conclusions\].
The data were fit by employing the MBPL approach and the hybrid approach consecutively. The hybrid approach was mainly used to inspect the uncertainties in the faint-source region. It should be emphasized that the prior of the last free break and the position of the node depend on the results obtained with the MBPL approach.
All analyses were carried out using two different pixel sizes, i.e., HEALPix grids of order $\kappa = 6$ ($N_\mathrm{side}=64$) and $\kappa
= 7$ ($N_\mathrm{side}=128$). Details are discussed in Section \[sec:Fermi\_data\]. We chose $\kappa=6$ as a reference, due to the expected sensitivity gain. All parameters were stable within their uncertainty bands against changes to $\kappa=7$. Results using $\kappa=7$ are shown in Section \[ssec:hp7\].
MBPL Approach {#ssec:data_mbpl}
-------------
The MBPL fit was employed using the priors as discussed in Section \[sssec:priors\]. The results are shown in Table \[tab:mbpl\_fit\_3\_2\] and Figure \[fig:mbpl\_fit\_3\_2\].
[lcccccc]{} Parameter & Posterior ($N_\mathrm{b}=1$) & Profile ($N_\mathrm{b}=1$) & Posterior ($N_\mathrm{b}=2$) & Profile ($N_\mathrm{b}=2$) & Posterior ($N_\mathrm{b}=3$) & Profile ($N_\mathrm{b}=3$)\
$A_\mathrm{S}$ & $4.1^{+0.3}_{-0.3}$ & $4.1^{+0.4}_{-0.5}$ & $3.5^{+1.6}_{-1.0}$ & $3.1^{+3.9}_{-1.1}$ & $3.5^{+1.4}_{-0.9}$ & $2.7^{+3.1}_{-0.6}$\
$S_\mathrm{b1}$ & $1.3^{+1.3}_{-1.3}$E-3 & $2.1^{+5.7}_{-1.8}$E-3 & $2.1^{+0.9}_{-1.2}$ & $1.8^{+2.1}_{-1.1}$ & $2.1^{+0.8}_{-1.2}$ & $1.1^{+2.4}_{-0.3}$\
$S_\mathrm{b2}$ & & & $5.6^{+5.6}_{-5.1}$E-2 & $7.8^{+24.4}_{-6.8}$E-2 & $0.7^{+1.1}_{-0.5}$ & $12.8^{+17.0}_{-12.6}$\
$S_\mathrm{b3}$ & & & & & $4.6^{+4.1}_{-6.3}$ & $13.6^{+6.4}_{-13.0}$\
$n_1$ & $2.03^{+0.02}_{-0.02}$ & $2.03^{+0.04}_{-0.03}$ & $3.11^{+0.69}_{-0.55}$ & $2.89^{+1.41}_{-0.59}$ & $3.08^{+0.65}_{-0.50}$ & $2.70^{+1.35}_{-0.35}$\
$n_2$ & $-0.49^{+1.20}_{-1.04}$ & $-0.69^{+2.34}_{-1.31}$ & $1.97^{+0.03}_{-0.03}$ & $1.98^{+0.03}_{-0.05}$ & $1.98^{+0.03}_{-0.03}$ & $1.91^{+0.13}_{-0.19}$\
$n_3$ & & & $-0.61^{+1.13}_{-0.89}$ & $-0.77^{+2.40}_{-1.23}$ & $1.85^{+0.18}_{-0.25}$ & $1.99^{+0.31}_{-0.59}$\
$n_4$ & & & & & $-0.38^{+1.06}_{-0.97}$ & $0.40^{+1.04}_{-2.40}$\
$A_\mathrm{gal}$ & $1.071^{+0.005}_{-0.005}$ & $1.072^{+0.005}_{-0.007}$ & $1.072^{+0.004}_{-0.004}$ & $1.073^{+0.005}_{-0.006}$ & $1.072^{+0.004}_{-0.004}$ & $1.072^{+0.005}_{-0.006}$\
$F_\mathrm{iso}$ & $1.0^{+0.3}_{-0.4}$ & $1.2^{+0.3}_{-0.7}$ & $0.9^{+0.3}_{-0.3}$ & $1.0^{+0.4}_{-0.5}$ & $0.9^{+0.2}_{-0.3}$ & $1.1^{+0.2}_{-0.6}$\
$\ln \mathcal{L}_1({\bf \Theta})$ & $-851.9$ & $-855.0$ & $-850.7$ & $-853.2$ & $-851.7$ & $-853.5$\
$\ln \mathcal{L}_2({\bf \Theta})$ & $-86793.1$ & $-86789.0$ & $-86786.8$ & $-86785.3$ & $-86785.9$ & $-86785.2$\
$\ln \mathcal{Z}$ & & &
\
The source-count distribution was parameterized with one, two, and three free breaks. Table \[tab:mbpl\_fit\_3\_2\] lists all best-fit values and statistical uncertainties obtained for individual fit parameters, in addition to the corresponding likelihoods of the best-fit solutions. Single-parameter uncertainties can be large in general, given that correlations were integrated over. Comparing Bayesian (posterior) and frequentist (profile likelihood[^13]) parameter estimates, best-fit values match within their uncertainties.
Figure \[fig:mbpl\_fit\_3\_2\] shows the best-fit results and corresponding statistical uncertainty bands for the [$\mathrm{d}N/\mathrm{d}S$]{} distributions parameterized with two and three free breaks. We can see that there is good agreement between the [$\mathrm{d}N/\mathrm{d}S$]{} distributions derived from the Bayesian posterior (solid black line and green band) and the [$\mathrm{d}N/\mathrm{d}S$]{} fits derived from the profile likelihood (dashed black line and blue band): they match well within their uncertainty bands. The uncertainty given by the profile likelihood is larger than the band from the posterior in all cases. The frequentist uncertainty estimates can therefore be considered more conservative. In both cases, the statistical uncertainty bands of the [$\mathrm{d}N/\mathrm{d}S$]{} fits obtained here are small compared to fits employing catalog points only [see @2010ApJ...720..435A]. This directly reflects the fact that the method is independent of source-detection or binning effects. The smallest statistical uncertainty appears to be around a flux of $\sim 10^{-9}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$.
As shown in Table \[tab:mbpl\_fit\_3\_2\], the fit of the simplest [$\mathrm{d}N/\mathrm{d}S$]{} model with only a single break prefers a break at low fluxes, i.e., at $\sim 10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. Below that break, the [$\mathrm{d}N/\mathrm{d}S$]{} cuts off steeply. The source-count distribution in the entire flux range above that break was fit by a single power-law component, with an index of $n_1 = 2.03 \pm 0.02$. We found that adding a break at higher fluxes, i.e., parameterizing [$\mathrm{d}N/\mathrm{d}S$]{} with two free breaks, improved the fit with a significance of $\sim$3$\sigma$. Here the bright-source region is resolved with a break at $\sim 2\times 10^{-8}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. The region between the two breaks (faint-source region and intermediate region) is compatible with an index of $n_2 = 1.97 \pm 0.03$, while the index in the bright-source region $n_1=3.1^{+0.7}_{-0.6}$ is softer (see Figure \[fig:mbpl\_fit\_3\_2\]).
The intermediate region is populated with numerous sources contributing a comparably large number of photons. Given the high statistical impact of these sources, it was found that a fit of the faint-source and intermediate regions with only a single power-law component can be significantly driven by the brighter sources of the intermediate region. We therefore extended the [$\mathrm{d}N/\mathrm{d}S$]{} model to three free breaks, in order to properly investigate possible features in the faint-source region. We found that the model comprising three free breaks is not statistically preferred over the two-break model (see Table \[tab:mbpl\_fit\_3\_2\]). Furthermore, the three-break [$\mathrm{d}N/\mathrm{d}S$]{} distribution is consistent with the previous scenario within uncertainties (see Figure \[fig:mbpl\_fit\_3\_2\]). Differences between the best fit from Bayesian inference and the best fit given by the maximum likelihood are not statistically significant.
It can be seen in Figure \[fig:mbpl\_fit\_3\_2\] that the source-count distribution as resolved by the 3FGL catalog (red data points; see Section \[ssec:3FGLpoints\]) in the intermediate and the bright-source regions is well reproduced with both the two-break and the three-break fits. Again, we emphasize that this analysis is independent of catalog data, which are shown in the plot for comparison only.
From the MBPL approach, we therefore conclude that parameterizing [$\mathrm{d}N/\mathrm{d}S$]{} with two free breaks is sufficient to fit the data. The index $n_2 = 1.97 \pm 0.03$, characterizing the intermediate region of [$\mathrm{d}N/\mathrm{d}S$]{}, is determined with exceptionally high precision ($\sim$2%), originating from the high statistics of sources populating that region. The accuracy of the Galactic foreground normalization $A_\mathrm{gal}$ fit is at the per mil level.
We found that the fit prefers a source-count distribution that continues with an almost flat slope (in $S^2\,\mathrm{d}N/\mathrm{d}S$ representation) in the regime of unresolved sources, i.e., faint sources not detected in the 3FGL catalog. A strong cutoff was found at fluxes between $\sim\!5\times
10^{-12}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ and $\sim\!10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. This cutoff, however, falls well within the flux region where this method is expected to lose sensitivity and where the uncertainty bands widen. It should thus be considered with special care. Indeed, Monte Carlo simulations were used to demonstrate that such a cutoff can originate either from the sensitivity limit of the analysis or from an intrinsic end of the source-count distribution (see Appendix \[app:sims\] for details). In the former case, possible point-source contributions below the cutoff are consistent with diffuse isotropic emission, and the fit therefore attributes them to $F_\mathrm{iso}$.
It was found that the uncertainty band below the cutoff can be underestimated because of the lack of degrees of freedom at the faint-source end. Moreover, simulations revealed that the fit obtained from the Bayesian posterior can be biased in the regime of very faint sources. We therefore chose to improve the fit procedure by using the hybrid approach in Section \[ssec:data\_hybrid\].
#### Sampling
The triangle plot of the Bayesian posterior and the corresponding profile likelihood functions are shown in Figure \[fig:triangle\_plike\_mbpl\_3\] for parameterizing [$\mathrm{d}N/\mathrm{d}S$]{} with three free breaks.
It can be seen that the marginalized posterior distributions are well defined. Strong parameter degeneracies were mitigated by adapting the normalization constant $S_0$ to the value quoted in the previous section. It becomes evident from the posteriors that the breaks $S_\mathrm{b2}$ and $S_\mathrm{b3}$ tended to merge into a single break; this is corroborated by the flatness of their profile likelihoods. It therefore explains the previous observation that adding a third break is not required to improve the fit of the data.
[lcccccc]{} Parameter & Posterior ($N^{\mathrm{h}}_\mathrm{b}=1$) & Profile ($N^{\mathrm{h}}_\mathrm{b}=1$) & Posterior ($N^{\mathrm{h}}_\mathrm{b}=2$) & Profile ($N^{\mathrm{h}}_\mathrm{b}=2$) & Posterior ($N^{\mathrm{h}}_\mathrm{b}=3$) & Profile ($N^{\mathrm{h}}_\mathrm{b}=3$)\
$A_\mathrm{S}$ & $3.6^{+1.8}_{-1.1}$ & $3.2^{+3.7}_{-1.2}$ & $3.5^{+1.7}_{-1.0}$ & $3.3^{+2.9}_{-1.3}$ & $3.3^{+1.2}_{-0.8}$ & $3.4^{+2.9}_{-1.3}$\
$S_\mathrm{b1}$ & $2.2^{+1.0}_{-1.3}$ & $1.9^{+3.1}_{-1.3}$ & $2.1^{+1.0}_{-1.3}$ & $2.0^{+1.5}_{-1.3}$ & $1.8^{+0.9}_{-1.0}$ & $2.1^{+1.5}_{-1.5}$\
$S_\mathrm{b2}$ & & & $0.3^{+0.3}_{-0.2}$ & $2.4^{+27.2}_{-2.3}$ & $7.6^{+6.8}_{-6.8}$ & $4.4^{+25.6}_{-2.4}$\
$S_\mathrm{b3}$ & & & & & $27.7^{+25.3}_{-17.3}$ & $124^{+41}_{-114}$\
$n_1$ & $3.16^{+0.69}_{-0.59}$ & $2.99^{+1.16}_{-0.66}$ & $3.10^{+0.71}_{-0.54}$ & $3.20^{+0.95}_{-0.85}$ & $2.99^{+0.67}_{-0.43}$ & $3.13^{+0.76}_{-0.76}$\
$n_2$ & $1.98^{+0.02}_{-0.03}$ & $1.97^{+0.04}_{-0.06}$ & $1.97^{+0.03}_{-0.03}$ & $1.95^{+0.07}_{-0.23}$ & $1.96^{+0.06}_{-0.08}$ & $1.97^{+0.07}_{-0.27}$\
$n_3$ & & & $2.02^{+0.49}_{-0.38}$ & $2.07^{+0.93}_{-0.77}$ & $1.98^{+0.06}_{-0.06}$ & $1.87^{+0.33}_{-0.20}$\
$n_4$ & & & & & $2.02^{+0.46}_{-0.40}$ & $2.24^{+0.76}_{-0.94}$\
$A_\mathrm{nd1}$ & $10.0^{+14.1}_{-15.2}$ & $21.6^{+90.3}_{-20.6}$ & $8.7^{+12.0}_{-11.9}$ & $5.0^{+80.9}_{-4.0}$ & $8.3^{+10.9}_{-10.1}$ & $2.4^{+84.1}_{-1.4}$\
$A_\mathrm{gal}$ & $1.072^{+0.004}_{-0.004}$ & $1.073^{+0.005}_{-0.007}$ & $1.072^{+0.004}_{-0.004}$ & $1.072^{+0.005}_{-0.006}$ & $1.072^{+0.004}_{-0.004}$ & $1.070^{+0.006}_{-0.003}$\
$F_\mathrm{iso}$ & $1.0^{+0.1}_{-0.3}$ & $0.9^{+0.3}_{-0.4}$ & $0.9^{+0.2}_{-0.2}$ & $0.9^{+0.3}_{-0.4}$ & $0.9^{+0.2}_{-0.3}$ & $0.9^{+0.5}_{-0.4}$\
$\ln \mathcal{L}_1({\bf \Theta})$ & $-853.9$ & $-853.8$ & $-849.3$ & $-852.9$ & $-851.4$ & $-853.7$\
$\ln \mathcal{L}_2({\bf \Theta})$ & $-86786.4$ & $-86785.3$ & $-86788.4$ & $-86785.1$ & $-86786.7$ & $-86785.0$\
$\ln \mathcal{Z}$ & & &
\
Hybrid Approach {#ssec:data_hybrid}
---------------
We subsequently improved the analysis by applying the hybrid approach. Priors are listed in Table \[tab:priors\]: in particular, the region around the sharp cutoff revealed by the MBPL approach was parameterized with a node placed at $S_\mathrm{nd1}=5\times 10^{-12}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$.[^14] The lower bound of the prior of the last free break was set to $\sim\! S_\mathrm{sens}/2$. The cutoff was introduced manually by fixing the index of the power-law component describing fluxes smaller than $S_\mathrm{nd1}$ to $n_\mathrm{f} = -10$.
The fit was carried out with [$\mathrm{d}N/\mathrm{d}S$]{} parameterizations comprising one, two, and three free breaks. Figure \[fig:hybrid\_fit\_3\_2\_1\] and Table \[tab:hybrid\_fit\_3\_2\_1\] summarize the results. The differential [$\mathrm{d}N/\mathrm{d}S$]{} distributions fitting the data best are shown in the left column of the figure. In the right column, the corresponding integral source-count distributions $N(>S)$ are compared to 3FGL catalog data, providing another reference for investigating the precision of the fit.
In the bright-source and intermediate regions, the results obtained with the MBPL approach and with the hybrid approach are consistent among each other within their uncertainties. As expected, the determination of the uncertainty bands in the faint-source region improved in the hybrid fit, given the further degree of freedom allowed. In all three scenarios ($N^{\mathrm{h}}_\mathrm{b}=1,2,3$), the fits reproduce well the differential and the integral source-count distributions from the 3FGL catalog within uncertainties.
Comparing the three [$\mathrm{d}N/\mathrm{d}S$]{} models, we find that none is statistically preferred by the data; see Table \[tab:hybrid\_fit\_3\_2\_1\]. The fit of the model with only a single free break consistently placed the break in the bright-source region, given that the cutoff in the faint-source region is effectively accounted for by the node. As argued in the previous section, in this case the fit of the intermediate and faint-source regions of [$\mathrm{d}N/\mathrm{d}S$]{} was driven by the high statistical impact of the relevant brighter sources, yielding a small uncertainty band also for faint sources (see Figure \[sfig:s2dnds\_Nb1\]). To address this issue, we extended the model with two additional free breaks ($N^{\mathrm{h}}_\mathrm{b}=2,3$), leading to consistent uncertainty bands that were stabilized by the additional degrees of freedom added in the intermediate and faint-source regions (see Figures \[sfig:s2dnds\_Nb2\] and \[sfig:s2dnds\_Nb3\]). Because the three-break fit is not statistically preferred over the two-break fit, we conclude that two free breaks and a faint node are sufficient to fit the data properly. A comparison with the maximum likelihood values for the MBPL fits in Table \[tab:mbpl\_fit\_3\_2\] reveals no statistical preference for the hybrid result over the MBPL result either, confirming that the data are not sensitive enough to distinguish point sources below the last node from purely diffuse isotropic emission.
Figure \[fig:phist\_3\_2\_1\] compares the best-fit model [1pPDF]{} distributions to the actual pixel-count distribution of the data set. We plot the results for both the Bayesian posterior and the maximum likelihood fits. The residuals $(\mathrm{data} -
\mathrm{model})/\sqrt{\mathrm{data}}$ are shown in addition. It can be seen that the pixel-count distribution is reproduced well. A comparison with a simple chi-squared statistic, evaluating the best-fit results using the binned histogram only, leads to reduced chi-squared values ($\chi^2/\mathrm{dof}$) between 0.89 and 0.92.
The triangle plot of the Bayesian posterior and the single-parameter profile likelihood functions are shown in Figure \[fig:triangle\_plike\_hybrid\_2\] for the [$\mathrm{d}N/\mathrm{d}S$]{} fit with two free breaks and a node.
The stability of the MBPL and hybrid approaches can be further demonstrated by comparing the respective triangle plots (see Figures \[sfig:triangle\_mbpl\_3\] and \[sfig:triangle\_hybrid\_2\]): the posteriors of corresponding parameters in the two approaches are essentially identical, with the exception of $n_3$. It can be seen that the choice of the node in the hybrid approach stabilized the posterior of $n_3$. The MBPL and hybrid approaches therefore lead to comparable results except in the faint-source flux region, where the latter improves the determination of the uncertainty bands.
How many breaks? {#ssec:}
----------------
Both the MBPL approach and the hybrid approach single out a best-fit [$\mathrm{d}N/\mathrm{d}S$]{} distribution that is consistent with a single broken power law for integral fluxes in the resolved range above $S_\mathrm{sens} \simeq 2\times 10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. Although two breaks are preferred to properly fit this flux range, the second break found with the MBPL approach in the faint-source region is consistent with a sensitivity cutoff. In the hybrid approach, by contrast, the second break is needed for a viable determination of the uncertainty band.
To further describe the physical [$\mathrm{d}N/\mathrm{d}S$]{} distribution at low fluxes, we therefore derived an upper limit on the position of a possible intrinsic second break $S_{\mathrm{b}2}$. The uncertainty band obtained with the hybrid approach for $N^{\mathrm{h}}_\mathrm{b} = 2$ was used. In general, an intrinsic second break would have been present if the power-law indices $n_2$ and $n_3$ changed significantly by a given difference $\left| n_2-n_3 \right| > \Delta n_{23}$. We exploited the full posterior to derive upper limits on $S_{\mathrm{b}2}$ by assuming given $\Delta n_{23}$ values between 0.1 and 0.7, in steps of 0.1. In detail, the upper limits $S^\mathrm{UL}_{\mathrm{b}2}$ at 95% CL were obtained from the marginalized posterior $P( S_{\mathrm{b}2}
|\mathrm{D},H)$, after removing all samples not satisfying the given $\left| n_2-n_3 \right|$ constraint: $$\label{eq:Sb2ul}
\int_{ \pi_\mathrm{L} (S_{\mathrm{b}2}) }^{S^\mathrm{UL}_{\mathrm{b}2}}
P_{\left| n_2-n_3 \right| > \Delta n_{23}}(S_{\mathrm{b}2}|\mathrm{D},H)\,\mathrm{d} S_{\mathrm{b}2} = 0.95\,,$$ where $\pi_\mathrm{L} (S_{\mathrm{b}2}) =
10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ is the lower bound of the prior for $S_{\mathrm{b}2}$. Frequentist upper limits were calculated from the profile likelihood, constructed from the same posterior as used in Equation , by imposing $-2\Delta \ln
\mathcal{L} = 2.71$ for 95% CL upper limits. The upper limits are shown in Figure \[fig:Sb2ul\]. In consistency with the uncertainty bands derived in the previous section, $S^\mathrm{UL}_{\mathrm{b}2}$ decreases monotonically as a function of $\Delta n_{23}$, until the sensitivity limit of the analysis is reached. Assuming a fiducial index change of $\Delta n_{23}=0.3$, we find that a possible second break of [$\mathrm{d}N/\mathrm{d}S$]{} is constrained to be below $6.4\times
10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ at 95% CL. The corresponding frequentist upper limit is $1.3\times
10^{-10}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$.
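The Bayesian limit of Equation [(\[eq:Sb2ul\])]{} reduces to a simple quantile of the filtered posterior sample, as sketched below with placeholder arrays of equal-weight samples (the frequentist variant instead profiles $\ln\mathcal{L}$ over the surviving samples).

```python
import numpy as np


def sb2_upper_limit(sb2, n2, n3, delta_n23=0.3, cl=0.95):
    """95% CL upper limit on S_b2 from samples satisfying |n2 - n3| > delta_n23."""
    keep = np.abs(n2 - n3) > delta_n23
    return np.quantile(sb2[keep], cl)
```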
Composition of the Gamma-ray Sky {#ssec:comp}
--------------------------------
The method allows decomposing the high-latitude gamma-ray sky ($|b|\geq 30^\circ$) into its individual constituents. The integral flux $F_\mathrm{ps}$ contributed by point sources was derived by integrating the posterior samples of $S\,\mathrm{d}N/\mathrm{d}S$ in the range $[0,S_\mathrm{cut}]$, which effectively corresponds to the interval $[S_\mathrm{nd1},S_\mathrm{cut}]$ due to the steep cutoff below the node $S_\mathrm{nd1}$. Results are presented in Table \[tab:comp\_hybrid\_hp6\_2\], comparing both Bayesian and frequentist estimates. The profile likelihood for $F_\mathrm{ps}$ is shown in Figure \[fig:plike\_Fps\_hp6\_2\]. The integral flux from point sources is determined as $F_\mathrm{ps} =
3.9^{+0.3}_{-0.2}\times
10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$, thus with an uncertainty less than 10%.[^15]
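A sketch of this integration, reusing the (hedged) MBPL evaluator sketched earlier; `theta_samples` is a placeholder list of posterior parameter tuples, and a units conversion (e.g., from deg$^{-2}$ to sr$^{-1}$) may be needed depending on the [$\mathrm{d}N/\mathrm{d}S$]{} convention.

```python
import numpy as np


def integral_ps_flux(dnds, theta_samples, s_cut=1e-6, s_min=1e-14, n_grid=400):
    """Integrate S dN/dS over [s_min, s_cut] for each posterior sample.

    s_min only needs to lie below the effective cutoff of dN/dS (here the
    node S_nd1 ~ 5e-12), so the integral approximates the range [0, S_cut].
    """
    S = np.logspace(np.log10(s_min), np.log10(s_cut), n_grid)
    F = np.array([np.trapz(S * dnds(S, *theta), S) for theta in theta_samples])
    # Median and 15.85% / 84.15% quantiles of the resulting flux distribution.
    return np.percentile(F, [15.85, 50.0, 84.15])
```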
The contribution from Galactic foreground emission $F_\mathrm{gal}$ was obtained accordingly by integrating the template (see Section \[ssec:bckgs\]), including the fit results for the normalization $A_\mathrm{gal}$ (see Figure \[fig:triangle\_plike\_hybrid\_2\]). The isotropic background emission $F_\mathrm{iso}$ was sampled directly.
[lcc]{} Parameter & Posterior & Profile Likelihood\
$F_\mathrm{ps}$ & $3.9^{+0.3}_{-0.2}$ & $3.9^{+0.6}_{-0.4}$\
$F_\mathrm{gal}$ & $10.95^{+0.04}_{-0.04}$ & $10.95^{+0.05}_{-0.06}$\
$F_\mathrm{iso}$ & $0.9^{+0.2}_{-0.2}$ & $0.9^{+0.3}_{-0.4}$\
$F_\mathrm{tot}$ & $15.8^{+0.2}_{-0.1}$ & $15.7^{+0.3}_{-0.1}$\
$q_\mathrm{ps}$ & $0.25^{+0.02}_{-0.02}$ & $0.25^{+0.03}_{-0.03}$\
$q_\mathrm{gal}$ & $0.693^{+0.007}_{-0.006}$ & $0.697^{+0.015}_{-0.006}$\
$q_\mathrm{iso}$ & $0.06^{+0.01}_{-0.02}$ & $0.06^{+0.02}_{-0.03}$\
$F^\mathrm{2FGL}_\mathrm{cat}$ &\
$F^\mathrm{3FGL}_\mathrm{cat}$ &\
$F_\mathrm{CR}$ &\
For convenience, individual components can be expressed as fractions $q$ of the total map flux $F_\mathrm{tot}$. The fractions are listed in Table \[tab:comp\_hybrid\_hp6\_2\]. We found that the high-latitude gamma-ray emission between 1GeV and 10GeV is composed of $(25 \pm
2)$% point-source contributions, $(69.3 \pm 0.7)$% Galactic foreground contributions, and $(6 \pm 2)$% isotropic diffuse background emission.
Although not indicated by Figures \[sfig:triangle\_mbpl\_3\] and \[sfig:triangle\_hybrid\_2\], residual degeneracies between an isotropic component contained in the Galactic foreground template and the $F_\mathrm{iso}$ parameter considered in this analysis might still be present.
The flux contribution from point sources can be compared to the flux of all sources resolved in the 3FGL catalog (for $|b|\geq 30^\circ$; see Table \[tab:comp\_hybrid\_hp6\_2\]). From the difference $F_\mathrm{ps} - F^\mathrm{3FGL}_\mathrm{cat}$ we conclude that a flux of $1.4^{+0.3}_{-0.2}\times 10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$ between 1GeV and 10GeV originates from so far unresolved point sources. With regard to the IGRB flux measured by @2015ApJ...799...86A, we can therefore attribute between 42% and 56% of its intensity between 1GeV and 10GeV to unresolved point sources.[^16]
#### Residual Cosmic Rays
The sum of the values $F_\mathrm{iso}=(0.9\pm 0.2)\times
10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$ and $F_\mathrm{ps}=3.9^{+0.3}_{-0.2}\times
10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$ listed in Table \[tab:comp\_hybrid\_hp6\_2\] can be compared with the EGB derived in [@2015ApJ...799...86A]. In the energy range between 1GeV and 10GeV this amounts to values between $4.7 \times
10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$ and $6.4
\times 10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$, including systematics in the Galactic diffuse modeling; these values compare well with the total $F_\mathrm{iso}+F_\mathrm{ps}$ found here.
However, the truly diffuse isotropic background emission $F_\mathrm{iso}$ incorporates residual cosmic rays (CRs) not rejected by analysis cuts [see @2015ApJ...799...86A], while for the EGB derived in [@2015ApJ...799...86A] the CR contamination has been accounted for and subtracted. The level of residual CR contamination in the `P7REP_CLEAN` selection used in this work has been estimated to be between 15% and 20% of the measured IGRB flux above 1GeV [see Figure 28 in @2012ApJS..203....4A], thus amounting to about $5$-$7\times
10^{-8}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$.
Anisotropy {#sec:anisotropy}
==========
Complementary to the [1pPDF]{}, the anisotropy (or autocorrelation) probes unresolved point sources [@Ackermann:2012uf; @Cuoco:2012yf; @DiMauro:2014wha; @2014JCAP...01..049R]. The two observables can thus be compared. The anisotropy in a given energy band can be calculated from the [$\mathrm{d}N/\mathrm{d}S$]{} distribution by $$C_\mathrm{P} = \int_0^{S_{\rm th}} \mathrm{d}S\,S^2 \frac{\mathrm{d}N}{\mathrm{d}S}\,,$$ where $S_{\rm th}$ is the flux threshold of detected point sources, assumed to be ‘sharp’ and independent of the photon spectral index of the sources. Indeed, the previous assumption is a good approximation for the 1GeV to 10GeV energy band [@Cuoco:2012yf]. We thus calculated the predicted anisotropy from the [$\mathrm{d}N/\mathrm{d}S$]{} distribution measured in this work (hybrid approach, $N^{\mathrm{h}}_\mathrm{b}=2$) as a function of the threshold flux $S_{\rm th}$. Results are shown in Figure \[fig:Cp\]. To derive the uncertainty band of $C_\mathrm{P}$, we sampled the [$\mathrm{d}N/\mathrm{d}S$]{} from the posterior and calculated $C_\mathrm{P}$ from each sampling point of the [$\mathrm{d}N/\mathrm{d}S$]{} parameter space. The uncertainty on $C_\mathrm{P}$ was then derived using both the Bayesian and the frequentist approaches; see Sections \[sssec:par\_est\] and \[sssec:par\_est\_freq\]. The predicted $C_\mathrm{P}$ can be compared to the value $(1.1\pm 0.1)
\times 10^{-17}$(cm$^{-2}$s$^{-1}$sr$^{-1}$)$^2$sr measured in @Cuoco:2012yf and @Ackermann:2012uf, using a threshold of about $4$-$6\times 10^{-10}$cm$^{-2}$s$^{-1}$ suitable for sources detected in the 1FGL catalog [@2010ApJS..188..405A]. It can be seen in Figure \[fig:Cp\] that the predicted anisotropy is slightly higher than the measured value. This can in part be explained by the approximation of the threshold as a sharp cutoff, as well as a possible systematic underestimate of the measured anisotropy itself [@Chang:2013ada]. In addition, a possible clustering of point sources at angular scales smaller than the pixel size could in principle be degenerate with the inferred [$\mathrm{d}N/\mathrm{d}S$]{} distribution, leading to systematically higher anisotropies. The anisotropy of clustering effects is, however, expected to be rather small as compared to the $C_\mathrm{P}$ values found here, i.e., $C^\mathrm{cluster}_{\ell >
200 } \lesssim
10^{-20}\,(\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1})^2\,\mathrm{sr}$ for multipoles $\ell$ corresponding to angular scales smaller than the pixel size [e.g., @2007PhRvD..75f3519A; @2015ApJS..221...29C]. Clustering can thus be neglected in this analysis. For the moment, we deem the agreement reasonable, and we wait for an updated anisotropy measurement for a more detailed comparison.
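A sketch of this calculation, again reusing the MBPL evaluator sketched earlier together with placeholder posterior samples:

```python
import numpy as np


def c_p(dnds, theta_samples, s_th=5e-10, s_min=1e-14, n_grid=400):
    """Poisson anisotropy C_P = int_0^{S_th} dS S^2 dN/dS per posterior sample."""
    S = np.logspace(np.log10(s_min), np.log10(s_th), n_grid)
    cp = np.array([np.trapz(S ** 2 * dnds(S, *theta), S)
                   for theta in theta_samples])
    return np.percentile(cp, [15.85, 50.0, 84.15])
```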
SYSTEMATICS {#sec:systematics}
===========
The following section is dedicated to systematic and modeling uncertainties of the analysis framework. In particular, we extensively investigated possible uncertainties due to the chosen pixel size (Section \[ssec:hp7\]), statistical effects imposed by bright point sources (Section \[ssec:ps\_mask\]), and the Galactic foreground modeling (Section \[ssec:GFsyst\]).
Pixel Size {#ssec:hp7}
----------
The results discussed in Section \[sec:application\] were cross-checked using smaller pixels, i.e., HEALPix order $\kappa=7$, slightly oversampling the effective PSF (see Section \[sec:Fermi\_data\]). All results were stable against the resolution change, given the corresponding uncertainty bands. However, it was found that the enhanced PSF smoothing increased the uncertainty in determining the first break. An example is given in Figure \[fig:hybrid\_fit\_hp7\_3\], showing the [$\mathrm{d}N/\mathrm{d}S$]{} distribution obtained with the hybrid approach considering three free breaks and a node. It is demonstrated in Section \[ssec:ps\_mask\] that the increased uncertainty in the bright-source region in turn led to a small bias in determining the indices $n_2$ and $n_3$.
[lcc]{} Parameter & Posterior & Profile Likelihood\
$A_\mathrm{gal}$ & $1.076^{+0.004}_{-0.004}$ & $1.074^{+0.007}_{-0.004}$\
$F_\mathrm{ps}$ & $3.6^{+0.2}_{-0.2}$ & $3.4^{+0.5}_{-0.2}$\
$F_\mathrm{iso}$ & $1.3^{+0.1}_{-0.2}$ & $1.4^{+0.3}_{-0.4}$\
$\ln \mathcal{L}_1({\bf \Theta})$ & $-667.2$ & $-667.9$\
$\ln \mathcal{L}_2({\bf \Theta})$ & $-257817.9$ & $-257812.0$\
$\ln \mathcal{Z}$ &\
Table \[tab:hybrid\_fit\_hp7\_3\] summarizes fit results that are not apparent from Figure \[fig:hybrid\_fit\_hp7\_3\]. The integral point-source flux $F_\mathrm{ps}$ slightly decreased with respect to the value obtained for $\kappa=6$, with a corresponding increase of the isotropic background emission $F_\mathrm{iso}$, while the sum $F_\mathrm{ps} + F_\mathrm{iso}$ remained constant within (single-parameter) statistical uncertainties. This is consistent with resolving fewer point sources due to the reduced sensitivity, given that the value of $A_\mathrm{gal}$ stayed almost the same as found for $\kappa=6$.
Point-source Masking {#ssec:ps_mask}
--------------------
The presence of bright point sources and the corresponding shape of their source-count distribution may influence the overall fit of the intermediate region and the faint-source region. The strength of a possible bias may also depend on the pixel size.
The level of systematics caused by bright point sources was investigated with point-source masks. To eliminate the influence of bright sources, we removed all pixels containing sources with an integral flux larger than or equal to $S_\mathrm{mask} \simeq 10^{-8}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. The value of $S_\mathrm{mask}$ was chosen to be slightly below the first break determined from the overall fit (see Section \[sec:application\]). Source positions and fluxes were retrieved from the 3FGL catalog. For each source, all pixels included in a circle with a radius of $2.5^\circ$ (corresponding to $\sim 6 \sigma_\mathrm{psf}$)[^17] around the cataloged source position were masked in the counts map. We checked that the mask area was sufficiently large by comparing radii between $3 \sigma_\mathrm{psf}$ and $7\sigma_\mathrm{psf}$; residual effects became negligible for radii larger than $\sim 5 \sigma_\mathrm{psf}$.
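A sketch of such a mask, assuming `healpy` and placeholder arrays with the 3FGL source Galactic coordinates (in degrees) and integral fluxes:

```python
import numpy as np
import healpy as hp


def ps_mask(nside, src_lon, src_lat, src_flux, s_mask=1e-8, radius_deg=2.5):
    """Boolean pixel mask (True = keep) removing discs around bright sources."""
    mask = np.ones(hp.nside2npix(nside), dtype=bool)
    bright = src_flux >= s_mask
    vecs = hp.ang2vec(src_lon[bright], src_lat[bright], lonlat=True)
    for v in np.atleast_2d(vecs):
        # All pixels within radius_deg of the source position are discarded.
        bad = hp.query_disc(nside, v, np.radians(radius_deg), inclusive=True)
        mask[bad] = False
    return mask
```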
The masked data were fit using the hybrid approach with three free breaks, in order to retain full sensitivity to a possible break in the faint-source region. Priors were chosen as listed in Table \[tab:priors\], with the exception of changing the upper bound of the prior of the first break to $S_\mathrm{mask}$. The prior of $n_1$ was changed accordingly to sample the interval $[1.7,2.3]$, substantially covering the intermediate region. In addition, the flux normalization constant was fixed to $S_0 = 3\times
10^{-9}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ and the upper flux cutoff of [$\mathrm{d}N/\mathrm{d}S$]{} was set to $S_\mathrm{cut} \equiv S_\mathrm{mask}$.
The results are shown in Figure \[fig:ps\_mask\] for pixelizations with resolution parameters $\kappa=6$ and $\kappa=7$. It can be seen that the results are consistent with what was found in Section \[sec:application\]. For $\kappa=7$ we find that the uncertainty band is slightly shifted downward as compared to $\kappa=6$, but the best-fit results match well within uncertainties. The value of $A_\mathrm{gal}$ was determined to be $1.071^{+0.004}_{-0.004}$ ($1.072^{+0.005}_{-0.005}$) for $\kappa=6$ and $1.075^{+0.004}_{-0.004}$ ($1.073^{+0.006}_{-0.004}$) for $\kappa=7$, using the posterior (profile likelihood). The integral flux of the isotropic diffuse background emission $F_\mathrm{iso}$ was obtained to be $0.9^{+0.2}_{-0.2}\,(0.8^{+0.5}_{-0.3}) \times 10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$ for $\kappa=6$ and $1.2^{+0.1}_{-0.2}\,(1.4^{+0.2}_{-0.4}) \times 10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$ for $\kappa=7$. The larger value of $F_\mathrm{iso}$ in the latter case is consistent with fewer point sources being resolved for $\kappa=7$.
We conclude that systematic effects due to bright point sources are dominated by statistical uncertainties. Bright point sources do not affect the determination of the [$\mathrm{d}N/\mathrm{d}S$]{} broken power-law indices in the intermediate and faint-source regions. For $\kappa=7$, comparing the analyses of full data (Figure \[fig:hybrid\_fit\_hp7\_3\]) and masked data (Figure \[sfig:ps\_mask\_hp7\]) indicates that systematic effects slightly increased with enhanced PSF smoothing, but effects on the indices $n_2$ and $n_3$ remain rather small.
Galactic Foreground {#ssec:GFsyst}
-------------------
We checked our results for systematic uncertainties of the Galactic foreground model, considering three different approaches:
- *Dependence on the Galactic latitude cut.* We selected different ROIs, covering regions $|b|\geq b_\mathrm{cut}$. The parameter $b_\mathrm{cut}$ was varied between $10^\circ$ and $70^\circ$, in steps of $10^\circ$.
- *Extended Galactic plane mask (GPLL mask).* The GPLL mask was generated from the Galactic foreground emission model discussed in Section \[ssec:bckgs\], by merging mask arrays for $|b| < 30^\circ$, a Galactic plane mask removing all pixels above a flux threshold[^18] of $10^{-6}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$, and mask arrays for the Fermi bubbles and Galactic Loop I [@Fermi-LAT:2014sfa; @2009arXiv0912.3478C; @Su:2010qj]. The GPLL mask is shown in Figure \[sfig:gpll\_mask\]; a minimal sketch of how such a mask can be assembled is given after this list.
- *Dependence on the Galactic foreground model.* Given systematic uncertainties of the Galactic foreground model in its entirety, we incorporated a different foreground model as derived for the preceding *Fermi*-LAT data release `Pass 7`, named `gal_2yearp7v6_v0.fits`[^19]. Although mixing different versions of data releases and diffuse models is not generally recommended, the purpose here is to gauge the effect of a model differing in intensity as well as in morphology. The deviations between the two models are shown in Figure \[sfig:gal\_Cdiff\_hp6\] for Galactic latitudes greater than $30^\circ$.
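As referenced above, a GPLL-style mask can be assembled as the union of a latitude cut, a flux threshold on the smoothed foreground template, and externally supplied bubble/Loop I masks. The sketch below, again using healpy, illustrates this; it is not the code actually used here, the $2^\circ$ kernel is treated as a Gaussian $\sigma$ (an assumption), and the `extra_masks` arrays are assumed to be provided separately.

```python
import numpy as np
import healpy as hp

def build_gpll_mask(foreground_map, extra_masks=(), b_cut_deg=30.0,
                    flux_threshold=1e-6, smooth_sigma_deg=2.0):
    """Boolean mask (True = discard): union of a latitude cut, a flux threshold on
    the smoothed Galactic foreground intensity (cm^-2 s^-1 sr^-1), and any extra
    masks (e.g. precomputed Fermi-bubble and Loop I masks)."""
    nside = hp.get_nside(foreground_map)
    npix = hp.nside2npix(nside)
    _, b = hp.pix2ang(nside, np.arange(npix), lonlat=True)   # Galactic latitude (deg)
    mask = np.abs(b) < b_cut_deg
    smoothed = hp.smoothing(foreground_map, sigma=np.radians(smooth_sigma_deg))
    mask |= smoothed > flux_threshold
    for m in extra_masks:
        mask |= np.asarray(m, dtype=bool)
    return mask
```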
The hybrid approach was employed for all setups, choosing three free breaks and a node. The prior setup resembled the one used in Section \[sec:application\], but prior ranges were extended in particular cases to cover the posterior sufficiently well. The results of the analyses are summarized in Figure \[fig:GFsys\] and Table \[tab:GFsys\]. We found that all results were stable against the systematic checks. In addition, it should be noted that the catalog (3FGL) data points derived for comparison were well reproduced in all cases.
[lcccccc]{} $S_\mathrm{b1}$ & $0.8^{+0.4}_{-0.3}$ & $0.5^{+0.4}_{-0.1}$ & $1.8^{+0.9}_{-1.0}$ & $2.1^{+1.5}_{-1.5}$ & $1.8^{+0.8}_{-0.8}$ & $1.9^{+0.8}_{-1.0}$\
$n_1$ & $2.58^{+0.23}_{-0.14}$ & $2.47^{+0.33}_{-0.10}$ & $2.99^{+0.67}_{-0.43}$ & $3.13^{+0.76}_{-0.76}$ & $3.29^{+0.60}_{-0.71}$ & $3.69^{+0.61}_{-0.92}$\
$A_\mathrm{gal}$ & $1.017^{+0.002}_{-0.002}$ & $1.018^{+0.002}_{-0.002}$ & $1.072^{+0.004}_{-0.004}$ & $1.070^{+0.006}_{-0.003}$ & $1.12^{+0.01}_{-0.01}$ & $1.12^{+0.02}_{-0.02}$\
$F_\mathrm{ps}$ & $4.6^{+0.3}_{-0.3}$ & $4.9^{+0.3}_{-0.5}$ & $3.9^{+0.3}_{-0.2}$ & $3.9^{+0.6}_{-0.3}$ & $3.5^{+0.3}_{-0.2}$ & $3.5^{+0.3}_{-0.5}$\
$F_\mathrm{gal}$ & $16.97^{+0.03}_{-0.03}$ & $16.97^{+0.04}_{-0.03}$ & $10.95^{+0.04}_{-0.04}$ & $10.94^{+0.06}_{-0.03}$ & $8.34^{+0.09}_{-0.09}$ & $8.3^{+0.1}_{-0.1}$\
$F_\mathrm{iso}$ & $1.0^{+0.2}_{-0.3}$ & $0.8^{+0.3}_{-0.3}$ & $0.9^{+0.2}_{-0.3}$ & $0.9^{+0.5}_{-0.4}$ & $0.8^{+0.2}_{-0.2}$ & $0.9^{+0.2}_{-0.4}$\
& & &\
& & &\
$S_\mathrm{b1}$ & $1.3^{+0.9}_{-0.8}$ & $0.8^{+12.3}_{-0.5}$ & $2.6^{+0.5}_{-0.3}$ & $2.5^{+0.9}_{-0.7}$ & $2.0^{+0.9}_{-1.3}$ & $1.0^{+2.5}_{-0.3}$\
$n_1$ & $3.06^{+0.64}_{-0.58}$ & $3.03^{+1.27}_{-0.85}$ & $7.28^{+1.56}_{-2.21}$ & $9.48^{+0.52}_{-4.93}$ & $2.98^{+0.61}_{-0.44}$ & $2.76^{+1.39}_{-0.39}$\
$A_\mathrm{gal}$ & $1.16^{+0.03}_{-0.03}$ & $1.17^{+0.04}_{-0.05}$ & $1.12^{+0.01}_{-0.01}$ & $1.12^{+0.01}_{-0.03}$ & $0.939^{+0.004}_{-0.004}$ & $0.938^{+0.005}_{-0.004}$\
$F_\mathrm{ps}$ & $3.5^{+0.4}_{-0.4}$ & $3.2^{+1.1}_{-0.3}$ & $3.6^{+0.2}_{-0.2}$ & $3.6^{+0.5}_{-0.4}$ & $4.3^{+0.5}_{-0.3}$ & $4.0^{+1.1}_{-0.3}$\
$F_\mathrm{gal}$ & $7.6^{+0.2}_{-0.2}$ & $7.6^{+0.3}_{-0.3}$ & $7.9^{+0.1}_{-0.1}$ & $8.0^{+0.1}_{-0.2}$ & $9.60^{+0.04}_{-0.04}$ & $9.59^{+0.05}_{-0.04}$\
$F_\mathrm{iso}$ & $0.3^{+0.2}_{-0.2}$ & $0.3^{+0.4}_{-0.2}$ & $0.8^{+0.2}_{-0.2}$ & $0.6^{+0.4}_{-0.1}$ & $2.0^{+0.2}_{-0.5}$ & $2.2^{+0.3}_{-1.0}$
In the bright-source region, the error band increases almost monotonically with increasing Galactic latitude cut, due to the decreasing number of bright sources present in the ROI. We note that for the $10^\circ$ cut the index $n_1=2.58^{+0.23}_{-0.14}$ matches well within uncertainties the index deduced by the Fermi Collaboration from 1FGL catalog data [$n_1=2.38^{+0.15}_{-0.14}$, @2010ApJ...720..435A] for the same latitude cut and energy band. The first break position, however, was found to be a factor of $2$ to $3$ larger than in the 1FGL analysis. The index below the first break is $n_2 \simeq 2$.
The fits of the faint-source region were stable against changing the Galactic latitude cut. The slopes of the corresponding [$\mathrm{d}N/\mathrm{d}S$]{} fits match well within uncertainties for increasing latitude. Uncertainties grow for higher Galactic latitude cuts, given the lower statistics. For lower latitude cuts, Figure \[fig:GFsys\] indicates an upturn for very faint sources, which is, however, not significant. The stability against the Galactic latitude cut is further supplemented by the integral point-source flux $F_\mathrm{ps}$ (see Table \[tab:GFsys\]), which remains stable within uncertainties.
Table \[tab:GFsys\] shows that the normalization of the Galactic foreground model, $A_\mathrm{gal}$, increases with the latitude cut by $\sim$10% from $10^\circ$ to $50^\circ$, while the integral flux of the isotropic background emission remains constant ($\sim\!9\times
10^{-8}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$). [^20] The increase of $A_\mathrm{gal}$ thus indicates a gradual mismatch between foreground model and data. Likewise, it can also indicate the presence of a new component not covered by our analysis setup. We note that a similar behavior has been found in other analyses, including the 3FGL catalog [see, e.g., Figure 25 in @2015ApJS..218...23A].
The stability of the results obtained in this article is supplemented by comparing with the GPLL mask and the `Pass 7` foreground model (`P7` model). The GPLL mask in particular removes the Galactic lobes and Galactic Loop I, known as regions potentially affected by large systematic model uncertainties. Employing the `P7` model introduces a different Galactic foreground model in its entirety. As demonstrated in Figure \[sfig:gal\_Cdiff\_hp6\], the differences between the models exhibit a nontrivial morphology. The pixel distribution of photon-count differences extends to $\sim$3 counts for the dominant part of the ROI, i.e., systematics can be expected at the flux level of the sensitivity estimate $S_\mathrm{sens}$. The resulting [$\mathrm{d}N/\mathrm{d}S$]{} distributions and the integral point-source fluxes $F_\mathrm{ps}$ are consistent within uncertainties. It is to be noted, however, that the integral isotropic background flux $F_\mathrm{iso}$ increased by a factor of $\sim$2 for the `P7` model. At the same time, $F_\mathrm{gal}$ decreased, maintaining a stable sum $F_\mathrm{gal} + F_\mathrm{iso}$. We therefore remark that modeling uncertainties can cause $F_\mathrm{iso}$ to depend on the Galactic foreground model.
CONCLUSIONS {#sec:conclusions}
===========
In this article, we have employed the pixel-count distribution (1-point PDF) of the 6-year photon counts map measured with *Fermi*-LAT between 1GeV and 10GeV to decompose the high-latitude gamma-ray sky. This statistical analysis method has allowed us to dissect the gamma-ray sky into three different components, i.e., point sources, diffuse Galactic foreground emission, and a contribution from isotropic diffuse background. The analysis of the simple pixel-count distribution has been improved by employing a pixel-dependent approach, in order to fully explore all the available information and to incorporate the morphological variation of components such as the Galactic foreground emission. A summary of the main results obtained with this analysis follows.
The distribution of point sources [$\mathrm{d}N/\mathrm{d}S$]{} has been fit assuming a multiply broken power law (MBPL approach) with one, two, and three free breaks. A possible bias in obtaining the correct statistical uncertainty band for faint-source contributions has been mitigated by extending the setup with a node, which we call the hybrid approach. Figure \[fig:dnds\_final\] summarizes the resulting [$\mathrm{d}N/\mathrm{d}S$]{} distribution at high Galactic latitudes $|b|$ greater than 30$^\circ$.
We have found that both the MBPL approach and the hybrid approach single out a best-fit source-count distribution for $|b|\geq 30^\circ$ that is consistent with a single broken power law for integral fluxes $S$ in the resolved range. Although two-break models are preferred to properly fit the *entire* flux range covered by the data, the second break found in the MBPL approach in the faint-source region is consistent with a sensitivity cutoff. Instead, in the hybrid approach, the second break is needed for a viable determination of the uncertainty band. The MBPL and hybrid approaches have led to comparable results except in the faint-source flux region, where the latter improved the uncertainty band. For bright sources with an integral flux above the first break at $2.1^{+1.0}_{-1.3} \times
10^{-8}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ the [$\mathrm{d}N/\mathrm{d}S$]{} distribution follows a power law with index $n_1=3.1^{+0.7}_{-0.5}$. Below the first break, the index characterizing the intermediate region and the faint-source region of [$\mathrm{d}N/\mathrm{d}S$]{} hardens to $n_2=1.97^{+0.03}_{-0.03}$. It is determined with exceptionally high precision ($\sim$2%) thanks to the high statistics of sources populating that region. The fit is consistent with the distribution of individually resolved sources listed in the 3FGL catalog. We have measured [$\mathrm{d}N/\mathrm{d}S$]{} down to an integral flux of $\sim\! 2\times
10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, improving beyond the 3FGL catalog detection limit by about one order of magnitude.\
To further constrain the physical [$\mathrm{d}N/\mathrm{d}S$]{} distribution at low fluxes, we have derived an upper limit on a possible intrinsic second break from the uncertainty band obtained with the hybrid approach. We have found that a possible second break of [$\mathrm{d}N/\mathrm{d}S$]{} is constrained to be below $6.4\times 10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ at 95% CL, assuming a change of $\Delta n \geq 0.3$ for the power-law indices below and above that break.
We have checked our results against a number of possible systematic and modeling uncertainties of the analysis framework. Likewise, the behavior of [$\mathrm{d}N/\mathrm{d}S$]{} has been investigated as a function of the Galactic latitude cut. We have considered Galactic latitude cuts in the interval between $10^\circ$ and $70^\circ$. We have found that the faint-source and the intermediate regions of [$\mathrm{d}N/\mathrm{d}S$]{} are not altered, while the uncertainty band in the bright end becomes larger due to the decreasing number of bright sources in the ROI. At the same time, fitting the overall normalization of the Galactic foreground template has revealed that it significantly increases with higher latitude cuts. This indicates a possible gradual mismatch between the Galactic foreground model and the data at high latitudes, or a missing component not accounted for in our analysis setup. Note, however, that this increase does not affect the obtained [$\mathrm{d}N/\mathrm{d}S$]{} distribution, which is instead stable.
We have found that the high-latitude gamma-ray sky above $30^\circ$ is composed of $(25 \pm 2)$% point sources, $(69.3 \pm 0.7)$% Galactic foreground, and $(6 \pm 2)$% isotropic diffuse background emission. Both the integral point-source component and the sum of the Galactic foreground and diffuse isotropic background components were stable against Galactic latitude cuts and changes of the Galactic foreground modeling. The choice of the Galactic foreground can, however, affect the integral value of the diffuse isotropic background component itself.
With respect to the recent IGRB measurement by @2015ApJ...799...86A, this analysis has allowed us to attribute between 42% and 56% of its intensity between 1 GeV and 10 GeV to unresolved point sources.
We kindly acknowledge valuable discussions with Luca Latronico and Marco Regis, and the valuable support by the *Fermi* LAT Collaboration internal referee Dmitry Malyshev and the anonymous journal referee in improving the manuscript.
We are grateful for the support of the [*Servizio Calcolo e Reti*]{} of the Istituto Nazionale di Fisica Nucleare, Sezione di Torino, and of its coordinator Stefano Bagnasco.
This work is supported by the research grant [*Theoretical Astroparticle Physics*]{} number 2012CPPYP7 under the program PRIN 2012 funded by the Ministero dell’Istruzione, Università e della Ricerca (MIUR), by the research grants [*TAsP (Theoretical Astroparticle Physics)*]{} and [*Fermi*]{} funded by the Istituto Nazionale di Fisica Nucleare (INFN), and by the [*Strategic Research Grant: Origin and Detection of Galactic and Extragalactic Cosmic Rays*]{} funded by Torino University and Compagnia di San Paolo. This research was partially supported by a grant from the GIF, the German-Israeli Foundation for Scientific Research and Development.
Some of the results in this paper have been derived using the HEALPix [@2005ApJ...622..759G] package. This analysis made use of the package; see for details.
The *Fermi* LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat à l’Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucléaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K. A. Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden.
Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d’Études Spatiales in France.
DERIVATION OF THE [1pPDF]{} FORMULAE FROM POISSON PROCESSES {#app:genfunc_poisson}
===========================================================
The general representation of the generating function $\mathcal{P}^{{(p)}}(t)$ for photon-count maps can be derived from a superposition of Poisson processes. In the following, we consider a population of point sources following a source-count distribution function [$\mathrm{d}N/\mathrm{d}S$]{}. In a generic pixel $p$, covering the solid angle $\Omega_\mathrm{pix}$, we expect an average number of point sources $\mu = \Omega_\mathrm{pix}\,\Delta S\,\mathrm{d}N/\mathrm{d}S$ in the flux interval $[S,S+\Delta S]$.[^21] The number, $n$, of sources of this kind in pixel $p$ follows a Poisson distribution, $$\label{app:eq_p1}
\frac{\mu^n}{n!} e^{-\mu} .$$ Given $n$ sources in the pixel, the average number of gamma-ray counts contributed by sources is $n\,\mathcal{C}(\overline{S})$ (see Equation ), where $\overline{S}$ denotes the average flux of the interval $[S,S+\Delta S]$. In general, the number of counts, $m$, contributed by these sources also follows a Poisson distribution, $$\label{app:eq_p2}
\frac{ (n\,\mathcal{C})^m}{m!} e^{-n\,\mathcal{C}} .$$ Taking into account the distribution in $n$, the probability distribution function $p_m$ of counts $m$ in the given pixel can be obtained by marginalizing over the product of the two distributions (\[app:eq\_p1\]) and (\[app:eq\_p2\]): $$\label{app:pm}
p_m = \sum_n \frac{\mu^n}{n!} e^{-\mu} \, \frac{ (n\,\mathcal{C})^m}{m!} e^{-n\,\mathcal{C}} .$$ This distribution is more conveniently expressed in terms of a generating function, simplifying to $$\label{app:gen_func}
\sum_m p_m \, t^m = \exp \left[ \mu \left(e^{\mathcal{C}(t-1)} -1\right) \right] .$$ Equation is only valid for sources of a given flux interval $[S,S+\Delta S]$. To get the final distribution function of $m$ we need to integrate over the full distribution of $S$, i.e., the source-count distribution [$\mathrm{d}N/\mathrm{d}S$]{}. The generating function for the final distribution of $m$ is given by the product of all individual generating functions , i.e., $$\prod_{\overline{S}} \exp \left[ \mu \left(e^{\mathcal{C} (t-1)} -1\right) \right] =
\exp \left[ \sum_{\overline{S}} \mu \left(e^{\mathcal{C} (t-1)} -1\right) \right] .$$ Using the definition of $\mu$ in the limit $\Delta S \rightarrow
\mathrm{d}S$ and rewriting in terms of $x_m$ as defined in Equation eventually gives the representation of the generating function quoted in Equation , i.e., $$\exp \left[ \sum_{m=1}^{\infty} x_m \left( t^m -1 \right) \right] .$$
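For concreteness, the generating-function representation can be inverted numerically with the standard compound-Poisson recursion $k\,p_k = \sum_{m=1}^{k} m\,x_m\,p_{k-m}$, $p_0 = \exp(-\sum_m x_m)$, which follows directly from differentiating $\exp[\sum_m x_m(t^m-1)]$. The Python sketch below is an illustration, not the code used for this analysis; in particular, `counts_of_s` stands in for $\mathcal{C}(S)$ with the exposure already folded in, and is an assumption of the sketch.

```python
import numpy as np

def counts_pdf_from_xm(x, kmax):
    """p_k of observing k source counts in a pixel, given x[m-1] = x_m,
    the expected number of sources contributing exactly m counts."""
    mmax = len(x)
    p = np.zeros(kmax + 1)
    p[0] = np.exp(-np.sum(x))
    for k in range(1, kmax + 1):
        m = np.arange(1, min(k, mmax) + 1)
        p[k] = np.sum(m * x[m - 1] * p[k - m]) / k   # k p_k = sum_m m x_m p_{k-m}
    return p

def xm_from_dnds(dnds, s_grid, counts_of_s, omega_pix, mmax):
    """Schematic x_m: integrate dN/dS against a Poisson kernel in m,
    with counts_of_s(S) playing the role of C(S) in the text."""
    from scipy.stats import poisson
    x = np.zeros(mmax)
    for m in range(1, mmax + 1):
        integrand = dnds(s_grid) * poisson.pmf(m, counts_of_s(s_grid))
        x[m - 1] = omega_pix * np.trapz(integrand, s_grid)
    return x
```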
DERIVATION OF [$\mathrm{d}N/\mathrm{d}S$]{} FOR CATALOGED SOURCES {#app:dnds_cat}
=================================================================
This section describes our approach of deriving the source-count distribution [$\mathrm{d}N/\mathrm{d}S$]{} (uncorrected for detection efficiency) from the 3FGL catalog. The [$\mathrm{d}N/\mathrm{d}S$]{} distribution was derived self-consistently for each ROI considered in the article. We first selected all 3FGL sources contained in a given ROI. For each source we adopted the best-fit spectral model (power law, log-parabola, power law with exponential or super-exponential cutoff) indicated in the catalog, using the reported best-fit parameters. The source photon flux in the energy range of interest was calculated by integrating this spectrum. The [$\mathrm{d}N/\mathrm{d}S$]{} was built as a histogram from the above-mentioned flux collection, using appropriate binning and normalizing it to the solid angle covered by the ROI.
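A minimal sketch of this bookkeeping, restricted to the power-law spectral case and with hypothetical function and variable names, is given below; the log-parabola and cutoff spectral forms used in the catalog would require their own integrals, which are omitted here.

```python
import numpy as np

def powerlaw_photon_flux(n0, gamma, e0, emin=1.0, emax=10.0):
    """Photon flux (cm^-2 s^-1) of dN/dE = n0 (E/e0)^-gamma integrated from emin to emax (GeV)."""
    if np.isclose(gamma, 1.0):
        return n0 * e0 * np.log(emax / emin)
    return n0 * e0 / (1.0 - gamma) * ((emax / e0) ** (1.0 - gamma)
                                      - (emin / e0) ** (1.0 - gamma))

def catalog_dnds(fluxes, omega_roi_sr, bins):
    """dN/dS (sources per unit flux and per steradian) from a collection of source fluxes."""
    counts, edges = np.histogram(fluxes, bins=bins)
    centers = np.sqrt(edges[:-1] * edges[1:])          # geometric centers for log bins
    dnds = counts / (np.diff(edges) * omega_roi_sr)
    return centers, dnds

# Example: bins = np.logspace(-11, -7, 25)
#          centers, dnds = catalog_dnds(source_fluxes, omega_roi, bins)
```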
NODE-BASED APPROACH {#app:node_based}
===================
The node-based approach as introduced in Section \[sssec:fit\_approach\] serves as an independent cross-check for the complementary approach of keeping the positions of breaks as free fit parameters. We applied the node-based approach to the $|b|\geq 30^\circ$ data between 1GeV and 10GeV. The choice of the node positions was driven by two criteria, i.e., (a) to reasonably approximate the bright-source and intermediate regions covered by catalog data, and (b) to approximate possible features in the faint-source region without overfitting the data. We therefore chose seven nodes: $5\times 10^{-7}$, $10^{-8}$, $10^{-9}$, $3\times
10^{-10}$, $3\times 10^{-11}$, $10^{-11}$, and $5\times
10^{-12}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. Remaining parameters and priors were chosen in the same way as discussed in Section \[sssec:priors\] for the hybrid approach.
The [$\mathrm{d}N/\mathrm{d}S$]{} fit employing the node-based approach is shown in Figure \[fig:nodes\]. The fit matches well the results found in Section \[sec:application\] within statistical uncertainties.
MONTE CARLO SIMULATIONS {#app:sims}
=======================
The analysis method and the techniques of fitting the pixel-count distribution were validated with Monte Carlo simulations. We used the `gtobssim` utility of the Fermi Science Tools package to simulate realistic mock maps including a point-source contribution, the Galactic foreground, and a diffuse isotropic background component. Mock maps were analyzed with the same analysis chain as used for the real data.
Setup {#sapp:sims_setup}
-----
Mock data were simulated for a time period of 5years, using `P7REP` instrumental response functions and the *Fermi*-LAT spacecraft file corresponding to the real data set. Data selection resembled the procedure applied for real data. Accordingly, an energy range between 1GeV and 10GeV was chosen, and the effective PSF was derived in compliance with the simulated data set.
To demonstrate the applicability of the analysis and to investigate the sensitivity, we simulated realizations of four different toy source-count distributions, tagged A1, A2, B, and C. In all four cases, [$\mathrm{d}N/\mathrm{d}S$]{} was modeled with a broken power law, where $n_1$ denotes the index above the break and $n_2$ the index below the break: (A1) no break, with $n_1 \equiv n_2 = 2.0$, (A2) break at $10^{-10}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, with $n_1=2.0$, $n_2=1.6$, (B) break at $10^{-10}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, with $n_1=2.3$, $n_2=1.6$, and (C) break at $10^{-10}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, with $n_1=1.6$, $n_2=2.5$. In particular, model A1 approximates what was found in the real data (see Section \[sec:application\]). Model A2 was chosen to investigate the sensitivity of the analysis in the faint-source region, while models B and C impose two extreme scenarios.
Point-source fluxes were simulated according to the given [$\mathrm{d}N/\mathrm{d}S$]{} model, and positions were distributed isotropically across the sky. Realized sources were passed to `gtobssim` individually. The flux range covered by the [$\mathrm{d}N/\mathrm{d}S$]{} distributions was limited to the interval $[10^{-12},10^{-8}]\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. The lower bound of this interval was chosen to be sufficiently small to investigate the sensitivity limit. At the same time, the upper bound ensures a setup that is reasonably simple to study, while resembling the real data in all flux regions except the bright-source region. Flux spectra of individual point sources were modeled with power laws with a fixed power-law index of $\Gamma = 2.0$. In addition, models A1 and A2 were simulated incorporating a distribution of point-source spectral indices. We assumed a Gaussian distribution centered on $\overline{\Gamma}=2.4$, with a half-width $\sigma_\mathrm{\Gamma} = 0.2$.
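Fluxes for such a broken power-law [$\mathrm{d}N/\mathrm{d}S$]{} can be drawn by inverse-CDF sampling on the two branches. The following is a sketch under that assumption, not the `gtobssim` input generator actually used; the indices follow the convention above ($n_1$ above the break, $n_2$ below), and indices equal to 1 are not handled.

```python
import numpy as np

def sample_broken_powerlaw(n, s_break, n1, n2, s_min=1e-12, s_max=1e-8, rng=None):
    """Draw n fluxes from dN/dS ~ S^-n2 (below s_break) and ~ S^-n1 (above),
    continuous at s_break and truncated to [s_min, s_max]."""
    rng = np.random.default_rng() if rng is None else rng

    def norm(a, lo, hi):
        # integral of S^-a from lo to hi (a != 1 assumed)
        return (hi ** (1.0 - a) - lo ** (1.0 - a)) / (1.0 - a)

    def inv_cdf(u, a, lo, hi):
        # inverse CDF of a truncated power law S^-a on [lo, hi]
        return (lo ** (1.0 - a) + u * (hi ** (1.0 - a) - lo ** (1.0 - a))) ** (1.0 / (1.0 - a))

    w_lo = norm(n2, s_min, s_break)
    w_hi = s_break ** (n1 - n2) * norm(n1, s_break, s_max)   # continuity at the break
    below = rng.random(n) < w_lo / (w_lo + w_hi)

    u = rng.random(n)
    s = np.empty(n)
    s[below] = inv_cdf(u[below], n2, s_min, s_break)
    s[~below] = inv_cdf(u[~below], n1, s_break, s_max)
    return s

# Model A2 of the text: s = sample_broken_powerlaw(10000, 1e-10, n1=2.0, n2=1.6)
# Spectral indices for the variable-index runs:
#   np.random.default_rng().normal(2.4, 0.2, size=s.size)
```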
The Galactic foreground was modeled using the template discussed in Section \[ssec:bckgs\]. The isotropic background emission was modeled with respect to the analysis cuts. The model is given by the corresponding analysis template `iso_clean_front_v05.txt`[^22]. The simulated background emission between 1GeV and 10GeV was normalized to an integral flux of $\sim\!3\times
10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$ in the case of the fixed-index simulations and to $\sim\!1.5\times
10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$ otherwise. To investigate a possible bias caused by a distribution of spectral indices, model A2 was simulated without any backgrounds (source-only), increasing sensitivity.
Results {#sapp:sims_results}
-------
The mock data were analyzed applying the procedure established in Section \[sec:application\]. The MBPL approach was conducted allowing three free breaks. Priors were adjusted to cover the intermediate and faint-source regions appropriately. The hybrid approach was carried out choosing two free breaks and a node at $5\times 10^{-12}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ ($10^{-12}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$) in the case of simulations with a fixed (variable) point-source spectral index. The node was placed at the faint cutoff deduced from the MBPL fit.
The results of the analyses are depicted in Figure \[fig:sim\] for the fixed-index simulations and in Figure \[fig:sim\_spread\] for the simulations including the spectral-index distribution. Figures \[sfig:sim\_mbpl\] and \[sfig:sim\_mbpl\_sp\] demonstrate that the MBPL approach recovered well the simulated [$\mathrm{d}N/\mathrm{d}S$]{} distributions (red data points) in the intermediate and faint-source regions. It can also be seen that the [$\mathrm{d}N/\mathrm{d}S$]{} fit follows statistical fluctuations around the model within allowed degrees of freedom. The position of the break, corresponding to parameter $S_\mathrm{b2}$ of the model fit, is well constrained and in good agreement with the simulated input. However, uncertainty bands are biased for very faint sources; in particular, for model C a sensitivity cutoff before the faint end of the simulated source distribution was found. The mismatch increases for the results obtained from the Bayesian posterior, while the profile likelihood fit is comparatively more accurate. This behavior becomes most pronounced for model C.
The bias of the fit in the faint-source region can be significantly reduced with the hybrid approach; see Figures \[sfig:sim\_hbd\] and \[sfig:sim\_hbd\_sp\]. The hybrid approach resolved the sampling issues affecting the Bayesian posterior. The data points are well covered by the derived uncertainty bands.
Possible systematics caused by a distribution of point-source spectral indices are addressed by Figure \[fig:sim\_spread\]. The data sets with [$\mathrm{d}N/\mathrm{d}S$]{} realizations of models A1 and A2, each simulated incorporating the Gaussian distribution of spectral indices, were analyzed with the same analysis chain as used for real data, i.e., assuming a constant spectral index of 2.4. Figure \[fig:sim\_spread\] shows that no evidence for a systematic effect on the [$\mathrm{d}N/\mathrm{d}S$]{} fit was found for $S \gtrsim
S_\mathrm{sens}$. Below the sensitivity limit $S_\mathrm{sens}$, the uncertainty bands shift slightly downward in comparison to model A1 in Figure \[fig:sim\]. The high statistics of the source-only simulation of model A2 indeed increased the sensitivity (see bottom row of Figure \[fig:sim\_spread\]), as expected. We found that the break was recovered well, again indicating no important systematic effect.
The Galactic foreground normalization parameter $A_\mathrm{gal}$ was found to be $\sim$1.05 in all considered scenarios, with no evidence for a dependence on the Galactic latitude cut. For the realization of model A1 for fixed spectral indices, for instance, the value of $A_\mathrm{gal}$ obtained from the posterior was $1.050 \pm 0.002$, $1.055 \pm 0.005$, and $1.066 \pm 0.014$ for Galactic latitude cuts of $10^\circ$, $30^\circ$, and $50^\circ$, respectively. Profile likelihood parameter estimates were similar, with slightly larger uncertainties. The overall effect of obtaining $A_\mathrm{gal}$ larger than 1 can be attributed to remaining degeneracies between the Galactic foreground model and the diffuse isotropic background component. However, a slight dependence on the Galactic latitude cut cannot be excluded within statistical uncertainties.
In conclusion, all toy distributions were well reproduced with the hybrid approach within statistical uncertainties. The mock data indicate that the actual sensitivity depends on the source-count distribution and the background components, matching our expectation (see Section \[sec:analysis\_routine\]). One can nevertheless conclude from the two extreme scenarios (models B and C) that the sensitivity estimate $S_\mathrm{sens}$ constitutes a conservative benchmark for the energy band between 1GeV and 10GeV.
[^1]: The experiment exposure, which depends on energy and position, is discussed in Section \[sec:Fermi\_data\].
[^2]: Equation [(\[eq:Dgen\])]{} can be derived from Equation [(\[eq:gf\])]{} by taking $p^{{(p)}}_k$ as a Poissonian with mean $x_\mathrm{diff}^{{(p)}}$.
[^3]: See also http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html for details.
[^4]: http://galprop.stanford.edu/
[^5]: Version v3.8, 2014 October
[^6]: We defined $\Delta \ln
\mathcal{L} = \ln \left(\mathcal{L}/\mathcal{L}_\mathrm{max}
\right)$, where $\mathcal{L}_\mathrm{max} = \max(\mathcal{L})$.
[^7]: *Fermi*-LAT data are publicly available at http://heasarc.gsfc.nasa.gov/FTP/fermi/data/lat/weekly/p7v6d/
[^8]: The data set covers the time period between 2008 August 4 (239,557,417 MET) and 2014 August 4 (428,859,819 MET).
[^9]: See http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/
[^10]: The number of pixels of the all-sky map is given by $N_\mathrm{pix} = 12 N_\mathrm{side}^2$; $N_\mathrm{side}$ can be obtained from the resolution parameter by $N_\mathrm{side} = 2^\kappa$.
[^11]: See Section 4.2 in @2015ApJS..218...23A. The catalog threshold has been rescaled to the 1GeV to 10GeV energy band assuming an average photon index of 2.4.
[^12]: Given that the choice of $S_0$ turns out to be larger than the position of the first break, we note that increasing the interval to larger fluxes is not required.
[^13]: See Section \[sssec:par\_est\_freq\]. Further details on the derivation of uncertainties are given in the caption of Figure \[sfig:plike\_mbpl\_3\].
[^14]: The value approximates the faint cutoff positions obtained from the posterior of the MBPL fit.
[^15]: The contribution from the interval below the sensitivity estimate, $[S_\mathrm{nd1},S_\mathrm{sens}]$, is subdominant, i.e., $(16 \pm
7)$% of $F_\mathrm{ps}$.
[^16]: The IGRB obtained by @2015ApJ...799...86A in the 1GeV to 10GeV energy band is between $\sim\!3.2 \times
10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$ and $\sim\!4.3 \times
10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$, including systematic uncertainties of the Galactic foreground modeling. Note that this measurement refers to the 2FGL catalog, which has been used for subtracting resolved sources from the EGB. We therefore attribute a flux of $1.8^{+0.3}_{-0.2}\times
10^{-7}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$ to unresolved point sources in this IGRB measurement (using $F^\mathrm{2FGL}_\mathrm{cat}$ as quoted in Table \[tab:comp\_hybrid\_hp6\_2\]).
[^17]: Given that most source photons are emitted at low energies, we remark that the value of $2.5^\circ$ corresponds to almost $4 \sigma_\mathrm{psf}(1\,\mathrm{GeV})$. The 68% containment radius of the PSF at 1GeV is $\sigma_\mathrm{psf}(1\,\mathrm{GeV}) \simeq 0.67^\circ$.
[^18]: The Galactic foreground emission model was smoothed with a Gaussian kernel of $2^\circ$ before applying the threshold.
[^19]: See http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html
[^20]: Given large uncertainties and increasing degeneracies, the $|b|\geq
70^\circ$ ROI has been excluded from this discussion.
[^21]: For clarity, we omit the pixel index ${{(p)}}$ in the following.
[^22]: See http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html
---
author:
- |
Hideyuki UMEDA$^1$, Naoki IWAMOTO$^2$\
Sachiko TSURUTA$^3$, Letao QIN$^3$, & Ken’ichi NOMOTO$^1$\
\
[*$^1$Research Center for the Early Universe and Department of Astronomy,* ]{}\
[*School of Science, University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan*]{}\
[*[email protected]*]{}\
[*$^2$ Department of Physics and Astronomy, University of Toledo, Toledo, Ohio 43606-3390, USA* ]{}\
[*$^3$ Department of Physics, Montana State University, Bozeman, Montana 59717-0350, USA*]{}\
\
[ Proceedings of the Symposium (17 - 20 November 1997, Tokyo)]{}\
[“Neutron Stars and Pulsars” ]{}\
[ eds. N. Shibazaki (World Scientific), p. 213, 1998 ]{}\
title: Axion Mass Limits from Cooling Neutron Stars
---
The axion arises as a solution to the strong CP problem (Turner 1990, Raffelt 1990). While the standard axion model was excluded by experiments, the invisible axion model has survived mainly because the axion’s coupling to matter is weak, which is an unknown parameter in the theory. Over the years, various laboratory experiments as well as astrophysical arguments have been used to constrain its parameters. Since laboratory experiments can explore only a limited parameter regime, including those planned in the foreseeable future, astrophysical considerations have played an important role in placing the limits on the axion parameters. Within these limits, the axion remains as one of the candidates for dark matter.
There are two types of axion models—the KSVZ (hadronic) model (Kim 1979, Shifman [[*et al.* ]{}]{}1980) and the DFSZ model (Dine [[*et al.* ]{}]{}1981, Zhitnitskii 1980). In the KSVZ model, the axion couples only to the photons and hadrons, while in the DFSZ model the axion couples to the charged leptons as well. The axion-fermion and axion-photon coupling constants as well as the axion mass are unknown parameters in these theories. Currently, cosmological arguments give $m_a > 10^{-5}$ eV (Abbott and Sikivie 1983, Dine and Fischler 1983). The limit from Supernova 1987A, which used to give $m_a < 10^{-3}$ eV, is now somewhat relaxed $m_a < 0.01$ eV (Raffelt and Seckel 1991, Janka [[*et al.* ]{}]{}1996). The red giant limit $m_a < 0.009/\cos^2\beta $ (Raffelt and Weiss 1995) applies only to the DFSZ model. The laboratory experiments give weaker limits.
In the present paper we study how axion emission affects the thermal evolution of neutron stars. We use the neutron star evolutionary code with three types of equation of state to calculate the surface temperature of neutron stars. We compare theoretical cooling curves with observation and obtain the upper limits on the axion mass, which are weaker than, but comparable with, the limit from SN 1987A.
In neutron stars, the dominant axion emission mechanisms are the following bremsstrahlung processes in the stellar core: $n + n\rightarrow n+ n+a$, $p+p \rightarrow p+p+a$, and $n+p \rightarrow n+p+a$, where $n, p,$ and $a$ are the neutron, proton and axion. The energy loss rate of each process, in the units $\hbar = c=1$, is given by (Iwamoto [[*et al.* ]{}]{}1998), $$\epsilon_{ann} = \frac{31 g_{ann}^2 }{3780\pi} {m_n^*}^2 p_F(n)
\left(\frac{f}{m_\pi}\right)^{4} F(x) (k_B T)^6,$$ $$\epsilon_{app} = \frac{31 g_{app}^2 }{3780\pi} {m_p^*}^2 p_F(p)
\left(\frac{f}{m_\pi}\right)^{4} F(y) (k_B T)^6,$$ $$\epsilon_{anp} \simeq \frac{31}{5670\pi} m_N^2 p_F(p)
\left(\frac{f}{m_\pi}\right)^{4} G(x,y) (k_B T)^6,$$ where $$F(z)\equiv 1-\frac{3}{2}z~\rm{arctan} \left(\frac{1}{z}\right)
+\frac{z^2}{2(1+z^2)},$$
$$\begin{aligned}
G(x,y) &\equiv& {\frac{1}{2}}(g^2+h^2)F(y) \nonumber \\
& +& (g^2+ {\frac{1}{2}}h^2) \Bigl[ F(\frac{2xy}{x+y})
+ F(\frac{2xy}{y-x}) \nonumber \\
&& +(\frac{y}{x}) \bigl\{ F(\frac{2xy}{x+y})-
F(\frac{2xy}{y-x})\bigr\} \Bigr] \nonumber \\
&+&(g^2+h^2)(1-y ~\rm{arctan}(1/y)), \end{aligned}$$
$g\equiv g_{app}+g_{ann}$, $h\equiv g_{app}-g_{ann}$; $x\equiv m_\pi /2p_F(n)$, $y\equiv m_\pi /2p_F(p)$; $f\simeq 1$ is the pion-nucleon coupling constant; $p_F(n)\simeq 340(\rho/\rho_0)^{1/3}$ MeV/c, $p_F(p)\simeq 85(\rho/\rho_0)^{2/3}$ MeV/c are the nucleon Fermi momenta; $m_p, m_n$ and $m_\pi$ are the proton and neutron effective masses and pion mass, respectively. $$g_{aii} \equiv \frac{c_i m_N}{(f_a/12)}$$ is the axion-nucleon coupling constant, where $i=p$ (proton) or $n$ (neutron), $m_N$ is the nucleon mass, and $f_a$ is the axion decay constant. $c_i$ depends on the models: the DFSZ model gives $$c_p=-0.10-0.45 \cos^2\beta, ~c_n=-0.18+0.39 \cos^2\beta,$$ and the KSVZ (hadronic) model gives $$c_p=-0.385, \quad c_n=-0.044.$$ The axion mass is related to $f_a$ via $$m_a = \frac{0.0074}{f_a/(10^{10}{\rm GeV})} \rm{eV}.$$ We note that axion emission is suppressed if nucleons become superfluid, as in the case of neutrino emission involving nucleons.
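For orientation, the relations quoted above (the model-dependent couplings, the phase-space factor $F(z)$, and the $m_a$–$f_a$ relation) can be evaluated with a few lines of Python. This is a minimal sketch based solely on the formulae as written in the text; the numerical value of the nucleon mass is an input assumption, and the sketch does not attempt the full emissivity, which requires the density-dependent effective masses and Fermi momenta.

```python
import numpy as np

M_N = 0.939  # GeV, nucleon mass (assumed value)

def axion_mass_eV(f_a_GeV):
    """m_a = 0.0074 eV / (f_a / 10^10 GeV), as quoted in the text."""
    return 0.0074 / (f_a_GeV / 1e10)

def couplings(f_a_GeV, model="KSVZ", cos2beta=1.0):
    """Axion-nucleon couplings (g_app, g_ann) from g_aii = c_i m_N / (f_a/12),
    with the model-dependent c_p, c_n quoted in the text."""
    if model == "KSVZ":
        c_p, c_n = -0.385, -0.044
    else:  # DFSZ
        c_p = -0.10 - 0.45 * cos2beta
        c_n = -0.18 + 0.39 * cos2beta
    denom = f_a_GeV / 12.0
    return c_p * M_N / denom, c_n * M_N / denom

def F(z):
    """Phase-space factor entering the bremsstrahlung emissivities."""
    return 1.0 - 1.5 * z * np.arctan(1.0 / z) + z ** 2 / (2.0 * (1.0 + z ** 2))

# Since the emissivities scale as g^2, i.e. as m_a^2, a cooling curve constrains
# m_a directly; e.g. m_a = 0.1 eV corresponds to f_a ~ 7.4e8 GeV:
# print(axion_mass_eV(7.4e8))
```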
We employ the numerical calculation code essentially the same as the one described in Umeda [[*et al.* ]{}]{}(1994), except for the inclusion of the energy loss due to axion emission. We neglect internal and other possible heating mechanisms as well as the existence of non-standard cooling mechanisms. The baryon mass of the neutron star is set to 1.4 $M_\odot$.
Theoretical cooling curves are compared with the observational data for three pulsars: PSR 1055-52, Geminga and PSR 0656+14 (see Tsuruta 1998 and Becker 1994 for references). The energy loss rate due to axion emission is proportional to the axion mass squared, $m_a^2$; therefore, we can obtain the upper limit on the axion mass from the condition that the cooling curve does not pass below the lower bounds on the observational points.
In Figure 1, we show the [*standard*]{} cooling curve and those with the KSVZ axion model for four different axion masses (or $f_a$). The FP equation of state and the TT neutron $^3P_2$ superfluid energy gap (Takatsuka and Tamagaki 1993) are adopted. Since the data point for PSR 1055-52 is located above the standard cooling curve, we do not use these data: this is likely to be due to some other (unknown) effects. Conservative limits can be obtained by using the other two data points. Figure 1 shows that PSR 0656+14 gives a more stringent limit than Geminga, and hence we obtain the axion mass limit from the lower bound on the PSR 0656+14 data.
The results for both the KSVZ and DFSZ axion models with stiff (PS), medium (FP) and soft (BPS) equations of state are summarized in Figures 2-4. The BPS model gives the most stringent limit. This is because the TT gap vanishes in the high density region (i.e., inside the stellar core) with this equation of state; thus, axion emission is not suppressed. Extending superfluidity to higher density regions will have an effect similar to increasing the stiffness of the equation of state. For example, in the FP model, if the AO neutron $^3P_2$ gap (Amundsen and Østgaard 1985) is adopted, $m_a^{\rm max}$ is 0.3 eV, while if there is no neutron $^3P_2$ superfluid, $m_a^{\rm max}$ is 0.06 eV. Note, however, that the AO model probably overestimates the energy gap at high densities, because the density dependence of the neutron effective mass is neglected. Future refinements of the observations will provide more stringent limits.
This work is supported in part by the grant-in-Aid for Scientific Research (05242102, 06233101, 6728) and COE research (07CE2002) of the Ministry of Education, Science and Culture in Japan, and by the NASA (NAGW-2208, NAG5-2557) and NSF (PHY-9722138).
Abbott, L. F., and Sikivie, P., 1983, Phys. Lett., B120, 133

Amundsen, L., and Østgaard, E., 1985, Nucl. Phys., A437, 487
Becker, W., 1994, Ph.D. Thesis, München University
Dine, M., Fischler, W., and Srednicki, M., 1981, Phys. Lett., 104B, 199
Dine, M., and Fischler, W., 1983, Phys. Lett, B120, 137
Iwamoto, N., Umeda, H., Tsuruta, S., Nomoto, K. and Qin, L., 1998, in preparation
Janka, H.-T., Keil, W., Raffelt, G., and Seckel, D., 1996, Phys. Rev. Lett., 76, 2621
Kim, J. E., 1979, Phys. Rev. Lett., 43, 103
Raffelt, G., 1990, Phys. Rep., 198, 1
Raffelt, G., and Seckel, D., 1991, Phys. Rev. Lett., 67, 2605
Raffelt, G., and Weiss, A., 1995, Phys. Rev., D51, 1495
Shifman, M., Vainshtein, A., and Zakharov, V., 1980, Nucl. Phys., B166, 493
Takatsuka, T., and Tamagaki, R., 1993, Prog. Theor. Phys. Suppl., 112, 27
Tsuruta, S., 1998, Phys. Rep., 292, 1
Turner, M. S., 1990, Phys. Rep., 197, 67
Umeda, H., Tsuruta, S., and Nomoto, K., 1994, ApJ, 433, 256
Zhitnitskii, A. P., 1980, Sov. J. Nucl. Phys., 31, 260
---
abstract: 'We model and interpret the Kilohertz QPOs from the hydrodynamical description of accretion disk around a rapidly rotating compact strange star. The higher QPO frequency is described by the viscous effects of accretion disk leading to shocks, while the lower one is taken to be the Keplerian motion of the accreting matter. Comparing our results with the observations for two of the fastest rotating compact stellar candidates namely, 4U 1636$-$53 and KS 1731$-$260, we find that they match to a very good approximation, thus interpreting them as strange stars.'
author:
- 'Banibrata Mukhopadhyay$^1$, Subharthi Ray$^2$, Jishnu Dey$^3$, Mira Dey$^4$'
title: 'Origin and interpretation of kilohertz QPOs from strange stars in X-ray binary system: theoretical hydrodynamical description'
---
[1. Inter-University Centre for Astronomy and Astrophysics, Ganeshkhind, Pune, India\
2. Instituto de Fisica, Universidade Federal Fluminense, Niterói, RJ, Brazil; FAPERJ Fellow\
3. Abdus Salam ICTP, Trieste, Italy; on leave from Maulana Azad College, Kolkata, India\
4. Abdus Salam ICTP, Trieste, Italy; on leave from Presidency College, Kolkata, India\
]{}
Accepted for publication in [*Astrophysical Journal Letters*]{}
Introduction
============
In X-ray binaries, matter is transferred from a normal star to a compact star. As the characteristic velocities near the compact object are of order $\rm (GM/R)^{1/2}\sim 0.5c$, the (dynamical) time scale for the motion of matter through the emitting region is short. The significance of millisecond X-ray variability from X-ray binaries is therefore clear: milliseconds are the natural time scale of the accretion process in the X-ray emitting regions, and hence strong X-ray variability on such time scales is certainly caused by the motion of matter in these regions. Orbital motion, stellar spin, and disk and stellar oscillations are all expected to happen on these time scales.
In recent years, kilohertz quasi-periodic oscillation (kHz QPOs) peaks, with a certain width, have been found in the power spectra of more than 20 low-mass binaries (LMXBs). In most of the sources, the kHz QPOs are found in pairs and at least one of the peaks should reflect the orbital motion of matter outside, but not too far from the compact star, in near-Keplerian orbits.
Another high frequency phenomenon, namely nearly coherent oscillations which slightly drift in frequency, was detected during some type I X-ray bursts. These so-called burst oscillations have been detected in the power spectra of some ten sources and their X-ray flux modulation is consistent with being due to the changing aspect of a drifting hot spot on the surface of the compact star. Therefore, these burst oscillations are thought to reflect the spin frequency of the star.
In a very important work, Jonker et al. (2002, hereinafter JMK) report on high frequency QPO phenomena that have been observed in the LMXBs, particularly 4U 1636$-$53; two kHz QPOs (Zhang et al. 1996a; Wijnands et al. 1997) as well as a sideband to the lower of the two kHz QPOs (Jonker et al. 2000) and burst oscillations with frequency $\nu_b = 581$ Hz (Zhang et al. 1996b; Strohmayer et al. 1998). Miller (1999) presented evidence that these oscillations are in fact the second harmonic of the spin of the star $\sim 290.5$ Hz. However, using another data set these findings were not confirmed (Strohmayer 2001).
Since the frequency difference between the two kHz QPO peaks (the peak separation $\Delta \nu$) is nearly equal to half the burst oscillation frequency, a beat frequency model was proposed for the kHz QPOs (Strohmayer et al. 1996). In such a model, the higher frequency QPO is attributed, as in most other models, to the orbital frequency of chunks of plasma at a special radius near the compact star while the lower QPO is due to beat between the orbital frequency and the stellar spin frequency. A specific model incorporating the beat frequency mechanism, the sonic-point beat frequency model, is due to Miller et al. (1998). The model faced criticism in the work of Méndez et al. (1998) when the $\Delta \nu$ was found to be less than half the burst oscillation frequency $\nu_b$. This could be explained in the sonic-point model (Lamb & Miller 2001) by taking into account the inward plasma velocity (see JMK for details). However, in the recent observations of JMK, the $\Delta \nu$ is significantly larger than half of $\nu_b$ causing further problems. The only change in the flow pattern previously described that could produce the observed change in $\Delta \nu$ is that in which the accretion disk changes from prograde to retrograde when the lower kHz QPO is seen to move from 750 to 800 Hz.
In contrast to the beat frequency, sonic-point and other models, Osherovich & Titarchuk (1999) and Titarchuk & Osherovich (1999) take the inner QPO to be the Keplerian accretion disk, leading to a substantially smaller radius for the compact object, which is hard to explain with any of the known neutron star models. Li et al. (1999) showed that the observed compact star 4U 1728$-$34 can be fitted to a strange star model that uses the realistic strange matter equation of state of Dey et al. (1998). The higher QPO in the Osherovich & Titarchuk (1999) model is due to the modification of the Keplerian frequency under the influence of the Coriolis force in a rotating frame of reference.
The present letter deals with a model which takes the lower QPO to be the Keplerian motion of the accreting fluid around a rapidly rotating compact strange star. It also seeks to explain the higher QPO as being due to viscous effects that lead to shock formation (Chakrabarti 1989, 1996; Molteni et al. 1996b, hereinafter MSC) and solves the hydrodynamic equations in the presence of a [*pseudo-Newtonian potential*]{} (Mukhopadhyay 2002a; Mukhopadhyay & Misra 2003, hereinafter MM), which describes the relativistic properties of the accretion disk close to the compact object with an effective cutoff for an appropriate boundary condition that requires the accreting particles to approach zero velocity asymptotically. The same model applied to black holes would allow the particles to reach luminal velocities.
The formation of shocks in accretion disks around compact objects was discussed by several independent groups (e.g. Sponholz & Molteni 1994; Nabuta & Hanawa 1994; Molteni et al. 1994; Yang & Kafatos 1995; Chakrabarti 1996; Molteni et al. 1996a; MSC; Lu & Yuan 1997; Chakrabarti & Sahu 1997), either analytically or through numerical simulations. Recently, Mukhopadhyay (2002b) showed that even two shocks can form in the accretion disk around a neutron star (or any other compact object with a hard surface). It was shown in MSC that the shock location may oscillate in the disk and that this oscillatory behaviour is directly related to the cooling and advective time scales of the matter. Subsequently it was shown that the corresponding oscillation frequency is related to the location of the shock, and the observed QPO frequencies for various black hole candidates could be explained. They (MSC) showed that the theoretically calculated QPO frequencies (of the order of Hz) tally with the observations for the black hole candidates GS 339-4 and GS 1124-68. As the shock location comes closer to the compact object, the QPO frequency increases (MSC). In the case of accretion flow around a compact object other than a black hole, the shock(s) may form even closer to the compact object than in the black hole case (Mukhopadhyay 2002b). Thus the corresponding QPO frequencies are expected to be higher, of kilohertz order, which could explain the origin of the observed higher frequency (HF) oscillations of the kilohertz QPOs for such compact objects.
Recently, MM prescribed a couple of potentials to describe the time varying relativistic properties of accretion disk around compact objects. They considered the Keplerian accretion disk and proposed modified gravitational force which could give the Keplerian angular frequency of the accreting matter around the compact object. As mentioned above, the lower frequency (LF) oscillation of the kilohertz QPO may arise due to the Keplerian motion of the accreting matter towards compact object, and if we know the radius of the Keplerian orbit, following MM we can calculate the LF.
In this letter we will calculate the pair of kHz QPOs for fast rotating candidates, mainly 4U 1636$-$53 ($\nu=582$ Hz) and KS 1731$-$260 ($\nu=523.92$ Hz). To calculate the HF, we will more or less follow MSC, but with the necessary modifications based on the model of MM to account for the rotation of the compact object (the work of MSC was confined to non-rotating black holes only). For the LF, we will follow MM, where the [*pseudo-Newtonian potentials and corresponding forces*]{} are given to describe the Keplerian motion of the accreting matter. In the next section, we will briefly describe the basic set of viscous accretion disk equations and the formalism to calculate the QPO frequencies. Then in §3, we will tabulate the QPO frequencies from our theory and compare them with observations, and finally in §4, we will present a summary.
Basic Equations and formalism
==============================
We will follow Chakrabarti (1996) to describe the viscous set of equations for accretion disk which are given as $$\begin{aligned}
\nonumber
&& -4\pi x\Sigma v=\dot{M},\hskip0.5cm
% \nonumber
v\frac{dv}{dx}+\frac{1}{\rho}\frac{dP}{dx}-\frac{\lambda^2}{x^3}+F(x)=0, \\
\nonumber
&& v\frac{d\lambda}{dx}=\frac{1}{\Sigma x}\frac{d}{dx}\left[x^2\alpha\left
(\frac{I_{n+1}}{I_n}P+v^2\rho\right)h(x)\right], \\
\nonumber
&& \Sigma vT\frac{ds}{dx}=\frac{vh(x)}{\Gamma_3-1}\left(\frac{dP}{dx}-
\Gamma_1\frac{p}{\rho}\frac{d\rho}{dx}\right)=Q^+-Q^-. \\
\label{diskeq}
\end{aligned}$$
Here, throughout our calculations, we express the radial coordinate in units of $GM/c^2$, where $M$ is the mass of the compact star, $G$ is the gravitational constant and $c$ is the speed of light. We also express the velocity in units of the speed of light and the angular momentum in units of $GM/c$. Following Cox & Giuli (1968), we define $\Gamma_1,\Gamma_3$; to define the vertically integrated density, $\Sigma$, and pressure, $W$, we follow Matsumoto et al. (1984). $\beta$ and $h(x)$ are defined as the ratio of gas pressure to total pressure and the half-thickness of the disk, $h(x)=c_s x^{1/2}F^{-1/2}$, respectively. We follow Mukhopadhyay (2002a) and MM to define the effective gravitational pseudo-Newtonian force, $F(x)$. We consider adiabatic flow, where the speed of sound is given as $c_s^2=\frac{\gamma P}{\rho}$, related to the temperature ($T$) of the system as $c_s=\sqrt{\frac{\gamma kT}{m_p}}$. We consider the magnetic field strength in the disk to be negligible compared to the viscous effect, so that the heat evolved is solely due to viscosity, defined as $Q^+=\frac{W_{x\phi}^2}{\eta}$, where $W_{x\phi}$ and $\eta$ are the viscous stress tensor and the coefficient of viscosity respectively. For simplicity, the heat lost is considered proportional to the heat gained by the flow. Thus the net heat contained in the flow is chosen to be $Q^+f$, where $f$ is the cooling factor, which is close to $0$ and $1$ for cooling and heating dominated flow respectively. $\alpha$ is the Shakura-Sunyaev (Shakura & Sunyaev 1973) viscosity parameter and $s$ is the entropy density. To get the thermodynamic properties of the disk, we need to solve (\[diskeq\]).
It is shown in MSC that when a shock is formed in the accretion disk, the cooling ($t_{\rm cool}$) and advection ($t_{\rm adv}$) time scales of the matter from the shock location to the inner edge of the disk are responsible for the oscillatory behaviour of the shock, which is related to the QPO. When the physical condition of the disk is such that $t_{\rm cool}\sim t_{\rm adv}$, the corresponding QPO can be found, and its oscillation frequency is of the order of $1/t_{\rm adv}$. Therefore, we can calculate $$\begin{aligned}
t_{\rm adv}=\int_{x_s}^{x_{\rm in}}\frac{dx}{v}
\label{ts}\end{aligned}$$ where, $x_s$ and $x_{\rm in}$ are the location of shock and inner edge of the disk respectively. In our case, $x_{\rm in}$ is the outer radius of the compact object.
Therefore, to calculate the QPO frequency, detailed knowledge of the thermodynamic character of the disk from the shock to the surface of the compact object is essential. Once this is known for a particular accretion flow around a compact object, the corresponding QPO frequency, which is basically the HF, can be calculated. Mukhopadhyay (2002b) already indicated that the kilohertz QPO can be calculated from the accretion disk model around a slowly rotating neutron star. Here we consider a rapidly rotating compact star, which results in a shift of the shock location with respect to the non-rotating case, and calculate the QPO frequency for different viscosities.
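As a rough numerical illustration of eq. (\[ts\]), the sketch below converts a post-shock velocity profile into $\nu_h \sim 1/t_{\rm adv}$. The profile used here is made up, only loosely motivated by the few per cent of $c$ suggested by Fig. 1; it is not the solution of eqs. (\[diskeq\]), so the printed number is indicative only.

```python
import numpy as np

G = 6.674e-8        # cgs
c = 2.998e10        # cm/s
M_sun = 1.989e33    # g

def hf_qpo_from_profile(x, v, M_solar):
    """Higher QPO frequency ~ 1/t_adv, with t_adv = int dx/|v| from the stellar
    surface to the shock; x in units of GM/c^2, v in units of c."""
    r_g = G * M_solar * M_sun / c**2                 # length unit in cm
    t_adv = np.trapz(1.0 / np.abs(v), x) * r_g / c   # seconds
    return 1.0 / t_adv

# Illustrative (assumed) post-shock profile between the stellar surface (x ~ 4)
# and the shock (x ~ 9), with |v| a few per cent of c:
x = np.linspace(4.0, 9.0, 200)
v = 0.02 + 0.03 * (x - 4.0) / 5.0
print(hf_qpo_from_profile(x, v, 1.2))   # ~ 1 kHz for these numbers
```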
MM proposed a couple of pseudo-Newtonian potentials to describe the time varying properties of the Keplerian accretion disk. On the other hand, LF may arise due to the Keplerian motion of the accreting fluid. One of the potentials proposed by MM, that can describe the Keplerian angular frequency of the accretion flow is given as $$\begin{aligned}
2\pi\nu_K=\Omega_K=\frac{1}{x^{3/2}}\left[1-\left(\frac{x_{\rm ms}}{x}\right)+
\left(\frac{x_{\rm ms}}{x}\right)^2\right]^{1/2}.
\label{lbo}\end{aligned}$$ Here $x_{\rm ms}$ is the radius of marginally stable orbit, for Kerr geometry that was given by Bardeen (1973) as $$\begin{aligned}
\nonumber
x_{\rm ms} &=& 3 + Z_2 \mp[(3-Z_1)(3+Z_1+2Z_2)]^{1/2} \\
\nonumber
Z_1 & =& 1 +(1-J^2)^{1/3}[(1+J)^{1/3}+(1-J)^{1/3}],\\
\nonumber
Z_2 & = & (3J^2+Z_1^2)^{1/2},\end{aligned}$$ where the ’-’ (’+’) sign is for the co-rotating (counter-rotating) flow and $J$ (which varies from $0-1$) is the specific angular momentum of compact object. In our present work, we propose $\nu_K$ as the lower kilohertz frequency (LF) that varies with the angular frequency of the compact object. If we know the angular frequency of the compact object and the Keplerian radius of the corresponding accretion disk, LF can be calculated for that particular candidate.
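Equation (\[lbo\]) together with the Bardeen (1973) expression for $x_{\rm ms}$ is straightforward to evaluate numerically. The following sketch (with function names chosen here for illustration) converts a Keplerian radius in km into $\nu_K$ in Hz and roughly reproduces the 4U 1636$-$53 entry of Table 1 below.

```python
import numpy as np

G = 6.674e-8        # cgs
c = 2.998e10        # cm/s
M_sun = 1.989e33    # g

def x_ms(J, corotating=True):
    """Marginally stable orbit in units of GM/c^2 (Bardeen 1973)."""
    z1 = 1.0 + (1.0 - J**2) ** (1.0 / 3.0) * ((1.0 + J) ** (1.0 / 3.0)
                                              + (1.0 - J) ** (1.0 / 3.0))
    z2 = np.sqrt(3.0 * J**2 + z1**2)
    sign = -1.0 if corotating else 1.0
    return 3.0 + z2 + sign * np.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

def nu_K_hz(r_k_km, M_solar, J):
    """Lower kHz QPO frequency from Eq. (lbo), for a Keplerian radius r_k in km."""
    r_g_cm = G * M_solar * M_sun / c**2
    x = r_k_km * 1e5 / r_g_cm
    xm = x_ms(J)
    omega = x ** (-1.5) * np.sqrt(1.0 - xm / x + (xm / x) ** 2)   # units of c^3/GM
    return omega * c / r_g_cm / (2.0 * np.pi)

# 4U 1636-53 row of Table 1: M = 1.18 M_sun, r_k = 18 km, J = 0.2877
print(nu_K_hz(18.0, 1.18, 0.2877))   # ~ 7.1e2 Hz, close to the quoted 719.7 Hz
```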
QPO frequencies
================
The observed range of frequencies of the QPOs for different candidates is quite large. However, there exists a relation between the LF and the HF which relates them to the intrinsic characteristics of the central compact object, like the mass-radius relation, spin, etc. Also, it has been seen that for a particular candidate there is a variability in the frequency range for different times of observation, while the correlation between the HF and the LF remains constant. We exploit these facts in our present interpretation.
Fig. 1: Variation of accretion speed in units of the speed of light as a function of radial coordinate around 4U 1636-53. Solid, dotted and dashed curves are respectively for (i) $\alpha=0, f=0$, (ii) $\alpha=0.02, f=0.2$, (iii) $\alpha=0.05, f=0.5$. Other parameters are $J=0.2877, \lambda_c=3, \dot{M}=1, \beta=0.03$.
--------------- ---------- ----- ------- ------- -------- ------------- ------- --------- --------- -------- --------
source $\alpha$ $f$ $r_s$ (km) $r_k$ (km) $J$ M ($M_\odot$) R (km) $\nu_h$ (Hz) $\nu_K$ (Hz) HF (Hz) LF (Hz)
0.05 0.5 15.37 18 0.2877 1.18 7.114 1030.8 719.7
4U 1636$-$53 0.02 0.2 14.27 19 0.2877 1.32 7.23 1019.2 705.2 1030 700
0 0 13 17 0.2877 0.991 6.828 1005.2 715.2
0.05 0.5 15.37 15.2 0.2585 1.106 7.013 1155.4 907.6
KS 1731$-$260 0.02 0.2 14.27 16 0.2585 1.23 7.16 1174 892.6 1159 898
0 0 13 14.5 0.2585 0.893 6.64 1151.9 863.1
--------------- ---------- ----- ------- ------- -------- ------------- ------- --------- --------- -------- --------
Our present consideration will mainly be focussed upon the QPOs of 4U 1636$-$53 and KS 1731$-$260, which are among the fast rotating compact stars, having angular frequencies $\sim 582$ Hz ($J=0.2877$) and $\sim 524$ Hz ($J=0.2585$) respectively. Both these stars have displayed long lasting superbursts. Long lasting superbursts find a natural explanation in the diquark surface pairing after these pairs are broken (Sinha et al. 2002). Thus, it is justified to invoke the realistic strange star model for these stars, and our inputs, namely the star mass and radius, are taken from this model of Dey et al. (1998). We estimate the LF from the stellar frequency and the Keplerian radius of the disk. As the HF is directly related to the accreting matter speed, in Fig. 1 we show the variation of accretion speed, for various viscosities of the accretion disk, around 4U 1636$-$53. As the angular frequency of KS 1731$-$260 is close to that of 4U 1636$-$53, the profile of matter speed around KS 1731$-$260 does not differ significantly from that around 4U 1636$-$53 and is not shown. Obviously, we consider those cases where a shock does form in the accretion disk, which is responsible for the HF. In the case of inviscid flow, there is only a single shock formation in the accretion disk, while for the viscous flow the shock is formed twice; the shock locations are listed in Table 1. Now, following (\[ts\]) and (\[lbo\]), we can calculate the HF and LF for various physical parameters of the disk and compact objects. The inner boundary condition of the model decelerates the infalling matter close to the stellar surface, and hence the matter loses its energy. This in turn reduces the temperature of the falling matter considerably, and hence the contribution of the energy transfer to the surface burst phenomena (Kuulkers 2002) is negligible.
The same table shows that values of $\alpha$ varied from 0 to 0.05 reproduce admissible values of the QPO frequencies. At this point, it is necessary to mention that the mass-radius relation of the central compact source also plays a vital role in determining the frequencies. Compact objects with a hard surface are generally considered to be neutron stars, but in our study we find that the mass-radius relations obtained from neutron star equations of state (EOS) are not satisfactory enough to reproduce the observed QPOs: the stars need to be more compact, and hence we had to look for an alternative to the conventional neutron star EOS. As already hinted, recent developments in the theory of strange stars and their EOS provide this alternative. We choose the EOS (SS1 as in Li et al. 1999) for a compact strange star adapted from the model of Dey et al. (1998). In Table 1, we list the values of the QPO frequencies for both candidates, 4U 1636$-$53 and KS 1731$-$260, for different choices of the parameters. The corresponding chosen mass and radius of the star are all in the ‘allowed range’ of the model of Dey et al. (1998). The $\nu_h$ values are calculated from the fluid dynamical model and compared with the observed HF; the $\nu_K$ values are calculated theoretically and compared with the observed LF. It is very exciting to see that our calculated $\nu_h$ and $\nu_K$ are very close to the observed HF and LF respectively. The results not only reflect the success of the model, but also reinforce the claim of the existence of strange stars.
If the stellar spin frequency is taken to be half of the assumed values (Titarchuk 2003), the change of $\nu_K$ is negligible ($\ll1\%$; eq. \[lbo\]). In this situation the matter will adjust itself in the disk at the cost of changing other physical parameters, namely the angular momentum ($\lambda$), the sonic points, etc. (Mukhopadhyay 2003), so as to keep the shock location, and hence $\nu_h$, unchanged, at least for $|J|\leq 0.3$. Similarly, in the presence of a moderate magnetic field the shock can form at the same location for different choices of $\lambda$, $\alpha$, $s$, etc.
Summary
=======
It is beyond doubt that the kilohertz QPOs originate in the accretion disk around a compact object. We model here the origin of the kilohertz QPOs in the light of a fluid dynamical calculation of the inflowing matter in the accretion disk. It is perhaps the first time that such a theoretical calculation has been presented which matches the observational results to a very satisfactory degree. Although we have used here two of the fastest rotating candidates, 4U 1636$-$53 and KS 1731$-$260, to compare our results and also to identify them as strange stars rather than conventional neutron stars, a detailed study of all the other candidates within this model is necessary. The choice of the two above mentioned stars was also motivated by the fact that they have recently displayed very long lasting superbursts (Sinha et al. 2002).
Of the QPO pair, the higher frequency is directly related to the accretion flow around the compact object. If a shock is formed in the accretion disk and the advective time-scale from the shock location to the stellar surface is of the same order as the cooling time-scale, we immediately expect the higher kHz QPO. In earlier works, theoretical calculations based on this idea had been carried out for the black hole candidates GS 339$-$4 and GS 1124$-$68 (MSC). In the present letter, a similar procedure is applied to other stellar candidates, with a modification to include the rotation of the compact objects. The results lead to the identification of these candidates as strange stars. The lower QPO frequency arises when we consider the Keplerian motion of the accreting fluid. The most important aspect of our work is that we have not made use of any ‘toy model’ to describe the QPOs; rather, they arise naturally from the hydrodynamical calculations.
BM acknowledges a discussion with Sandip K. Chakrabarti three years back.
Bardeen, J. M., 1973, in Black Holes, Les Houches 1972 (France), ed. B. & C. DeWitt (New York: Gordon & Breach), 215
Chakrabarti, S. K., 1989, [*ApJ*]{}, [**347**]{}, 365
Chakrabarti, S. K., 1996, [*ApJ*]{}, [**464**]{}, 664
Chakrabarti, S. K., & Sahu, S., 1997, [*A&A*]{}, [**323**]{}, 382
Cox, J., & Giuli, R., 1968, in Principles of Stellar Structure (New York: Gordon & Breach)
Dey, M., Bombaci, I., Dey, J., Ray, S., & Samanta, B. C., 1998, [*Phys. Lett. B*]{}, [**438**]{}, 123; 1999, [*Addendum B*]{}, [**447**]{}, 352; 1999, [*Indian J. Phys.*]{}, [**73B**]{}, 377
Jonker, P. G., Méndez, M., & van der Klis, M., 2000, [*ApJ*]{}, [**540**]{}, L29
Jonker, P. G., Méndez, M., & van der Klis, M., 2002, [*MNRAS*]{}, [**336**]{}, L1; JMK
Kuulkers, E., 2002, [*A&A*]{}, [**383**]{}, L5
Lamb, F. K., & Miller, M. C., 2001, [*ApJ*]{}, [**554**]{}, 1210
Li, X. D., Ray, S., Dey, J., Dey, M., & Bombaci, I., 1999, [*ApJ*]{}, [**527**]{}, L51
Lu, J.-F., & Yuan, F., 1997, [*PASJ*]{}, [**49**]{}, 525
Matsumoto, R., Kato, S., Fukue, J., & Okazaki, A., 1984, [*PASJ*]{}, [**36**]{}, 71
Méndez, M., van der Klis, M., & van Paradijs, J., 1998, [*ApJ*]{}, [**506**]{}, L117
Miller, M. C., Lamb, F. K., & Psaltis, D., 1998, [*ApJ*]{}, [**508**]{}, 791
Miller, M. C., 1999, [*BAAS*]{}, [**31**]{}, 904; 1999, [*AAS*]{}, [**194**]{}, 52.13
Molteni, D., Lanzafame, G., & Chakrabarti, S. K., 1994, [*ApJ*]{}, [**425**]{}, 161
Molteni, D., Ryu, D., & Chakrabarti, S. K., 1996a, [*ApJ*]{}, [**470**]{}, 460
Molteni, D., Sponholz, H., & Chakrabarti, S. K., 1996b, [*ApJ*]{}, [**457**]{}, 805; MSC
Mukhopadhyay, B., 2002a, [*ApJ*]{}, [**581**]{}, 427
Mukhopadhyay, B., 2002b, [*IJMPD*]{}, [**11**]{}, 1305
Mukhopadhyay, B., 2003, [*ApJ*]{}, [**586**]{}, N2 (to appear); astro-ph/0212186; ApJ preprint doi:10.1086/367830427
Mukhopadhyay, B., & Misra, R., 2003, [*ApJ*]{}, [**582**]{}, 347; MM
Nabuta, K., & Hanawa, T., 1994, [*PASJ*]{}, [**46**]{}, 257
Osherovich, V., & Titarchuk, L., 1999, [*ApJ*]{}, [**522**]{}, L113
Shakura, N., & Sunyaev, R., 1973, [*A&A*]{}, [**24**]{}, 337
Sinha, M., Dey, M., Ray, S., & Dey, J., 2002, [*MNRAS*]{}, [**337**]{}, 1368
Sponholz, H., & Molteni, D., 1994, [*MNRAS*]{}, [**271**]{}, 233
Strohmayer, T. E., Zhang, W., Swank, J. H., Smale, A., Titarchuk, L., & Day, C., 1996, [*ApJ*]{}, [**469**]{}, L9
Strohmayer, T. E., Zhang, W., Swank, J. H., White, N. E., & Lapidus, I., 1998, [*ApJ*]{}, [**498**]{}, L135
Strohmayer, T. E., 2001, [*AdSpR*]{}, [**28**]{}, 511
Titarchuk, L., 2003, [*ApJ*]{} (to appear); astro-ph/0211575
Titarchuk, L., & Osherovich, V., 1999, [*ApJ*]{}, [**518**]{}, L95
Wijnands, R. A. D., van der Klis, M., van Paradijs, J., Lewin, W. H. G., Lamb, F. K., Vaughan, B., & Kuulkers, E., 1997, [*ApJ*]{}, [**479**]{}, L141
Yang, R., & Kafatos, M., 1995, [*A&A*]{}, [**295**]{}, 238
Zhang, W., Lapidus, I., White, N. E., & Titarchuk, L., 1996a, [*ApJ*]{}, [**469**]{}, L17
Zhang, W., Lapidus, I., White, N. E., & Titarchuk, L., 1996b, [*ApJ*]{}, [**473**]{}, L135
---
abstract: 'In several families of iron-based superconducting materials, a d-wave pairing instability may compete with the leading s-wave instability. Here we show that when both states have comparable free energies, superconducting and nematic degrees of freedom are strongly coupled. While nematic order causes a sharp non-analytic increase in $T_{c}$, nematic fluctuations can change the character of the s-wave to d-wave transition, favoring an intermediate state that does not break time-reversal symmetry but does break tetragonal symmetry. The coupling between superconductivity and nematicity is also manifested in the strong softening of the shear modulus across the superconducting transition. Our results show that nematicity can be used as a diagnostic tool to search for unconventional pairing states in iron pnictides and chalcogenides.'
author:
- 'Rafael M. Fernandes'
- 'Andrew J. Millis'
title: 'Nematicity as a probe of superconducting pairing in iron-based superconductors'
---
Two of the main themes in the current studies of iron-based superconductors are the possibility of unconventional forms of superconducting (SC) pairing [@magnetic] (most likely mediated by spin fluctuations [@reviews_pairing]) and the importance of electronic nematic degrees of freedom [@Fisher10; @ZXshen11; @Matsuda12; @Fisher12; @Fernandes12]. Pairing interactions mediated by spin fluctuations promote both $s^{+-}$ and d-wave superconducting instabilities, with the former typically winning over the latter [@Kuroki09; @Graser10; @Maiti11; @Thomale11; @DHLee13]. The same spin fluctuations [@Fernandes12], possibly combined with orbital degrees of freedom [@w_ku10; @Devereaux10; @Phillips12; @Kontani12], can give rise to an emergent electronically-driven breaking of rotational symmetry [@Kivelson; @Sachdev; @shear_modulus], often referred to as nematic order [@Fradkin_review]. The interplay between $s^{+-}$ and d-wave superconductivity has been extensively studied [@Kuroki09; @Graser10; @Maiti11; @CWu09; @Thomale11; @Maiti12; @Fernandes13] as has the interplay between $s^{+-}$ and nematic order [@Nandi10; @Moon12; @Fernandes_SUST; @Fernandes_arxiv13], but the coupling of all three seems not to have previously been considered. Here we show that such a coupling can have dramatic effects, qualitatively changing the phase diagram, increasing the SC transition temperature $T_{c}$, and helping to distinguish an $s$-$d$ competition from other proposed phases.
![Schematic Fermi surfaces of three different systems where competing $s{}^{+-}$ and d-wave instabilities have been proposed [@CWu09; @Fernandes13; @Maiti12; @s_plus_id_Khodas; @s_plus_id_Maier; @s_plus_id_Thomale]. Thick/red (thin/blue) lines denote electron (hole) pockets. (a) In $\mathrm{Ba(Fe_{1-x}Mn_{x})_{2}As_{2}}$, the $s^{+-}$ state arises from $\left(\pi,0\right)/\left(0,\pi\right)$ stripe-type fluctuations, whereas the d-wave state comes from $\left(\pi,\pi\right)$ Neel-type fluctuations [@Fernandes13]. (b) In $A\mathrm{_{1-y}Fe_{2-x}Se_{2}}$ chalcogenides, a d-wave state appears due to the direct $XY$ interaction [@s_plus_id_Maier], whereas $s^{+-}$ is favored by FeAs hybridization [@s_plus_id_Khodas]. (c) In strongly doped $\mathrm{(Ba_{1-x}K{}_{x})Fe_{2}As_{2}}$, the $s^{+-}$ state appears when small electron pockets emerge with doping, whereas a d-wave state can appear due to the $M$ intra-pocket interaction [@s_plus_id_Thomale; @Maiti12]. \[fig\_Fermi\_surfaces\]](Fermi_surfaces_mod){width="0.85\columnwidth"}
While in most iron-based superconductors the pairing state is believed to be $s^{+-}$, both theoretical and experimental work suggests that a d-wave state may be nearby in free energy or even actually occur. In particular, in $\mathrm{(Ba_{1-x}K{}_{x})Fe_{2}As_{2}}$ and $\mathrm{Ba(Fe_{1-x}Mn_{x})_{2}As_{2}}$ pnictides and $A\mathrm{_{1-y}Fe_{2-x}Se_{2}}$ chalcogenides (see Fig. \[fig\_Fermi\_surfaces\]), calculations indicate that a d-wave state may be tuned by varying the pnictogen height [@s_plus_id_Thomale], the $p-d$ orbital hybridization [@s_plus_id_Khodas], applied pressure [@Balatsky12], and the strength of Neel fluctuations [@Fernandes13]. Near the point where the $s$ and $d$ wave states cross in free energy, a time reversal symmetry breaking (TRSB) $s+id$ state has been predicted [@CWu09; @Stanev10]. The experimental situation is not settled: in $\mathrm{(Ba_{1-x}K{}_{x})Fe_{2}As_{2}}$ the consensus is that at optimal doping $(x\approx0.4)$ the state is fully gapped and of $s$ symmetry [@Shin_nodeless], while in the $x=1$ compound thermal conductivity [@thermoconduct_KFe2As2] and ARPES measurements [@ARPES_KFe2As2] favor respectively a d-wave and a nodal $s^{+-}$ state. In $A\mathrm{_{1-y}Fe_{2-x}Se_{2}}$, inelastic neutron scattering [@Keimer_Fe2Se2] favors a d-wave state whereas ARPES indicates a nodeless s-wave state [@ARPES_Fe2Se2]. In the hole-doped $\mathrm{Ba(Fe_{1-x}Mn_{x})_{2}As_{2}}$, neutron scattering finds both Neel and stripe type magnetic fluctuations [@Mn_neutron] – which favor d-wave and s-wave states, respectively – but no superconductivity has been observed. Raman scattering [@raman_mode] in some of these materials indicates the existence of a Bardasis-Schrieffer mode, suggesting the presence of two competing SC instabilities. The unsettled experimental situation along with the compelling theoretical reasons to expect a proximal d-wave state motivates a more detailed examination of the physics associated with a change from $s$ to $d$-symmetry.
The change from $s^{+-}$ to d-wave superconductivity in the absence of nematicity [@CWu09; @Stanev10] and the interplay between nematicity and a single SC order parameter [@Nandi10; @Moon12; @Fernandes_SUST] have been studied. On general grounds, one expects that a single superconducting order parameter $\Delta$ couples to a nematic order parameter $\varphi$ via the biquadratic term $\Delta^{2}\varphi^{2}$ in the free energy [@Fernandes_arxiv13]. This coupling leads to a suppression of superconductivity in the presence of nematicity and vice-versa, as well as to a hardening of the shear modulus below $T_{c}$. These features have been reported in the $\mathrm{Ba(Fe_{1-x}Co_{x})_{2}As_{2}}$ materials [@Nandi10; @shear_modulus].
The key new aspect of our analysis is that if both $s$ and $d$-symmetry superconductivity are important, then the free energy will contain also a tri-linear term
$$F_{\mathrm{SC-nem}}\propto\varphi\Delta_{s}\Delta_{d}\cos\theta\label{coupling}$$
connecting the s-wave, d-wave, and nematic order parameters (here $\theta$ is the relative phase of the two SC order parameters). As we shall show this coupling implies that
- nematic order leads to an enhancement of the SC transition temperature;
- superconductivity can lead to the appearance of a nematic phase;
- an $s+d$ symmetry phase (similar to the one proposed in Ref. [@Livanas12]) or a first-order transition can separate the pure $s^{+-}$ and d-wave states;
- a softening of the shear modulus below $T_{c}$ is an experimental signature of proximity to the regime where $s^{+-}$ and $d$-wave SC states are degenerate.
These results are robust and do not rely on any specific shape of the Fermi surface, as they follow from a general Ginzburg-Landau analysis based on a free energy that respects the gauge and rotational symmetries of the system: $$\begin{aligned}
F & = & F_{\mathrm{nem}}\left(\varphi^{2}\right)+\frac{t_{s}}{2}\Delta_{s}^{2}+\frac{t_{d}}{2}\Delta_{d}^{2}+\frac{\beta_{s}}{4}\Delta_{s}^{4}+\frac{\beta_{d}}{4}\Delta_{d}^{4}\nonumber \\
& + & \frac{1}{2}\Delta_{s}^{2}\Delta_{d}^{2}\left(\beta_{sd}+\alpha\cos2\theta\right)+\lambda\varphi\Delta_{s}\Delta_{d}\cos\theta\label{F}\end{aligned}$$
Here $F_{\mathrm{nem}}$ is the free energy of the pure nematic phase, $t_{j}=a_{j}\left(T-T_{c,j}\right)$ with $a_{j}>0$ gives the distance to the SC transition temperatures in the $j=s^{+-},d$ channels, and $\lambda$, $\alpha$, and the $\beta_{i}$ are coupling constants. Note that the bi-quadratic couplings $\Delta_{s/d}^{2}\varphi^{2}$ are subleading near the s-d transition and are not written explicitly here. In the materials discussed above, $T_{c,s}$ and $T_{c,d}$ are tuned by the doping concentration $x$ via different mechanisms: in $\mathrm{Ba(Fe_{1-x}Mn_{x})_{2}As_{2}}$ (Fig. \[fig\_Fermi\_surfaces\]a), increasing $x$ leads to stronger Neel fluctuations which favor the d-wave state [@Fernandes13]. In $A\mathrm{_{1-y}Fe_{2-x}Se_{2}}$ (Fig. \[fig\_Fermi\_surfaces\]b), changing $x$ modifies the Fe-As hybridization, which in turn favors either s-wave or d-wave [@s_plus_id_Khodas]. In $\mathrm{(Ba_{1-x}K{}_{x})Fe_{2}As_{2}}$ (Fig. \[fig\_Fermi\_surfaces\]c), increasing $x$ gives rise to a large hole pocket at the $M$ point, which favors a d-wave state [@s_plus_id_Thomale; @Maiti12]. For illustration, in the Supplementary Material we derive this free energy from a BCS model appropriate for the system in Fig. \[fig\_Fermi\_surfaces\]a, but we emphasize that our conclusions are more general.
In the absence of significant nematicity, we find $\alpha>0$, implying that the free energy is minimized by setting $\theta=\pi/2$. We also find that $\left(\beta_{sd}-\left|\alpha\right|\right)^{2}<\beta_{s}\beta_{d}$, implying that the s-wave and d-wave states can be simultaneously present [@FernandesPRB10]. In this case, near the degeneracy point $T_{c,s}=T_{c,d}=T^{*}$, the two order parameters enter in the form $s+id$, breaking time-reversal symmetry. Note that microscopic models also found $s+id$ states in systems with the Fermi surfaces of Figs. \[fig\_Fermi\_surfaces\]b and \[fig\_Fermi\_surfaces\]c [@s_plus_id_Khodas; @s_plus_id_Thomale]. The resulting phase diagram in the absence of nematicity is shown schematically in panel (a) of Fig. \[fig\_phasediagram\].
![Schematic phase diagrams as function of temperature ($T$) and doping ($x$) for the interplay between $s^{+-}$-wave and d-wave superconductivity in iron pnictide materials. Dotted (solid) lines denote second (first) order phase transitions. Panel (a): no nematic order and weak nematic fluctuations ($\chi_{\mathrm{nem}}<2\alpha/\lambda^{2}$). The s-wave and d-wave states are separated by an intermediate time-reversal symmetry-breaking (TRSB) $s+id$ state. Panel (b): pre-existing nematic order. $T_{c}$ is enhanced with respect to the tetragonal case (dashed line), and the superconducting order parameter is characterized by the real combination $s+d$ and evolves smoothly with $x$ with no TRSB. Panel (c): no nematic order, but larger nematic fluctuations ($2\alpha<\lambda^{2}\chi_{\mathrm{nem}}<\beta_{sd}+\alpha+\sqrt{\beta_{s}\beta_{d}}$). The coexistence region is enhanced but the intermediate state is of $s+d$ character, spontaneously breaking rotational but not time reversal symmetry. Panel (d): no nematic order, but even larger nematic fluctuations ($\lambda^{2}\chi_{\mathrm{nem}}>\beta_{sd}+\alpha+\sqrt{\beta_{s}\beta_{d}}$). The s-wave to d-wave transition becomes first-order. \[fig\_phasediagram\] ](phase_diagrams_s_d){width="0.95\columnwidth"}
Including nematicity leads to significant changes. Consider first the case in which a nematic phase transition occurs at a temperature far above the SC transition temperature. In this case, extremizing $F_{\mathrm{nem}}$ leads to a non-zero expectation value of the nematic order parameter $\left\langle \varphi\right\rangle =\varphi_{0}$ so the SC free energy contains an effective bilinear term $\lambda\varphi_{0}\Delta_{s}\Delta_{d}\cos\theta$. Diagonalizing the quadratic part of the free energy reveals that the energy minimum is at $\theta=0$ so the SC order parameter becomes a real admixture of $s$ and d-wave gaps, evolving smoothly across the degeneracy point (see Supplementary Material). $T_{c}$, determined from the solution of $t_{s}t_{d}=\lambda^{2}\varphi_{0}^{2}$, is enhanced relative to its tetragonal value $T_{c,s/d}$, with the enhancement being largest at the degeneracy point $T_{c,s}=T_{c,d}=T^{*}$ where we find the non-analytic behavior $T_{c}-T^{*}\propto\left|\varphi_{0}\right|$ and the maximal admixture between s-wave and d-wave states. Away from this point, $T_{c}-T_{c,s/d}\propto\varphi_{0}^{2}$. Figure \[fig\_phasediagram\](b) shows the phase diagram corresponding to this situation. We note that if the coupling $\lambda$ is not too strong, an $s+id$ phase may appear at lower temperatures [@s_plus_is].
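This behavior can be checked with a minimal numerical sketch: the snippet below simply takes the largest root of $t_{s}t_{d}=\lambda^{2}\varphi_{0}^{2}$ with $t_{j}=a_{j}(T-T_{c,j})$, using the illustrative (not material-specific) choice $a_{s}=a_{d}=1$, and exhibits the linear-in-$|\varphi_{0}|$ enhancement at the degeneracy point and the quadratic behavior away from it.

```python
import numpy as np

def Tc_nematic(Tcs, Tcd, lam, phi0, a_s=1.0, a_d=1.0):
    """Largest root of t_s t_d = (lam*phi0)^2 with t_j = a_j (T - T_{c,j});
    the slopes a_s = a_d = 1 are an illustrative assumption."""
    mean = 0.5 * (Tcs + Tcd)
    split = 0.5 * (Tcs - Tcd)
    return mean + np.sqrt(split**2 + (lam * phi0)**2 / (a_s * a_d))

Tstar, lam = 1.0, 0.33
for phi0 in (0.0, 0.05, 0.1, 0.2):
    dT_deg  = Tc_nematic(Tstar, Tstar, lam, phi0) - Tstar           # ~ |phi0|
    dT_away = Tc_nematic(Tstar, 0.8 * Tstar, lam, phi0) - Tstar     # ~ phi0^2
    print(phi0, dT_deg, dT_away)
```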
We now consider that nematic order is absent but nematic fluctuations are important. In this case, we approximate $F_{\mathrm{nem}}=\frac{1}{2}\chi_{\mathrm{nem}}^{-1}\varphi^{2}$, where $\chi_{\mathrm{nem}}$ is the nematic susceptibility which would diverge at the nematic transition. Minimizing with respect to the nematic order parameter, we find $\varphi=-\lambda\chi_{\mathrm{nem}}\Delta_{s}\Delta_{d}\cos\theta$. Substituting back into Eq. (\[F\]) yields:
$$\begin{aligned}
\tilde{F} & = & \frac{t_{s}}{2}\Delta_{s}^{2}+\frac{t_{d}}{2}\Delta_{d}^{2}+\frac{\beta_{s}}{4}\Delta_{s}^{4}+\frac{\beta_{d}}{4}\Delta_{d}^{4}\nonumber \\
& + & \frac{1}{2}\Delta_{s}^{2}\Delta_{d}^{2}\left(\tilde{\beta}_{sd}+\tilde{\alpha}\cos2\theta\right)\label{F_eff}\end{aligned}$$
with $\tilde{\alpha}=\alpha-\frac{1}{2}\lambda^{2}\chi_{\mathrm{nem}}$ and $\tilde{\beta}_{sd}=\beta_{sd}-\frac{1}{2}\lambda^{2}\chi_{\mathrm{nem}}$. For weak nematic fluctuations, $\chi_{\mathrm{nem}}<2\alpha/\lambda^{2}$, $\tilde{\alpha}$ remains positive and the relative phase remains at $\theta=\pi/2$ so that the phase diagram retains the form displayed in Fig. \[fig\_phasediagram\](a), with $\varphi=0$.
As the nematic instability is approached, $\chi_{\mathrm{nem}}$ increases and eventually $\tilde{\alpha}$ changes sign so that the energy minimum shifts from $\theta=\pi/2$ to $\theta=0,\pi$. Note that the BCS calculations, which indicate that $\alpha<\beta_{sd}$, imply that the sign change in $\tilde{\alpha}$ happens before the condition for a second order phase transition is violated. Consequently, the SC state takes the real form $s\pm d$ and the nematic order parameter acquires a non-vanishing expectation value $\varphi=\pm\lambda\chi_{\mathrm{nem}}\Delta_{s}\Delta_{d}$ indicating a spontaneous breaking of tetragonal symmetry as shown in Fig. \[fig\_phasediagram\](c). Note that an $s\pm d$ state was also found in the $T=0$ numerical results of Ref. [@Livanas12]. As the nematic susceptibility further increases, $\tilde{\beta}_{sd}$ changes sign and eventually the magnitude of $\left|\tilde{\beta}_{sd}-\tilde{\alpha}\right|$ becomes large enough that the transition between $s$ and $d$ becomes first order as shown in Fig. \[fig\_phasediagram\](d). An estimate for the critical nematic susceptibility above which $s\pm d$ emerges reveals that it corresponds to moderate fluctuations, which can reasonably be expected in the real materials (see Supplementary Material). In this regard, note that shear modulus measurements have revealed the presence of significant nematic fluctuations in the phase diagrams of 122 compounds [@shear_modulus; @Yoshizawa12].
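The three regimes discussed here and in Fig. \[fig\_phasediagram\] follow directly from $\tilde{\alpha}$ and $\tilde{\beta}_{sd}$; a schematic classification, using the threshold values quoted in the caption of Fig. \[fig\_phasediagram\], could be coded as below (Python is used only as an illustration; the example coefficients are the BCS values at the degeneracy point derived in the Supplementary Material).

```python
import numpy as np

def intermediate_state(chi_nem, alpha, beta_s, beta_d, beta_sd, lam):
    """Character of the s-to-d crossover for a given nematic susceptibility,
    using the thresholds 2*alpha and beta_sd + alpha + sqrt(beta_s*beta_d)."""
    x = lam**2 * chi_nem
    if x < 2.0 * alpha:
        return "s+id: breaks time reversal, tetragonal symmetry preserved"
    if x < beta_sd + alpha + np.sqrt(beta_s * beta_d):
        return "s+-d: real admixture, breaks tetragonal symmetry"
    return "first-order s to d transition"

# BCS coefficients at the degeneracy point (arbitrary units, beta = 1):
beta, alpha, beta_sd = 1.0, 1.0 / 3.0, 2.0 / 3.0
for chi in (1.0, 10.0, 30.0):
    print(chi, intermediate_state(chi, alpha, beta, beta, beta_sd, lam=0.33))
```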
The analysis so far has been based only on symmetry arguments, but it is of interest to demonstrate a mechanism and provide an estimate for the magnitude of the effect. We present a spin fluctuation Eliashberg calculation following Ref. [@Fernandes13] but including nematicity, for the system whose Fermi surface is displayed in Fig. \[fig\_Fermi\_surfaces\](a), with hole pockets at the center of the Brillouin zone $\Gamma=\left(0,0\right)$ and electron pockets centered at $X=\left(\pi,0\right)$ and $Y=\left(0,\pi\right)$. Stripe spin fluctuations (peaked at $\mathbf{Q}_{X}=\left(\pi,0\right)$ and $\mathbf{Q}_{Y}=\left(0,\pi\right)$) induce repulsive $\Gamma-X$ and $\Gamma-Y$ interactions that favor an $s^{+-}$ state, whereas Neel fluctuations (peaked at $\mathbf{Q}_{N}=\left(\pi,\pi\right)$) induce a repulsive $X-Y$ interaction that favors a d-wave state [@Fernandes13].
In the Eliashberg formalism, the pairing interactions are determined by the dynamic magnetic susceptibilities $\chi_{i}\left(\mathbf{Q}_{i}+\mathbf{q},\omega\right)$ with $i=X,Y,N$ (see Supplementary Material for more details). Neutron scattering experiments reveal that all of the relevant spin fluctuations are overdamped [@Mn_neutron], $\chi_{i}^{-1}\left(\mathbf{Q}_{i}+\mathbf{q},\omega\right)=\xi_{i}^{-2}+q^{2}-i\omega\gamma_{i}^{-1}$ and are characterized by two parameters: the magnetic correlation length $\xi_{i}$ and the Landau damping $\gamma_{i}$. As we have previously shown [@Fernandes13], in the tetragonal phase where $\xi_{X}=\xi_{Y}=\xi_{S}$ the system undergoes a transition from an $s^{+-}$ to a d-wave SC state as the Neel correlation length $\xi_{N}$ increases from zero (see Fig. \[fig\_Eliashberg\](a)).
In the presence of long-range nematic order, tetragonal symmetry is broken and the two stripe-type correlation lengths $\xi_{X}$ and $\xi_{Y}$ become different, with $\varphi=\ln\left(\xi_{X}/\xi_{Y}\right)$ [@Fernandes12], implying that the pairing interaction is different between the $\Gamma-X$ and $\Gamma-Y$ pockets. In Fig. \[fig\_Eliashberg\](a), we show the numerically calculated $T_{c}$ in the nematic phase. We observe a behavior similar to the schematic phase diagram of Fig. \[fig\_phasediagram\](b), with the maximum relative increase of $T_{c}$ at the s-wave/d-wave degeneracy point $\xi_{N}\approx0.33\xi_{S}$. Far from this point, $T_{c}$ decreases as $\varphi^{2}$ for increasing nematic order, reflecting the usual competing bi-quadratic coupling $\varphi^{2}\Delta_{s}^{2}$ between orders that break different symmetries (Fig. \[fig\_Eliashberg\]b). As the degeneracy point is approached, the d-wave instability becomes closer in energy to the $s^{+-}$ one, and $T_{c}$ starts to increase with increasing nematic order as $\varphi^{2}$. In the vicinity of the degeneracy point, this behavior changes and we observe the increase of $T_{c}$ with $\left|\varphi\right|$ - a signature of the tri-linear coupling (\[coupling\]), as discussed within the Ginzburg-Landau model. From our numerical results, we can estimate the coupling constant $\lambda\approx0.33$, i.e. making $\xi_{X}\approx1.35\xi_{Y}$ leads to a $10\%$ enhancement of the relative transition temperature $\left(T_{c}-T_{c,0}\right)/T_{c,0}$.
![Dependence of $T_{c}$ on the Neel-type ($\xi_{\mathrm{Neel}}$) and stripe-type ($\xi_{\mathrm{stripe}}$) magnetic correlation lengths obtained from Eliashberg calculations as described in the text. Panel (a) shows the evolution of $T_{c}$ (in units of $\gamma_{\mathrm{stripe}}/2\pi$) as function of $\xi_{\mathrm{Neel}}/\xi_{\mathrm{stripe}}$ in the absence (dashed line) and presence of nematic order (solid line, $\varphi=1.0$). Panel (b) presents the variation of $T_{c}$, $\Delta T_{c}=T_{c}-T_{c,0}$, as function of the nematic order parameter $\varphi=\ln\left(\xi_{X}/\xi_{Y}\right)$, for three fixed values of the ratio $\xi_{\mathrm{Neel}}/\xi_{\mathrm{stripe}}$ indicated by the arrows in panel (a): $\xi_{\mathrm{Neel}}/\xi_{\mathrm{stripe}}=0.1$ (dotted-dashed, blue online), $\xi_{\mathrm{Neel}}/\xi_{\mathrm{stripe}}=0.26$ (dashed, green online), and $\xi_{\mathrm{Neel}}/\xi_{\mathrm{stripe}}=0.33$ (solid, red online).\[fig\_Eliashberg\]](Eliashberg_results_mod){width="1\columnwidth"}
Measurements of elastic anomalies across the superconducting transition can also reveal the strength of the tri-linear coupling. The idea, which goes back to the work of Testardi and others on the A-15 materials [@Testardi] and was revisited in the context of the cuprates [@Millis_Rabe], is that within mean field theory, as the temperature is decreased below $T_{c}$, the free energy acquires an additional contribution $$\Delta F=-\frac{1}{2}\frac{\Delta C}{T_{c}}\left(T-T_{c}(\varphi)\right)^{2}\label{Fnew}$$
Here $\Delta C$ is the specific heat jump across the transition. The crucial point is that the dependence of $T_{c}$ on the strain (proportional to $\varphi$) leads to new contributions to the elastic free energy which are singular at $T_{c}$ and proportional to the strain derivatives of $T_{c}$ and to $\Delta C$. Differentiating Eq. \[Fnew\] twice with respect to strain and retaining only the most singular terms at $T_{c}$ gives discontinuities in the shear elastic modulus $C_{66}$ and its first temperature derivative $$\begin{aligned}
\Delta C_{66}\equiv C_{66}(T_{c}^{-})-C_{66}(T_{c}^{+}) & = & -\frac{\Delta C}{T_{c}}\left(\frac{\partial T_{c}}{\partial\varphi}\right)^{2}\label{jump}\\
\Delta\frac{dC_{66}}{dT} & = & \frac{\Delta C}{T_{c}}\frac{\partial^{2}T_{c}}{\partial\varphi^{2}}\label{derivdiscon}\end{aligned}$$
In the nematic phase or at the $s-d$ degeneracy point in Fig. \[fig\_phasediagram\](c), because $T_{c}$ depends linearly on $\varphi$, the elastic modulus exhibits a downwards jump (softening) across $T_{c}$. In the tetragonal phase, $T_{c}$ depends quadratically on $\varphi$. Far from the $s-d$ degeneracy point, the $\varphi^{2}\Delta^{2}$ free energy term discussed in [@Nandi10; @Fernandes_arxiv13] - present in the Eliashberg calculations but not explicitly written in Eq. (\[F\]) - gives a negative $\partial^{2}T_{c}/\partial\varphi^{2}$ (see Fig. \[fig\_Eliashberg\]b). This implies a hardening of $C_{66}$ below $T_{c}$, as observed in optimally doped $\mathrm{Ba(Fe_{1-x}Co_{x})_{2}As_{2}}$ [@shear_modulus; @Yoshizawa12]. However, as the d-wave state is approached, the tri-linear coupling leads to a positive contribution $\lambda^{2}/t_{d}$ to $\partial^{2}T_{c}/\partial\varphi^{2}$ which diverges at the degeneracy point, causing a softening in $C_{66}$. A softening of $C_{66}$ across $T_{c}$ is thus a clear signal of proximity between s-wave and d-wave states.
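Eqs. (\[jump\]) and (\[derivdiscon\]) translate into a two-line estimate; the sketch below merely evaluates them for user-supplied values of $\Delta C/T_{c}$ and the strain derivatives of $T_{c}$ (the numbers in the example are placeholders, not fits to any material).

```python
def elastic_discontinuities(dC_over_Tc, dTc_dphi, d2Tc_dphi2):
    """Jumps at T_c: Delta C66 = -(DeltaC/Tc)(dTc/dphi)^2 and
    Delta(dC66/dT) = (DeltaC/Tc) d^2Tc/dphi^2."""
    return -dC_over_Tc * dTc_dphi**2, dC_over_Tc * d2Tc_dphi2

# Placeholder numbers: a linear Tc(phi), as in the nematic phase or at the s-d
# degeneracy point, gives a downward jump (softening); a negative curvature
# alone gives only a discontinuity in the slope (hardening below T_c).
print(elastic_discontinuities(1.0, 0.3, 0.0))
print(elastic_discontinuities(1.0, 0.0, -0.5))
```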
Compounds to which the considerations of this paper may be relevant include $A\mathrm{_{1-y}Fe_{2-x}Se_{2}}$ chalcogenides, where neutron scattering [@Keimer_Fe2Se2] and ARPES [@ARPES_Fe2Se2] seem to support different pairing states, and $\mathrm{KFe_{2}As_{2}}$, where experiment suggests a change in pairing state with applied pressure [@Taillefer_pressure]. Further, in the optimally doped compound $\mathrm{BaFe_{2}(As_{1-x}P_{x})_{2}}$, recent detwinning experiments found an unexpected enhancement of $T_{c}$ with the applied strain [@Kuo12], as expected if the tri-linear coupling is relevant.
The results here may also help to resolve a controversy concerning the superconducting state of the extremely overdoped pnictide compound $\mathrm{(Ba_{1-x}K{}_{x})Fe_{2}As_{2}}$, which is believed to possess the Fermi surface shown in Fig. \[fig\_Fermi\_surfaces\](c). ARPES experiments [@ARPES_KFe2As2] support a scenario where the SC state evolves from nodeless $s^{+-}$ at optimal doping $x_{\mathrm{opt}}\approx0.4$ towards nodal $s^{+-}$ at $x=1$ (with a possible intermediate TRSB $s+is$ state [@s_plus_is]). Thermal conductivity measurements [@thermoconduct_KFe2As2] support a transition from nodeless $s^{+-}$ at $x_{\mathrm{opt}}$ to d-wave at $x=1$. Calculations [@Thomale11; @Maiti12] indicate that the two states have comparable transition temperatures. The results of this paper indicate that if the second state is d-wave then a structural/nematic “dome”, detectable by x-ray [@Nandi10] or torque magnetometry [@Matsuda12], could appear in the vicinity of the critical $x$. Also, application of a stress field to induce long-range nematic order [@Fisher12] would cause a linear increase in $T_{c}$. A softening of the elastic modulus across the transition would further support a d-wave state.
In summary, our results unveil a unique feature of the interplay between nematicity and SC in iron-based materials. The tri-linear coupling (\[coupling\]) shows that at the same time that the d-wave and s-wave gaps work together as an effective field conjugate to the nematic order parameter, allowing for spontaneous tetragonal symmetry breaking in the superconducting state, nematicity leads to an effective attraction between the two otherwise competing states. This physics can also be expected in other situations where multiple SC instabilities are present, such as the ruthenate $\mathrm{Sr_{2}RuO_{4}}$, where a chiral triplet $p+ip$ state has been proposed, and the consequences for the elastic modulus discontinuities of the tri-linear coupling $\varphi p_{x}p_{y}$ have been discussed [@Sigrist; @Walker02].
*Acknowledgments *We thank A. Chubukov, E. Fradkin, S. Maiti, C. Meingast, J. Schmalian, and L. Taillefer for inspiring discussions. AJM was supported by NSF DMR 1006282.
I. I. Mazin, D. J. Singh, M. D. Johannes, and M. H. Du, Phys. Rev. Lett. **101**, 057003 (2008); A. V. Chubukov, D. V. Efremov and I. Eremin, Phys. Rev. B **78**, 134512 (2008); K. Kuroki, S. Onari, R. Arita, H. Usui, Y. Tanaka, H. Kontani, and H. Aoki, Phys. Rev. Lett. **101**, 087004 (2008); V. Cvetković and Z. Tešanović, Phys. Rev. B **80**, 024512 (2009); J. Zhang, R. Sknepnek, R. M. Fernandes, and J. Schmalian, Phys. Rev. B **79**, 220502(R) (2009); A.F. Kemper, T.A. Maier, S. Graser, H-P. Cheng, P.J. Hirschfeld and D.J. Scalapino, New J. Phys. **12**, 073030 (2010).
P. J. Hirschfeld, M. M. Korshunov, and I. I. Mazin, Rep. Prog. Phys. **74**, 124508 (2011); A. V. Chubukov, Annu. Rev. Cond. Mat. Phys. **3**, 57 (2012).
J.-H. Chu, J. G. Analytis, K. De Greve, P. L. McMahon, Z. Islam, Y. Yamamoto, and I. R. Fisher, Science **329**, 824 (2010).
M. Yi, D. Lu, J.-H. Chu, J. G. Analytis, A. P. Sorini, A. F. Kemper, B. Moritz, S.-K. Mo, R. G. Moore, M. Hashimoto, W. S. Lee, Z. Hussain, T. P. Devereaux, I. R. Fisher, Z.-X. Shen, Proc. Nat. Acad. Sci. 2011 **108**, 6878 (2011).
J.-H. Chu, H.-H. Kuo, J. G. Analytis, and I. R. Fisher, Science **337**, 710 (2012).
S. Kasahara, H. J. Shi, K. Hashimoto, S. Tonegawa, Y. Mizukami, T. Shibauchi, K. Sugimoto, T. Fukuda, T. Terashima, A. H. Nevidomskyy, and Y. Matsuda, Nature **486**, 382 (2012).
R. M. Fernandes, A. V. Chubukov, J. Knolle, I. Eremin, and J. Schmalian, Phys. Rev. B **85**, 024534 (2012).
K. Kuroki, H. Usui, S. Onari, R. Arita, and H. Aoki, Phys. Rev. B **79**, 224511 (2009).
S. Graser, A. F. Kemper, T. A. Maier, H.-P. Cheng, P. J. Hirschfeld, and D. J. Scalapino, Phys. Rev. B **81**, 214503 (2010).
S. Maiti, M. M. Korshunov, T. A. Maier, P. J. Hirschfeld, and A. V. Chubukov, Phys. Rev. B **84**, 224505 (2011); *ibid* Phys. Rev. Lett. **107**, 147002 (2011).
F. Yang, F. Wang, and D.-H. Lee, arXiv:1305.0605
C. C. Lee, W. G. Yin, and W. Ku, Phys. Rev. Lett. **103**, 267001 (2009).
C.-C. Chen, J. Maciejko, A. P. Sorini, B. Moritz, R. R. P. Singh, and T. P. Devereaux, Phys. Rev. B **82**, 100504 (2010).
W.-C. Lee and P. W. Phillips, Phys. Rev. B **86**, 245113 (2012).
S. Onari and H. Kontani, Phys. Rev. Lett. **109**, 137001 (2012).
C. Fang, H. Yao, W.-F. Tsai, J. Hu, and S. A. Kivelson, Phys. Rev. B **77**, 224509 (2008).
C. Xu, M. Muller, and S. Sachdev, Phys. Rev. B **78**, 020501(R) (2008).
R. M. Fernandes, L. H. VanBebber, S. Bhattacharya, P. Chandra, V. Keppens, D. Mandrus, M. A. McGuire, B. C. Sales, A. S. Sefat, and J. Schmalian, Phys. Rev. Lett. **105**, 157003 (2010).
E. Fradkin, S. A. Kivelson, M. J. Lawler, J. P. Eisenstein, and A. P. Mackenzie, Annu. Rev. Condens. Matter Phys. **1**, 153 (2010).
W.-C. Lee, S.-C. Zhang, and C. Wu, Phys. Rev. Lett. **102**, 217002 (2009).
R. Thomale, C. Platt, W. Hanke, J. Hu, and B. A. Bernevig, Phys. Rev. Lett. **107**, 117001 (2011).
S. Maiti, M. M. Korshunov, and A. V. Chubukov, Phys. Rev. B **85**, 014511 (2012).
R. M. Fernandes and A. J. Millis, Phys. Rev. Lett. **110**, 117004 (2013).
S. Nandi, M. G. Kim, A. Kreyssig, R. M. Fernandes, D. K. Pratt, A. Thaler, N. Ni, S. L. Bud’ko, P. C. Canfield, J. Schmalian, R. J. McQueeney, and A. I. Goldman, Phys. Rev. Lett. **104**, 057006 (2010).
E. G. Moon and S. Sachdev, Phys. Rev. B. **85**, 184511 (2012).
R. M. Fernandes and J. Schmalian, Supercond. Sci. Technol. **25**, 084005 (2012).
R. M. Fernandes, S. Maiti, P. Wölfle, and A. V. Chubukov, arXiv:1305.4670
T. A. Maier, P. J. Hirschfeld, and D. J. Scalapino, arXiv:1206.5235.
C. Platt, R. Thomale, C. Honerkamp, S.-C. Zhang, and W. Hanke, Phys. Rev. B **85**, 180502(R) (2012).
M. Khodas and A. V. Chubukov, Phys. Rev. Lett. **108**, 247003 (2012).
T. Das and A. V. Balatsky, arXiv:1208.2468
V. Stanev and Z. Tešanović, Phys. Rev. B **81**, 134522 (2010).
T. Shimojima, F. Sakaguchi, K. Ishizaka, Y. Ishida, T. Kiss, M. Okawa, T. Togashi, C.-T. Chen, S. Watanabe, M. Arita, K. Shimada, H. Namatame, M. Taniguchi, K. Ohgushi, S. Kasahara, T. Terashima, T. Shibauchi, Y. Matsuda, A. Chainani, and S. Shin, Science **332**, 564 (2011).
J.-Ph. Reid, M. A. Tanatar, A. Juneau-Fecteau, R. T. Gordon, S. Rene de Cotret, N. Doiron-Leyraud, T. Saito, H. Fukazawa, Y. Kohori, K. Kihou, C. H. Lee, A. Iyo, H. Eisaki, R. Prozorov, and Louis Taillefer, Phys. Rev. Lett. **109**, 087001 (2012); J.-Ph. Reid, A. Juneau-Fecteau, R. T. Gordon, S. Rene de Cotret, N. Doiron-Leyraud, X. G. Luo, H. Shakeripour, J. Chang, M. A. Tanatar, H. Kim, R. Prozorov, T. Saito, H. Fukazawa, Y. Kohori, K. Kihou, C. H. Lee, A. Iyo, H. Eisaki, B. Shen, H.-H. Wen, and Louis Taillefer, Supercond. Sci. Technol. **25**, 084013 (2012).
K. Okazaki, Y. Ota, Y. Kotani, W. Malaeb, Y. Ishida, T. Shimojima, T. Kiss, S. Watanabe, C.-T. Chen, K. Kihou, C. H. Lee, A. Iyo, H. Eisaki, T. Saito, H. Fukazawa, Y. Kohori, K. Hashimoto, T. Shibauchi, Y. Matsuda, H. Ikeda, H. Miyahara, R. Arita, A. Chainani, and S. Shin, Science **337**, 1314 (2012).
J. T. Park, G. Friemel, Yuan Li, J.-H. Kim, V. Tsurkan, J. Deisenhofer, H.-A. Krug von Nidda, A. Loidl, A. Ivanov, B. Keimer, D. S. Inosov, Phys. Rev. Lett. **107**, 177005 (2011); G. Friemel, J. T. Park, T. A. Maier, V. Tsurkan, Yuan Li, J. Deisenhofer, H.-A. Krug von Nidda, A. Loidl, A. Ivanov, B. Keimer, D. S. Inosov, Phys. Rev. B **85**, 140511(R) (2012).
M. Xu, Q. Q. Ge, R. Peng, Z. R. Ye, Juan Jiang, F. Chen, X. P. Shen, B. P. Xie, Y. Zhang, and D. L. Feng, Phys. Rev. B **85**, 220504(R) (2012).
G. S. Tucker, D. K. Pratt, M. G. Kim, S. Ran, A. Thaler, G. E. Granroth, K. Marty, W. Tian, J. L. Zarestky, M. D. Lumsden, S. L. Bud’ko, P. C. Canfield, A. Kreyssig, A. I. Goldman, and R. J. McQueeney, Phys. Rev. B **86**, 020503(R) (2012).
F. Kretzschmar, B. Muschler, T. Böhm, A. Baum, R. Hackl, H.-H. Wen, V. Tsurkan, J. Deisenhofer, and A. Loidl, Phys. Rev. Lett. **110**, 187002 (2013).
M. A. Tanatar, E. C. Blomberg, A. Kreyssig, M. G. Kim, N. Ni, A. Thaler, S. L. Bud’ko, P. C. Canfield, A. I. Goldman, I. I. Mazin, and R. Prozorov, Phys. Rev. B **81**, 184508 (2010).
S. Maiti and A. V. Chubukov, Phys. Rev. B **87**, 144511 (2013).
R. M. Fernandes and J. Schmalian, Phys. Rev. B **82**, 014521 (2010).
G. Livanas, A. Aperis, P. Kotetes, and G. Varelogiannis, arXiv:1208.2881.
L. R. Testardi, in Physical Acoustics, edited by Warren P. Mason and R. N. Thurston (Academic, New York, 1973).
A. J. Millis and K. M. Rabe, Phys. Rev. B **38**, 8908 (1988).
M. Yoshizawa, D. Kimura, T. Chiba, A. Ismayil, Y. Nakanishi, K. Kihou, C.-H. Lee, A. Iyo, H. Eisaki, M. Nakajima, and S. Uchida, J. Phys. Soc. Jpn. **81**, 024604 (2012).
F. F. Tafti, A. Juneau-Fecteau, M.-E. Delage, S. Rene de Cotret, J.-Ph. Reid, A. F. Wang, X.-G. Luo, X. H. Chen, N. Doiron-Leyraud, and L. Taillefer, Nature Phys. **9**, 349 (2013).
H.-H. Kuo, J. G. Analytis, J.-H. Chu, R. M. Fernandes, J. Schmalian, and I. R. Fisher, Phys. Rev. B **86**, 134507 (2012).
M. Sigrist, Prog. Theor. Phys. **107**, 917 (2002).
M. B. Walker and P. Contreras, Phys. Rev. B **66**, 214508 (2002).
[**Supplementary material for “Nematicity as a probe of superconducting pairing in iron-based superconductors"**]{}\
Microscopic derivation of the free energy
=========================================
We consider the Fermi surface displayed in Fig. 1a of the main text, with hole pockets at the center of the Brillouin zone $\Gamma=\left(0,0\right)$ and electron pockets centered at $X=\left(\pi,0\right)$ and $Y=\left(0,\pi\right)$. For simplicity, we assume the two hole pockets to be degenerate and label the pockets by $i=\Gamma,X,Y$. Stripe-type spin fluctuations induce repulsive hole pocket-electron pocket interactions $\bar{U}_{\Gamma X}$ and $\bar{U}_{\Gamma Y}$, whereas Neel-type fluctuations give rise to a repulsive electron pocket-electron pocket interaction $\bar{U}_{XY}$. The free energy density $F$ is given by [@FernandesPRB10; @s_plus_is]:
$$F=\sum_{i,j}\Delta_{i}U_{ij}^{-1}\Delta_{j}^{*}-\sum_{i}\frac{1}{N_{i}}\left(\int_{k}G_{i,k}G_{i,-k}\right)\left|\Delta_{i}\right|^{2}+\sum_{i}\frac{1}{2N_{i}^{2}}\left(\int_{k}G_{i,k}^{2}G_{i,-k}^{2}\right)\left|\Delta_{i}\right|^{4}\label{aux_F}$$
where $i=\Gamma,X,Y$ is the band index, $G_{i,k}^{-1}=i\omega_{n}-\varepsilon_{i,\mathbf{k}}$ is the bare Green’s function of band $i$, $k=\left(\omega_{n},\mathbf{k}\right)$ labels the momentum $\mathbf{k}$ and the fermionic Matsubara frequency $\omega_{n}=(2n+1)\pi T$, $\int_{k}=T\sum_{\omega_{n}}\int\frac{d^{d}k}{\left(2\pi\right)^{d}}$. The gap functions have been rescaled from the standard BCS definitions as $\Delta_{i}=\Delta_{i,0}\sqrt{N_{i}}$ where $N_{i}$ is the density of states of band $i$ and $U_{ij}$ are the components of the interaction matrix $$\mathbf{U}=\left(\begin{array}{ccc}
0 & -\lambda_{X\Gamma} & -\lambda_{Y\Gamma}\\
-\lambda_{X\Gamma} & 0 & -\lambda_{XY}\\
-\lambda_{Y\Gamma} & -\lambda_{XY} & 0
\end{array}\right)\label{Udef}$$ where $\lambda_{ij}=\bar{U}_{ij}\sqrt{N_{i}N_{j}}$. In the tetragonal phase, $\lambda_{X\Gamma}=\lambda_{Y\Gamma}$; nematic order leads to a difference between the two coefficients and also makes $N_{X}\neq N_{Y}$.
Evaluation of the Green’s function products yields
$$F=\sum_{i,j}\Delta_{i}U_{ij}^{-1}\Delta_{j}^{*}-\ln\left(\frac{W}{T}\right)\sum_{i}\left|\Delta_{i}\right|^{2}+\sum_{i}\frac{u_{0}}{N_{i}}\left|\Delta_{i}\right|^{4}\label{FF}$$
with $u_{0}=\frac{7\zeta\left(3\right)}{16\pi^{2}T^{2}}>0$ and $W$ a cutoff set by the smaller of the frequency cutoff of the interaction and the distance from the Fermi level to the band edge.
We begin our analysis of Eq. \[FF\] by diagonalizing the quadratic term. In the tetragonal symmetry case, where $\lambda_{X\Gamma}=\lambda_{Y\Gamma}$, the three eigenvalues and corresponding eigenvectors of the $U$ matrix are (we define the basis as $\left(\Gamma,X,Y\right)$) $$\begin{aligned}
\Delta_{s^{++}} & = & \left(\begin{array}{c}
\sin\Psi\\
\frac{1}{\sqrt{2}}\cos\Psi\\
\frac{1}{\sqrt{2}}\cos\Psi
\end{array}\right);~~\lambda_{s^{++}}=-\sqrt{2}\lambda_{X\Gamma}\cot\Psi\\
\Delta_{s^{+-}} & = & \left(\begin{array}{c}
-\cos\Psi\\
\frac{1}{\sqrt{2}}\sin\Psi\\
\frac{1}{\sqrt{2}}\sin\Psi
\end{array}\right);~~\lambda_{s^{+-}}=\sqrt{2}\lambda_{X\Gamma}\tan\Psi\\
\Delta_{d} & = & \left(\begin{array}{c}
0\\
-\frac{1}{\sqrt{2}}\\
\frac{1}{\sqrt{2}}
\end{array}\right);~~\lambda_{d}=\lambda_{XY}\end{aligned}$$ with $$\tan\Psi=\frac{\sqrt{8\lambda_{X\Gamma}^{2}+\lambda_{XY}^{2}}-\lambda_{XY}}{2\sqrt{2}\lambda_{X\Gamma}}\label{Psidef}$$
The three solutions correspond, respectively, to the $s_{++}$ state (gap functions of equal sign in all the Fermi pockets), to the $s_{+-}$ state (equal sign in the electron pockets, opposite sign in the hole pocket), and to the d-wave state (opposite signs in the electron pockets).
Inverting the equations to obtain expressions for the order parameter in the band basis yields:
$$\begin{aligned}
\Delta_{\Gamma} & = & -\cos\Psi\Delta_{s^{+-}}+\sin\Psi\Delta_{s^{++}}\nonumber \\
\Delta_{X} & = & \frac{1}{\sqrt{2}}\left(-\Delta_{d}+\sin\Psi\Delta_{s^{+-}}+\cos\Psi\Delta_{s^{++}}\right)\nonumber \\
\Delta_{Y} & = & \frac{1}{\sqrt{2}}\left(\Delta_{d}+\sin\Psi\Delta_{s^{+-}}+\cos\Psi\Delta_{s^{++}}\right)\label{aux_transformation_matrix}\end{aligned}$$
which can be equivalently written in terms of the vectors $\boldsymbol{\Delta}_{\mathrm{band}}=\left(\begin{array}{ccc}
\Delta_{\Gamma} & \Delta_{X} & \Delta_{Y}\end{array}\right)^{T}$, $\boldsymbol{\Delta}_{\mathrm{sym}}=\left(\begin{array}{ccc}
\Delta_{s^{++}} & \Delta_{s^{+-}} & \Delta_{d}\end{array}\right)^{T}$, and transformation matrix:
$$\boldsymbol{\Lambda}=\left(\begin{array}{ccc}
\sin\Psi & -\cos\Psi & 0\\
\frac{\cos\Psi}{\sqrt{2}} & \frac{\sin\Psi}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\
\frac{\cos\Psi}{\sqrt{2}} & \frac{\sin\Psi}{\sqrt{2}} & +\frac{1}{\sqrt{2}}
\end{array}\right)\label{transformation_matrix}$$
as: $$\boldsymbol{\Delta}_{\mathrm{band}}=\boldsymbol{\Lambda}\boldsymbol{\Delta}_{\mathrm{sym}}\label{eq:}$$
Substituting this into the first two terms of Eq. \[FF\], we obtain the quadratic term $F^{(2)}$: $$F^{(2)}=\frac{t_{s^{++}}}{2}\left|\Delta_{s^{++}}\right|^{2}+\frac{t_{s^{+-}}}{2}\left|\Delta_{s^{+-}}\right|^{2}+\frac{t_{d}}{2}\left|\Delta_{d}\right|^{2}\label{F2def}$$ with $$t_{i}=\frac{1}{\lambda_{i}}-\ln\left(\frac{W}{T}\right)\label{tSD}$$
Therefore, the transition temperature is given by $T_{c}=W\exp\left(-1/\lambda_{\mathrm{max}}\right)$, where $\lambda_{\mathrm{max}}$ is the largest of the eigenvalues of the $U_{ij}$ interaction matrix. Since $\lambda_{s^{++}}<0$ always, the $s^{++}$ state is never realized, so we set $\Delta_{s^{++}}=0$ hereafter. Analyzing the eigenvalues, we find that, for $\lambda_{XY}<\lambda_{X\Gamma}$, the leading instability is towards an $s^{+-}$ state, whereas for $\lambda_{XY}>\lambda_{X\Gamma}$, it is towards a d-wave state. The phase diagram is shown in Fig. \[fig\_BCS\_Tc\]. Note that the $s^{+-}$/d-wave degeneracy point $\lambda_{XY}=\lambda_{X\Gamma}$ corresponds to $\tan\Psi=1/\sqrt{2}$, which implies $\cos\Psi=\sqrt{2/3}$ and $\sin\Psi=1/\sqrt{3}$.
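A quick numerical check of these degeneracy-point values, using Eq. (\[Psidef\]) (the value chosen for $\lambda_{X\Gamma}$ is arbitrary and serves only as an illustration):

```python
import numpy as np

lam_XG = 0.2                       # lambda_{X Gamma}; the value is arbitrary
lam_XY = lam_XG                    # degeneracy point: lambda_XY = lambda_{X Gamma}
tanPsi = (np.sqrt(8.0 * lam_XG**2 + lam_XY**2) - lam_XY) / (2.0 * np.sqrt(2.0) * lam_XG)
Psi = np.arctan(tanPsi)
print(tanPsi, 1.0 / np.sqrt(2.0))              # tan(Psi) = 1/sqrt(2)
print(np.cos(Psi), np.sqrt(2.0 / 3.0))         # cos(Psi) = sqrt(2/3)
print(np.sin(Psi), 1.0 / np.sqrt(3.0))         # sin(Psi) = 1/sqrt(3)
print(np.sqrt(2.0) * lam_XG * tanPsi, lam_XY)  # lambda_{s+-} = lambda_d here
```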
To obtain the quartic term $F^{(4)}$ of the free energy, we substitute (\[aux\_transformation\_matrix\]) in the last term of the free energy (\[FF\]), obtaining: $$\frac{F^{(4)}}{u_{0}}=\left(\frac{\cos^{4}\Psi}{N_{\Gamma}}+\frac{\sin^{4}\Psi}{2N_{X}}\right)\left|\Delta_{s^{+-}}\right|^{4}+\frac{1}{2N_{X}}\left|\Delta_{d}\right|^{4}+\frac{2\sin^{2}\Psi}{N_{X}}\left|\Delta_{s^{+-}}\right|^{2}\left|\Delta_{d}\right|^{2}\left(1+\frac{1}{2}\cos2\theta\right)\label{F4}$$ which can also be expressed in the form:
$$F^{(4)}=\frac{\beta_{s}}{4}\left|\Delta_{s^{+-}}\right|^{4}+\frac{\beta_{d}}{4}\left|\Delta_{d}\right|^{4}+\frac{1}{2}\left|\Delta_{s^{+-}}\right|^{2}\left|\Delta_{d}\right|^{2}\left(\beta_{sd}+\alpha\cos2\theta\right)\label{F4_aux}$$
with the Ginzburg-Landau coefficients $\beta_{d}=2u_{0}/N_{X}$ and: $$\begin{aligned}
\beta_{s} & = & \beta_{d}\left(\frac{2N_{X}}{N_{\Gamma}}\cos^{4}\Psi+\sin^{4}\Psi\right)\\
\beta_{sd} & = & 2\beta_{d}\sin^{2}\Psi\\
\alpha & = & \frac{\beta_{sd}}{2}\end{aligned}$$
Here, $\theta$ is the relative phase between the d-wave and $s^{+-}$ gaps. Note that we have $\alpha>0$ and:
$$\frac{\left(\beta_{sd}-\alpha\right)^{2}}{\beta_{s}\beta_{d}}=\left(1+\frac{2N_{X}}{N_{\Gamma}}\cot^{4}\Psi\right)^{-1}<1$$ implying that there is an $s+id$ coexistence state below the $s^{+-}$/d-wave degeneracy point in the tetragonal-symmetric case. The expressions given in the main text are obtained by evaluating the equations above at the degeneracy point, where $\tan\Psi=1/\sqrt{2}$, and also assuming $N_{X}\approx N_{\Gamma}$.
The formulae given above are derived assuming tetragonal symmetry. In the nematic phase, the leading order effect of a tetragonal symmetry breaking is a change in the interaction matrix $\mathbf{U}\rightarrow{\mathbf{U}+\delta\mathbf{U}}$ with (in the $\Gamma,X,Y$ basis) $$\delta\mathbf{U}=\frac{\zeta}{2}\left(\begin{array}{ccc}
0 & -\varphi & \varphi\\
-\varphi & 0 & 0\\
\varphi & 0 & 0
\end{array}\right)$$ where $\varphi$ is the nematic order parameter and $\zeta$ is a coupling constant describing how the interactions $\lambda_{ij}$ change in the presence of nematic order, i.e. $\lambda_{\left(X,Y\right)\Gamma}\rightarrow\lambda_{\left(X,Y\right)\Gamma}\pm\zeta\varphi$. Numerically, it is straightforward to obtain $T_{c}$ for a finite $\varphi$ by directly diagonalizing $\mathbf{U}+\delta\mathbf{U}$. The results, presented in figure \[fig\_BCS\_Tc\], show that $T_{c}$ increases for a finite nematic order parameter, with a pronounced peak at the degeneracy point $\lambda_{X\Gamma}=\lambda_{Y\Gamma}$. Figure \[fig\_BCS\_Tc\] shows that in the entire phase diagram, the eigenvector corresponding to the leading eigenvalue has contributions coming from all three components $s_{++}$, $s_{+-}$, and d-wave, with the latter being responsible for the main contributions.
![](BCS_Tc_total){width="0.4\columnwidth"} ![](BCS_Tc_relative){width="0.4\columnwidth"}

![(upper left panel) $T_{c}$ (in units of the cutoff $W$) as function of the d-wave pairing interaction $\lambda_{XY}$ (in units of the s-wave interaction $\lambda_{X\Gamma}=0.2$) for $\varphi=0$ (blue curve) and $\varphi=0.05/\zeta$ (red curve). (upper right panel) Ratio between $T_{c}$ for $\varphi=0.05/\zeta$ and $T_{c,0}$ in the tetragonal phase ($\varphi=0$) as function of the d-wave pairing interaction $\lambda_{XY}$ (in units of the s-wave interaction $\lambda_{X\Gamma}=0.2$). (lower panel) As function of the d-wave pairing interaction $\lambda_{XY}$, we present the projection $P_{i}=\left\langle \Delta_{\varphi}\left.\right|\Delta_{i}\right\rangle $ of the eigenvector $\Delta_{\varphi}$ that diagonalizes the problem in the nematic phase ($\varphi=0.01/\zeta$) along the three eigenvectors $\Delta_{i}$ of the tetragonal phase: $s_{++}$ (magenta), $s_{+-}$ (green curve), and d-wave (orange). \[fig\_BCS\_Tc\]](BCS_projection_eigvc){width="0.4\columnwidth"}
To understand this increase in $T_{c}$, we use the transformation matrix $\boldsymbol{\Lambda}$ in Eq. (\[transformation\_matrix\]) to project the gap equation $\left(\mathbf{U+\delta U}\right)\ln\frac{W}{T_{c}}=\mathbf{1}$ onto the $s^{+-}$ and $d$ subspace, yielding:
$$1=\ln\frac{W}{T_{c}}\left(\begin{array}{cc}
\lambda_{s^{+-}} & -\frac{\zeta\cos\Psi}{\sqrt{2}}\,\varphi\\
-\frac{\zeta\cos\Psi}{\sqrt{2}}\,\varphi & \lambda_{d}
\end{array}\right)$$
Diagonalizing this matrix, we find $T_{c}=W\exp\left(-1/\lambda_{\mathrm{max}}\right)$ with the leading eigenvalue $$\lambda_{\mathrm{max}}=\left(\frac{\lambda_{s^{+-}}+\lambda_{d}}{2}\right)+\sqrt{\left(\frac{\lambda_{s^{+-}}-\lambda_{d}}{2}\right)^{2}+\frac{\zeta^{2}\cos^{2}\Psi}{2}\,\varphi^{2}}$$ which is clearly greater than either $\lambda_{s^{+-}}$ or $\lambda_{d}$ if $\varphi\neq0$, so that $T_{c}$ is increased in the nematic phase. In particular, at the degeneracy point $\lambda_{s^{+-}}=\lambda_{d}$ the increase is linear in $\left|\varphi\right|$; away from this point, the variation with $\varphi$ is quadratic. Note that the eigenvector is a real admixture of $s^{+-}$ and d-wave contributions, with equal weights at the degeneracy point.
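The projected $2\times2$ problem can be illustrated with a few lines of code; the parameter values below are arbitrary and serve only to display the linear-in-$|\varphi|$ enhancement of $T_{c}$ at the degeneracy point and the quadratic one away from it.

```python
import numpy as np

cosPsi = np.sqrt(2.0 / 3.0)          # value of cos(Psi) at the degeneracy point

def lam_max(lam_s, lam_d, zeta_phi):
    """Largest eigenvalue of the projected 2x2 pairing matrix."""
    off = zeta_phi * cosPsi / np.sqrt(2.0)
    return 0.5 * (lam_s + lam_d) + np.sqrt(0.25 * (lam_s - lam_d)**2 + off**2)

def Tc(lam_s, lam_d, zeta_phi, W=1.0):
    return W * np.exp(-1.0 / lam_max(lam_s, lam_d, zeta_phi))

for zp in (0.0, 0.01, 0.02, 0.04):
    # linear in |phi| at the degeneracy point, quadratic away from it
    print(zp, Tc(0.2, 0.2, zp) / Tc(0.2, 0.2, 0.0), Tc(0.2, 0.15, zp) / Tc(0.2, 0.15, 0.0))
```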
To obtain the coupling between the nematic and the SC order parameters, we take the inverse $\left(\mathbf{U}+\delta\mathbf{U}\right)^{-1}\approx\mathbf{U}^{-1}-\mathbf{U}^{-1}\delta\mathbf{U}\,\mathbf{U}^{-1}$ to leading order in $\zeta$, substitute in the first term of Eq. (\[FF\]), and change basis via $\boldsymbol{\Lambda}$, yielding: $$F\left(\varphi\right)=F\left(\varphi=0\right)-\boldsymbol{\Delta}\left(\boldsymbol{\Lambda}^{T}\mathbf{U}^{-1}\boldsymbol{\Lambda}\right)\left(\boldsymbol{\Lambda}^{T}\delta\mathbf{U}\:\boldsymbol{\Lambda}\right)\left(\boldsymbol{\Lambda}^{T}\mathbf{U}^{-1}\boldsymbol{\Lambda}\right)\boldsymbol{\Delta}^{*}$$
Evaluation of the matrix products then yields the tri-linear term:
$$F\left(\varphi\right)=F\left(\varphi=0\right)+\left(\frac{\zeta\cos^{2}\Psi}{\lambda_{X\Gamma}\lambda_{XY}\sin\Psi}\right)\varphi\left|\Delta_{s^{+-}}\right|\left|\Delta_{d}\right|\cos\theta\label{tri_linear}$$
The tri-linear coupling constant reduces to $\lambda=2\zeta/\left(\sqrt{3}\lambda_{X\Gamma}^{2}\right)$ at the degeneracy point.
Eliashberg equations for the interplay between $s^{+-}$, $d$-wave, and nematicity
=================================================================================
We now generalize the weak-coupling BCS model of the previous section to an Eliashberg calculation that takes into account the explicit form of the dynamic spin fluctuation susceptibilities $\chi_{i}\left(\mathbf{Q}_{i}+\mathbf{q},\omega\right)$, where $\mathbf{Q}_{i}$ refers to either the magnetic stripe-state ordering vectors $\mathbf{Q}_{1}=\left(\pi,0\right)$ and $\mathbf{Q}_{2}=\left(0,\pi\right)$ or the Neel ordering vector $\mathbf{Q}_{3}=\left(\pi,\pi\right)$. In each channel, we have overdamped spin dynamics:
$$\chi_{i}\left(\mathbf{q}+\mathbf{Q}_{i},\Omega_{n}\right)=\frac{1}{\left|\Omega_{n}\right|\gamma_{i}^{-1}+q^{2}+\xi_{i}^{-2}}$$ where $\gamma_{i}$ is the Landau damping and $\xi_{i}$ is the magnetic correlation length (measured in units of the lattice parameter). When coupled to the electronic degrees of freedom, via coupling constants $g_{i}$, these magnetic fluctuations give rise to the repulsive electronic interactions responsible for $s^{+-}$ and $d$-wave pairing.
This model is a generalization of the 3-band Eliashberg formalism introduced by us in Ref. [@Fernandes13]. Following that notation, we define the effective SC coupling constants:
$$\begin{aligned}
\lambda_{1} & \equiv & 2g_{1}^{2}\sqrt{N_{\Gamma}N_{X}}\nonumber \\
\lambda_{3} & \equiv & g_{3}^{2}N_{X}\label{coupling_constants}\end{aligned}$$
and the ratio of the densities of states $r\equiv N_{X}/N_{\Gamma}$. The Eliashberg equations are then given by:
$$\begin{aligned}
\frac{Z_{\Gamma,n}\omega_{n}}{T} & = & \left(2n+1\right)+\frac{\lambda_{1}\sqrt{r}}{2}\sum_{m}\mathrm{sgn}\left(2m+1\right)\left(\xi_{1}a_{nm}^{(1)}+\xi_{2}a_{nm}^{(2)}\right)\nonumber \\
\frac{Z_{X,n}\omega_{n}}{T} & = & \left(2n+1\right)+\sum_{m}\mathrm{sgn}\left(2m+1\right)\left(\frac{\lambda_{1}\xi_{1}}{\sqrt{r}}a_{nm}^{(1)}+\lambda_{3}\xi_{3}a_{nm}^{(3)}\right)\nonumber \\
\frac{Z_{Y,n}\omega_{n}}{T} & = & \left(2n+1\right)+\sum_{m}\mathrm{sgn}\left(2m+1\right)\left(\frac{\lambda_{1}\xi_{2}}{\sqrt{r}}a_{nm}^{(2)}+\lambda_{3}\xi_{3}a_{nm}^{(3)}\right)\label{final_Z_eqs}\end{aligned}$$
as well as:
$$\begin{aligned}
W'_{\Gamma,n} & = & -\frac{\lambda_{1}}{2}T\sum_{m}\left[\frac{W'_{X,m}}{Z_{X,m}\left|\omega_{m}\right|}\,\xi_{1}a_{nm}^{(1)}+\frac{W'_{Y,m}}{Z_{Y,m}\left|\omega_{m}\right|}\,\xi_{2}a_{nm}^{(2)}\right]\nonumber \\
W'_{X,n} & = & -\lambda_{1}\xi_{1}T\sum_{m}\frac{W'_{\Gamma,m}}{Z_{\Gamma,m}\left|\omega_{m}\right|}\, a_{nm}^{(1)}-\lambda_{3}\xi_{3}T\sum_{m}\frac{W'_{Y,m}}{Z_{Y,m}\left|\omega_{m}\right|}\, a_{nm}^{(3)}\nonumber \\
W'_{Y,n} & = & -\lambda_{1}\xi_{2}T\sum_{m}\frac{W'_{\Gamma,m}}{Z_{\Gamma,m}\left|\omega_{m}\right|}\, a_{nm}^{(2)}-\lambda_{3}\xi_{3}T\sum_{m}\frac{W'_{X,m}}{Z_{X,m}\left|\omega_{m}\right|}\, a_{nm}^{(3)}\label{final_gap_eqs}\end{aligned}$$
where $Z_{i,n}$ and $W_{i}$ are the frequency-dependent normal and anomalous components of the self-energy, associated with the mass renormalization and the gap functions, respectively. These quantities correspond to averages around each Fermi pocket - note that the orbital content of the Fermi surface is incorporated in the coupling constants, as explained in Ref. [@Fernandes13]. Finally, notice that we rescaled the $W_{i}$ functions as $W{}_{X,Y}=W'_{X,Y}\sqrt{N_{\Gamma}}$ and $W{}_{\Gamma}=W'_{\Gamma}\sqrt{N_{X}}$. The Matsubara-axis interactions $a_{nm}^{(i)}$, generated by the spin fluctuation spectra, are given by:
$$a_{nm}^{(i)}=\frac{1}{\sqrt{1+\left|n-m\right|2\pi T\gamma_{i}^{-1}\xi_{i}^{2}}}\label{a_nm}$$
The sums in the $Z$ functions can be evaluated analytically. By introducing the auxiliary function:
$$S_{i,n}=\frac{2\,\mathrm{sgn}\left(n\right)}{\sqrt{2\pi T\gamma_{i}^{-1}\xi_{i}^{2}}}\left[\mathrm{Hw}\left(\frac{1}{2},1+\frac{1}{2\pi T\gamma_{i}^{-1}\xi_{i}^{2}}\right)-\mathrm{Hw}\left(\frac{1}{2},\left|n\right|+\frac{\mathrm{sgn}\left(n\right)+1}{2}+\frac{1}{2\pi T\gamma_{i}^{-1}\xi_{i}^{2}}\right)\right]+\mathrm{sgn}\left(n\right)\label{sum_final}$$
for $n\neq0,-1$ and $S_{i,n}=2\,\mathrm{sgn}\left(n\right)+1$ for $n=0,-1$, where $\mathrm{Hw}(s,a)$ denotes the Hurwitz zeta function, we obtain:
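As a consistency check on this summation (our own illustration, not part of the original derivation), the sketch below evaluates $S_{i,n}$ from the Hurwitz-zeta expression above with mpmath and compares it with the same Matsubara sum carried out by pairing the $m\geq0$ and $m<0$ terms of $\sum_{m}\mathrm{sgn}(2m+1)\,a_{nm}^{(i)}$; the value of $b=2\pi T\gamma_{i}^{-1}\xi_{i}^{2}$ is an arbitrary test number.

```python
import numpy as np
from mpmath import zeta  # zeta(s, a) is the Hurwitz zeta function

def S_hurwitz(n, b):
    """S_{i,n} as written above, with b = 2*pi*T*xi_i^2/gamma_i."""
    if n in (0, -1):
        return 2.0 * np.sign(n) + 1.0
    s = 1.0 if n > 0 else -1.0
    return float(2.0 * s / np.sqrt(b) * (zeta(0.5, 1.0 + 1.0 / b)
                 - zeta(0.5, abs(n) + (s + 1.0) / 2.0 + 1.0 / b)) + s)

def S_paired(n, b):
    """Same Matsubara sum, obtained by pairing the m >= 0 and m < 0 terms of
    sum_m sgn(2m+1)/sqrt(1+|n-m|b): S_n = 1 + 2*sum_{j=1..n} (1+j*b)^(-1/2) and
    S_{-n-1} = -S_n (our own resummation, used here only as a cross-check)."""
    if n < 0:
        return -S_paired(-n - 1, b)
    j = np.arange(1, n + 1)
    return 1.0 + 2.0 * np.sum(1.0 / np.sqrt(1.0 + j * b))

for n in (-4, -1, 0, 1, 5):
    print(n, S_hurwitz(n, 0.37), S_paired(n, 0.37))   # the two columns should agree
```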
$$\begin{aligned}
\frac{Z_{\Gamma,n}\omega_{n}}{T} & = & \left(2n+1\right)+\frac{\sqrt{r}\lambda_{1}\xi_{1}}{2}S_{1,n}+\frac{\sqrt{r}\lambda_{1}\xi_{2}}{2}S_{2,n}\nonumber \\
\frac{Z_{X,n}\omega_{n}}{T} & = & \left(2n+1\right)+\frac{\lambda_{1}\xi_{1}}{\sqrt{r}}S_{1,n}+\lambda_{3}\xi_{3}S_{3,n}\nonumber \\
\frac{Z_{Y,n}\omega_{n}}{T} & = & \left(2n+1\right)+\frac{\lambda_{1}\xi_{2}}{\sqrt{r}}S_{2,n}+\lambda_{3}\xi_{3}S_{3,n}\label{Z_analytical}\end{aligned}$$
For the gap functions, we introduce $\bar{\Delta}_{i,n}\equiv\frac{W'_{i,n}}{Z_{i,n}\left|\omega_{n}\right|}$, yielding:
$$\begin{aligned}
\bar{\Delta}_{\Gamma,n}\frac{Z_{\Gamma,n}\left|\omega_{n}\right|}{T} & = & -\frac{\lambda_{1}}{2}\,\xi_{1}\sum_{m}\bar{\Delta}_{X,m}a_{nm}^{(1)}-\frac{\lambda_{1}}{2}\,\xi_{2}\sum_{m}\bar{\Delta}_{Y,m}a_{nm}^{(2)}\nonumber \\
\bar{\Delta}_{X,n}\frac{Z_{X,n}\left|\omega_{n}\right|}{T} & = & -\lambda_{1}\xi_{1}\sum_{m}\bar{\Delta}_{\Gamma,m}a_{nm}^{(1)}-\lambda_{3}\xi_{3}\sum_{m}\bar{\Delta}_{Y,m}a_{nm}^{(3)}\nonumber \\
\bar{\Delta}_{Y,n}\frac{Z_{Y,n}\left|\omega_{n}\right|}{T} & = & -\lambda_{1}\xi_{2}\sum_{m}\bar{\Delta}_{\Gamma,m}a_{nm}^{(2)}-\lambda_{3}\xi_{3}\sum_{m}\bar{\Delta}_{X,m}a_{nm}^{(3)}\label{aux_matrix_eq}\end{aligned}$$
Thus, we can write the gap equations in matrix form as:
$$\sum_{m,\nu}\tilde{K}_{mn}^{\mu\nu}\tilde{\Delta}_{m}^{\nu}=0\label{matrix_eq}$$
where $\mu,\nu=1,2,3$ and the matrices are given by:
$$\left(\tilde{\Delta}_{m}\right)\equiv\left(\begin{array}{c}
\bar{\Delta}_{\Gamma,m}\\
\bar{\Delta}_{X,m}\\
\bar{\Delta}_{Y,m}
\end{array}\right)\label{matrix_gap}$$
and:
$$\left(\tilde{K}_{nm}\right)\equiv\left(\begin{array}{ccc}
-\delta_{nm}\frac{Z_{\Gamma,n}\left|\omega_{n}\right|}{T} & -\frac{1}{2}\,\lambda_{1}\xi_{1}a_{nm}^{(1)} & -\frac{1}{2}\,\lambda_{1}\xi_{2}a_{nm}^{(2)}\\
-\lambda_{1}\xi_{1}a_{nm}^{(1)} & -\delta_{nm}\frac{Z_{X,n}\left|\omega_{n}\right|}{T} & -\lambda_{3}\xi_{3}a_{nm}^{(3)}\\
-\lambda_{1}\xi_{2}a_{nm}^{(2)} & -\lambda_{3}\xi_{3}a_{nm}^{(3)} & -\delta_{nm}\frac{Z_{Y,n}\left|\omega_{n}\right|}{T}
\end{array}\right)\label{matrix_kernel}$$
The transition temperature is found when the largest eigenvalue of the $\tilde{K}$ matrix vanishes. Following Ref. [@Fernandes13], we used the parameters $\lambda_{1}=0.4$, $\lambda_{3}=0.8$, $r=0.65$, $\gamma_{3}/\gamma_{1}=0.33$, $\gamma_{1}=25$ meV, and $\xi_{0}=5$. All temperatures are given in units of $\gamma_{1}/2\pi$. In the tetragonal phase, we have $\xi_{1}=\xi_{2}=\xi_{0}$, and changing the Neel correlation length $\xi_{3}$ induces an $s^{+-}$ to $d$-wave transition, as shown in Fig. 3a of the main text.
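The linearized gap equation above is straightforward to solve numerically. The following sketch is our own illustration (the Matsubara cutoff, the temperature grid and the value of the Neel correlation length $\xi_{3}$ are arbitrary choices): it builds the kernel matrix exactly as written above for the tetragonal phase and scans the temperature until the largest (real part of an) eigenvalue changes sign, which brackets $T_{c}$.

```python
import numpy as np

# Parameters quoted in the text (tetragonal phase, xi_1 = xi_2 = xi_0):
lam1, lam3, r = 0.4, 0.8, 0.65     # effective couplings and DOS ratio N_X/N_Gamma
xi0, xi3 = 5.0, 3.0                 # correlation lengths; xi3 is an arbitrary test value
g_ratio = 0.33                      # gamma_3 / gamma_1
N = 128                             # Matsubara cutoff, n = -N ... N-1 (a choice of ours)

def S_sum(n, b):
    # sum_m sgn(2m+1) a_nm, evaluated by pairing positive and negative frequencies
    if n < 0:
        return -S_sum(-n - 1, b)
    j = np.arange(1, n + 1)
    return 1.0 + 2.0 * np.sum(1.0 / np.sqrt(1.0 + j * b))

def largest_eigenvalue(T):
    """Largest real part of an eigenvalue of K_tilde at temperature T, with T in
    units of gamma_1/(2*pi), so that b_i = T * (gamma_1/gamma_i) * xi_i^2."""
    n = np.arange(-N, N)
    b1 = b2 = T * xi0**2
    b3 = T * xi3**2 / g_ratio
    a = lambda b: 1.0 / np.sqrt(1.0 + np.abs(n[:, None] - n[None, :]) * b)
    a1, a2, a3 = a(b1), a(b2), a(b3)
    S1 = np.array([S_sum(k, b1) for k in n])
    S2 = np.array([S_sum(k, b2) for k in n])
    S3 = np.array([S_sum(k, b3) for k in n])
    # diagonal entries Z_{i,n}|omega_n|/T from the analytic frequency sums
    ZG = np.abs((2*n + 1) + 0.5*np.sqrt(r)*lam1*(xi0*S1 + xi0*S2))
    ZX = np.abs((2*n + 1) + lam1*xi0*S1/np.sqrt(r) + lam3*xi3*S3)
    ZY = np.abs((2*n + 1) + lam1*xi0*S2/np.sqrt(r) + lam3*xi3*S3)
    K = np.block([
        [-np.diag(ZG),   -0.5*lam1*xi0*a1, -0.5*lam1*xi0*a2],
        [-lam1*xi0*a1,   -np.diag(ZX),     -lam3*xi3*a3],
        [-lam1*xi0*a2,   -lam3*xi3*a3,     -np.diag(ZY)],
    ])
    return np.max(np.linalg.eigvals(K).real)   # K is not symmetric; take real parts

Ts = np.linspace(0.02, 2.0, 40)
ev = [largest_eigenvalue(T) for T in Ts]
for T_lo, T_hi, e_lo, e_hi in zip(Ts[:-1], Ts[1:], ev[:-1], ev[1:]):
    if e_lo > 0.0 >= e_hi:
        print(f"largest eigenvalue changes sign between T = {T_lo:.3f} and T = {T_hi:.3f}")
        break
else:
    print("no sign change in the scanned temperature window")
```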
In the nematic phase, long-range nematic order changes the magnetic spectrum, making the $\left(\pi,0\right)$ and $\left(0,\pi\right)$ correlation lengths unequal, $\xi_{1}\neq\xi_{2}$. To perform our calculations in the nematic phase, displayed in Fig. 3 of the main text, we used the model of Ref. [@Fernandes12] to relate the nematic order parameter $\varphi$ to the changes in the correlation lengths for a quasi-2D system, $\xi_{1,2}=\xi_{0}/\sqrt{\varphi\left(\coth\varphi\mp1\right)}$, implying $\varphi=\ln\left(\xi_{1}/\xi_{2}\right)$.
Estimate for the critical nematic susceptibility
================================================
In the main text, we derived the critical nematic susceptibility $\chi_{\mathrm{nem}}^{c}$ above which the system displays an $s\pm d$ state and spontaneously breaks tetragonal symmetry, $\chi_{\mathrm{nem}}^{c}\equiv2\alpha\lambda^{-2}$. Using the results of the previous sections, we can estimate this critical value. We have $\alpha=\frac{2}{3N_{X}}\left(\frac{7\zeta\left(3\right)}{16\pi^{2}T_{c}^{2}}\right)$ and $\lambda\approx0.33$, according to the numerical calculations presented in Fig. 3 of the main text. The density of states can be estimated as $N_{X}\approx\varepsilon_{0}^{-1}$ where $\varepsilon_{0}\approx100$ meV is the Fermi energy of the Fermi pockets. Using $T_{c}\approx\Delta\approx3$ meV, we obtain $\chi_{\mathrm{nem}}^{c}\approx7$ meV$^{-1}$. To gauge how strong this susceptibility is, we can estimate the magnitude of the shear modulus softening caused by it. Using the expression of Ref. [@shear_modulus], the relative reduction in the high-temperature shear modulus $C_{s,0}$ is given by $\left(\frac{C_{s}}{C_{s,0}}\right)=\left(1+\frac{\lambda_{\mathrm{el}}^{2}\chi_{\mathrm{nem}}}{C_{s,0}}\right)^{-1}$, where $\lambda_{\mathrm{el}}$ is the magneto-elastic coupling. Using the values $C_{s,0}\approx35$ GPa and $\lambda_{\mathrm{el}}\approx30$ meV yields a reduction of only $14\%$ of the shear modulus, i.e. the critical nematic susceptibility is rather modest and could reasonably be reached experimentally.
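For reference, the arithmetic of this estimate can be reproduced in a few lines (same inputs as above; the shear-modulus comparison is omitted here because it requires converting $C_{s,0}$ to energy-per-cell units):

```python
import numpy as np
from scipy.special import zeta

eps0, Tc, lam = 100.0, 3.0, 0.33   # meV: pocket Fermi energy, Tc ~ gap, SC coupling
N_X = 1.0 / eps0                    # density-of-states estimate, in 1/meV
alpha = (2.0 / (3.0 * N_X)) * 7.0 * zeta(3) / (16.0 * np.pi**2 * Tc**2)
chi_c = 2.0 * alpha / lam**2        # critical nematic susceptibility, in 1/meV
print(f"alpha ~ {alpha:.2f} meV^-1, chi_nem^c ~ {chi_c:.1f} meV^-1")   # ~7 meV^-1
```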
Introduction {#sec1}
============
We consider the differential-difference operators $T_j$, $j =
1, 2,\dots,d$, on $\mathbb{R}^d$ associated to a root system $R$ and a multiplicity function $k$, introduced by C.F. Dunkl in [@3] and called the Dunkl operators in the literature. These operators are very important in pure mathematics and in physics. They provide a useful tool in the study of special functions related to root systems [@4; @6; @2]. Moreover the commutative algebra generated by these operators has been used in the study of certain exactly solvable models of quantum mechanics, namely the Calogero–Sutherland–Moser models, which deal with systems of identical particles in a one dimensional space (see [@8; @11; @12]).
C.F. Dunkl proved in [@4] that there exists a unique isomorphism $V_k$ from the space of homogeneous polynomials $\mathcal{P}_n$ on $\mathbb{R}^d$ of degree $n$ onto itself satisfying the transmutation relations $$\begin{gathered}
\label{eq1.1}
T_jV_k = V_k \frac{\partial}{\partial x_j},\qquad j = 1,
2,\dots,d,\end{gathered}$$ and $$\begin{gathered}
V_k(1) = 1.\label{eq1.2}
\end{gathered}$$ This operator is called the Dunkl intertwining operator. It has been extended to an isomorphism from $\mathcal{E}(\mathbb{R}^d)$ (the space of $C^\infty$-functions on $\mathbb{R}^d$) onto itself satisfying the relations \eqref{eq1.1} and \eqref{eq1.2} (see [@15]).
The operator $V_k$ possesses the integral representation $$\begin{gathered}
\label{eq1.3} \forall\; x \in \mathbb{R}^d,\qquad V_k(f)(x) =
\int_{\mathbb{R}^d}f(y)d\mu_x(y), \qquad f \in
\mathcal{E}(\mathbb{R}^d),\end{gathered}$$ where $\mu_x$ is a probability measure on $\mathbb{R}^d$ with support in the closed ball $B(0, \|x\|)$ of center $0$ and radius $\|x\|$ (see [@14; @15]).
We have shown in [@15] that for each $x \in \mathbb{R}^d$, there exists a unique distribution $\eta_x$ in $\mathcal{E}'(\mathbb{R}^d)$ (the space of distributions on $\mathbb{R}^d$ of compact support) with support in $B(0, \|x\|)$ such that $$\begin{gathered}
V_k^{-1}(f)(x) = \langle \eta_x,f\rangle, \qquad f \in \mathcal{E}
(\mathbb{R}^d).\label{eq1.4}\end{gathered}$$
We have studied also in [@15] the transposed operator ${}^tV_k$ of the operator $V_k$, satisfying for $f$ in $\mathcal{S}(\mathbb{R}^d)$ (the space of $C^\infty$-functions on $\mathbb{R}^d$ which are rapidly decreasing together with their derivatives) and $g$ in $\mathcal{E}(\mathbb{R}^d)$, the relation $$\begin{gathered}
\int_{\mathbb{R}^d}{}^tV_k(f)(y)g(y)dy =
\int_{\mathbb{R}^d}V_k(g)(x) f(x) \omega_k(x)dx, $$ where $\omega_k$ is a positive weight function on $\mathbb{R}^d$ which will be defined in the following section. It has the integral representation $$\begin{gathered}
\forall\; y \in \mathbb{R}^d,\qquad {}^tV_k(f)(y) =
\int_{\mathbb{R}^d}f(x)d\nu_y(x), \label{eq1.6}\end{gathered}$$ where $\nu_y$ is a positive measure on $\mathbb{R}^d$ with support in the set $\{x \in \mathbb{R}^d ;\;\|x\| \geq \|y\|\}$. This operator is called the dual Dunkl intertwining operator.
We have proved in [@15] that the operator ${}^tV_k$ is an isomorphism from $\mathcal{D}(\mathbb{R}^d)$ (the space of $C^\infty$-functions on $\mathbb{R}^d$ with compact support) (resp. $\mathcal{S} (\mathbb{R}^d))$ onto itself, satisfying the transmutation relations $$\begin{gathered}
\forall\; y \in \mathbb{R}^d,\qquad {}^tV_k(T_jf)(y) =
\frac{\partial}{\partial y_j} {}^tV_k(f)(y),\qquad j = 1,
2,\dots,d.$$ Moreover for each $y \in \mathbb{R}^d$, there exists a unique distribution $Z_y$ in $\mathcal{S}'(\mathbb{R}^d)$ (the space of tempered distributions on $\mathbb{R}^d$) with support in the set $\{x \in
\mathbb{R}^d ; \|x\| \geq \|y\|\}$ such that $$\begin{gathered}
{}^tV_k^{-1}(f)(y) = \langle
Z_y, f\rangle ,\qquad f \in \mathcal{S} (
\mathbb{R}^d).\label{eq1.8}\end{gathered}$$ Using the operator $V_k$, C.F. Dunkl has defined in [@5] the Dunkl kernel $K$ by $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad \forall\; z \in
\mathbb{C}^d,\qquad K(x, -iz) = V_k(e^{-i\langle \cdot ,
z\rangle})(x).\label{eq1.9}\end{gathered}$$ Using this kernel C.F. Dunkl has introduced in [@5] a Fourier transform $\mathcal{F}_D$ called the Dunkl transform.
In this paper we establish the following inversion formulas for the operators $V_k$ and ${}^tV_k$: $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad
V^{-1}_k(f)(x) = P{}^tV_k(f)(x),\qquad
f \in \mathcal{S}( \mathbb{R}^d),\label{eq1.10}\\
\forall\; x \in \mathbb{R}^d, \qquad {}^tV_k^{-1}(f)(x) =
V_k(P(f))(x),\qquad f \in \mathcal{S}(\mathbb{R}^d),\nonumber
\end{gathered}$$ where $P$ is a pseudo-differential operator on $\mathbb{R}^d$.
When the multiplicity function takes integer values, the formula \eqref{eq1.10} can also be written in the form $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad V^{-1}_k(f)(x)
= {}^tV_k(Q(f))(x), \qquad f \in \mathcal{S}(\mathbb{R}^d), $$ where $Q$ is a differential-difference operator.
Also we give another expression of the operator ${}^tV_k^{-1}$ on the space $\mathcal{E}'( \mathbb{R}^d)$. From these relations we deduce the expressions of the representing distributions $\eta_x$ and $Z_x$ of the inverse operators $V^{-1}_k$ and ${}^tV^{-1}_k$ by using the representing measures $\mu_x$ and $\nu_x$ of $V_k$ and ${}^tV_k$. They are given by the following formulas $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad \eta_x =
{}^tQ(\nu_x),\\ \forall\; x \in \mathbb{R}^d,\qquad
Z_x = {}^tP(\mu_x),\end{gathered}$$ where ${}^tP$ and ${}^tQ$ are the transposed operators of $P$ and $Q$ respectively.
The contents of the paper are as follows. In Section \[sec2\] we recall some basic facts from Dunkl’s theory, and describe the Dunkl operators and the Dunkl kernel. We define in Section \[sec3\] the Dunkl transform introduced in [@5] by C.F. Dunkl, and we give the main theorems proved for this transform, which will be used in this paper. We study in Section \[sec4\] the Dunkl convolution product and the Dunkl transform of distributions which will be useful in the sequel, and when the multiplicity function takes integer values, we give another proof of the geometrical form of Paley–Wiener–Schwartz theorem for the Dunkl transform. We prove in Section \[sec5\] some inversion formulas for the Dunkl intertwining operator $V_k$ and its dual ${}^tV_k$ on spaces of functions and distributions. Section \[sec6\] is devoted to proving under the condition that the multiplicity function takes integer values an inversion formula for the Dunkl intertwining operator $V_k$, and we deduce the expression of the representing distributions of the inverse operators $V^{-1}_k$ and ${}^tV^{-1}_k$. In Section \[sec7\] we give some applications of the preceding inversion formulas.
The eigenfunction of the Dunkl operators {#sec2}
========================================
In this section we collect some notations and results on the Dunkl operators and the Dunkl kernel (see [@3; @4; @5; @7; @9; @10]).
Reflection groups, root systems and multiplicity functions {#sec2.1}
----------------------------------------------------------
We consider $\mathbb{R}^d$ with the Euclidean scalar product $\langle \cdot,\cdot\rangle$ and $\|x\| = \sqrt{\langle x ,
x\rangle}$. On $\mathbb{C}^d$, $\|\cdot\|$ denotes also the standard Hermitian norm, while $\langle z, w\rangle =
\sum^d_{j=1}z_j \overline{w_j}$ .
For $\alpha \in \mathbb{R}^d \backslash \{0\}$, let $\sigma_\alpha$ be the reflection in the hyperplane $H_\alpha
\subset \mathbb{R}^d$ orthogonal to $\alpha$, i.e. $$\begin{gathered}
\sigma_\alpha(x) = x - \left(\frac{2\langle
\alpha,x\rangle}{\|\alpha\|^2}\right)\alpha .$$ A finite set $R \subset \mathbb{R}^d \backslash \{0\}$ is called a root system if $R \cap \mathbb{R}\alpha = \{\pm \alpha\}$ and $\sigma_\alpha R = R$ for all $\alpha \in R$. For a given root system $R$ the reflections $\sigma_\alpha$, $\alpha \in R$, generate a finite group $W \subset O(d)$, the reflection group associated with $R$. All reflections in $W$ correspond to suitable pairs of roots. For a given $\beta \in \mathbb{R}^d
\backslash \cup_{\alpha \in R }H_\alpha$, we fix the positive subsystem $R_+ = \{\alpha \in R ; \langle \alpha, \beta\rangle >
0\}$, then for each $\alpha \in R$ either $\alpha \in R_+$ or $-\alpha \in R_+$.
A function $k : R \rightarrow \mathbb{C}$ on a root system $R$ is called a multiplicity function if it is invariant under the action of the associated reflection group $W$. If one regards $k$ as a function on the corresponding reflections, this means that $k$ is constant on the conjugacy classes of reflections in $W$. For abbreviation, we introduce the index $$\begin{gathered}
\gamma = \gamma(R) =
\sum_{\alpha \in R_+}
k(\alpha).$$ Moreover, let $\omega_k$ denotes the weight function $$\begin{gathered}
\omega_k(x) = \prod_{\alpha \in R_+}|\langle
\alpha,x\rangle|^{2k(\alpha)},$$ which is $W$-invariant and homogeneous of degree $2\gamma$.
For $d = 1$ and $W= \mathbb{Z}_2$, the multiplicity function $k$ is a single parameter denoted $\gamma$ and $$\begin{gathered}
\forall\; x \in
\mathbb{R},\qquad \omega_k(x) = |x|^{2\gamma}.$$ We introduce the Mehta-type constant $$\begin{gathered}
c_k = \left( \int_{\mathbb{R}^d}
e^{-\|x\|^2}\omega_k(x)dx\right)^{-1},$$ which is known for all Coxeter groups $W$ (see [@3; @6]).
The Dunkl operators and the Dunkl kernel {#sec2.2}
----------------------------------------
The Dunkl operators $T_j$, $j = 1,\dots,d$, on $\mathbb{R}^d$, associated with the finite reflection group $W$ and the multiplicity function $k$, are given for a function $f$ of class $C^1$ on $\mathbb{R}^d$ by $$\begin{gathered}
T_jf(x) = \frac{\partial}{\partial x_j}f(x) + \sum_{\alpha \in
R_+} k(\alpha) \alpha_j \frac{f(x) -
f(\sigma_\alpha(x))}{\langle\alpha,x\rangle}. $$ In the case $k \equiv 0$, the $T_j$, $j = 1, 2,\dots,d$, reduce to the corresponding partial derivatives. In this paper, we will assume throughout that $k \geq 0$ and $\gamma > 0$.
For $f$ of class $C^1$ on $\mathbb{R}^d$ with compact support and $g$ of class $C^1$ on $\mathbb{R}^d$ we have $$\begin{gathered}
\int_{\mathbb{R}^d}T_jf(x) g(x) \omega_k(x)dx =
-\int_{\mathbb{R}^d} f(x) T_j g(x) \omega_k(x) dx,\qquad j = 1,
2,\dots, d.\label{eq2.7}\end{gathered}$$ For $y \in \mathbb{R}^d$, the system $$\begin{gathered}
T_j u(x, y) = y_j u(x, y),\qquad j =
1, 2,\dots,d,\nonumber\\ u(0,y) = 1, \label{eq2.8}\end{gathered}$$ admits a unique analytic solution on $\mathbb{R}^d$, denoted by $K(x,y)$ and called the Dunkl kernel.
This kernel has a unique holomorphic extension to $\mathbb{C}^d
\times \mathbb{C}^d$.
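As a small numerical illustration of the rank-one case (ours, not taken from the references), the sketch below implements $T_{1}f(x)=f'(x)+\gamma\,[f(x)-f(-x)]/x$ for $d=1$, $W=\mathbb{Z}_2$ and checks the anti-symmetry relation \eqref{eq2.7}, which extends to rapidly decreasing functions, with respect to the weight $\omega_k(x)=|x|^{2\gamma}$.

```python
import numpy as np
from scipy.integrate import quad

g = 0.7                                      # multiplicity gamma > 0

def T(f, df):
    """Rank-one Dunkl operator applied to f (df = ordinary derivative of f)."""
    return lambda x: df(x) + g * (f(x) - f(-x)) / x

# two non-symmetric, rapidly decreasing test functions
f  = lambda x: np.exp(-x**2) * (1.0 + x)
df = lambda x: np.exp(-x**2) * (1.0 - 2.0*x*(1.0 + x))
h  = lambda x: x * np.exp(-x**2 / 2.0)
dh = lambda x: (1.0 - x**2) * np.exp(-x**2 / 2.0)

Tf, Th = T(f, df), T(h, dh)
w = lambda x: np.abs(x)**(2*g)               # omega_k for d = 1

def wint(func):                              # integral over R, split at 0 to avoid x = 0
    a, _ = quad(func, -np.inf, 0.0)
    b, _ = quad(func, 0.0, np.inf)
    return a + b

lhs = wint(lambda x: Tf(x) * h(x) * w(x))
rhs = wint(lambda x: f(x) * Th(x) * w(x))
print(lhs, -rhs)                             # the two numbers should agree
```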
\[example2.1\] From [@5], if $d = 1$ and $W = \mathbb{Z}_2$, the Dunkl kernel is given by $$\begin{gathered}
K(z,t) = j_{\gamma-1/2} (izt) + \frac{zt}{2\gamma+1}
j_{\gamma+1/2} (izt),\qquad z, t \in
\mathbb{C},$$ where for $\alpha \geq - 1/2$, $j_\alpha$ is the normalized Bessel function defined by $$\begin{gathered}
j_\alpha(u) = 2^\alpha \Gamma(\alpha + 1)
\frac{J_\alpha(u)}{u^\alpha} = \Gamma(\alpha +1) \sum^\infty_{n=0}
\frac{(-1)^n(u/2)^{2n}}{n!\Gamma(n+\alpha +1)},
\qquad u \in \mathbb{C},$$ with $J_\alpha$ being the Bessel function of first kind and index $\alpha$ (see [@16]).
The Dunkl kernel possesses the following properties.
- For $z, t \in \mathbb{C}^d$, we have $K(z,t) = K(t,z)$, $K(z,0) = 1$, and $K(\lambda z,t) = K(z, \lambda t)$ for all $\lambda \in \mathbb{C}$.
- For all $\nu \in \mathbb{Z}^d_+$, $x \in
\mathbb{R}^d$, and $z \in \mathbb{C}^d$ we have $$\begin{gathered}
|D^\nu_z K(x,z)| \leq \|x\|^{|\nu|} \exp \left[\max_{w \in W}
\langle wx, {\rm Re}\, z\rangle\right].\label{eq2.11}\end{gathered}$$ with $$D^\nu_z = \frac{\partial^{|\nu|}} {\partial z_1^{\nu_1} \cdots
\partial z^{\nu_d}_d } \qquad \mbox{and} \qquad |\nu| =
\nu_1 +
\cdots + \nu_d.$$ In particular $$\begin{gathered}
|D^\nu_z K(x,z)|
\leq \|x\|^{|\nu|} \exp [\|x\| \|{\rm Re}\, z\|],\label{eq2.12} \\
|K(x,z)| \leq \exp [\|x\| \|{\rm Re}\, z\|],\nonumber$$ and for all $x, y \in \mathbb{R}^d$ $$\begin{gathered}
|K(ix,y)| \leq 1, \label{eq2.14}\end{gathered}$$
- For all $x, y \in \mathbb{R}^d$ and $w \in W$ we have $$\begin{gathered}
K(-ix, y) = \overline{K(ix, y)} \qquad \mbox{and} \qquad K(wx, wy)
=
K(x,y). \end{gathered}$$
- The function $K(x,z)$ admits for all $x \in
\mathbb{R}^d$ and $z \in \mathbb{C}^d$ the following Laplace type integral representation $$\begin{gathered}
K(x,z) = \int_{\mathbb{R}^d} e^{\langle y,z\rangle}d\mu_x(y),
\label{eq2.16}\end{gathered}$$ where $\mu_x$ is the measure given by the relation \eqref{eq1.3} (see [@14]).
\[remark2.1\] When $d = 1$ and $W = \mathbb{Z}_2$, the relation \eqref{eq2.16} takes the form $$\begin{gathered}
K(x,z) = \frac{\Gamma(\gamma+1/2)}
{\sqrt{\pi}\Gamma(\gamma)}|x|^{-2\gamma}\int^{|x|}_{-|x|}(|x| -
y)^{\gamma-1}(|x| + y)^\gamma e^{yz}dy.$$ Then in this case the measure $\mu_x$ is given for all $x \in
\mathbb{R}\backslash \{0\}$ by $d\mu_x(y) = \mathcal{K}(x,y) dy$ with $$\begin{gathered}
\mathcal{K}(x,y) =
\frac{\Gamma(\gamma+1/2)}{\sqrt{\pi}\Gamma(\gamma)}|x|^{-2\gamma}(|x|
-y)^{\gamma-1}(|x|+y)^\gamma 1_{]{-}|x|, |x|[}(y),$$ where $1_{]{-}|x|, |x|[}$ is the characteristic function of the interval $]{-}|x|, |x|[$.
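The two expressions of the rank-one kernel can be cross-checked numerically; the sketch below (our own check) compares the Bessel form of Example \[example2.1\] with the integral against the density $\mathcal{K}(x,y)$ for $x>0$ and real $z$.

```python
import numpy as np
from scipy.special import iv, gamma
from scipy.integrate import quad

g = 1.5                                  # multiplicity gamma, chosen > 1 so that the
                                         # integrand below has no endpoint singularity

def K_series(x, z):
    """K(x,z) = j_{g-1/2}(ixz) + xz/(2g+1) j_{g+1/2}(ixz) for real x, z."""
    u = x * z
    if u == 0.0:
        return 1.0
    j = lambda a: gamma(a + 1.0) * (2.0 / abs(u))**a * iv(a, abs(u))   # j_a(iu)
    return j(g - 0.5) + u / (2.0*g + 1.0) * j(g + 0.5)

def K_integral(x, z):
    """K(x,z) from the density mathcal{K}(x,y) of Remark 2.1 (take x > 0)."""
    c = gamma(g + 0.5) / (np.sqrt(np.pi) * gamma(g)) * abs(x)**(-2.0*g)
    val, _ = quad(lambda y: (abs(x) - y)**(g - 1.0) * (abs(x) + y)**g * np.exp(y*z),
                  -abs(x), abs(x))
    return c * val

print(K_series(0.8, 1.7), K_integral(0.8, 1.7))   # the two values should agree
```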
The Dunkl transform {#sec3}
===================
In this section we define the Dunkl transform and we give the main results satisfied by this transform which will be used in the following sections (see [@5; @9; @10]).
[**Notation.**]{} We denote by $\mathbb{H}(\mathbb{C}^d)$ the space of entire functions on $\mathbb{C}^d$ which are rapidly decreasing and of exponential type. We equip this space with the classical topology.
The Dunkl transform of a function $f$ in $\mathcal{S}(\mathbb{R}^d)$ is given by $$\begin{gathered}
\forall\; y \in \mathbb{R}^d,\qquad \mathcal{F}_D(f)(y) =
\int_{\mathbb{R}^d}f(x)K(x, -iy)\omega_k(x)dx.\label{eq3.1}\end{gathered}$$ This transform satisfies the relation $$\begin{gathered}
\mathcal{F}_D(f) = \mathcal{F}\circ {}^tV_k(f),\qquad f \in
\mathcal{S}(\mathbb{R}^d),\label{eq3.2}\end{gathered}$$ where $\mathcal{F}$ is the classical Fourier transform on $\mathbb{R}^d$ given by $$\begin{gathered}
\forall\; y \in \mathbb{R}^d,\qquad \mathcal{F}(f)(y) =
\int_{\mathbb{R}^d}f(x) e^{-i\langle x,y\rangle}dx,\qquad f\in
\mathcal{S}(\mathbb{R}^d).$$ The following theorems are proved in [@9; @10].
\[theorem3.1\] The transform $\mathcal{F}_D$ is a topological isomorphism
- from $\mathcal{D}(\mathbb{R}^d)$ onto $\mathbb{H}(\mathbb{C}^d)$,
- from $\mathcal{S}(\mathbb{R}^d)$ onto itself.
The inverse transform is given by $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad \mathcal{F}^{-1}_D(h)(x) =
\frac{c^2_k}{2^{2\gamma+d}}\int_{\mathbb{R}^d}h(y)K(x,iy)
\omega_k(y)dy.\label{eq3.4}\end{gathered}$$
\[remark3.1\] Another proof of Theorem \[theorem3.1\] is given in [@17].
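A convenient numerical test of the definition \eqref{eq3.1} in the rank-one case is provided by the Gaussian, which the Dunkl transform maps to a Gaussian; this standard fact is not stated above, so the sketch below (ours) should be read as an external cross-check: for $d=1$ and $f(x)=e^{-x^{2}/2}$ one has $\mathcal{F}_D(f)(y)=2^{\gamma+1/2}\Gamma(\gamma+\tfrac12)\,e^{-y^{2}/2}$.

```python
import numpy as np
from scipy.special import jv, gamma
from scipy.integrate import quad

g = 0.8                                              # multiplicity gamma

def j_norm(a, u):
    """Normalized Bessel function j_a(u) = 2^a Gamma(a+1) J_a(u)/u^a (even in u)."""
    u = abs(u)
    return 1.0 if u == 0.0 else gamma(a + 1.0) * (2.0 / u)**a * jv(a, u)

def dunkl_transform_gaussian(y):
    """F_D(e^{-x^2/2})(y) for d = 1: the odd (imaginary) part of K(x,-iy) integrates
    to zero against the even integrand, so only j_{g-1/2}(xy) contributes."""
    val, _ = quad(lambda x: np.exp(-x**2/2) * x**(2*g) * j_norm(g - 0.5, x*y),
                  0.0, np.inf, limit=200)
    return 2.0 * val

for y in (0.0, 0.7, 1.9):
    print(dunkl_transform_gaussian(y),
          2.0**(g + 0.5) * gamma(g + 0.5) * np.exp(-y**2/2))   # should agree
```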
When the multiplicity function satisfies $k(\alpha)\in
\mathbb{N}$ for all $\alpha \in R_+$, M.F.E. de Jeu has proved in [@10] the following geometrical form of Paley–Wiener theorem for functions.
\[theorem3.2\] Let $E$ be a $W$-invariant compact convex set of $
\mathbb{R}^d$ and $f$ an entire function on $ \mathbb{C}^d$. Then $f$ is the Dunkl transform of a function in $ \mathcal{D}(
\mathbb{R}^d)$ with support in $E$, if and only if for all $q \in
\mathbb{N}$ there exists a positive constant $C_q$ such that $$\begin{gathered}
\forall \; z \in \mathbb{C}^d, \qquad |f(z)| \leq C_q
(1+||z||)^{-q}
e^{I_E({\rm Im}\,z)},$$ where $I_E$ is the gauge associated to the polar of $E$, given by $$\begin{gathered}
\forall \; y \in \mathbb{R}^d, \qquad I_E(y) = \sup_{x\in E}
\langle x,y\rangle .\label{eq3.6}\end{gathered}$$
The Dunkl convolution product and the Dunkl transform\
of distributions {#sec4}
======================================================
The Dunkl translation operators and the Dunkl convolution product\
of functions {#sec4.1}
------------------------------------------------------------------
The definitions and properties of the Dunkl translation operators and the Dunkl convolution product of functions presented in this subsection are given in the seventh section of [@17 pages 33–37].
The Dunkl translation operators $\tau_x$, $x \in \mathbb{R}^d$, are defined on $\mathcal{E}(\mathbb{R}^d)$ by $$\begin{gathered}
\forall\; y \in
\mathbb{R}^d, \qquad \tau_x f(y) = (V_k)_x(V_k)_y[V^{-1}_k
(f)(x+y)].\label{eq4.1}\end{gathered}$$ For $f$ in $\mathcal{S}(\mathbb{R}^d)$ the function $\tau_x f$ can also be written in the form $$\begin{gathered}
\forall\; y \in \mathbb{R}^d, \qquad \tau_x f(y) = (V_k)_x
({}^tV^{-1}_k)_y [{}^tV_k(f)(x+y)].\label{eq4.2}\end{gathered}$$
Using properties of the operators $V_k$ and ${}^tV_k$ we deduce that for $f$ in $\mathcal{D}(\mathbb{R}^d)$ (resp. $\mathcal{S}(\mathbb{R}^d))$ and $x \in \mathbb{R}^d$, the function $y \rightarrow \tau_x f(y)$ belongs to $\mathcal{D}(\mathbb{R}^d)$ (resp. $\mathcal{S}(\mathbb{R}^d))$ and we have $$\begin{gathered}
\forall\; t \in \mathbb{R}^d,\qquad \mathcal{F}_D(\tau_x f)(t) =
K(ix, t) \mathcal{F}_D(f)(t).\label{eq4.3}\end{gathered}$$
The Dunkl convolution product of $f$ and $g$ in $\mathcal{D}(\mathbb{R}^d)$ is the function $f\ast_D g$ defined by $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad f \ast_D g(x) =
\int_{\mathbb{R}^d}\tau_x f(-y) g(y) \omega_k(y)dy.
$$ For $f$, $g$ in $\mathcal{D}(\mathbb{R}^d)$ (resp. $\mathcal{S}(\mathbb{R}^d))$ the function $f\ast_Dg$ belongs to $\mathcal{D}(\mathbb{R}^d)$ (resp. $\mathcal{S}(\mathbb{R}^d))$ and we have $$\begin{gathered}
\forall\; t \in \mathbb{R}^d,\qquad \mathcal{F}_D(f\ast_D g)(t) =
\mathcal{F}_D(f)(t)
\mathcal{F}_D(g)(t).$$
The Dunkl convolution product of tempered distributions {#sec4.2}
-------------------------------------------------------
\[definition4.1\] Let $S$ be in $\mathcal{S}'(\mathbb{R}^d)$ and $\varphi$ in $\mathcal{S}(\mathbb{R}^d)$. The Dunkl convolution product of $S$ and $\varphi$ is the function $S\ast_D \varphi$ defined by $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad
S\ast_D\varphi(x) = \langle S_y,
\tau_x \varphi(-y)\rangle.$$
\[proposition4.1\] For $S$ in $\mathcal{S}'(\mathbb{R}^d)$ and $\varphi$ in $\mathcal{S}(\mathbb{R}^d)$ the function $S \ast_D \varphi$ belongs to $\mathcal{E}(\mathbb{R}^d)$ and we have $$\begin{gathered}
T^\mu
(S\ast_D \varphi) = S \ast_D(T^\mu (\varphi)),$$ where $$\begin{gathered}
T^\mu = T_1^{\mu_1}\circ T_2^{\mu_2} \circ \cdots \circ
T_d^{\mu_d} \qquad \mbox{with} \quad \mu = (\mu_1,
\mu_2,\dots,\mu_d) \in \mathbb{N}^d.\end{gathered}$$
We remark first that the topology of $\mathcal{S}(\mathbb{R}^d)$ is also generated by the seminorms $$\begin{gathered}
Q_{k,l}(\psi) = \sup_{\substack{|\mu| \leq k\\ x \in
\mathbb{R}^d}}\big(1+||x||^2\big)^{l}|T^{\mu}\psi(x)|, \qquad k,l
\in \mathbb{N}.\end{gathered}$$
i\) Let $x_0 \in \mathbb{R}^d$. We prove first that $S\ast_D\varphi$ is continuous at $x_0$. We have $$\forall\; x \in \mathbb{R}^d, \qquad S \ast_D\varphi(x) -
S\ast_D\varphi(x_0) = \langle S_y, (\tau_x \varphi -
\tau_{x_0}\varphi)(-y)\rangle.$$ We must prove that $(\tau_x\varphi - \tau_{x_0}\varphi)$ converges to zero in $\mathcal{S}(\mathbb{R}^d)$ when $x$ tends to $x_0$.
Let $k, \ell \in \mathbb{N}$ and $\mu \in \mathbb{N}^d$ such that $|\mu| \leq k$. From , Theorem \[theorem3.1\] and the relations , we have $$\begin{gathered}
\big(1 +\|y\|^2\big)^\ell T^\mu (\tau_x \varphi - \tau_{x_0}\varphi)(-y)
\\\qquad{} = \frac{i^{|\mu|}c^2_k}{2^{2\gamma+d}}
\int_{\mathbb{R}^d}(1+\|\lambda\|^2)^p K(i\lambda,
-y)(I-\Delta_k)^\ell \Big[\lambda^\mu(K(-ix,\lambda)\\
\qquad{}- K(-ix_0,\lambda)) \mathcal{F}_D(\varphi)(\lambda)\Big]
\frac{\omega_k(\lambda)}{(1+\|\lambda\|^2)^p}d\lambda,\end{gathered}$$ with $\lambda^\mu = \lambda_1^{\mu_1}
\lambda_2^{\mu_2}\cdots\lambda^{\mu_d}_d,$ $\Delta_k =
\sum^d_{j=1}T_j^2$ the Dunkl Laplacian and $p \in \mathbb{N}$ such that $p > \gamma + \frac{d}{2}+1$.
Using and we deduce that $$\begin{gathered}
Q_{k,\ell}(\tau_x \varphi - \tau_{x_0}\varphi)
=\sup_{\substack{|\mu| \leq k\\ y \in \mathbb{R}^d}} (1
+\|y\|)^\ell |T^\mu (\tau_x\varphi - \tau_{x_0}\varphi)(-y)|
\rightarrow 0 \qquad \mbox{as} \quad x \rightarrow x_0.\end{gathered}$$ Then the function $S
*_D \varphi$ is continuous at $x_0$, and thus it is continuous on $ \mathbb{R}^d$.
Now we will prove that $S
\ast_D\varphi$ admits a partial derivative on $\mathbb{R}^d$ with respect to the variable $x_j$. Let $h \in \mathbb{R}\backslash
\{0\}$. We consider the function $f_h$ defined on $\mathbb{R}^d$ by $$\begin{gathered}
f_h(y)
=\frac{1}{h}\big(\tau_{(x_1,\dots,x_j+h,\dots,x_d)}\varphi(-y) -
\tau_{(x_1,\dots,x_j,\dots,x_d)} \varphi(-y)\big) -
\frac{\partial}{\partial x_j} \tau_x \varphi(-y).\end{gathered}$$ Using the formula $$\begin{gathered}
\forall\; y \in \mathbb{R}^d, f_h(y) = \frac{1}{h}
\int^{x_j+h}_{x_j}\left(\int^{u_j}_{x_j}
\frac{\partial^2}{\partial t^2_j}\tau_{(x_1,\dots,t_j,\dots,x_d)}
\varphi(-y)dt_j\right)du_j,\end{gathered}$$ we obtain for all $k, \ell \in \mathbb{N}$ and $\mu \in
\mathbb{N}^d$ such that $|\mu| \leq k$: $$\begin{gathered}
\forall\; y \in \mathbb{R}^d,\qquad
(1+\|y\|^2)^\ell
T^\mu f_h(y) \nonumber\\
\qquad\qquad\qquad{}=
\frac{1}{h}\int_{x_j}^{x_j+h}\left(\int^{u_j}_{x_j}\big(1+\|y\|^2\big)^\ell
T^{\mu}\frac{\partial^2}{\partial
t^2_j}\tau_{(x_1,\dots,t_j,x_d)} \varphi(-y)dt_j\right)du_j.
\label{eq4.8}\end{gathered}$$ By applying the preceding method to the function $$\begin{gathered}
\big(1+\|y\|^2\big)^\ell T^\mu \frac{\partial^2}{\partial
t^2_j}\tau_{(x_1,\dots,t_j,\dots,x_d)} \varphi(-y),\end{gathered}$$ we deduce from the relation that $$\begin{gathered}
Q_{k,\ell}(f_h) = \sup_{\substack{|\mu| \leq k \\ y \in
\mathbb{R}^d}} \big(1+\|y\|^2\big)^\ell |T^\mu f_h(y)| \rightarrow
0\qquad \mbox{as} \quad h \rightarrow 0.\end{gathered}$$ Thus the function $S \ast_D
\varphi(x)$ admits a partial derivative at $x_0$ with respect to $x_j$ and we have $$\begin{gathered}
\frac{\partial}{\partial x_j} S\ast_D \varphi(x_0) = \langle S_y,
\frac{\partial}{\partial x_j}\tau_{x_0}
\varphi(-y)\rangle .$$ This result holds at every point of $\mathbb{R}^d$. Moreover the partial derivatives are continuous on $ \mathbb{R}^d$. By proceeding in a similar way for the partial derivatives of all orders with respect to all variables, we deduce that $S*_D \varphi$ belongs to $ \mathcal{E}( \mathbb{R}^d)$.
ii\) From part i) we have $$\begin{gathered}
\forall \; x \in \mathbb{R}^d, \qquad \frac{\partial}{\partial
x_j} S*_D \varphi (x) = \langle S_y,\frac{\partial}{\partial x_j}
\tau_x \varphi (-y) \rangle.\end{gathered}$$ On the other hand using the definition of the Dunkl operator $T_j$ and the relation $$\begin{gathered}
T_j(\tau_x \varphi (-y)) = \tau_x (T_j\varphi) (-y),\end{gathered}$$ we obtain $$\begin{gathered}
\forall \; x \in \mathbb{R}^d,
\qquad T_j( S*_D \varphi )(x) = \langle S_y, \tau_x (T_j\varphi)
(-y) \rangle = S*_D (T_j \varphi)(x).\end{gathered}$$ By iteration we get $$\begin{gathered}
\forall \; x \in \mathbb{R}^d, \qquad T^\mu( S*_D \varphi )(x) =
S*_D (T^\mu \varphi)(x).\tag*{\qed}\end{gathered}$$
The Dunkl transform of distributions {#sec4.3}
------------------------------------
\[definition4.2\]
- The Dunkl transform of a distribution $S$ in $\mathcal{S}'(\mathbb{R}^d)$ is defined by $$\begin{gathered}
\langle \mathcal{F}_D(S), \psi\rangle = \langle S,
\mathcal{F}_D(\psi)\rangle, \psi \in
\mathcal{S}(\mathbb{R}^d).$$
- We define the Dunkl transform of a distribution $S$ in $\mathcal{E}'(\mathbb{R}^d)$ by $$\begin{gathered}
\forall\; y \in \mathbb{R}^d,\qquad \mathcal{F}_D(S)(y) = \langle
S_x, K(-iy,x)\rangle.\label{eq4.11}\end{gathered}$$
\[remark4.1\] When the distribution $S$ in $\mathcal{E}'(\mathbb{R}^d)$ is given by the function $g \omega_k$ with $g$ in $\mathcal{D}(\mathbb{R}^d)$, and denoted by $T_{g \omega_k}$, the relation \eqref{eq4.11} coincides with \eqref{eq3.1}.
[**Notation.**]{} We denote by $\mathcal{H}(\mathbb{C}^d)$ the space of entire functions on $\mathbb{C}^d$ which are slowly increasing and of exponential type. We equip this space with the classical topology.
The following theorem is given in [@17 page 27].
\[theorem4.1\] The transform $\mathcal{F}_D$ is a topological isomorphism
- from $\mathcal{S}'(\mathbb{R}^d)$ onto itself;
- from $\mathcal{E}'(\mathbb{R}^d)$ onto $\mathcal{H}(\mathbb{C}^d)$.
\[theorem4.2\] Let $S$ be in $\mathcal{S}'(\mathbb{R}^d)$ and $\varphi$ in $\mathcal{S}(\mathbb{R}^d)$. Then, the distribution on $\mathbb{R}^d$ given by $(S\ast_D \varphi)\omega_k$ belongs to $\mathcal{S}'(\mathbb{R}^d)$ and we have $$\begin{gathered}
\mathcal{F}_D(T_{(S\ast_D \varphi)\omega_k})=
\mathcal{F}_D(\varphi)\mathcal{F}_D(S).\label{eq4.12}\end{gathered}$$
i\) As $S$ belongs to $\mathcal{S}'(\mathbb{R}^d)$ then there exists a positive constant $C_0$ and $k_0, \ell_0 \in \mathbb{N}$ such that $$\begin{gathered}
|S\ast_D\varphi(x)| = |\langle S_y,
\tau_x\varphi(-y)\rangle|\leq C_0
Q_{k_0,\ell_0}(\tau_x\varphi).\label{eq4.13}\end{gathered}$$
But by using the inequality $$\begin{gathered}
\forall\; x, y \in \mathbb{R}^d,\qquad 1 +\|x+y\|^2 \leq
2\big(1+\|x\|^2\big)\big(1+\|y\|^2\big),\end{gathered}$$ the relations , and the properties of the operator $^{t}V_k$ (see Theorem 3.2 of [@17]), we deduce that there exists a positive constant $C_1$ and $k, \ell \in \mathbb{N}$ such that $$\begin{gathered}
Q_{k_0,\ell_0}(\tau_x\varphi) \leq
C_1\big(1+\|x\|^2\big)^{\ell_0}Q_{k,\ell}(\varphi).$$ Thus from \eqref{eq4.13} we obtain $$\begin{gathered}
|S\ast_D \varphi(x)| \leq C\big(1+\|x\|^2\big)^{\ell_0}
Q_{k,\ell}(\varphi), \label{eq4.15}\end{gathered}$$ where $C$ is a positive constant. This inequality shows that the distribution on $\mathbb{R}^d$ associated with the function $(S*_{D} \varphi)\omega_k$ belongs to $\mathcal{S}'(\mathbb{R}^d)$.
ii\) Let $\psi$ be in $\mathcal{S}(\mathbb{R}^d)$. We shall prove first that $$\begin{gathered}
\langle T_{(S\ast_D \varphi)\omega_k},\psi \rangle = \langle
\check{S}, \varphi \ast_D \check{\psi}\rangle ,\label{eq4.16}\end{gathered}$$ where $ \check{S}$ is the distribution in $ \mathcal{S'}(
\mathbb{R}^d)$ given by $$\begin{gathered}
\langle \check{S},\phi\rangle = \langle
S, \check{\phi} \rangle,\end{gathered}$$ with $$\begin{gathered}
\forall\;x \in \mathbb{R}^d,\qquad \check{ \phi}(x) = \phi(-x).\end{gathered}$$ We consider the two sequences $\{\varphi_n\}_{n \in \mathbb{N}}$ and $\{\psi_m\}_{m
\in \mathbb{N}}$ in $\mathcal{D}(\mathbb{R}^d)$ which converge respectively to $\varphi$ and $\psi$ in $\mathcal{S}(\mathbb{R}^d)$. We have $$\begin{gathered}
\langle T_{(S\ast_D \varphi_n)\omega_k}, \psi_m\rangle = \int_{
\mathbb{R}^d} \langle S_y, \tau_{x}\varphi_n(-y)\rangle \psi_m
(x)\omega_k(x)dx,\\
\phantom{\langle T_{(S\ast_D \varphi_n)\omega_k}, \psi_m\rangle
}{} = \langle S_y, \int_{\mathbb{R}^d}\!\!\psi_m(x)
\tau_x\varphi_n(-y)\omega_k(x) dx\rangle = \langle S_y,
\int_{\mathbb{R}^d}\!\!\check{\psi}_m(x) \tau_{-x} \varphi_n(-y)
\omega_k(x) dx\rangle.\end{gathered}$$
Thus $$\begin{gathered}
\langle T_{(S\ast_D \varphi_n)\omega_k},\psi_m\rangle =
\langle \check{S},\varphi_n *_D \check{\psi}_m\rangle.\label{eq4.17}\end{gathered}$$ But $$\begin{gathered}
\langle T_{(S\ast_D \varphi_n)\omega_k}, \psi_m\rangle -
\langle T_{(S\ast_D \varphi)\omega_k}, \psi_m\rangle
= \int_{\mathbb{R}^d} \check{S}\ast_D(\varphi_n - \varphi)(x) \check{\psi}_m(x)
\omega_k(x)dx.\end{gathered}$$ Thus from \eqref{eq4.15} there exist a positive constant $M$ and $k, \ell \in
\mathbb{N}$ such that $$\begin{gathered}
|\langle T_{(S\ast_D \varphi_n)\omega_k},\psi_m\rangle - \langle
T_{(S\ast_D \varphi)\omega_k},
\psi_m\rangle | \leq M Q_{k,\ell}
(\varphi_n-\varphi).\end{gathered}$$ Thus $$\begin{gathered}
\langle T_{(S\ast_D \varphi_n)\omega_k}, \psi_m\rangle
\underset{n\rightarrow + \infty}{\longrightarrow}
\langle T_{(S\ast_D \varphi)\omega_k},\psi_m\rangle.\label{eq4.18}\end{gathered}$$ On the other hand we have $$\begin{gathered}
\langle T_{(S\ast_D \varphi)\omega_k},\psi_m\rangle
\underset{m\rightarrow + \infty}{\longrightarrow}
\langle T_{(S\ast_D \varphi)\omega_k},\psi
\rangle,\label{eq4.19}\end{gathered}$$ and $$\begin{gathered}
\varphi_n *_D \check{\psi}_m \underset{\substack{n\rightarrow +
\infty\\ m \rightarrow + \infty}}{\longrightarrow}
\varphi
*_D\check{\psi},\label{eq4.20}
\end{gathered}$$ where the limit is taken in $ \mathcal{S}( \mathbb{R}^d)$.
Combining \eqref{eq4.17} with the limits \eqref{eq4.18}, \eqref{eq4.19} and \eqref{eq4.20}, we deduce \eqref{eq4.16}.
We prove now the relation \eqref{eq4.12}. Using \eqref{eq4.16} we obtain for all $\psi$ in $\mathcal{S}(\mathbb{R}^d)$ $$\begin{gathered}
\langle \mathcal{F}_D(T_{(S\ast_D \varphi)\omega_k}),\psi\rangle =
\langle T_{(S\ast_D \varphi)\omega_k}, \mathcal{F}_D(\psi)\rangle,
= \langle \check{S}, \varphi*_D
{(\mathcal{F}_D(\psi))}\check{}\rangle.
\end{gathered}$$ But $$\begin{gathered}
\varphi*_D {(\mathcal{F}_D(\psi))}\check{} =
(\mathcal{F}_D[\mathcal{F}_D(\varphi)\psi])\check{}.\end{gathered}$$ Thus $$\begin{gathered}
\langle \breve{S}, \varphi*_D (\mathcal{F}_D(\psi))\check{}\rangle
= \langle S, \mathcal{F}_D[\mathcal{F}_D(\varphi)\psi]\rangle, =
\langle \mathcal{F}_D(\varphi) \mathcal{F}_D(S), \psi\rangle.\end{gathered}$$ Then $$\begin{gathered}
\langle \mathcal{F}_D(T_{(S\ast_D \varphi)\omega_k}),\psi\rangle =
\langle \mathcal{F}_D(\varphi)\mathcal{F}_D(S),\psi\rangle .\end{gathered}$$ This completes the proof of \eqref{eq4.12}.
We consider the positive function $\varphi$ in $\mathcal{D}(\mathbb{R}^d)$ which is radial for $d \geq 2$ and even for $d = 1$, with support in the closed ball of center $0$ and radius $1$, satisfying $$\begin{gathered}
\int_{\mathbb{R}^d}\varphi(x) \omega_k(x)dx = 1,\end{gathered}$$ and $\phi$ the function on $[0, + \infty[$ given by $$\begin{gathered}
\varphi(x) = \phi(\|x\|) = \phi(r) \qquad \mbox{with}\quad r =
\|x\|.\end{gathered}$$ For $\varepsilon \in ]0,1]$, we denote by $\varphi_\varepsilon$ the function on $\mathbb{R}^d$ defined by $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad \varphi_\varepsilon(x) =
\frac{1}{\varepsilon^{2\gamma+d}}
\phi(\frac{\|x\|}{\varepsilon}).\label{eq4.21}\end{gathered}$$ This function satisfies the following properties:
- Its support is contained in the closed ball $B_\varepsilon$ of center $0$ and radius $\varepsilon$.
- From [@13 pages 585–586] we have $$\begin{gathered}
\forall\; y \in \mathbb{R}^d,\qquad
\mathcal{F}_D(\varphi_\varepsilon)(y) = \frac{2^{\gamma +
\frac{d}{2}}}{c_k} \mathcal{F}_B^{\gamma +
\frac{d}{2}-1}(\phi)(\varepsilon\|y\|),\label{eq4.22}\end{gathered}$$ where $\mathcal{F}_B^{\gamma + \frac{d}{2}-1}(f)(\lambda)$ is the Fourier–Bessel transform given by $$\begin{gathered}
\forall\; \lambda \in \mathbb{R}, \qquad \mathcal{F}^{\gamma +
\frac{d}{2}-1}_B(f)(\lambda) = \int^\infty_0f(r)j_{\gamma +
\frac{d}{2}-1}(\lambda r) \frac{r^{2\gamma +d-1}}{2^{\gamma +
\frac{d}{2}}\Gamma\left(\gamma +
\frac{d}{2}\right)}dr,\label{eq4.23}\end{gathered}$$ with $j_{\gamma +
\frac{d}{2}-1}(\lambda r)$ the normalized Bessel function.
- There exists a positive constant $M$ such that $$\begin{gathered}
\forall\; y \in \mathbb{R}^d,\qquad
|\mathcal{F}_D(\varphi_\varepsilon)(y)-1| \leq \varepsilon
M\|y\|^2.\label{eq4.24}\end{gathered}$$
\[theorem4.3\] Let $S$ be in $\mathcal{S}'(\mathbb{R}^d)$. We have $$\begin{gathered}
\lim_{\varepsilon \rightarrow 0}(S\ast_D\phi_\varepsilon)\omega_k
= S,\label{eq4.25}\end{gathered}$$ where the limit is in $\mathcal{S}'(\mathbb{R}^d)$.
We deduce \eqref{eq4.25} from \eqref{eq4.12}, \eqref{eq4.24} and Theorem \[theorem4.1\].
\[definition4.3\] Let $S_1$ be in $\mathcal{S}'(\mathbb{R}^d)$ and $S_2$ in $\mathcal{E}'(\mathbb{R}^d)$. The Dunkl convolution product of $S_1$ and $S_2$ is the distribution $S_1 \ast_D S_2$ on $\mathbb{R}^d$ defined by $$\begin{gathered}
\langle S_1 \ast_D S_2,\psi\rangle = \langle S_{1,x}, \langle
S_{2,y}, \tau_x\psi(y) \rangle\rangle,\qquad \psi \in
\mathcal{D}(\mathbb{R}^d).\label{eq4.26}\end{gathered}$$
\[remark4.2\] The relation \eqref{eq4.26} can also be written in the form $$\begin{gathered}
\langle S_1 \ast_D S_2,\psi\rangle = \langle S_1, \check{S}_2
\ast_D \psi\rangle.\label{eq4.27}\end{gathered}$$
\[theorem4.4\] Let $S_1$ be in $\mathcal{S}'(\mathbb{R}^d)$ and $S_2$ in $\mathcal{E}'(\mathbb{R}^d)$. Then the distribution $S_1 \ast_D
S_2$ belongs to $\mathcal{S}'(\mathbb{R}^d)$ and we have $$\begin{gathered}
\mathcal{F}_D(S_1 \ast_D S_2) =
\mathcal{F}_D(S_2)\cdot \mathcal{F}_D(S_1).$$
We deduce the result from \eqref{eq4.27}, the relation $$\begin{gathered}
T_{(\check{S_2}*_D \mathcal{F}_D(\psi))\omega_k } = \check{S_2}*_D
T_{\mathcal{F}_D(\psi)\omega_k },\end{gathered}$$ and Theorem \[theorem4.2\].
Another proof of the geometrical form\
of the Paley–Wiener–Schwartz theorem for the Dunkl transform {#sec4.4}
------------------------------------------------------------
In this subsection we suppose that the multiplicity function satisfies $k(\alpha) \in \mathbb{N}
\backslash \{0\}$ for all $\alpha \in R_+$.
The main result is to give another proof of the geometrical form of Paley–Wiener–Schwartz theorem for the transform $\mathcal{F}_D$, given in [@17 pages 23–33].
\[theorem4.5\] Let $E$ be a $W$-invariant compact convex set of $\mathbb{R}^d$ and $f$ an entire function on $\mathbb{C}^d$. Then $f$ is the Dunkl transform of a distribution in $\mathcal{E}'(\mathbb{R}^d)$ with support in $E$ if and only if there exist a positive constant $C$ and $N \in \mathbb{N}$ such that $$\begin{gathered}
\forall\; z \in \mathbb{C}^d, \qquad |f(z)| \leq C (1 + \|z\|^2)^N
e^{I_E({\rm Im}\, z)},\label{eq4.29}\end{gathered}$$ where $I_E$ is the function given by .
*Necessary condition.* We consider a distribution $S$ in $\mathcal{E}'(\mathbb{R}^d)$ with support in $E$.
Let $\chi$ be in $\mathcal{D}(\mathbb{R}^d)$ equal to $1$ in a neighborhood of $E$, and $\theta$ in $\mathcal{E}(\mathbb{R})$ such that $$\theta(t) = \left\{ \begin{array}{ll}
1, &\mbox{if} \ t \leq 1,\\
0, &\mbox{if} \ t > 2.
\end{array}\right.$$ We put $\eta = {\rm Im}\, z$, $z \in \mathbb{C}^d$ and we take $\varepsilon > 0$. We denote by $\psi_z$ the function defined on $\mathbb{R}^d$ by $$\psi_z(x) = \chi(x) K(-ix,z)
|W|^{-1}\sum_{w \in W}
\theta(\|z\|^\varepsilon (\langle w x,\eta\rangle -
I_E(\eta))).$$ This function belongs to $\mathcal{D}(\mathbb{R}^d)$ and as $E$ is $W$-invariant, then it is equal to $K(-ix,z)$ in a neighborhood of $E$. Thus $$\begin{gathered}
\forall\; z \in \mathbb{C}^d,\qquad \mathcal{F}_D(S)(z)
= \langle S_x, \psi_z(x)\rangle. \end{gathered}$$ As $S$ has compact support, it is of finite order $N$. Then there exists a positive constant $C_0$ such that $$\begin{gathered}
\forall\; z \in \mathbb{C}^d, \qquad |\mathcal{F}_D(S)(z)| \leq
C_0 \sum_{|p| \leq N}
\sup_{x \in \mathbb{R}^d}|D^p \psi_z(x)|.\label{eq4.31}
\end{gathered}$$ Using the Leibniz rule, we obtain $$\begin{gathered}
\forall\; x \in \mathbb{R}^d, \qquad D^p \psi_z(x) =
\sum_{q+r+s=p}
\frac{p!}{q!r!s!} D^q \chi
(x)D^r K(-ix,z)\nonumber\\
\phantom{\forall\; x \in \mathbb{R}^d, \qquad D^p \psi_z(x) =}{}
\times D^s |W|^{-1}
\sum_{w \in W}\theta(\|z\|^\varepsilon (\langle wx, \eta\rangle
- I_E(\eta))).\label{eq4.32}
\end{gathered}$$ We have $$\begin{gathered}
\forall\; x \in \mathbb{R}^d, \qquad |D^q\chi(x)| \leq {\rm
const},\label{eq4.33}\end{gathered}$$ and if $M$ is the estimate of $\sup\limits_{t \in \mathbb{R}}
|\theta^{(k)}(t)|$, $k \leq N$, we obtain $$\begin{gathered}
\forall\; x \in \mathbb{R}^d, \qquad \left|D^s\left(\sum_{w \in W}
\theta(\|z\|^\varepsilon (\langle wx,\eta\rangle
- I_E(\eta)))\right)\right| \leq M(\|z\|^\varepsilon \|\eta\|)^{|s|}.
\label{eq4.34}\end{gathered}$$ On the other hand from we have $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad
|D^r K(-ix,z)| \leq \|z\|^r
e^{\max_{w \in W}\langle wx,\eta\rangle}.
\label{eq4.35}\end{gathered}$$ Using inequalities , , and we deduce that there exists a positive constant $C_1$ such that $$\forall\; x \in \mathbb{R}^d, \qquad |D^p \psi_z(x)|
\leq C_1
(1+\|z\|^2)^{N(1+\varepsilon)}
e^{\max_{w \in W}\langle wx,\eta\rangle}.$$ From this relation and we obtain $$\begin{gathered}
\forall\; z \in \mathbb{C}^d,\qquad |\mathcal{F}_D(S)(z)| \leq
C_2(1+\|z\|^2)^{N(1+\varepsilon)}
\sup_{x \in E} e^{\max_{w \in W}
\langle w x,\eta\rangle},\label{eq4.36}\end{gathered}$$ where $C_2$ is a positive constant, and the supremum is calculated when $\|z\| \geq 1$, for $$\langle wx, \eta\rangle \leq I_E(\eta) +
\frac{2}{\|z\|^\varepsilon},$$ because if not we have $\theta = 0$. This inequality implies $$\begin{gathered}
\sup_{x \in E} e^{\max_{w \in W}\langle w x,\eta\rangle}
\leq e^2 \cdot e^{I_E(\eta)}.\label{eq4.37}\end{gathered}$$ From , we deduce that there exists a positive constant $C_3$ independent from $\varepsilon$ such that $$\begin{gathered}
\forall\; z \in \mathbb{C}^d, \qquad \|z\| \geq 1, \qquad |
\mathcal{F}_D(S)(z)|
\leq C_3(1+\|z\|^2)^{N(1+\varepsilon)}
e^{I_E(\eta)}.$$ If we let $\varepsilon \rightarrow 0$ in this relation we obtain \eqref{eq4.29} for $\|z\| \geq 1$. But this inequality is also true (with another constant) for $\|z\| \leq 1$, because in the set $\{z \in \mathbb{C}^d,
\|z\| \leq 1\}$ the function $\mathcal{F}_D(S)(z)e^{- I_E(\eta)}$ is bounded.
*Sufficient condition.* Let $f$ be an entire function on $\mathbb{C}^d$ satisfying the condition \eqref{eq4.29}. It is clear that the distribution given by the restriction of $f\omega_k$ to $\mathbb{R}^d$ belongs to $\mathcal{S}'(\mathbb{R}^d)$. Thus from i) of Theorem \[theorem4.1\] there exists a distribution $S$ in $\mathcal{S}'(\mathbb{R}^d)$ such that $$\begin{gathered}
T_{f\omega_k} = \mathcal{F}_D(S).\label{eq4.39}\end{gathered}$$ We shall show that the support of $S$ is contained in $E$. Let $\varphi_\varepsilon$ be the function given by the relation \eqref{eq4.21}. We consider the distribution $$\begin{gathered}
T_{f_\varepsilon \omega_k} = \mathcal{F}_D
(T_{(S \ast_D \varphi_\varepsilon)
\omega_k}).\label{eq4.40}
\end{gathered}$$ From Theorem \[theorem4.2\] and \eqref{eq4.39}, we deduce that $$\begin{gathered}
f_\varepsilon = \mathcal{F}_D
(\varphi_\varepsilon)f.$$ The properties of the function $f$ and of $\mathcal{F}_D(\varphi_\varepsilon)$ show that the function $f_\varepsilon$ can be extended to an entire function on $\mathbb{C}^d$ which satisfies: for all $q \in
\mathbb{N}$ there exists a positive constant $C_q$ such that $$\begin{gathered}
\forall\; z \in \mathbb{C}^d,\qquad |f_\varepsilon(z)| \leq
C_q(1+\|z\|)^{-q}
e^{I_{E+B_\varepsilon}({\rm Im}\,z)} .\label{eq4.42}\end{gathered}$$ Then from \eqref{eq4.42} and Theorem \[theorem3.2\], the function $(S \ast_D
\varphi_\varepsilon)\omega_k$ belongs to $\mathcal{D}(\mathbb{R}^d)$ with support in $E + B_\varepsilon$. But from Theorem \[theorem4.3\], the family $(S \ast_D \varphi_\varepsilon)\omega_k$ converges to $S$ in $\mathcal{S}'(\mathbb{R}^d)$ when $\varepsilon$ tends to zero. Thus for all $\varepsilon > 0$, the support of $S$ is contained in $E + B_\varepsilon$, and hence it is contained in $E$.
\[remark4.3\] In the following we give an improved version of the proof of Proposition 6.3 of [@17 page 30].
Let $E$ be a $W$-invariant compact convex set of $\mathbb{R}^d$ and $x \in E$. The function $f(x,\cdot)$ defined on $\mathbb{C}^d$ by $$f(x,z) = e^{-i\big(\sum\limits^d_{j=1}x_jz_j\big)},$$ is entire on $\mathbb{C}^d$ and satisfies $$\forall\; z \in \mathbb{C}^d,\qquad |f(x,z)|
\leq e^{I_E({\rm Im}\, z)}.$$ Thus from Theorem \[theorem4.5\] there exists a distribution $\tilde{\eta}_x$ in $\mathcal{E}'(\mathbb{R}^d)$ with in $E$ such that $$\forall\; y \in \mathbb{R}^d, \qquad f(x,y) =
e^{-i\langle x,y\rangle} = \langle
\tilde{\eta}_x, K(-iy,\cdot )\rangle.$$ Applying now the remainder of the proof given in [@17 page 32], we deduce that the support of the representing distribution $\eta_x$ of the inverse Dunkl intertwining operator $V^{-1}_k$ is contained in $E$.
Inversion formulas for the Dunkl intertwining operator\
and its dual {#sec5}
=======================================================
The pseudo-differential operators $\boldsymbol{P}$ {#sec5.1}
--------------------------------------------------
\[definition5.1\] We define the pseudo-differential operator $P$ on $\mathcal{S}(\mathbb{R}^d)$ by $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad P(f)(x) =
\frac{\pi^d c_k^2}{2^{2\gamma}}
\mathcal{F}^{-1}[\omega_k \mathcal{F}(f)](x).
\label{eq5.1}\end{gathered}$$
\[proposition5.1\] The distribution $T_{\omega_k}$ given by the function $\omega_k$, is in ${\cal S'}(\mathbb{R}^d)$ and for all $f$ in $\mathcal{S}(\mathbb{R}^d)$ we have $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad P(f)(x) =
\frac{\pi^d c_k^2}{2^{2\gamma}}
\mathcal{F}(T_{\omega_k}) * \breve{f}(-x). \end{gathered}$$ where $*$ is the classical convolution production of a distribution and a function on $\mathbb{R}^d$.
It is clear that the distribution $T_{\omega_k}$ given by the function $\omega_k$ belongs to ${\cal S'}(\mathbb{R}^d)$. On the other hand from the relation \eqref{eq5.1} we have $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad P(f)(x) =
\frac{\pi^d c_k^2}{2^{2\gamma}}
\int_{\mathbb{R}^d}\mathcal{F}(f(\xi+x))(y)\omega_k(y)dy.\end{gathered}$$ Thus $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad P(f)(x) =
\frac{\pi^d c_k^2}{2^{2\gamma}}\langle \mathcal{F}(T_{\omega_k})_y, f(x+y)
\rangle. \label{eq5.3}\end{gathered}$$ With the definition of the classical convolution product of a distribution and a function on $\mathbb{R}^d$, the relation \eqref{eq5.3} can also be written in the form $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad P(f)(x) =
\frac{\pi^d c_k^2}{2^{2\gamma}}
\mathcal{F}(T_{\omega_k}) *
\breve{f}(-x).\tag*{{}}
\end{gathered}$$
\[proposition5.2\] For all $f$ in $\mathcal{S}(\mathbb{R}^d)$ the function $P(f)$ is of class $C^\infty$ on $\mathbb{R}^d$ and we have $$\begin{gathered}
\forall \; x \in \mathbb{R}^d, \qquad \frac{\partial}{\partial
x_j} P(f)(x) = P\left(\frac{\partial}{\partial
\xi_j}f\right)(x), \qquad j = 1, 2,\dots,d.\label{eq5.4}\end{gathered}$$
By differentiation under the integral sign, and by using the relation $$\begin{gathered}
\forall\; y \in \mathbb{R}^d,\qquad
iy_j \mathcal{F}(f)(y) = \mathcal{F}\left(\frac{\partial}
{\partial \xi_j}f\right)(y), \end{gathered}$$ we obtain \eqref{eq5.4}.
Inversion formulas for the Dunkl intertwining operator\
and its dual on the space $\boldsymbol{\mathcal{S}( \mathbb{R}^d)}$ {#sec5.2}
-------------------------------------------------------------------
\[theorem5.1\] For all $f$ in $\mathcal{S}( \mathbb{R}^d)$ we have $$\begin{gathered}
\forall\; x \in \mathbb{R}^d, \qquad {}^tV^{-1}_k(f)(x) =
V_k(P(f))(x).\label{eq5.6}\end{gathered}$$
From [@15 Theorem 4.1] for all $f$ in $\mathcal{S}( \mathbb{R}^d)$, the function ${}^tV_k^{-1}(f)$ belongs to $\mathcal{S}( \mathbb{R}^d)$. Then from Theorem \[theorem3.1\] we have $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad {}^tV_k^{-1}
(f)(x) = \frac{c^2_k}{2^{2\gamma+d}}
\int_{\mathbb{R}^d}
K(iy,x) \mathcal{F}_D({}^tV_k^{-1}(f))(y) \omega_k(y)dy.
\label{eq5.7}
\end{gathered}$$ But from the relations , , , we have $$\forall\; y \in \mathbb{R}^d,\qquad
\mathcal{F}_D({}^tV^{-1}_k(f))(y) =
\mathcal{F}(f)(y),$$ and $$\forall\; y \in \mathbb{R}^d, \qquad K(iy, x) =
\mathcal{F}( \breve{\mu}_x)(y),$$ where $\breve{\mu}_x$ is the probability measure given for a continuous function $f$ on $\mathbb{R}^d$ by $$\int_{\mathbb{R}^d}f(t)d\check{\mu}_x(t)
= \int_{\mathbb{R}^d}f(-t)d\mu_x(t).$$ Thus can also be written in the form $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad
{}^tV_k^{-1}
(f)(x) = \frac{c^2_k}{2^{2\gamma+d}}\int_{ \mathbb{R}}
\mathcal{F}(\breve{\mu}_x)(y) \omega_k(y)
\mathcal{F}(f)(y)dy.\end{gathered}$$ Then by using , the properties of the Fourier transform $\mathcal{F}$ and Fubini’s theorem we obtain $$\begin{gathered}
\forall\; x \in \mathbb{R}^d, \qquad {}^tV_k^{-1}(f)(x) =
\frac{c^2_k}{2^{2\gamma+d}}\int_{\mathbb{R}^d}
\mathcal{F}[\omega_k \mathcal{F}(f)](y) d\breve{\mu}_x(y) = \int_{\mathbb{R}^d} P(f)(y)d\mu_x(y).
\end{gathered}$$ Thus $$\begin{gathered}
\forall\; x \in \mathbb{R}^d, \qquad {}^tV_k^{-1}
(f)(x) = V_k(P(f))(x).\tag*{{}}
\end{gathered}$$
\[theorem5.2\] For all $f$ in $\mathcal{S}( \mathbb{R}^d)$ we have $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad V_k^{-1}(f)(x)
= P{}^tV_k(f)(x).
\label{eq5.8}\end{gathered}$$
We deduce the relation \eqref{eq5.8} by replacing $f$ by ${}^tV_k(f)$ in \eqref{eq5.6} and by using the fact that the operator $V_k$ is an isomorphism from $\mathcal{E}( \mathbb{R}^d)$ onto itself.
Inversion formulas for the dual Dunkl intertwining operator\
on the space $\boldsymbol{\mathcal{E}'(\mathbb{R}^d)}$ {#sec5.3}
------------------------------------------------------------
The dual Dunkl intertwining operator ${}^tV_k$ on $\mathcal{E}'(
\mathbb{R}^d)$ is defined by $$\begin{gathered}
\langle {}^tV_k(S), f\rangle = \langle S, V_k(f)\rangle,\qquad
f \in \mathcal{E}( \mathbb{R}^d).$$
The operator ${}^tV_k$ is a topological isomorphism from $\mathcal{E}'( \mathbb{R}^d)$ onto itself. The inverse operator is given by $$\begin{gathered}
\langle {}^tV^{-1}_k(S),f\rangle = \langle S,
V_k^{-1}(f)\rangle,\qquad f \in \mathcal{E}(
\mathbb{R}^d),\label{eq5.10}\end{gathered}$$ see [@17 pages 26–27].
\[theorem5.3\] For all $S$ in $\mathcal{E}'( \mathbb{R}^d)$ the operator ${}^tV_k^{-1}$ satisfies also the relation $$\begin{gathered}
\langle {}^tV_k^{-1}(S),f\rangle = \langle S, P {}^tV_k(f)\rangle,
\qquad f \in \mathcal{S}( \mathbb{R}^d).\label{eq5.11}\end{gathered}$$
We deduce \eqref{eq5.11} from \eqref{eq5.8} and \eqref{eq5.10}.
Other expressions of the inversion formulas\
for the Dunkl intertwining operator and its dual\
when the multiplicity function is integer {#sec6}
=================================================
In this section we suppose that the multiplicity function satisfies $k(\alpha) \in \mathbb{N}\backslash \{0\}$ for all $\alpha \in
R_+$. The following two Propositions give some other properties of the operator $P$ defined by .
\[proposition6.1\] Let $E$ be a compact convex set of $\mathbb{R}^d$. Then for all $f$ in $\mathcal{D}(\mathbb{R}^d)$ we have $$\begin{gathered}
\mbox{\rm supp}\, f \subset E \Rightarrow \mbox{\rm supp}\, P(f)
\subset E. $$
From the relation \eqref{eq5.1} we have $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad P(f)(x)
= \frac{\pi^d c_k^2}{2^{2\gamma}}
\int_{\mathbb{R}^d}\mathcal{F}f(y)e^{i\langle x,y
\rangle}\omega_k(y)dy.\label{eq6.2}\end{gathered}$$ We consider the function $F$ defined by $$\begin{gathered}
\forall \; z \in \mathbb{C}^d, \qquad F(z) =
\left(\prod_{ \alpha \in R_+} (\langle\alpha,z\rangle)^{2k(\alpha)}\right)
\mathcal{F}(f)(z).$$ This function is entire on $\mathbb{C}^d$ and by using Theorem 2.6 of [@1] we deduce that for all $q \in
\mathbb{N}$, there exists a positive constant $C_q$ such that $$\begin{gathered}
\forall\; z \in \mathbb{C}^d, \qquad |F(z)|
\leq C_q (1+\|z\|^2)^{-q}e^{I_E
({\rm Im}\,z)},
\label{eq6.4}\end{gathered}$$ where $I_E$ is the function given by \eqref{eq3.6}.
The relation \eqref{eq6.2} can also be written in the form $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad P(f)(x) =
\frac{\pi^d c_k^2}{2^{2\gamma}}
\int_{\mathbb{R}^d}F(y)e^{i\langle x,y\rangle}
dy.\label{eq6.5}\end{gathered}$$ Thus \eqref{eq6.4}, \eqref{eq6.5} and Theorem 2.6 of [@1] imply that $\mbox{\rm supp} \, P(f) \subset E$.
\[proposition\] For all $f$ in $\mathcal{S}(\mathbb{R}^d)$ we have $$\begin{gathered}
P(f) = \frac{\pi^d c_k^2}{2^{2\gamma}}\left[ \prod_{\alpha \in R_+
}(-1)^{k(\alpha)}\left(\alpha_1 \frac{\partial}{\partial \xi_1 }
+\cdots+ \alpha_d \frac{\partial}{\partial \xi_d
}\right)^{2k(\alpha)}\right](f). \label{eq6.6}\end{gathered}$$
For all $f$ in $\mathcal{S}(\mathbb{R}^d)$, we have $$\begin{gathered}
\forall \; y \in \mathbb{R}^d, \quad \omega_k(y) \mathcal{F}(f)(y)
= \prod_{\alpha \in
R_+}(\langle \alpha,y
\rangle)^{2k(\alpha)}\mathcal{F}(f)(y).\label{eq6.7}\end{gathered}$$ But $$\begin{gathered}
\forall\; y \in \mathbb{R}^d, \qquad \langle \alpha,y\rangle
\mathcal{F}(f)(y) = \mathcal{F}\left[- i \left(\alpha_1
\frac{\partial}{\partial \xi_1} + \cdots+ \alpha_d
\frac{\partial}{\partial \xi_d} \right)f\right](y).
\label{eq6.8}\end{gathered}$$ From \eqref{eq6.7} and \eqref{eq6.8}, we obtain $$\forall\; y \in \mathbb{R}^d, \qquad \omega_k (y)
\mathcal{F}(f)(y) = \mathcal{F}\left[\prod_{\alpha \in
R_+}(-1)^{k(\alpha)}\left(\alpha_1 \frac{\partial}{\partial \xi_1}
+ \cdots+ \alpha_d \frac{\partial}{\partial
\xi_d}\right)^{2k(\alpha)}f \right](y).$$
This relation, Definition \[definition5.1\] and the inversion formula for the Fourier transform $\mathcal{F}$ imply \eqref{eq6.6}.
\[remark6.1\] In this case the operator $P$ is not a pseudo-differential operator but it is a partial differential operator.
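To make the preceding proposition concrete, the sketch below (our own illustration) takes $d=1$, $W=\mathbb{Z}_2$, an integer multiplicity $\gamma$ and $f(x)=e^{-x^{2}/2}$: it evaluates $P(f)$ from Definition \[definition5.1\] by quadrature, using $\mathcal{F}(f)(y)=\sqrt{2\pi}\,e^{-y^{2}/2}$, and compares the result with the differential expression \eqref{eq6.6}, whose even-order derivatives of the Gaussian are Hermite polynomials.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad
from numpy.polynomial.hermite_e import hermeval

g = 2                                     # integer multiplicity gamma
ck = 1.0 / gamma(g + 0.5)                 # c_k for d = 1
pref = np.pi * ck**2 / 2.0**(2*g)         # pi^d c_k^2 / 2^(2 gamma) with d = 1

def P_multiplier(x):
    """P(f)(x) from Definition 5.1 for f = exp(-x^2/2), using
    F(f)(y) = sqrt(2*pi) exp(-y^2/2); the inverse transform carries 1/(2*pi)."""
    integrand = lambda y: np.sqrt(2*np.pi) * np.exp(-y**2/2) * y**(2*g) * np.cos(x*y)
    val, _ = quad(integrand, 0.0, np.inf)
    return pref * val / np.pi             # (1/(2*pi)) * 2 * integral over y > 0

def P_differential(x):
    """Eq. (6.6) for d = 1, alpha_1 = 1: P(f) = pref * (-1)^g * f^(2g), and
    d^(2g)/dx^(2g) exp(-x^2/2) = He_{2g}(x) exp(-x^2/2) (probabilists' Hermite)."""
    He = hermeval(x, [0.0]*(2*g) + [1.0])
    return pref * (-1)**g * He * np.exp(-x**2/2)

for x in (0.0, 0.9, 2.3):
    print(P_multiplier(x), P_differential(x))   # the two columns should agree
```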
The differential-difference operator $\boldsymbol{Q}$ {#sec6.1}
-----------------------------------------------------
\[definition6.1\] We define the differential-difference operator $Q$ on $\mathcal{S}(\mathbb{R}^d)$ by $$\begin{gathered}
\forall\;x \in \mathbb{R}^d,\qquad Q(f)(x) = {}^tV^{-1}_k \circ P
\circ \,{}^t
V_k(f)(x).$$
\[proposition6.3\]
- The operator $Q$ is linear and continuous from $\mathcal{S}(\mathbb{R}^d)$ into itself.
- For all $f$ in $\mathcal{S}(\mathbb{R}^d)$ we have $$\forall\;x \in \mathbb{R}^d,\qquad T_j Q(f)(x) = Q(T_j f)(x),
\qquad j=1,\dots ,d,$$ where $T_j$, $j = 1, 2,\dots,d$, are the Dunkl operators.
We deduce the result from the properties of the operator $^{t}V_k$ (see Theorem 3.2 of [@17]), and Proposition \[proposition5.2\].
\[proposition6.4\] For all $f$ in $\mathcal{S}(\mathbb{R}^d)$ we have $$\begin{gathered}
\forall\;x \in \mathbb{R}^d,\qquad Q(f)(x) = \frac{\pi^d c_k^2
}{2^{2\gamma}}\mathcal{F}_D^{-1}(\omega_k \mathcal{F}_D (f)
)(x).\label{eq6.10}\end{gathered}$$
Using the relations \eqref{eq3.2} and \eqref{eq5.1}, and the properties of the operator $^{t}V_k$ (see Theorem 3.2 of [@17]), we deduce from Definition \[definition6.1\] that $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad Q(f)(x)
= \mathcal{F}_D^{-1}\{\mathcal{F} \circ P ({}^t V_k(f))\}(x) =
\frac{\pi^d c_k^2 }{2^{2\gamma}} \mathcal{F}_D^{-1}\{\mathcal{F}
\circ \mathcal{F}^{-1}[\omega_k \mathcal{F}_D (f)]\}(x).\end{gathered}$$ As the function $\omega_k \mathcal{F}_D (f)$ belongs to $\mathcal{S}(\mathbb{R}^d)$, then by applying the fact that the classical Fourier transform $\mathcal{F}$ is bijective from $
\mathcal{S}( \mathbb{R}^d)$ onto itself, we obtain $$\begin{gathered}
\forall\;x \in \mathbb{R}^d,\qquad Q(f)(x) = \frac{\pi^d c_k^2
}{2^{2\gamma}}\mathcal{F}_D^{-1}(\omega_k
\mathcal{F}_D (f) )(x).\tag*{{}}
\end{gathered}$$
\[proposition6.5\] The distribution $T_{\omega_k^2}$ given by the function $\omega_k^2$ is in $\mathcal{S'}(
\mathbb{R}^d)$ and for all $f $ in $\mathcal{S} (\mathbb{R}^d)$ we have $$\begin{gathered}
\forall\;x \in \mathbb{R}^d,\qquad Q(f)(x) = \frac{\pi^d c_k^4
}{2^{4\gamma+d}}\mathcal{F}_D(T_{\omega_k^2})*_D \breve{f}(-x)
,$$ where $*_D$ is the Dunkl convolution product of a distribution and a function on $ \mathbb{R}^d$.
It is clear that the distribution $T_{\omega_k^2}$ given by the function ${\omega_k^2}$ belongs to $\mathcal{S'}( \mathbb{R}^d)$. On the other hand from the relations \eqref{eq6.10}, \eqref{eq3.4} and \eqref{eq4.3} we obtain $$\begin{gathered}
\forall\;x \in
\mathbb{R}^d,\qquad Q(f)(x) = \frac{\pi^d c_k^4
}{2^{4\gamma+d}}\int_{
\mathbb{R}^d} \mathcal{F}_D(\tau_x(f))(y) \omega_k^2(y)dy \nonumber\\
\phantom{\forall\;x \in \mathbb{R}^d,\qquad Q(f)(x)}{}
= \frac{\pi^d c_k^4 }{2^{4\gamma+d}} \langle \mathcal{F}
(T_{\omega_k^2})_y,\tau_x(f)(y) \rangle.$$ Thus Definition \[definition4.1\] implies $$\begin{gathered}
\forall\;x \in \mathbb{R}^d,\qquad Q(f)(x) = \frac{\pi^d c_k^4
}{2^{4\gamma+d}}\mathcal{F}_D(T_{\omega_k^2})*_D
\breve{f}(-x).\tag*{{}}\end{gathered}$$
\[proposition6.6\] For all $f$ in $\mathcal{S}( \mathbb{R}^d)$ we have $$\begin{gathered}
Q(f) = \frac{\pi^d c_k^2}{2^{2\gamma}}\left[\prod_{\alpha \in R_+}
(-1)^{k(\alpha)}(\alpha_1 T_1 + \dots + \alpha_d
T_d)^{2k(\alpha)}\right] (f).\label{eq6.13}\end{gathered}$$
For all $f$ in $\mathcal{S}( \mathbb{R}^d)$, we have $$\begin{gathered}
\forall \; y \in \mathbb{R}^d, \qquad \omega_k(y)\mathcal{F}_D
(f)(y) = \prod_{\alpha \in R_+} (\langle
\alpha,y\rangle)^{2k(\alpha)} \mathcal{F}_D (f)(y).\label{eq6.14}\end{gathered}$$ But using , we deduce that $$\begin{gathered}
\forall \; y \in \mathbb{R}^d, \qquad \langle \alpha,y\rangle
\mathcal{F}_D (f)(y) = \mathcal{F}_D\big[-i (\alpha_1 T_1 + \cdots
+ \alpha_d T_d)f\big](y).\label{eq6.15}\end{gathered}$$ From \eqref{eq6.14} and \eqref{eq6.15}, we obtain $$\begin{gathered}
\forall \; y \in \mathbb{R}^d, \qquad \omega_k(y)\mathcal{F}_D
(f)(y) = \mathcal{F}_D \left[\prod_{\alpha \in R_+}
(-1)^{k(\alpha)}(\alpha_1 T_1 + \cdots + \alpha_d
T_d)^{2k(\alpha)}f\right](y).\end{gathered}$$ This relation, Propositions \[proposition6.3\], \[proposition6.4\] and Theorem \[theorem3.1\] imply \eqref{eq6.13}.
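The agreement between \eqref{eq6.10} and \eqref{eq6.13} can also be tested numerically. The sketch below (ours) takes $d=1$, $W=\mathbb{Z}_2$, $\gamma=1$, so that $\alpha_1=1$, $c_k=2/\sqrt{\pi}$ and the prefactor $\pi c_k^{2}/2^{2\gamma}$ equals $1$, and $f(x)=e^{-x^{2}/2}$: one route applies the rank-one Dunkl operator twice as in \eqref{eq6.13}, the other evaluates $\mathcal{F}_D^{-1}[\omega_k\mathcal{F}_D(f)]$ by quadrature as in \eqref{eq6.10}.

```python
import numpy as np
from scipy.integrate import quad

# d = 1, W = Z_2, g = 1: alpha_1 = 1, omega_k(y) = y^2, c_k = 2/sqrt(pi),
# and the prefactor pi*c_k^2/2^(2g) equals 1.
g = 1
f = lambda x: np.exp(-x**2/2)
sinc = lambda u: np.sinc(u/np.pi)                 # sin(u)/u, with the value 1 at u = 0

# Route 1, eq. (6.13): Q f = (-1)^g T^(2g) f, with T h(x) = h'(x) + g*(h(x)-h(-x))/x.
# For the even Gaussian, T f = f' = -x e^{-x^2/2} and T(T f)(x) = (x^2 - 3) e^{-x^2/2}.
Q_difference = lambda x: (-1)**g * (x**2 - 3.0) * np.exp(-x**2/2)

# Route 2, eq. (6.10): Q f = F_D^{-1}[omega_k * F_D f], done by quadrature; only the
# real (even) part of the kernel, j_{1/2}(xy) = sin(xy)/(xy), contributes here.
def F_D(y):
    val, _ = quad(lambda x: f(x) * x**(2*g) * sinc(x*y), 0.0, 12.0)
    return 2.0 * val

def Q_transform(x):
    val, _ = quad(lambda y: y**(2*g) * F_D(y) * sinc(x*y) * y**(2*g), 0.0, 12.0)
    return (1.0/np.pi) * val                      # c_k^2/2^(2g+1) * 2 = 1/pi for g = 1

for x in (0.0, 1.1, 2.4):
    print(Q_transform(x), Q_difference(x))        # the two routes should agree
```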
Other expressions of the inversion formulas for the Dunkl intertwining\
operator and its dual on spaces of functions and distributions {#sec6.2}
-----------------------------------------------------------------------
In this subsection we give other expressions of the inversion formulas for the operators $V_k$ and ${}^tV_k$ and we deduce the expressions of the representing distributions of the operators $V^{-1}_k$ and ${}^tV_k^{-1}$.
\[theorem6.1\] For all $f$ in $\mathcal{S}(\mathbb{R}^d)$ we have $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad V^{-1}_k(f)(x) =
{}^tV_k(Q(f))(x). \label{eq6.16}\end{gathered}$$
We obtain this result by using of Proposition \[proposition6.3\], Theorem \[theorem5.2\] and Definition \[definition6.1\].
\[proposition6.7\] Let $E$ be a $W$-invariant compact convex set of $\mathbb{R}^d$. Then for all $f$ in $\mathcal{D}(\mathbb{R}^d)$ we have $$\begin{gathered}
\mbox{\rm supp}\, f \subset E \Longleftrightarrow \mbox{\rm
supp}\, {}^tV_k(f) \subset E. \label{eq6.17}\end{gathered}$$
For all $f$ in $\mathcal{D}(\mathbb{R}^d)$, we obtain from the relations $$\begin{gathered}
^{t}V_k(f) = {\cal F}^{-1}\circ
{\cal F}_D (f),\\ ^{t}V_k^{-1}(f) = {\cal F}^{-1}_D\circ
{\cal F} (f).\end{gathered}$$ We deduce from these relations, Theorem \[theorem3.2\] and Theorem 2.6 of [@1].
\[proposition6.8\] Let $E$ be a $W$-invariant compact convex set of $\mathbb{ R}^d$. Then for all $f$ in $\mathcal{D}(\mathbb{R}^d)$ we have $$\begin{gathered}
\mbox{\rm supp}\, f \subset E \Rightarrow \mbox{\rm supp}\,
Q(f) \subset E. \label{eq6.20}\end{gathered}$$
We obtain from Definition \[definition6.1\], Propositions \[proposition6.1\] and \[proposition6.7\].
\[theorem6.2\] For all $S$ in $\mathcal{E}'( \mathbb{R}^d)$ the operator ${}^tV^{-1}_k$ satisfies also the relation $$\begin{gathered}
\langle {}^tV_k^{-1}(S), f\rangle = \langle S,
{}^tV_k(Q(f))\rangle,
\qquad f \in \mathcal{S}( \mathbb{R}^d).\label{eq6.21}\end{gathered}$$
We deduce from and .
\[corollary6.1\] Let $E$ be a $W$-invariant compact convex set of $\mathbb{R}^d$. For all $S$ in $\mathcal{E}'(
\mathbb{R}^d)$ with ${\rm supp}\, S \subset E$, we have $$\begin{gathered}
{\rm supp}\,{}^tV_k^{-1}(S) \subset E.\end{gathered}$$
\[definition6.2\] We define the transposed operators ${}^tP$ and ${}^tQ$ of the operators $P$ and $Q$ on $\mathcal{S}'(\mathbb{R}^d)$ by $$\begin{gathered}
\langle {}^tP(S),f\rangle = \langle S, P(f) \rangle ,\qquad f \in
\mathcal{S}(\mathbb{R}^d),\\ \langle {}^tQ(S), f\rangle = \langle S,Q(f)\rangle, \qquad f \in
\mathcal{S}(\mathbb{R}^d).\end{gathered}$$
\[proposition6.9\] For all $S$ in $\mathcal{S}'(\mathbb{R}^d)$ we have $$\begin{gathered}
{}^tP(S) = \frac{\pi^d c_k^2}{2^{2\gamma}} \left[\prod_{\alpha \in
R_+} \left(\alpha_1 \frac{\partial}{\partial \xi_1} + \cdots +
\alpha_d \frac{\partial}{\partial
\xi_d}\right)^{2k(\alpha)}\right]S,\\ {}^tQ(S) = \frac{\pi^d c_k^2}{2^{2\gamma}} \left[\prod_{\alpha \in
R_+}(\alpha_1 T_1 + \cdots+ \alpha_dT_d)^{2k(\alpha)}
\right]S,\end{gathered}$$ where $T_j$, $j = 1, 2,\dots,d$, are the Dunkl operators defined on $\mathcal{S}'(\mathbb{R}^d)$ by $$\langle T_j S,f\rangle = - \langle S, T_jf\rangle,\qquad f\in
\mathcal{S}(\mathbb{R}^d).$$
\[proposition6.10\] For all $S$ in $\mathcal{S'}(\mathbb{R}^d) $ we have $$\begin{gathered}
\mathcal{F}^{-1}({}^tP(S)) = \frac{\pi^d c_k^2}{2^{2\gamma}}
\mathcal{F}^{-1}(S)
\omega_k,\\ \mathcal{F}^{-1}_D({}^tQ(S)) =
\frac{\pi^d c_k^2}{2^{2\gamma}} \mathcal{F}^{-1}_D(S)
\omega_k.\end{gathered}$$
We deduce these relations from , and the definitions of the classical Fourier transform and the Dunkl transform of tempered distributions on $\mathbb{R}^d$.
\[theorem6.3\] The representing distributions $\eta_x$ and $Z_x$ of the inverse of the Dunkl intertwining operator and its dual, are given by $$\begin{gathered}
\forall\; x \in \mathbb{R}^d, \qquad \eta_x =
{}^tQ(\nu_x)\label{eq6.29}\end{gathered}$$ and $$\begin{gathered}
\forall\;x \in \mathbb{R}^d, \qquad Z_x =
{}^tP(\mu_x),\label{eq6.30}\end{gathered}$$ where $\mu_x$ and $\nu_x$ are the representing measures of the Dunkl intertwining operator $V_k$ and its dual ${}^tV_k$.
From , for all $f$ in $\mathcal{S}(\mathbb{R}^d)$ we have $$\begin{gathered}
\forall\; x \in \mathbb{R}^d,\qquad {}^tV_k(Q(f))(x) =
\langle\nu_x, Q(f)\rangle = \langle
{}^tQ(\nu_x),f\rangle.\label{eq6.31}\end{gathered}$$ On the other hand from $$\forall\; x \in \mathbb{R}^d,\qquad V^{-1}_k(f)(x) = \langle
\eta_x,f\rangle.$$ We obtain from this relation, and .
Using , for all $f$ in $\mathcal{S}(\mathbb{R}^d)$ we can also write the relation in the form $$\begin{gathered}
\forall\; x \in \mathbb{R}^d, \qquad {}^tV^{-1}_k(f)(x) = \langle
\mu_x, P(f)\rangle =
\langle{}^tP(\mu_x),f\rangle.\label{eq6.32}\end{gathered}$$ But from we have $$\forall\; x \in \mathbb{R}^d, \qquad {}^tV_k^{-1}(f)(x) = \langle
Z_x,f\rangle.$$ We deduce from this relation and .
\[corollary6.2\] We have $$\begin{gathered}
\forall\;
x \in \mathbb{R}^d, \qquad \eta_x =
\frac{\pi^d c_k^2}{2^{2\gamma}} \left[\prod_{\alpha \in R_+
}\left(\alpha_1T_1+\cdots+ \alpha_d T_d
\right)^{2k(\alpha)}\right](\nu_x)\end{gathered}$$ and $$\begin{gathered}
\forall\; x \in \mathbb{R}^d, \qquad Z_x = \frac{\pi^d
c_k^2}{2^{2\gamma}}\left[\prod_{\alpha \in R_+}\left(\alpha_1
\frac{\partial}{\partial \xi_1}+\cdots+ \alpha_d
\frac{\partial}{\partial
\xi_d}\right)^{2k(\alpha)}\right](\mu_x).\end{gathered}$$
We deduce these relations from Theorem \[theorem6.3\] and Proposition \[proposition6.9\].
Applications {#sec7}
============
Other proof of the sufficiency condition of Theorem \[theorem4.4\] {#sec7.1}
------------------------------------------------------------------
Let $f$ be an entire function on $\mathbb{C}^d$ satisfying the condition . Then from Theorem 2.6 of [@1], the distribution $\mathcal{F}^{-1}(f)$ belongs to $\mathcal{E}'(
\mathbb{R}^d)$ and we have $${\rm supp}\, \mathcal{F}^{-1}(f) \subset E.$$
From the relation $$\mathcal{F}^{-1}_D(f) = {}^tV_k^{-1} \circ \mathcal{F}^{-1}(f)$$ given in [@17 page 27] and Corollary \[corollary6.1\], we deduce that the distribution $\mathcal{F}_D^{-1}(f)$ is in $\mathcal{E}'(
\mathbb{R}^d)$ and its support is contained in $E$.
Other expressions of the Dunkl translation operators {#sec7.2}
----------------------------------------------------
We consider the Dunkl translation operators $\tau_x$, $x \in \mathbb{R}^d$, given by the relations , .
\[theorem 7.1\]
- When the multiplicity function $k(\alpha)$ satisfies $k(\alpha) >
0$ for all $\alpha \in R_+$, we have $$\begin{gathered}
\forall\; y \in \mathbb{R}^d,\qquad
\tau_x(f)(y) = \mu_x \ast
\mu_y(P {}^tV_k(f)),\qquad f \in
\mathcal{S}( \mathbb{R}^d),\label{eq7.1}
\end{gathered}$$ where $\ast$ is the classical convolution product of measures on $\mathbb{R}^d$.
- When the multiplicity function satisfies $k(\alpha)\in \mathbb{N}\backslash \{0\}$ for all $\alpha \in R_+$, we have $$\begin{gathered}
\forall\; y \in \mathbb{R}^d,\qquad \tau_x(f)(y) =
\mu_x\ast \mu_y({}^tV_k(Q(f))),\qquad f \in \mathcal{S}
( \mathbb{R}^d).\label{eq7.2}
\end{gathered}$$
i\) From the relations and , for $f$ in $\mathcal{S}(\mathbb{R}^d)$ we have $$\forall\;x ,y \in \mathbb{R}^d, \qquad \tau_x(f)(y) =
\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}
V^{-1}_k(f)(\xi+\eta)d\mu_x(\xi)d\mu_y(\eta).$$ By using the definition of the classical convolution product of two measures with compact support on $\mathbb{R}^d$, we obtain $$\forall\; x,y \in \mathbb{R}^d, \qquad \tau_x(f)(y) = \mu_x \ast
\mu_y(V^{-1}_k(f)).$$ Thus Theorem \[theorem5.2\] implies the relation .
ii\) The same proof as for i) and Theorem \[theorem6.1\] give the relation .
Acknowledgements {#acknowledgements .unnumbered}
----------------
The author would like to thank the referees for their interesting and useful remarks.
[99]{}
Chazarain J., Piriou A., Introduction to the theory of linear partial differential equations, North-Holland Publishing Co., Amsterdam – New York, 1982.
van Diejen J.F., Confluent hypergeometric orthogonal polynomials related to the rational quantum Calogero system with harmonic confinement, [*Comm. Math. Phys.*]{} [**188**]{} (1997), 467–497, [q-alg/9609032](http://arxiv.org/abs/q-alg/9609032).
Dunkl C.F., Differential-difference operators associated to reflection groups, [*Trans. Amer. Math. Soc.*]{} [**311**]{} (1989), 167–183.
Dunkl C.F., Integral kernels with reflection group invariance, [*Canad. J. Math.*]{} [**43**]{} (1991), 1213–1227.
Dunkl C.F., Hankel transform associated to finite reflection groups, [*Contemp. Math.*]{} [**138**]{} (1992), 123–138.
Heckman G.J., An elementary approach to the hypergeometric shift operators of Opdam, [*Invent. Math.*]{} [**103**]{} (1991), 341–350.
Humphreys J.E., Reflection groups and Coxeter groups, Cambridge University Press, Cambridge, 1990.
Hikami K., Dunkl operators formalism for quantum many-body problems associated with classical root systems, [*J. Phys. Soc. Japan*]{} [**65**]{} (1996), 394–401.
de Jeu M.F.E., The Dunkl transform, [*Invent. Math.*]{} [**113**]{} (1993), 147–162.
de Jeu M.F.E., Paley–Wiener theorems for the Dunkl transform, [*Trans. Amer. Math. Soc.*]{} [**258**]{} (2006), 4225–4250, [math.CA/0404439](http://arxiv.org/abs/math.CA/0404439).
Kakei S., Common algebraic structure for the Calogero–Sutherland models, [*J. Phys. A: Math. Gen.*]{} [**29**]{} (1996), L619–L624, [solv-int/9608009](http://arxiv.org/abs/solv-int/9608009).
Lapointe M., Vinet L., Exact operator solution of the Calogero–Sutherland model, [*Comm. Math. Phys.*]{} [**178**]{} (1996), 425–452, [q-alg/9509003](http://arxiv.org/abs/q-alg/9509003).
Rösler M., Voit M., Markov processes related with Dunkl operators, [*Adv. in Appl. Math.*]{} [**21**]{} (1998), 575–643.
Rösler M., Positivity of Dunkl’s intertwining operator, [*Duke Math. J.*]{} [**98**]{} (1999), 445–463, [q-alg/9710029](http://arxiv.org/abs/q-alg/9710029).
Trimèche K., The Dunkl intertwining operator on spaces of functions and distributions and integral representation of its dual, [*Integral Transform. Spec. Funct.*]{} [**12**]{} (2001), 349–374.
Trimèche K., Generalized harmonic analysis and wavelet packets, Gordon and Breach Science Publishers, Amsterdam, 2001.
Trimèche K., Paley–Wiener theorems for the Dunkl transform and Dunkl translation operators, [*Integral Transform. Spec. Funct.*]{} [**13**]{} (2002), 17–38.
---
abstract: 'This paper argues that the ideas underlying the renormalization group technique used to characterize phase transitions in condensed matter systems could be useful for distinguishing computational complexity classes. The paper presents a renormalization group transformation that maps an arbitrary Boolean function of $N$ Boolean variables to one of $N-1$ variables. When this transformation is applied repeatedly, the behavior of the resulting sequence of functions is different for a generic Boolean function than for Boolean functions that can be written as a polynomial of degree $\xi$ with $\xi \ll N$ as well as for functions that depend on composite variables such as the arithmetic sum of the inputs. Being able to demonstrate that functions are non-generic is of interest because it suggests an avenue for constructing an algorithm capable of demonstrating that a given Boolean function cannot be computed using resources that are bounded by a polynomial of $N$.'
author:
- |
S.N. Coppersmith, Department of Physics, University of Wisconsin–Madison,\
1150 University Avenue, Madison, WI 53706
title: Renormalization group approach to the P versus NP question
---
Introduction
============
Computational complexity characterizes how the computational resources to solve a problem depend on the size of the problem specification [@papadimitriou94]. Two well-known complexity classes [@complexityzoo] are P, problems that can be solved with resources that scale polynomially with the problem size, and NP, the class of problems for which a solution can be verified with polynomial resources. Whether or not P is equal to NP [@cook71; @levin72] is a great outstanding question in computational complexity theory and in mathematics generally [@claywebsite; @boppana90; @sipser92; @aaronson03; @wigderson06].
In this paper it is argued that a method known in statistical physics as the renormalization group (RG) [@kadanoff66; @wilson71; @wilson79; @goldenfeld92] may yield useful insight into the P versus NP question. This technique, originally formulated to provide insight into the nature of phase transitions in statistical mechanical systems [@kadanoff66; @wilson71], involves taking a problem with $N$ variables and then rewriting it as a problem involving fewer variables. Here, we will define a procedure by which a given Boolean function of $N$ Boolean variables is used to generate a Boolean function of $N-1$ variables, and investigate the properties of the resulting sequence of functions as this procedure is iterated [@white92]. The transformation used here is very simple — the new function is one if the original function changes its output value when a given input variable’s value is changed, and is zero if it does not. It is shown that when this transformation is applied repeatedly, the behavior of the resulting sequence of functions can be used to distinguish generic Boolean functions from functions that are known to be computable using polynomially bounded resources.
Any Boolean function $f(x_1,\ldots,x_N)$ of the $N$ Boolean variables $x_1,\ldots,x_N$ can be written as a polynomial in the $x_j$ using modulo-two addition. This follows because the variables and function all can be only $0$ or $1$, so $f(x_1,\ldots,x_N)$ can be written as $$\begin{aligned}
f(x_1,\ldots,x_N) &=& A_{00\ldots 00}(1\oplus x_1)(1 \oplus x_2)
\ldots(1\oplus x_{N-1})(1\oplus x_N)\nonumber\\
&\oplus& A_{00\ldots 01}(1\oplus x_1)(1\oplus x_2)\ldots
(1\oplus x_{N-1})(x_N) \nonumber \\
&\ldots&\nonumber\\
&\oplus&
A_{11\ldots 10}(x_1)(x_2)\ldots(x_{N-1})(1\oplus x_N)\nonumber\\
&\oplus& A_{11\ldots 11}(x_1)(x_2)\ldots(x_{N-1})(x_N)~,
\label{eq:general_form}\end{aligned}$$ where $A_{x_1,\ldots,x_N}=f(x_1,\ldots,x_N)$. As Shannon pointed out [@shannon49], the number of different possible functions is $2^{2^N}$ (this follows because each of the $2^N$ coefficients $A_{\alpha_1,\ldots,\alpha_N}$ can be either one or zero). This is much larger than the number of functions that can be computed using resources that scale no faster than polynomially in $N$, a number that scales asymptotically as $(CN)^t$, where $C$ is a constant and $t$ is a polynomial in $N$ [@riordan42; @webcourse267]. This counting argument demonstrates that almost all functions cannot be evaluated using polynomially bounded resources and hence are not in P. However, it does not provide a means for determining whether or not a given function can be computed with polynomial resources.
It is shown here that different classes of functions have different behavior upon repeated application of a renormalization group transformation. In analogy with well-known results in statistical mechanics [@goldenfeld92], we interpret functions exhibiting different behaviors after many renormalizations as being in different phases. Generic Boolean functions exhibit simple “fixed point" behavior upon renormalization, and hence we claim that they comprise a phase. A function that can be written either as a low-order polynomial or as a function of a composite variable such as the arithmetic sum of the values of the inputs yields non-generic behavior upon renormalization, and so is in a non-generic phase. We then discuss what would be needed to be able to use the renormalization group approach to demonstrate that a given Boolean function of $N$ variables cannot be evaluated with resources that are bounded above by a polynomial in $N$. This issue is relevant to the P versus NP question because if we can identify a function in NP that we can show is not in P, then we will have shown that P and NP are not equal. Some functions that are in P depend on the arithmetic sum of the inputs, including MAJORITY, which is one if more than half the inputs are nonzero and zero otherwise [@razborov87], and DIVISIBILITY MOD $p$, which is one if the sum of the inputs is divisible by an odd prime $p$ [@smolensky87; @smolensky93], and the renormalization group approach identifies these functions as non-generic. The renormalization group approach identifies low-order polynomials as non-generic, and some but not all low-order polynomials are in P. Because there are functions in P that are the sum of a low-order polynomial plus a small random component that is nonzero on a small fraction of the inputs, and because such functions will “flow” to the generic fixed point upon renormalization, P is not a phase in the statistical mechanical sense. Therefore, there are functions known to be in P that can be identified as non-generic only because they are close to a phase boundary in the sense that they differ from a low-order polynomial on a small fraction of the inputs. Thus, the renormalization group approach provides a means for understanding why the P versus NP question is so difficult — showing that a function is not in P using the renormalization group approach requires determining not only that it is not in a non-generic phase but also that it is not near a phase boundary, a task that appears to require resources that grow faster than exponentially with $N$. This superexponential scaling means that the procedure proposed here cannot be used to break pseudorandom number generators, a difficulty that would arise if the procedure could be implemented with resources that scale no faster than exponentially with $N$ [@razborov94natural]. However, at this point we cannot prove that a given function is not in P—our procedure distinguishes every function in P of which we are aware from a generic Boolean function, but we have not demonstrated that the procedure works for all functions that are in P.
The paper is organized as follows. Sec. \[sec:RG\] presents the transformation that maps a Boolean function of $N$ variables into a Boolean function of $N-1$ variables. Repeatedly applying this transformation yields a sequence of functions, and in Sec. \[sec:RG\] it is shown that (1) if one starts with a generic random Boolean function, then the resulting sequence of functions has the property that all functions in it are nonzero for just about half the input configurations, (2) applying the RG transformation $\xi$ times to a function that is a polynomial of order less than $\xi$ yields zero, and (3) applying the RG transformation to functions that depend on a composite variable such as the sum of the values of all the inputs also yields a sequence of functions that differs from the result for a generic Boolean function. In Sec. (\[sec:RG\_for\_P\_functions\]) it is shown that simply applying the RG transformation many times does not identify functions that can be written as the sum of a low-order polynomial plus a contribution that is nonzero on a small fraction of the inputs. One can identify functions of this type by examining the set of functions whose outputs differ from the original one on a small fraction of the input configurations — one of the functions in the set will be a low-order polynomial. Sec. \[sec:discussion\] discusses the results in the framework of phase transitions in condensed matter systems, which renormalization group transformations are typically used to study, and also discusses how the strategy discussed here avoids the difficulties of “natural proofs” described in Ref. [@razborov94natural]. Sec. \[sec:conclusions\] presents the conclusions. Appendix A presents the arguments demonstrating why it is plausible that most functions that can be computed with polynomially bounded resources can be written as a low-order polynomial plus a term that is nonzero for a fraction of input configurations that is exponentially small in $N/\log(N)$, and discusses the non-generic nature of the functions in P that do not have that property. Appendix B shows that a typical Boolean function cannot be written as a low-order polynomial plus a term that is exponentially small in $N/\log(N)$.
Renormalization group transformation {#sec:RG}
====================================
The renormalization group (RG) procedure we define takes a given function of $N$ variables and generates a function of $N-1$ variables [@goldenfeld92; @kadanoff66; @wilson71; @wilson79; @white92]. The variable that is eliminated is called the “decimated" variable. The procedure can be iterated, mapping a function of $N-1$ variables into one of $N-2$ variables, etc.
The transformation studied here specifies whether the original function’s value changes if a given input variable is changed. Specifically, given a function $f(x_1,\ldots,x_N)\equiv f({ \vec{x} })$, we define $$\begin{aligned}
&~&g_{i_1}(x_1,x_2,\ldots,x_{i_1-1},x_{i_1+1},\ldots,x_N)
\equiv g_{i_1}({ \vec{x} }^\prime)\nonumber\\*
&~&~~=f(x_1,x_2,\ldots,x_{i_1-1},0,x_{i_1+1},\ldots,x_N)
\oplus
f(x_1,x_2,\ldots,x_{i_1-1},1,x_{i_1+1},\ldots,x_N)~,
\label{eq:g_1}\end{aligned}$$ where $\oplus$ denotes addition modulo $2$ [@arithmetic_reference], and the vector ${ \vec{x} }^\prime$ denotes the set of undecimated variables. The function $g_{i_1}(x_1,x_2,\ldots,x_{i_1-1},x_{i_1+1},\ldots,x_N)$ is one if the output of the function $f$ changes when the value of the decimated variable $x_{i_1}$ is changed and zero if it does not. Once $g_{i_1}$ has been obtained, the procedure can be repeated and one can define $g_{i_1,i_2}$ as $$\begin{aligned}
&~&g_{i_1,i_2}(x_1,x_2,\ldots,x_{i_1-1},x_{i_1+1},\ldots,x_{i_2-1},x_{i_2+1},\ldots,x_N)
\equiv g_{i_1,i_2}({ \vec{x} }^\prime)
\nonumber\\*
&&=~~g_{i_1}(x_1,x_2,\ldots,x_{i_2-1},0,x_{i_2+1},\ldots,x_N) \nonumber\\*
&&~~\oplus
g_{i_1}(x_1,x_2,\ldots,x_{i_2-1},1,x_{i_2+1},\ldots,x_N)\nonumber\\*
&&= ~~~f(x_1,x_2,\ldots,x_{i_1-1},0,x_{i_1+1},\ldots,x_{i_2-1},0,x_{i_2+1},\ldots,x_N)\nonumber\\*
&& ~~~\oplus
f(x_1,x_2,\ldots,x_{i_1-1},0,x_{i_1+1},\ldots,x_{i_2-1},1,x_{i_2+1},\ldots,x_N)
\nonumber\\*
&& ~~~\oplus f(x_1,x_2,\ldots,x_{i_1-1},1,x_{i_1+1},\ldots,x_{i_2-1},0,x_{i_2+1},\ldots,x_N)\nonumber\\*
&&~~~\oplus
f(x_1,x_2,\ldots,x_{i_1-1},1,x_{i_1+1},\ldots,x_{i_2-1},1,x_{i_2+1},\ldots,x_N)~,\end{aligned}$$ where the sums all denote addition modulo two. The function $g_{x_{i_1},\ldots,x_{i_m}}({ \vec{x} }^\prime)$ obtained by decimating the $m$ variables $x_{i_1},\ldots,x_{i_m}$ does not depend on the order in which the variables are decimated.
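As a concrete illustration of Eq. (\[eq:g\_1\]), the transformation can be carried out directly on a truth table. The following sketch (the representation and helper names are ours, not part of any standard package) decimates one variable at a time and checks, for a small random function, the order independence just noted.

```python
import itertools
import random

def random_function(n, p=0.5, seed=0):
    """Truth table of a random Boolean function of n variables: each of the
    2**n outputs is chosen independently to be 1 with probability p."""
    rng = random.Random(seed)
    return {x: int(rng.random() < p) for x in itertools.product((0, 1), repeat=n)}

def decimate(table, i):
    """Eliminate variable i: g(x') = f(x' with x_i=0) XOR f(x' with x_i=1)."""
    reduced = {key[:i] + key[i + 1:] for key in table}
    return {x: table[x[:i] + (0,) + x[i:]] ^ table[x[:i] + (1,) + x[i:]]
            for x in reduced}

f = random_function(6, seed=1)
# Decimating original variables 2 then 0 agrees with decimating 0 then 2
# (after variable 0 is removed, original variable 2 sits at index 1).
assert decimate(decimate(f, 2), 0) == decimate(decimate(f, 0), 1)
```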
First we examine functions for which each of the coefficients $A_{\alpha_1,\alpha_2,\ldots,\alpha_N}^{(0)}$ in Eq. (\[eq:general\_form\]) is an independent random variable chosen to be one with probability $p_0$ and zero with probability $q_0=1-p_0$, where $0<p_0<1$. We consider the sequence of functions obtained by successive application of the renormalization group transformation to such a generic random function. The coefficients $A^{(i_1)}_{x_1,\ldots,x_{i_1-1},x_{i_1+1},\ldots,x_N}$ that characterize the function $g_{i_1}({ \vec{x} }^\prime)$ obtained by decimating the variable $i_1$ via Eq. (\[eq:g\_1\]) are $$\begin{aligned}
A^{(i_1)}_{x_1,\ldots,x_{i_1-1},x_{i_1+1},\ldots,x_N} = A_{x_1,\ldots,x_{i_1-1},0,x_{i_1+1},\ldots,x_N} \oplus
A_{x_1,\ldots,x_{i_1-1},1,x_{i_1+1},\ldots,x_N}~.\end{aligned}$$ The original $A^{(0)}$’s are uncorrelated random variables, so it follows that the $A^{({i_1})}$’s are independent random variables that are one with probability $p_1=2p_0q_0$ and zero with probability $1-p_1$. After $\ell$ iterations (after which $\ell$ variables have been eliminated), the coefficients are still uncorrelated random variables, and they are now one with probability $p_\ell$ and zero with probability $1-p_\ell$, where the $p_\ell$ satisfy the recursion relation $$\begin{aligned}
p_{\ell+1}=2p_\ell(1-p_\ell)~.
\label{eq:p_recursion}\end{aligned}$$ The solution to Eq. (\[eq:p\_recursion\]) is $$\begin{aligned}
p_\ell = \frac{1}{2}\left ( 1 - (1-2p_0)^{2^\ell} \right ).
\label{eq:p_flow_result}\end{aligned}$$ For any $p_0$ satisfying $0<p_0<1$, the values of the $p_\ell$ “flow” as $\ell$ increases and eventually approach the “fixed-point value” of $1/2$ [@goldenfeld92]. This behavior is exactly analogous to that displayed by the partition functions describing thermodynamic phases in statistical mechanical systems, and so we interpret this behavior as evidence that there is a phase of generic Boolean functions.
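The flow described by Eqs. (\[eq:p\_recursion\]) and (\[eq:p\_flow\_result\]) is easy to check numerically; the short sketch below (with an arbitrarily chosen $p_0$) iterates the recursion and compares it with the closed-form solution.

```python
p0 = 0.05                 # any 0 < p0 < 1 flows toward the fixed point 1/2
p = p0
for ell in range(1, 7):
    p = 2 * p * (1 - p)                                   # recursion relation
    closed = 0.5 * (1 - (1 - 2 * p0) ** (2 ** ell))       # closed-form solution
    print(f"l = {ell}:  iterated p_l = {p:.6f},  closed form = {closed:.6f}")
```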
In Sec. (\[sec:RG\_for\_P\_functions\]) we will be considering values of $p_0$ that are very small but nonzero, for which case $p_\ell$ grows exponentially with $\ell$: $$p_\ell = 2^\ell p_0\qquad({\rm when~}p_\ell \ll 1)~.
\label{eq:small_p_flow_result}$$ After many renormalizations such functions will “flow” to the generic fixed point, so they are in the generic phase. If one chooses $p_0=P(N)2^{-N}$, where $P(N)$ is a polynomial in $N$, the function can be specified with polynomially bounded resources by enumerating all input configurations for which the function is nonzero.
Note that when the RG transformation is applied to a generic Boolean function, all the functions that are generated yield an output that is zero on a fraction of the inputs that differs from $1/2$ by an amount that is exponentially small in $N$. This follows because almost all Boolean functions have a fraction of nonzero outputs that differs from $1/2$ by an amount of order the inverse square root of the number of coefficients chosen, $(2^{N})^{-1/2}=2^{-N/2}$. Since all the $p_\ell$ deviate from $1/2$ by an amount that is exponentially small in $N$, and since the number of independent input configurations remains exponentially large in $N$ until the number of decimated variables is of order $N$, for every function obtained via the renormalization transformation, the fraction of input configurations yielding zero deviates from $1/2$ by an amount that is exponentially small in $N$.
We next demonstrate that Boolean functions that can be written as polynomials of degree $\xi$ or less when $\xi < N$ have the property that they yield zero after $\xi+1$ renormalizations, for any choice of the decimated variables.
First we examine a simple example. The parity function $\mathcal{P}(x_1,\ldots,x_N)$, which is $1$ if an odd number of input variables are 1 and $0$ if an even number of the input variables are 1 [@furst84; @yao85; @hastad86; @wigderson06], can be written as $$\mathcal{P}(x_1,\ldots,x_N)=x_1\oplus x_2 \oplus \ldots \oplus x_N~.$$ There are many less efficient ways to write the parity function, but the result of the renormalization procedure does not depend on how one has chosen to write the function, since it can be computed knowing only the values of the function for all different input configurations. For the parity function, one finds, for any choice of decimated variables $x_{j_1}$ and $x_{j_2}$, the functions resulting from one and two renormalizations, $g^P_{j_1} ({ \vec{x} }^\prime)$ and $g^P_{i_{j_1,j_2}}({ \vec{x} }^\prime)$, are: $$\begin{aligned}
&& g^P_{j_1} ({ \vec{x} }^\prime) = x_{j_1} \oplus (1-x_{j_1}) = 1~;\\
&& g^P_{i_{j_1,j_2}}({ \vec{x} }^\prime) = 0~.\end{aligned}$$ Thus, applying the renormalization transformation to the parity function yields zero after two iterations, in contrast to the behavior of a generic Boolean function.
More generally, for any term of the form $T=y_{i_1} y_{i_2} \ldots y_{i_m}$, with $y_i=x_i$ or $1-x_i$, the quantity $T(x_i=1) \oplus T(x_i=0)$ is either zero (if $y_i$ does not occur in $T$) or else is the product of $m-1$ instead of $m$ of the $y$’s; for example $$T(y_{i_1}=1) \oplus T(y_{i_1}=0)=y_{i_2} \ldots y_{i_m}~.$$ Because the effect of the RG procedure on the sum of terms is equal to the sum of the results of the transformation applied to the individual terms, any function that is the mod-2 sum of terms that are all products of fewer than $m$ $y$’s will yield zero after $m$ renormalizations, for any choice of the decimated variables. It follows immediately that a function that is a polynomial of degree $\xi$ or less has the property that applying the RG transformation to it $\xi+1$ times yields zero for any choice of the decimated variables.
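This property is straightforward to verify numerically for small $N$. The sketch below (ours) reuses the `decimate` function from the earlier sketch, builds the truth table of a degree-two mod-2 polynomial, and checks that every choice of $\xi+1=3$ decimated variables yields the zero function.

```python
import itertools

def poly_table(n, monomials):
    """Truth table of the mod-2 sum of the given monomials (tuples of variable indices)."""
    return {x: sum(all(x[i] for i in m) for m in monomials) % 2
            for x in itertools.product((0, 1), repeat=n)}

n, xi = 6, 2
f = poly_table(n, [(0, 1), (2, 4), (3,), (5,)])           # a polynomial of degree xi = 2
for subset in itertools.combinations(range(n), xi + 1):
    g = f
    for i in sorted(subset, reverse=True):                # remove highest index first so
        g = decimate(g, i)                                # remaining indices do not shift
    assert set(g.values()) == {0}
```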
This result demonstrates that the RG transformation distinguishes generic Boolean functions from functions that can be written as polynomials of degree $\xi$ or less, when $\xi<N$. The qualitatively different behavior upon renormalization of polynomials of degree $\xi$ from generic Boolean functions can be interpreted as evidence that these two classes of functions are in different phases.
We now demonstrate that the RG method also identifies as non-generic functions that depend on a composite quantity such as the arithmetic sum of the variables. Functions in P with this property include MAJORITY (which is one if more than half the inputs are set to one, and zero otherwise) [@razborov87] and DIVISIBILITY MOD p (which is one if the number of inputs that are set to one is divisible by an odd prime p and zero otherwise) [@smolensky87; @smolensky93]. The renormalization group approach distinguishes such functions from generic Boolean functions because the output of all the functions in the sequence is constrained to be identical for very large sets of input configurations. We first show that MAJORITY and DIVISIBILITY MOD p are both distinguished from a generic Boolean function by the renormalization group procedure, and then we argue that the RG procedure distinguishes any function of the arithmetic sum of the inputs from a generic Boolean function. We expect that the argument will be generalizable to apply to a broad class of functions that depend on other composite quantities that are specific combinations of the input variables.
First we consider the behavior when the RG transformation is applied to DIVISIBILITY MOD 3. Since this function is nonzero when the arithmetic sum $\sum_{j=1}^N x_j$ is divisible by $3$, changing an input $x_i$ changes the output value when the sum of the other input variables is congruent to zero or two modulo $3$. Thus, the renormalized function $g_i(\vec{x}^\prime)$ is nonzero for any $i$ on a fraction of the input configurations that is very close to $2/3$. Every succeeding renormalization again yields a function that is nonzero when the mod-3 remainder of the sum of the remaining variables lies in a two-element set (the set itself cycles under further decimations), so each function in the sequence is nonzero on a fraction of inputs close to $2/3$. This behavior differs from that of a generic Boolean function, in which the renormalized functions are nonzero for a fraction of inputs that is very close to $1/2$. More generally, when the RG is applied to DIVISIBILITY MOD p, with p an odd prime, the behavior of the sequence of functions is determined by the mod p remainder of the sum of the undecimated variables. The functions in the sequence yield the output one when the remainder mod p takes on certain values, and typically, after a small number of iterations, these values cycle with a finite period. Therefore, the fraction of input configurations that lead to a nonzero output essentially cycles also (the cycling is not exact only because the fraction of input configurations with a given value of the remainder mod p changes very slightly with $N$), and, since p is odd, none of the fractions in the cycle is close to $1/2$.
The behavior obtained when the RG procedure is applied to the MAJORITY function is also significantly different from that of a generic Boolean function. The first renormalization step yields a function that is nonzero when the sum of the undecimated variables is $N/2-1$, and the second step yields a function that is nonzero when the sum of the undecimated variables is either $N/2-2$ or $N/2-1$. The functions obtained after $j$ decimations are nonzero on a fraction of inputs that is bounded above by $C j/\sqrt{N}$, where $C$ is a constant of order unity, so long as $j\ll\sqrt{N}$. The original function is thus identified as non-generic because so long as the number of renormalizations applied is much smaller than $\sqrt{N}$ the renormalized functions are all nonzero on a fraction of input configurations that is much less than $1/2$.
Next we argue that the renormalization group approach distinguishes any function of the arithmetic sum of the inputs from a generic Boolean function. The physical intuition underlying the argument is that all the functions in the sequence depend only on the arithmetic sum of the undecimated variables, and when the number of undecimated variables is $\mathcal{N}$, the number of configurations of the undecimated variables whose arithmetic sum is constrained to be $\mathcal{S}$ is $\mathcal{N}!/[\mathcal{S}!(\mathcal{N}-\mathcal{S})!]$. One can use Stirling’s series [@marsaglia90] to show explicitly that when $N$ is large, the number of configurations with a given value of $\mathcal{S}$ is a polynomial in $1/N$ times $2^N$ for a number of values of $\mathcal{S}$ that grows as the square root of $N$. Therefore, the [*differences*]{} in the fraction of configurations yielding different values of $\mathcal{S}$ decay polynomially with $N$, and the fraction of input configurations yielding one should either be exactly 1/2 or else must deviate from 1/2 by an amount that decreases only polynomially with $N$.
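Because every function in the sequence depends only on the arithmetic sum of the undecimated variables, the decimation of Eq. (\[eq:g\_1\]) acts directly on the profile $v(s)$ of output values versus the sum $s$, as $v'(s)=v(s)\oplus v(s+1)$, and the fraction of nonzero outputs is a binomial average over $s$. The sketch below (ours) uses this shortcut to follow DIVISIBILITY MOD 3 and MAJORITY for a few decimations; the fractions stay far from the generic value of $1/2$.

```python
from math import comb

def decimate_profile(v):
    """One decimation step for a function that depends only on the sum of its inputs."""
    return [v[s] ^ v[s + 1] for s in range(len(v) - 1)]

def fraction_nonzero(v):
    m = len(v) - 1                            # number of remaining input variables
    return sum(comb(m, s) for s, val in enumerate(v) if val) / 2 ** m

N = 200
profiles = {
    "DIVISIBILITY MOD 3": [int(s % 3 == 0) for s in range(N + 1)],
    "MAJORITY":           [int(2 * s > N) for s in range(N + 1)],
}
for name, v in profiles.items():
    fractions = []
    for _ in range(5):
        v = decimate_profile(v)
        fractions.append(round(fraction_nonzero(v), 3))
    print(name, fractions)
```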
Renormalization procedure for characterizing functions that can be constructed using polynomially bounded resources. {#sec:RG_for_P_functions}
====================================================================================================================
\[sec:RG\_for\_low\_order\_polynomial\_functions\] This section addresses the relationship between non-generic phases of Boolean functions and the computational complexity class P of functions that can be computed with polynomially bounded resources.
There are functions that are in P that are neither polynomials of degree $\xi$ with $\xi<N$ nor functions of composite variables. For example, because the sum of two functions that are in P is in P, a sum of any function that is in P with a small “generic” piece specified by Eq. (\[eq:general\_form\]) with the coefficients chosen independently and randomly to be one with probability $p_0=\mathcal{P}(N)2^{-N}$, where $\mathcal{P}(N)$ is a polynomial in $N$, is in P. Eq. (\[eq:small\_p\_flow\_result\]) shows that $\ell$ renormalizations cause the value of $p_\ell$ to grow exponentially with $\ell$, $p_\ell=2^\ell p_0$; in renormalization group parlance [@goldenfeld92] the remainder is a “relevant” perturbation. Since the generic piece renormalizes towards the generic fixed point at which exponentially close to half the inputs yield a nonzero output, whether or not the function resulting from many renormalizations can be identified as non-generic depends on whether the first piece yields a nonzero result after many renormalizations. A function of a composite variable yields a result different both from zero and from that of generic functions, and when a small generic piece is added to such a function, renormalization still yields a non-generic result. However, because after $\xi+1$ renormalizations of a polynomial of order $\xi$ one obtains zero, renormalizing functions that are the sum of a low-order polynomial and a small generic piece yields zero plus the generic result, and so cannot be identified as non-generic by straightforward application of the renormalization transformation.
The number of polynomials of $N$ variables with degree $\xi$ or less is $2^{\sum_{k=0}^\xi N!/(k!(N-k)!)}$ [@polynomial_count], which when $\xi\ll N$ can be approximated as $2^{e(N/\xi)^\xi}$. Therefore, when $\xi$ scales as a fractional power of $N$, there are many more polynomials of degree $\xi$ than there are functions in P. On the other hand, the product of all $N$ variables $x_1\ldots x_N$ is in P, so there are functions in P that cannot be written as polynomials of degree $\xi$ for any $\xi<N$. Therefore, using our definition of a phase based on the behavior yielded by repeated renormalization, P is not a phase. There are non-generic functions that are not in P and there are functions in P that are in the generic phase. However, note that a product of $M$ variables is nonzero for only a fraction $2^{-M}$ of the input configurations. For example, the term $x_1x_2\ldots x_R$ is nonzero only for input configurations that have $x_1=x_2=\ldots=x_R=1$. The sum of a polynomially large number $M$ of terms of this type is nonzero only on a fraction of inputs that is bounded above by $M/2^R$. In Appendix A it is argued that the functions in P that are in the generic phase have the property that, for any $\xi<N$, such a function $f(x_1,\ldots,x_N)$ can be written as the sum: $$f(x_1,\ldots,x_N)=\mathcal{P}_\xi(x_1,\ldots,x_N)\oplus \mathcal{R}_\xi(x_1,\ldots,x_N)~,
\label{eq:bound_for_fns_in_P}
\label{eq:decomposition}$$ where $\mathcal{P}_\xi(x_1,\ldots,x_N)$ is a polynomial of degree no more than $\xi$ and the remainder term $\mathcal{R}_\xi(x_1,\ldots,x_N)$ is nonzero on a fraction of input configurations that is bounded above by $\mathcal{C}2^{-\alpha\xi/\log_2(N)}$, with $\mathcal{C}$ and $\alpha$ positive constants.
As discussed above, using the RG transformation to identify functions that satisfy Eq. (\[eq:decomposition\]) is not entirely straightforward — the obvious strategy, seeing if the functions obtained after renormalizing $\xi+1$ times have a small remainder term, fails because renormalization yields exponential growth in the fraction of input configurations for which the remainder term is nonzero. This difficulty can be circumvented by examining [*all*]{} functions that differ from the function in question on a fraction of input configurations no greater than $\mathcal{C}2^{-\alpha \xi/\log_2(N)}$. If the original function obeys Eq. (\[eq:decomposition\]), then one of the “perturbed” functions will have a remainder term that is zero, and applying the renormalization transformation to it $\xi+1$ times yields zero for all choices of the decimated variables.
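The procedure just described is far beyond reach computationally, but for very small $N$ an equivalent brute-force test can be carried out directly: enumerate every polynomial of degree at most $\xi$ and find the smallest number of output values of $f$ that would have to be changed to reach one of them (such a perturbed function then renormalizes to zero after $\xi+1$ decimations, as noted above). The sketch below (ours) does this for $N=4$ and $\xi=1$.

```python
import itertools

def anf_table(n, coeffs):
    """Truth table of the mod-2 polynomial with the given set of monomials."""
    return {x: sum(all(x[i] for i in m) for m in coeffs) % 2
            for x in itertools.product((0, 1), repeat=n)}

def distance_to_degree(f, n, xi):
    """Minimum Hamming distance from f to any polynomial of degree <= xi."""
    monomials = [m for d in range(xi + 1)
                 for m in itertools.combinations(range(n), d)]
    best = 2 ** n
    for choice in itertools.product((0, 1), repeat=len(monomials)):
        p = anf_table(n, [m for m, c in zip(monomials, choice) if c])
        best = min(best, sum(f[x] != p[x] for x in f))
    return best

n, xi = 4, 1
f = {x: (x[0] & x[1]) ^ x[2] for x in itertools.product((0, 1), repeat=n)}
print(distance_to_degree(f, n, xi))   # distance from x0*x1 XOR x2 to the nearest affine function
```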
There are functions known to be in P that can be written as the sum of a function of a composite variable and a function that is nonzero on a small fraction of inputs. Nongeneric behavior is obtained upon renormalization for all such functions except for those for which all functions in the renormalization sequence yield one for exactly half the input configurations. The procedure for identifying such functions is exactly analogous to that for identifying functions that can be approximated as low-order polynomials — examine the properties under renormalization of all the functions that yield the same output as the one in question except for a small fraction of the inputs.
Finally, we note that in Appendix B it is demonstrated that almost all generic random functions do not satisfy Eq. (\[eq:decomposition\]) when $\xi$ scales as a fractional power of $N$.
Discussion {#sec:discussion}
==========
This paper presents a renormalization group approach that distinguishes generic Boolean functions of $N$ variables from functions that can be written as a polynomial of degree $\xi$, with $\xi \ll N$, and also from functions that depend only on composite quantities such as the arithmetic sum of all the input variables. The method provides a consistent framework for identifying many different functions as non-generic.
The renormalization group approach also provides a natural framework for understanding why the P versus NP question is so difficult. Functions computable with polynomial resources do not comprise a phase — there are functions that are in a non-generic phase that are not in P, and there are functions in P for which the renormalization group yields a “flow” that is towards the generic fixed point and which hence are in the “generic” phase. The possibility of using the RG approach to demonstrate that a given Boolean function is not in P arises because it is possible that all functions in P that are in the generic phase are close to a phase boundary of a non-generic phase. Whether the renormalization group approach can provide a means for determining whether or not P is distinct from NP depends on whether it is possible to demonstrate that all efficiently computable functions are in or near a non-generic phase.
The procedure used here of using the behavior yielded by a renormalization group transformation to identify different phases of Boolean functions is entirely analogous to a procedure presented by Wilson [@wilson79] to identify different thermodynamic phases of the Ising model, used to describe magnetism in solids. Wilson showed that individual configurations of Ising models could be identified as being in either a ferromagnetic phase or paramagnetic phase by repeatedly eliminating spins and examining the resulting configurations — if after many renormalizations all the spins are aligned, then the system is in the ferromagnetic phase, while if after many renormalizations the spin orientations are random, then the system is in the paramagnetic phase. Viewing the analogy between the results for magnets and the qualitatively different behavior of the renormalization group “flows” for polynomials of degree $\xi$, for functions of composite variables, and for generic Boolean functions as an indication that low-degree polynomials and functions of composite variables are both non-generic “phases," we propose the schematic phase diagram for Boolean functions, shown in Fig. \[fig:phase\_diagram\].
![Schematic phase diagram for Boolean functions. Within the set of all Boolean functions of $N$ Boolean variables there is a generic phase, a phase consisting of functions that can be written as polynomials of order no greater than $\xi$ with $\xi \ll N$, and there are phases corresponding to functions of composite variables such as the arithmetic sum of all the inputs. Some polynomials of degree $\xi$ are not in P, and some functions that can be computed with polynomial resources cannot be written either as polynomials of degree $\xi$ for any $\xi<N$ or as functions of a composite variable. Therefore, P does not denote a phase. However, we conjecture that all functions in P are either in a non-generic phase or else very close to the low-order-polynomial phase boundary. []{data-label="fig:phase_diagram"}](new_phase_diagram1.eps){height="5cm"}
If it can be shown that all functions in P are either in a non-generic phase or else very close to a phase boundary, then the procedure described here leads to a specific algorithmic approach to the P versus NP question — if a given function that is obtained as the answer to a problem in NP fails to be close enough to a non-generic phase, then one has shown that P is not equal to NP. (Ref. [@coppersmith06b] advocates a family of candidate functions for testing using the strategy proposed in this paper, but the strategy can be implemented for any candidate function.) Appendix B shows that almost all Boolean functions are not close to non-generic phase boundaries. Appendix A argues that the construction of a function in P that does not satisfy Eq. (\[eq:decomposition\]) requires delicate balancing that may signal the existence of a composite variable, but the argument is only speculative. Progress on this issue is the key to using the RG approach to be able to address the P versus NP question.
Because the procedure discussed in Sec. \[sec:RG\_for\_P\_functions\] requires a number of operations that scales superexponentially with N, the procedure proposed here is not a “natural proof" as discussed in Ref. [@razborov94natural] and therefore does not yield a method for breaking pseudorandom number generators. However, direct numerical implementation of the procedure is not likely to be computationally feasible.
Conclusions {#sec:conclusions}
===========
This paper presents a renormalization group approach that can be used to distinguish a generic Boolean function from (1) a Boolean function of $N$ variables that can be written as a polynomial of degree $\xi$ with $\xi<N$, and (2) a function that depends only on a composite variable (such as the arithmetic sum of the inputs). An algorithm for determining whether a function differs from a polynomial of degree $\xi$ on a fraction of inputs that is exponentially small in $\xi/\log(N)$ is presented. The possible relevance of these results to the question of whether P and NP are distinct is discussed.
Acknowledgments {#sec:acknowledgments}
===============
The author is grateful to Prof. Daniel Spielman for pointing out a serious error in the original version of the manuscript, and acknowledges support from NSF grants CCF 0523680 and DMR 0209630.
[**Appendix A Characterization of the functions that can be constructed with a polynomially large number of operations.** ]{}
In this appendix we examine the properties of functions that can be computed with polynomially bounded resources. First we discuss why it is plausible that almost all functions in P can written in the form Eq. (\[eq:decomposition\]), which is the sum of two terms, the first a polynomial of degree $\xi$, and the second a correction term that is nonzero on a fraction of input configurations that is exponentially small in $\xi/\log(N)$. We then examine known functions in P that cannot be written in this form, arguing that they have special properties that may give rise to the emergence of a composite variable on which the function depends, which would lead to non-generic behavior upon renormalization.
To see why it is hard to construct functions in P that do not satisfy Eq. (\[eq:decomposition\]), we consider the process by which functions can be constructed. First we show that a starting polynomial that is the sum of polynomially many terms whose factors are all either $x_i$ or $(1-x_i)$ satisfies Eq. (\[eq:decomposition\]). Then we show that the sum of two functions that each obey Eq. (\[eq:decomposition\]) also satisfies Eq. (\[eq:decomposition\]), and also that the coefficient multiplying the correction term grows sufficiently slowly that the bound remains true even after a number of additions that grows polynomially with $N$. We then consider products of such functions. The behavior is more complicated, but we argue that a similar decomposition works in most circumstances because when many terms are multiplied together, the result is nonzero only on a small fraction of inputs. Finally, we examine some functions in P which do not satisfy Eq. (\[eq:decomposition\]) and note that they involve a delicate balance that enables the sum of a finite number of products to be nonzero on the same fraction of inputs as the individual terms. It is plausible that this nongeneric property is associated with the nongeneric behavior of these functions upon renormalization.
First consider a polynomial $A(x_1,\ldots,x_N)$ that is the mod-2 sum of polynomially many terms that are all of the form $y_{i_1}\ldots y_{i_m}$, where $y_i$ is either $x_i$ or $1-x_i$: $$\begin{aligned}
{A}(x_1,\ldots,x_N)=C_0 + \sum_{\eta=1}^N
\sum_{k_\eta=1}^{M_\eta}
y_{i_1(\eta ,k_\eta)} \ldots y_{i_{\eta}(\eta, k_\eta)}~.\end{aligned}$$ Here, $C_0$ is a constant, $\eta$ denotes the number of factors of $y_i$ in a term, $k_\eta$ is the index labeling the different terms with $\eta$ factors, $i_j(\eta,k_\eta)$ denotes the index of the $j^{th}$ factor in the term $k_\eta$, and each $M_\eta$, the number of terms with $\eta$ factors, is bounded above by a polynomial of $N$. We will obtain bounds on the number of configurations for which the output is nonzero by considering standard addition instead of modulo-two addition, which means that we will be overcounting by including configurations for which an even number of terms in the polynomial expansion are nonzero. Each term with $\eta$ factors is nonzero only on a fraction $2^{-\eta}$ of the inputs. Therefore, if we define $\rho_{A}(\eta)$ to be the fraction of inputs of $A(x_1,\ldots,x_N)$ for which the sum of all the terms with $\eta$ factors is nonzero, we have $$\rho_A(\eta) \le C_A 2^{-\alpha \eta}~,
\label{eq:decay_condition}$$ for constant $C_A$ and $\alpha=\frac{1}{2}-\epsilon$, with $\epsilon$ infinitesimal.
Now consider the addition of two functions $P(x_1,\ldots,x_N)$ and $Q(x_1,\ldots,x_N)$ that satisfy Eq. (\[eq:decomposition\]) for positive $\mathcal{C_P}$, $\mathcal{C_Q}$, and $\alpha$. Again we consider standard addition instead of modulo-two addition. Because the sum $S(x_1,\ldots,x_N)=P(x_1,\ldots,x_N)+Q(x_1,\ldots,x_N)$ has the property that all terms in the sum appears in at least one of the summands, we have $$\rho_S(\eta) \le \rho_P(\eta)+\rho_Q(\eta)~;$$ the sum obeys Eq. (\[eq:decay\_condition\]) with the same value of $\alpha$ and with $C_S \le C_P+C_Q$. Adding polynomially many terms can increase the prefactor only by an amount that grows no faster than polynomially in $N$.
We next consider the product of two functions that satisfy Eq. (\[eq:decay\_condition\]). We write $$\begin{aligned}
A(\vec{x}) &=& P_A^\xi(\vec{x})+R_A^\xi(\vec{x})\nonumber\\
B(\vec{x}) &=& P_B^\xi(\vec{x})+R_B^\xi(\vec{x})~,\end{aligned}$$ where $P_A^\xi$ and $P_B^\xi$ are polynomials of order $\xi$ with $T_A$ and $T_B$ terms respectively, and $R_A^\xi(\vec{x})$ and $R_B^\xi(\vec{x})$ are both nonzero on a fraction of inputs that is less than $\mathcal{C} 2^{-\alpha \xi}$ for positive constants $\mathcal{C}$ and $\alpha$.
We write the product of $A(\vec{x})$ and $B(\vec{x})$ as $$\begin{aligned}
D({ \vec{x} }) &=& A({ \vec{x} })B({ \vec{x} })\nonumber\\
&=& (P_A^\xi({ \vec{x} })+R_A^\xi({ \vec{x} }))(P_B^\xi({ \vec{x} })+R_B^\xi({ \vec{x} }))\nonumber\\
&=& P_A^\xi({ \vec{x} })P_B^\xi({ \vec{x} })+P_A^\xi({ \vec{x} })R_B^\xi({ \vec{x} })
+R_A^\xi({ \vec{x} })P_B^\xi({ \vec{x} })+R_A^\xi({ \vec{x} })R_B^\xi({ \vec{x} })~.\end{aligned}$$ Now $P_A^\xi({ \vec{x} })R_B^\xi({ \vec{x} })$ is nonzero on fewer inputs than $R_B^\xi({ \vec{x} })$ (this follows since a product is nonzero only if each of its factors is nonzero), and, similarly, $R_A^\xi({ \vec{x} })P_B^\xi({ \vec{x} })$ and $R_A^\xi({ \vec{x} })R_B^\xi({ \vec{x} })$ are each nonzero on fewer inputs than $R_A^\xi({ \vec{x} })$, so the sum of the last three terms is nonzero on a fraction of inputs less than $3\mathcal{C} 2^{-\alpha\xi}$. Therefore, these contributions to the remainder term in the product remain exponentially small, with a coefficient that remains bounded by a polynomial in $N$ after polynomially many multiplications. Thus, it only remains to consider the properties of the product $P_A^\xi({ \vec{x} })P_B^\xi({ \vec{x} })$, which we write $$P_A^\xi({ \vec{x} })P_B^\xi({ \vec{x} })=P_D^\xi({ \vec{x} })+R_D^\xi({ \vec{x} })~,
\label{eq:P_Dequation}$$ where $P_D^\xi({ \vec{x} })$ is a polynomial of degree $\xi$ and $R_D^\xi({ \vec{x} })$ is a remainder term that we need to bound.
To bound the magnitude of the remainder, let us multiply out the polynomials in Eq. (\[eq:P\_Dequation\]) so that they are all sums of terms that are products of the form $y_{i_1}\ldots y_{i_j}$, terms that we will denote as “primitive." Let $T_A$ be the number of primitive terms in $P_A^\xi({ \vec{x} })$, and $T_B$ be the number of primitive terms in $P_B^\xi({ \vec{x} })$. Note that every primitive term in the product with more than $\xi$ factors is nonzero on a fraction $2^{-\xi}$ or less of the input configurations.
Since the total number of primitive terms in $R_D^\xi({ \vec{x} })$ is bounded above by $T_AT_B$, the fraction of inputs on which the sum of the terms with at least $\xi$ factors is nonzero is bounded above by $T_AT_B2^{-\xi}$. So long as $T_A$ and $T_B$ are both less than exponentially large in $\xi$, then this remainder term is exponentially small in $\xi$. The multiplication process must start with values of $T_A$ and $T_B$ that are both bounded by a polynomial of $N$, but because multiplications can be composed, we need to examine the behavior of $T_D$, the number of primitive terms in $P_D^\xi({ \vec{x} })$.
A simple upper bound for $T_D$ is obtained by ignoring all possible simplifications that could reduce the total number of terms in the product: $$T_D \le T_AT_B~.$$ This equation describes geometric growth. If $\mathcal{M}$ polynomials are multiplied together, all of which have fewer than $CN^Y$ terms for fixed $C$ and $Y$, then the total number of terms in the product, $T_\mathcal{M}$, satisfies the bound $$T_\mathcal{M} \le (CN^{Y})^ \mathcal{M}~.
\label{eq:largeMbound}$$ This bound on the number of terms in the product is much smaller than $2^\xi$ so long as $\mathcal{M}$ satisfies $$\mathcal{M} \ll \xi/ (Y\log_2 N+\log_2 C)~.$$
A useful bound on multiplicative terms that are products of more than $\xi/(Y\log_2 N) $ factors can be obtained by exploiting the fact that the product of two functions is nonzero for a given input only if each of the factors is. Specifically, consider the product $AB$, and say that $A$ is nonzero on a set of $M_A$ inputs. If $B$ is nonzero on less than a fraction $\sigma$ of the inputs in this set for some $1/2<\sigma<1$, then the product $AB$ is nonzero on fewer than $\sigma M_A$ inputs, and if not, then the product $A(1-B)$ is nonzero on fewer than $(1-\sigma)M_A$ inputs, and one can write $AB=A+A(1-B)$. [@proliferation_footnote]
The result of $\mathcal{M}$ multiplications is then nonzero only on a fraction of inputs bounded above by $\sigma^{\mathcal{M}}=2^{-\mathcal{M}\log_2(1/\sigma)}$. Therefore, a product of more than $\xi/(Y\log_2 N)$ factors is nonzero on no more than a fraction $2^{-\tilde{C}\xi/\log_2(N)}$ of the inputs, where $\tilde{C}$ is a positive constant, and the entire product can be moved into the remainder term.
The arguments above indicate that the remainder term tends to be small for products because the number of terms in the polynomial that are of order $\xi$ or less can be bounded for products of small numbers of terms, and products of many terms are nonzero on a small enough fraction of the input configurations that they can be considered to be part of the remainder term. However, there are functions in P that do not obey Eq. (\[eq:decomposition\]). Two examples of functions that are in P that have been proven to violate Eq. (\[eq:decomposition\]) are MAJORITY (which is one when more than half input variables have been set to one and zero otherwise) [@razborov87] and DIVISIBILITY MOD p, which is one if the sum of the input variables is divisible by an odd prime p [@smolensky87; @smolensky93]. Both these functions depend only on the arithmetic (not mod-2) sum of all the variables, $x_1+x_2+\ldots x_N$. Calculating the sum of $N$ variables can be done with polynomially bounded resources because one need only keep track of a running sum, which is the same for many different values of the individual $x_j$. For instance, when $k=N$, there are $N!/((N/2)!)^2 \approx 2^N/\sqrt{2\pi N}$ different ways to choose the $x_1\ldots x_k$ so that their sum is $N/2$.
It is instructive to consider an algorithm for computing DIVISIBILITY MOD $3$ to see how the function avoids being a low order polynomial. Some pseudocode for a simple algorithm for this problem is: $$\begin{aligned}
&& {\rm divisibility~ mod~ 3:}\\
&&~~~~~ {\rm start: remainder0[0]=1, remainder1[0]=remainder2[0]=0}\\
&&~~~~~ {\rm for~ each~ i>0}\\
&&~~~~~ {\rm remainder0[i+1] = remainder0[i]*(1-x_{i+1})\oplus remainder2[i]*x_{i+1}}\\
&&~~~~~ {\rm remainder1[i+1] = remainder1[i]*(1-x_{i+1})\oplus remainder0[i]*x_{i+1}}\\
&&~~~~~ {\rm remainder2[i+1] = remainder2[i]*(1-x_{i+1})\oplus remainder1[i]*x_{i+1}}\\
&&~~~~~ {\rm answer=remainder0[N]}\end{aligned}$$ The quantity remainder0\[i\]+remainder1\[i\]+remainder2\[i\] is unity for every i, and the fraction of inputs for which each remainder variable is nonzero is very close to $1/3$ and does not decay exponentially with i. The fractions do not decay or grow because the equation for each remainder for a given $i$ is the sum of two products. The product ${\rm remainder0[i](1-x_{i+1})}$ is nonzero on half the inputs on which remainder0\[i\] is nonzero, and similarly for the other term ${\rm remainder2[i]*x_{i+1}}$. Because ${\rm remainder0[i+1]}$ is the sum of two terms, each of which is nonzero on almost exactly half the outputs for which ${\rm remainder0[i]}$ is nonzero, ${\rm remainder0[j]}$ remains of order of but less than unity for all j. It is plausible that this exquisite cancellation leads to the existence of a composite variable on which the function depends, or, more generally, to non-generic behavior upon renormalization. Because obtaining a function that cannot be written as a low-order polynomial plus a term that is nonzero except for a small fraction of input configurations requires a series of delicate cancellations, it is also extremely plausible that the fraction of functions that are in P and do not satisfy Eq. (\[eq:decomposition\]) is extremely small.
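A directly runnable rendering of this pseudocode (our translation) is given below; the three remainder indicators are updated in place, so the work per input bit is constant.

```python
import itertools

def divisibility_mod3(bits):
    """Return 1 if the number of nonzero inputs is divisible by 3, and 0 otherwise."""
    r0, r1, r2 = 1, 0, 0                     # indicators for remainder 0, 1, 2
    for x in bits:
        r0, r1, r2 = (r0 * (1 - x) ^ r2 * x,
                      r1 * (1 - x) ^ r0 * x,
                      r2 * (1 - x) ^ r1 * x)
    return r0

assert all(divisibility_mod3(bits) == (sum(bits) % 3 == 0)
           for bits in itertools.product((0, 1), repeat=8))
```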
As discussed in the main text, the renormalization group distinguishes functions that depend on the sum of the values of the input variables from generic Boolean functions because the renormalization transformation preserves the property that a given value for the composite variable occurs for an exponentially large number of input configurations. Moreover, at least when the composite variable is the arithmetic sum of the inputs, the fractions of input configurations for which the sum of the variables takes on different values differ by an amount that decays only polynomially with $N$. Therefore, such functions can yield one either on exactly half the inputs or else on a fraction of the inputs that differs from $1/2$ by an amount that is at least as large as $N^{-x}$ for some positive $x$.
To summarize, in this appendix we discuss the restrictions on Boolean functions of $N$ variables that can be computed with resources that are bounded above by a polynomial in $N$. Many functions in P have the property that they can be written, for any fixed $\xi$, as the sum of a polynomial of degree $\xi$ and a term that is bounded above by $\mathcal{C}2^{-\alpha\xi/\log_2(N)}$ for positive constants $\mathcal{C}$ and $\alpha$. Known functions in P that cannot be approximated by low-order polynomials have the property that they have a dependence on a composite variable. The renormalization group transformation provides a means for distinguishing both types of functions from generic Boolean functions.
[**Appendix B: Demonstration that a typical Boolean function does not satisfy Eq. (\[eq:bound\_for\_fns\_in\_P\]).**]{}
In this appendix it is shown that for a typical Boolean function, changing the outputs for an exponentially small fraction of the inputs does not yield a low-order polynomial. Specifically, given a value of $\xi$ with $\xi\propto N^y$ with $0<y<1$, if one changes the output value of a typical Boolean function for no more than $\mathcal{C} 2^{N-\alpha\xi/\log_2(N)}$ input configurations, then the resulting function cannot be written as a polynomial of degree $\xi$ or less. This is done by showing that the number of Boolean functions that satisfy Eq. (\[eq:bound\_for\_fns\_in\_P\]) is much less than the number of Boolean functions of $N$ variables.
The number of Boolean functions of $N$ variables satisfying Eq. (\[eq:bound\_for\_fns\_in\_P\]), $\mathcal{B}(N,\xi)$, satisfies $$\mathcal{B}(N,\xi) \le \mathcal{F}(N,\xi) \mathcal{M}(N,\xi)~,$$ where $\mathcal{F}(N,\xi)$ denotes the number of ways to choose up to $\mathcal{C}2^{N-\alpha\xi/\log_2(N)}$ input configurations whose outputs may be altered and $\mathcal{M}(N,\xi)$ is the number of polynomials of degree $\xi$ or less.
Let $\Phi=\mathcal{C} 2^{N-\alpha\xi/\log_2(N)}$ be the maximum number of configurations whose outputs we are allowed to alter, and $\Omega=2^N$ be the total number of input configurations. The quantity $\mathcal{F}(N,\xi)$ is the number of ways that one can choose up to $\Phi$ items out of $\Omega$ possibilities. We have $$\begin{aligned}
\mathcal{F}(N,\xi) &=&\sum_{s=1}^\Phi
\frac{\Omega!}{s !(\Omega-s)!}\nonumber\\
&\sim&
e(\Omega/\Phi)^\Phi
= e{(2^{\alpha \xi/\log_2(N)}/\mathcal{C} )}^{\mathcal{C}2^{N-\alpha\xi/\log_2(N)}}~,\end{aligned}$$ where the last line applies when $1 \ll \xi \ll N$. Next note that $\mathcal{M}_\xi$, the number of different polynomials of degree less than or equal to $\xi$, is: $$\begin{aligned}
\mathcal{M}_\xi &=& 2^{\sum_{j=0}^\xi N!/j!(N-j)!}\nonumber\\
&\sim& 2^{e(N/\xi)^\xi}~,
\label{eq:num_polynomials}\end{aligned}$$ where again the last line assumes $1 \ll \xi \ll N$. Eq. (\[eq:num\_polynomials\]) follows because all polynomials of degree $\xi$ or less can be written as a sum over all terms that are products of the form $x_{i_1}\ldots x_{i_j}$ with $j\le\xi$. There are $\sum_{j=1}^\xi N!/[j!(N-j)!]$ such terms, and each coefficient can be either $1$ or $0$. Thus, when $1 \ll \xi \ll N$, the total number of functions that satisfy Eq. (\[eq:bound\_for\_fns\_in\_P\]) is bounded above by $$\begin{aligned}
\mathcal{B}(N,\xi) &\le& \left (
e(2^{\alpha \xi/\log_2(N)}/\mathcal{C})^{\mathcal{C}2^{N-\alpha\xi/\log_2(N)}}
\right )
\left (2^{e(N/\xi)^\xi}\right )~,
\end{aligned}$$ which, as $N\rightarrow\infty$ and $\xi \propto N^y$ with $0<y<1$, is much smaller than $2^{2^N}$, the total number of Boolean functions of $N$ Boolean variables.
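To get a feel for the sizes involved, the following numerical aside (not part of the proof) compares the exponent $\sum_{k\le\xi} N!/[k!(N-k)!]$ appearing in $\mathcal{M}_\xi$ with $2^N$, the exponent in the total count of Boolean functions, for a few values of $N$ with $\xi$ of order $\sqrt{N}$; the growth rate chosen for $\xi$ is an arbitrary illustration, and the factor $\mathcal{F}(N,\xi)$ is ignored here.

```python
from math import comb

def log2_num_low_degree_polys(N, xi):
    """log2 of the number of mod-2 polynomials of degree <= xi in N variables:
    one 0/1 coefficient per monomial x_{i_1}...x_{i_k} with k <= xi."""
    return sum(comb(N, k) for k in range(xi + 1))

for N in (16, 32, 64):
    xi = max(1, round(N ** 0.5))        # xi growing like N^y with y = 1/2
    # compare with 2**N, the log2 of the number of all Boolean functions
    print(N, xi, log2_num_low_degree_polys(N, xi), 2 ** N)
```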
A second non-rigorous but informative argument to see that generic Boolean functions do not satisfy Eq. (\[eq:bound\_for\_fns\_in\_P\]) is to consider a generic Boolean function in which each coefficient $A_{i_1,\ldots,i_N}$ is chosen independently and randomly to be $1$ or $0$ with equal probability. For a typical Boolean function, one can always satisfy Eq. (\[eq:bound\_for\_fns\_in\_P\]) by changing just about half the output values so that the function has the same value for all inputs. The question is whether one can obtain $g_{x_{i_1},\ldots,x_{i_M}}({ \vec{x} }^\prime)=0$ for all choices of the $M$ decimated variables by changing the function for many fewer configurations than that. For a given $g$ in which $M$ variables have been decimated, one can arrange for $g_{x_{i_1},\ldots,x_{i_M}}({ \vec{x} }^\prime)=0$ to hold for all $2^{N-M}$ different possible ${{ \vec{x} }^\prime}$ by changing the output for just about $2^{N-M-1}$ different input configurations. But one must arrange for $g_{x_{j_1},\ldots,x_{j_M}}({ \vec{x} }^\prime)$ to vanish for all possible choices of the $M$ variables to be decimated. There are $N!/[M!(N-M)!] \sim e(N/M)^M$ different ways to choose the decimated variables, so a naive estimate is that one must adjust $2^{N-M}$ configurations for each of $e(N/M)^M$ choices of the decimated variables, or $2^{N-M+1+M\log_2(N/M)}$, which exceeds $2^N$ for all $M \ll N$. This argument is useful because it makes it clear why one must examine all choices of the decimated variables to distinguish functions that do not satisfy Eq. (\[eq:bound\_for\_fns\_in\_P\]).
[30]{}
Papadimitriou C.: [*Computational Complexity*]{}, Addison-Wesley, 1994.
See `qwiki.caltech.edu/wiki/Complexity_Zoo`.
Cook, S.: The complexity of theorem proving procedures, in Proceedings of the third annual ACM symposium on the theory of computing, ACM, New York, pp. [151–158]{}, 1971.
Levin, L.: Universal’nyie perebornyie zadachi (Universal search problems: in Russian). Problemy Peredachi Informatsii 9:3 (1972), pp. 265–266. English translation, “Universal Search Problems," in B. A. Trakhtenbrot (1984). “A Survey of Russian Approaches to Perebor (Brute-Force Searches) Algorithms." Annals of the History of Computing 6 (4): 384–400.
See `http://claymath.org/millennium/P_vs_NP/`.
Boppana, R. and Sipser, M.: The Complexity of finite functions, In: The Handbook of Theoretical Computer Science, (J. van Leeuwen, ed.), Elsevier Science Publishers B.V., 1990 pp. 759–804.
Sipser, M.: The history and status of the P versus NP question, in Proceedings of ACM STOC '92, pp. 603–618, 1992.
Aaronson, S.: Is P Versus NP Formally Independent?, Bulletin of the EATCS 81, October 2003.
Wigderson, A.: P, NP and Mathematics - a computational complexity perspective, STOC 06, 2006. `http://www.math.ias.edu/~avi/PUBLICATIONS/MYPAPERS/W06/W06.pdf`.
Goldenfeld, N.: Lectures on Phase Transitions and the Renormalization Group (Academic Press, Boston, 1991).
Kadanoff, L.P.: Scaling Laws for Ising Models Near $T_c$, Physics [**2**]{}, 263–272 (1966).
Wilson, K.G.: Renormalization Group and Critical Phenomena. I. Renormalization Group and the Kadanoff Scaling Picture, Physical Review B4, 3174–3183 (1971).
Wilson, K.G.: Problems in Physics with Many Scales of Length, Scientific American 241: 158–179, 1979.
It is more usual for renormalization group transformations to reduce the number of variables by a factor of two instead of by one. An example of a renormalization group that eliminates one variable at a time is the density matrix renormalization group introduced in White, S. Density matrix formulation for quantum renormalization groups, Phys. Rev. Lett. 69, 2863–2866 (1992).
Shannon, C.E.: The Synthesis of Two-Terminal Switching Circuits, Bell System Technical Journal 28, 59–98, (1949).
Riordan, J. and Shannon, C.E.: The number of two-terminal series-parallel networks, Journal of Mathematics and Physics, 21(2): 83–93, 1942.
See `http://www.math.ucsd.edu/~sbuss/CourseWeb/Math267_1992WS/wholecourse.pdf`, p. 73.
Razborov, A.: Lower bounds on the size of bounded-depth networks over a complete basis with logical addition (Russian), in Matematicheskie Zametki, Vol. 41, No 4, 1987, pages 598-607. English translation in Mathematical Notes of the Academy of Sci. of the USSR, 41(4):333–338, 1987.
Smolensky, R.: On representations by low-degree polynomials. In FOCS34, IEEE, 130–138, 1993.
Smolensky, R.: Algebraic methods in the theory of lower bounds for Boolean circuit complexity. In Proc. of 19th STOC, pages 77–82, 1987.
Razborov, A.A. and Rudich, S.: Natural proofs, in Proc. 26th ACM Symp. on Theory of Computing, 204–213, 1994. These polynomials have a natural interpretation in terms of arithmetic circuits. See, e.g., Raz, R.: Lecture notes on arithmetic circuits, `http://www.cs.mcgill.ca/~denis/notes05.ps`.
Furst, M.L., Saxe, J.B., and Sipser, M.: Parity, circuits, and the polynomial-time hierarchy. Mathematical Systems Theory, 17(1):13–27, 1984.
Yao, A.C.: Separating the polynomial-time hierarchy by oracles. In Proceedings of the 26th IEEE Symposium on Foundations of Computer Science, pages 1–10, 1985.
Håstad, J.: Almost optimal lower bounds for small depth circuits. In Proceedings of the 18th ACM Symposium on Theory of Computing, pages 6–20, 1986.
To obtain the number of polynomials of degree $\xi$ or less, note that each can be written as a sum of terms of the form $x_{i_1}\ldots x_{i_k}$ for all $k \le \xi$. There are $N!/k!(N-k)!$ ways to choose $k$ indices out of $N$ possibilities, so there are $\sum_{k=1}^\xi N!/k!(N-k)!$ different possible terms in the polynomial, each of which occurs with a coefficient of either one or zero. Thus, there are $2^{\sum_{k=1}^\xi N!/k!(N-k)!}$ different polynomials of degree $\xi$ or less.
Hill, T.: An Introduction to Statistical Thermodynamics, Dover Books, New York, appendix 2, p. 478, 1986.
Marsaglia, G. and Marsaglia, J. C.: “A New Derivation of Stirling’s Approximation to n!." Amer. Math. Monthly 97:826–829, 1990.
Coppersmith, S.N.: The computational complexity of Kauffman nets and the P versus NP problem, preprint cond-mat/0510840.
One might worry that products of the form $A_1A_2\ldots A_M$, where each $A_i$ is nonzero on more than half of the inputs, and $M$ is of order $N$, might pose a problem, for if one writes $A_1A_2\ldots A_M=(1-A_1^\prime)(1-A_2^\prime)\ldots(1-A_M^\prime)$, then the number of terms with $m$ factors is $M!/(M-m)!m!$, which can be as large as $2^{M/2}$ (when $m=M/2$). This term proliferation is not a problem if one chooses $\sigma$ to be strictly greater than $1/2$ (say, $3/4$), since the number of terms with a given number of terms in the product is overwhelmed by the decrease in the fraction of inputs for which each individual term is nonzero.
---
abstract: 'We introduce the first program synthesis engine implemented inside an SMT solver. We present an approach that extracts solution functions from unsatisfiability proofs of the negated form of synthesis conjectures. We also discuss novel counterexample-guided techniques for quantifier instantiation that we use to make finding such proofs practically feasible. A particularly important class of specifications are single-invocation properties, for which we present a dedicated algorithm. To support syntax restrictions on generated solutions, our approach can transform a solution found without restrictions into the desired syntactic form. As an alternative, we show how to use evaluation function axioms to embed syntactic restrictions into constraints over algebraic datatypes, and then use an algebraic datatype decision procedure to drive synthesis. Our experimental evaluation on syntax-guided synthesis benchmarks shows that our implementation in the CVC4 SMT solver is competitive with state-of-the-art tools for synthesis.'
author:
- Andrew Reynolds
- Morgan Deters
- |
\
Viktor Kuncak
- Cesare Tinelli
- Clark Barrett
bibliography:
- 'main.bib'
- 'managed.bib'
title: |
[On Counterexample Guided Quantifier Instantiation\
for Synthesis in CVC4[^1] [^2]]{}
---
Introduction
============
The synthesis of functions that meet a given specification is a long-standing fundamental goal that has received great attention recently. This functionality directly applies to the synthesis of functional programs [@KuncakETAL13FunctionalSynthesisLinearArithmeticSets; @KuncakETAL12SoftwareSynthesisProcedures] but also translates to imperative programs through techniques that include bounding input space, verification condition generation, and invariant discovery [@SolarLezama13ProgramSketching; @SolarLezamaETAL06CombinatorialSketchingFinitePrograms; @SrivastavaGulwaniFoster13TemplatebasedProgramVerificationProgramSynthesis]. Function synthesis is also an important subtask in the synthesis of protocols and reactive systems, especially when these systems are infinite-state [@AlurETAL14SynthesizingFinitestateProtocolsFromScenariosRequirements; @RyzhykETAL14UserguidedDeviceDriverSynthesis]. The SyGuS format and competition [@AlurETAL13SyntaxguidedSynthesis; @AlurETAL2014SyGuSMarktoberdorf; @DBLP:journals/corr/RaghothamanU14], inspired by the success of the SMT-LIB and SMT-COMP efforts [@BarrettETAL136YearsSmtcomp], have significantly improved and simplified the process of rigorously comparing different solvers on synthesis problems.
Connection between synthesis and theorem proving was established already in early work on the subject [@MannaWaldinger80DeductiveApproachToProgramSynthesis; @Green69ApplicationTheoremProvingToProblemSolving]. It is notable that early research [@MannaWaldinger80DeductiveApproachToProgramSynthesis] found that the capabilities of theorem provers were the main bottleneck for synthesis. Taking lessons from automated software verification, recent work on synthesis has made use of advances in theorem proving, particularly in SAT and SMT solvers. However, that work avoids formulating the overall synthesis task as a theorem proving problem directly. Instead, existing work typically builds custom loops outside of an SMT or SAT solver, often using numerous variants of counterexample-guided synthesis. A typical role of the SMT solver has been to validate candidate solutions and provide counterexamples that guide subsequent search, although approaches such as symbolic term exploration [@KneussETAL13SynthesisModuloRecursiveFunctions] also use an SMT solver to explore a representation of the space of solutions. In existing approaches, SMT solvers thus receive a large number of separate queries, with limited communication between these different steps.
In this paper, we revisit the formulation of the overall synthesis task as a theorem proving problem. We observe that SMT solvers already have some of the key functionality for synthesis; we show how to improve existing algorithms and introduce new ones to make SMT-based synthesis competitive. Specifically, we do the following.
- We show how to formulate an important class of synthesis problems as the problem of disproving universally quantified formulas, and how to synthesize functions automatically from selected instances of these formulas.
- We present counterexample-guided techniques for quantifier instantiation, which are crucial to obtain competitive performance on synthesis tasks.
- We discuss techniques to simplify the synthesized functions, to help ensure that they are small and adhere to specified syntactic requirements.
- We show how to encode syntactic restrictions using theories of algebraic datatypes and axiomatizable evaluation functions.
- We show that for an important class of single-invocation properties, the synthesis of functions from relations, the implementation of our approach in CVC4 significantly outperforms leading tools from the SyGuS competition.
Since synthesis involves finding (and so proving the existence of) functions, we use notions from many-sorted *second-order* logic to define the general problem. We fix a set ${\mathbf{S}}$ of [*sort symbols*]{} and an (infix) equality predicate ${\approx}$ of type $\sigma \times \sigma$ for each $\sigma \in {\mathbf{S}}$. For every non-empty sort sequence $\vec \sigma \in {\mathbf{S}}^+$ with $\vec \sigma = \sigma_1 \cdots \sigma_n\sigma$, we fix an infinite set ${\mathbf{X}}_{\vec \sigma}$ of [*variables $x^{\sigma_1 \cdots \sigma_n \sigma}$ of type $\sigma_1 \times \cdots \times \sigma_n \to \sigma$*]{}. For each sort $\sigma$ we identify the type $() \to \sigma$ with $\sigma$ and call it a [*first-order type*]{}. We assume the sets ${\mathbf{X}}_{\vec \sigma}$ are pairwise disjoint and let ${\mathbf{X}}$ be their union. A [*signature*]{} $\Sigma$ consists of a set ${\Sigma^\mathrm{s}} \subseteq {\mathbf{S}}$ of sort symbols and a set ${\Sigma^\mathrm{f}}$ of [*function symbols $f^{\sigma_1 \cdots \sigma_n \sigma}$ of type $\sigma_1 \times \cdots \times \sigma_n \to \sigma$*]{}, where $n \geq 0$ and $\sigma_1, \ldots, \sigma_n, \sigma \in {\Sigma^\mathrm{s}}$. We drop the sort superscript from variables or function symbols when it is clear from context or unimportant. We assume that signatures always include a Boolean sort ${{\mathsf{Bool}}}$ and constants ${\top}$ and ${\bot}$ of type ${{\mathsf{Bool}}}$ (respectively, for true and false). Given a many-sorted signature $\Sigma$ together with quantifiers and lambda abstraction, the notions of well-sorted ($\Sigma$-)term, atom, literal, clause, and formula with variables in ${\mathbf{X}}$ are defined as usual in second-order logic. All atoms have the form $s {\approx}t$. Having ${\approx}$ as the only predicate symbol causes no loss of generality since we can model other predicate symbols as function symbols with return sort ${{\mathsf{Bool}}}$. We will, however, write just $t$ in place of the atom $t {\approx}{\top}$, to simplify the notation. A $\Sigma$-term/formula is [*ground*]{} if it has no variables; it is [*first-order*]{} if it has only [*first-order variables*]{}, that is, variables of first-order type. When $\vec{x} = (x_1,\ldots,x_n)$ is a tuple of variables and $Q$ is either $\forall$ or $\exists$, we write $Q \vec x\, \varphi$ as an abbreviation of $Q x_1 \cdots Q x_n\, \varphi$. If $e$ is a $\Sigma$-term or formula and $\vec{x} = (x_1,\ldots,x_n)$ has no repeated variables, we write $e[\vec x]$ to denote that all of $e$’s free variables are from $\vec x$; if $\vec{t} = (t_1,\ldots,t_n)$ is a term tuple, we write $e[\vec t]$ for the term or formula obtained from $e$ by simultaneously replacing, for all $i=1,\ldots,n$, every occurrence of $x_i$ in $e$ by $t_i$. A [*$\Sigma$-interpretation ${\mathcal{I}}$*]{} maps: each $\sigma \in {\Sigma^\mathrm{s}}$ to a non-empty set $\sigma^{\mathcal{I}}$, the [*domain*]{} of $\sigma$ in ${\mathcal{I}}$, with ${{\mathsf{Bool}}}^{\mathcal{I}}= \{{\top}, {\bot}\}$; each $u^{\sigma_1 \cdots \sigma_n\sigma} \in {\mathbf{X}}\cup {\Sigma^\mathrm{f}}$ to a total function $u^{\mathcal{I}}: \sigma_1^{\mathcal{I}}\times \cdots \times \sigma_n^{\mathcal{I}}\rightarrow \sigma^{\mathcal{I}}$ when $n > 0$ and to an element of $\sigma^{\mathcal{I}}$ when $n = 0$. The interpretation ${\mathcal{I}}$ induces as usual a mapping from terms $t$ of sort $\sigma$ to elements $t^{\mathcal{I}}$ of $\sigma^{\mathcal{I}}$.
If $x_1, \ldots,x_n$ are variables and $v_1,\ldots,v_n$ are well-typed values for them, we denote by ${\mathcal{I}}[x_1 \mapsto v_1, \ldots, x_n \mapsto v_n]$ the $\Sigma$-interpretation that maps each $x_i$ to $v_i$ and is otherwise identical to ${\mathcal{I}}$. A satisfiability relation between $\Sigma$-interpretations and $\Sigma$-formulas is defined inductively as usual.
A [*theory*]{} is a pair $T = (\Sigma, {\mathbf{I}})$ where $\Sigma$ is a signature and ${\mathbf{I}}$ is a non-empty class of $\Sigma$-interpretations, the [*models*]{} of $T$, that is closed under variable reassignment (i.e., every $\Sigma$-interpretation that differs from one in ${\mathbf{I}}$ only in how it interprets the variables is also in ${\mathbf{I}}$) and isomorphism. A $\Sigma$-formula $\varphi[\vec x]$ is [*$T$-satisfiable*]{} (resp., [*$T$-unsatisfiable*]{}) if it is satisfied by some (resp., no) interpretation in ${\mathbf{I}}$. A satisfying interpretation for $\varphi$ [*models (or is a model of)*]{} $\varphi$. A formula $\varphi$ is [*$T$-valid*]{}, written ${{\models_{T}}}\varphi$, if every model of $T$ is a model of $\varphi$. Given a fragment ${\mathbf{L}}$ of the language of $\Sigma$-formulas, a $\Sigma$-theory $T$ is [*satisfaction complete with respect to ${\mathbf{L}}$*]{} if every $T$-satisfiable formula of ${\mathbf{L}}$ is $T$-valid. In this paper we will consider only theories that are satisfaction complete wrt the formulas we are interested in. Most theories used in SMT (in particular, all theories of a specific structure such as the various theories of the integers, reals, strings, algebraic datatypes, bit vectors, and so on) are satisfaction complete with respect to the class of closed first-order $\Sigma$-formulas. Other theories, such as the theory of arrays, are satisfaction complete only with respect to considerably more restricted classes of formulas.
Synthesis inside an SMT Solver
==============================
We are interested in synthesizing computable functions automatically from formal logical specifications stating properties of these functions. As we show later, under the right conditions, we can formulate a version of the synthesis problem in *first-order logic* alone, which allows us to tackle the problem using SMT solvers.
We consider the synthesis problem in the context of some theory $T$ of signature $\Sigma$ that allows us to provide the function’s specification as a $\Sigma$-formula. Specifically, we consider [*synthesis conjectures*]{} expressed as (well-sorted) formulas of the form $$\begin{aligned}
\label{eqn:syn_conj}
\exists f^{\sigma_1 \cdots\sigma_n\sigma} \:
\forall x_1^{\sigma_1} \: \cdots \: \forall x_n^{\sigma_n} \:
P[f, x_1, \ldots, x_n]\end{aligned}$$ or $\exists f\, \forall \vec x\, P[f, \vec x]$, for short, where the second-order variable $f$ represents the function to be synthesized and $P$ is a $\Sigma$-formula encoding properties that $f$ must satisfy for all possible values of the input tuple $\vec x = (x_1,\ldots,x_n)$. In this setting, finding a witness for this satisfiability problem amounts to finding a function of type $\sigma_1 \times \cdots \times \sigma_n \to \sigma$ in some model of $T$ that satisfies $\forall \vec x\, P[f, \vec x]$. Since we are interested in automatic synthesis, we restrict ourselves here to methods that search over a subspace $S$ of solutions representable syntactically as $\Sigma$-terms. We will say then that a synthesis conjecture is [*solvable*]{} if it has a syntactic solution in $S$.
In this paper we present two approaches that work with classes ${\mathbf{L}}$ of synthesis conjectures and $\Sigma$-theories $T$ that are satisfaction complete wrt ${\mathbf{L}}$. In both approaches, we solve a synthesis conjecture $\exists f\, \forall \vec x\, P[f, \vec x]$ by relying on quantifier-instantiation techniques to produce a first-order $\Sigma$-term $t[\vec x]$ of sort $\sigma$ such that $\forall \vec x\, P[t, \vec x]$ is $T$-satisfiable. When this $t$ is found, the synthesized function is denoted by $\lambda \vec x.\, t$.
In principle, to determine the satisfiability of $\exists f\, \forall \vec x\, P[f, \vec x]$ an SMT solver supporting the theory $T$ can consider the satisfiability of the (open) formula $\forall \vec x\, P[f, \vec x]$ by treating $f$ as an uninterpreted function symbol. This sort of Skolemization is not usually a problem for SMT solvers as many of them can process formulas with uninterpreted symbols. The real challenge is the universal quantification over $\vec x$ because it requires the solver to construct internally (a finite representation of) an interpretation of $f$ that is guaranteed to satisfy $P[f, \vec x]$ for every possible value of $\vec x$ [@GeDeM-CAV-09; @ReyEtAl-CADE-13].
More traditional SMT solver designs to handle universally quantified formulas have focused on instantiation-based methods to show *un*satisfiability. They generate ground instances of those formulas until a refutation is found at the ground level [@Detlefs03simplify:a]. While these techniques are incomplete in general, they have been shown to be quite effective in practice [@MouraBjoerner07EfficientEmatchingSmtSolvers; @reynolds14quant_fmcad]. For this reason, we advocate approaches to synthesis geared toward establishing the *unsatisfiability of the negation* of the synthesis conjecture: $$\begin{aligned}
\label{eqn:neg_syn_conj}
\forall f\,\exists \vec x\, \lnot P[f, \vec x]\end{aligned}$$ Thanks to our restriction to satisfaction complete theories, (\[eqn:neg\_syn\_conj\]) is $T$-unsatisfiable exactly when the original synthesis conjecture (\[eqn:syn\_conj\]) is $T$-satisfiable.[^3] Moreover, as we explain in this paper, a syntactic solution $\lambda x.\,t$ for (\[eqn:syn\_conj\]) can be constructed from a refutation of (\[eqn:neg\_syn\_conj\]), as opposed to being extracted from the valuation of $f$ in a model of $\forall \vec x\, P[f, \vec x]$. Proving (\[eqn:neg\_syn\_conj\]) unsatisfiable poses its own challenge to current SMT solvers, namely, dealing with the second-order universal quantification of $f$. To our knowledge, no SMT solvers so far had direct support for higher-order quantification. In the following, however, we describe two specialized methods to refute negated synthesis conjectures like (\[eqn:neg\_syn\_conj\]) that build on existing capabilities of these solvers.
The first method applies to a restricted, but fairly common, case of synthesis problems $\exists f\, \forall\vec x\, P[f, \vec x]$ where every occurrence of $f$ in $P$ is in terms of the form $f(\vec x)$. In this case, we can express the problem in the first-order form $\forall \vec x. \exists y. Q[\vec x,y]$ and then tackle its negation using appropriate quantifier instantiation techniques.
The second method follows the *syntax-guided synthesis* paradigm [@AlurETAL13SyntaxguidedSynthesis; @AlurETAL2014SyGuSMarktoberdorf] where the synthesis conjecture is accompanied by an explicit syntactic restriction on the space of possible solutions. Our syntax-guided synthesis method is based on encoding the syntax of terms as first-order values. We use a deep embedding into an extension of the background theory $T$ with a theory of algebraic data types, encoding the restrictions of a syntax-guided synthesis problem.
[*For the rest of the paper, we fix a $\Sigma$-theory $T$ and a class ${\mathbf{P}}$ of quantifier-free $\Sigma$-formulas $P[f,\vec x]$ such that $T$ is satisfaction complete with respect to the class of synthesis conjectures ${\mathbf{L}}:= \{\exists f\, \forall\vec x\, P[f, \vec x] \mid P \in {\mathbf{P}}\}$.* ]{}
Refutation-Based Synthesis {#sec:refutation-based}
==========================
When axiomatizing properties of a desired function $f$ of type $\sigma_1 \times \cdots \times \sigma_n \to \sigma$, a particularly well-behaved class are *single-invocation properties* (see, e.g., [@jacobs2011towards]). These properties include, in particular, standard function contracts, so they can be used to synthesize a function implementation given its postcondition as a relation between the arguments and the result of the function. This is also the form of the specification for synthesis problems considered in complete functional synthesis [@KuncakETAL10CompleteFunctionalSynthesis; @KuncakETAL12SoftwareSynthesisProcedures; @KuncakETAL13FunctionalSynthesisLinearArithmeticSets]. Note that, in our case, we aim to prove that the output exists for all inputs, as opposed to, more generally, computing the set of inputs for which the output exists.
A [*single-invocation property*]{} is any formula of the form $Q[\vec x, f(\vec x)]$ obtained as an instance of a quantifier-free formula $Q[\vec x, y]$ not containing $f$. Note that the only occurrences of $f$ in $Q[\vec x, f(\vec x)]$ are in subterms of the form $f(\vec x)$ with the *same* tuple $\vec x$ of *pairwise distinct* variables.[^4] The conjecture $\exists f\, \forall \vec x\, Q[\vec x, f(\vec x)]$ is logically equivalent to the *first-order* formula $$\label{eqn:syn_conj_no_syntax}
\forall \vec x\, \exists y\, Q[\vec x, y]$$ By the semantics of $\forall$ and $\exists$, finding a model ${\mathcal{I}}$ for it amounts (under the axiom of choice) to finding a function $h:\sigma_1^{\mathcal{I}}\times \cdots \times \sigma_n^{\mathcal{I}}\rightarrow \sigma^{\mathcal{I}}$ such that for all $\vec s \in \sigma_1^{\mathcal{I}}\times \cdots \times \sigma_n^{\mathcal{I}}$, the interpretation ${\mathcal{I}}[\vec x \mapsto \vec s, y \mapsto h(\vec s)]$ satisfies $Q[\vec x, y]$. This section considers the case when ${\mathbf{P}}$ consists of single-invocation properties and describes a general approach for determining the satisfiability of formulas like (\[eqn:syn\_conj\_no\_syntax\]) while computing a syntactic representation of a function like $h$ in the process. For the latter, it will be convenient to assume that the language of functions contains an if-then-else operator ${{\mathsf{ite}}}$ of type ${{\mathsf{Bool}}}\times \sigma \times \sigma \to \sigma$ for each sort $\sigma$, with the usual semantics.
If (\[eqn:syn\_conj\_no\_syntax\]) belongs to a fragment that admits quantifier elimination in $T$, such as the linear fragment of integer arithmetic, determining its satisfiability can be achieved using an efficient method for quantifier elimination [@Monniaux10QuantifierEliminationLazyModelEnumeration; @Bjoerner10LinearQuantifierEliminationAsAbstractDecision]. Such cases have been examined in the context of software synthesis [@KuncakETAL12SoftwareSynthesisProcedures]. Here we propose instead an alternative instantiation-based approach aimed at establishing the unsatisfiability of the *negated* form of (\[eqn:syn\_conj\_no\_syntax\]): $$\label{eqn:neg_syn_conj_no_syntax}
\exists \vec x\, \forall y\, \lnot Q[\vec x, y]$$ or, equivalently, of a Skolemized version $\forall y\, \lnot Q[\vec{{\mathsf{k}}}, y]$ of (\[eqn:neg\_syn\_conj\_no\_syntax\]) for some tuple $\vec{{\mathsf{k}}}$ of fresh uninterpreted constants of the right sort. Finding a $T$-unsatisfiable finite set $\Gamma$ of ground instances of $\lnot Q[\vec k, y]$, which is what an SMT solver would do to prove the unsatisfiability of , suffices to solve the original synthesis problem. The reason is that, then, a solution for $f$ can be constructed directly from $\Gamma$, as indicated by the following result.
\[prop:ite-form\] *Suppose some set $\Gamma = \{\lnot Q[\vec{{\mathsf{k}}}, t_1[\vec{{\mathsf{k}}}]], \ldots, \lnot Q[\vec{{\mathsf{k}}}, t_p[\vec{{\mathsf{k}}}]]\}$ where $t_1[\vec x]$, $\ldots$, $t_p[\vec x]$ are $\Sigma$-terms of sort $\sigma$ is $T$-unsatisfiable. One solution for $\exists f\, \forall \vec x\, Q[\vec x, f(\vec x)]$ is $\lambda \vec x.\, {{\mathsf{ite}}}( Q[\vec x, t_p], t_p, (\,\cdots\, {{\mathsf{ite}}}( Q[\vec x, t_2], t_2, t_1 ) \,\cdots\, ))$.*
Let $\ell$ be the solution specified above, and let $\vec u$ be an arbitrary set of ground terms of the same sort as $\vec x$. Given a model ${\mathcal{I}}$, we show that ${\mathcal{I}}\models Q[ \vec u, \ell( \vec u ) ]$. Consider the case that ${\mathcal{I}}\models Q[ \vec u, t_i [\vec{u}] ]$ for some $i \in \{ 2, \ldots, p \}$; pick the greatest such $i$. Then, $\ell( \vec u )^{\mathcal{I}}= ( t_i [\vec{u}] )^{\mathcal{I}}$, and thus ${\mathcal{I}}\models Q[ \vec u, \ell( \vec u ) ]$. If no such $i$ exists, then ${\mathcal{I}}\models \neg Q[ \vec u, t_i [\vec{u}] ]$ for all $i = 2, \ldots, p$, and $\ell( \vec u )^{\mathcal{I}}= ( t_1 [\vec{u}] )^{\mathcal{I}}$. Since $\Gamma$ is $T$-unsatisfiable and $\vec{{\mathsf{k}}}$ are fresh, we have $\neg Q[ \vec u, t_2 [\vec{u}] ], \ldots, \neg Q[ \vec u, t_p [\vec{u}] ] \models_T Q[ \vec u, t_1 [\vec{u}] ]$, which is $Q[ \vec u, \ell( \vec u ) ]$.
\[ex:max\] Let $T$ be the theory of linear integer arithmetic with the usual signature and integer sort ${{\mathsf{Int}}}$. Let $\vec x = (x_1, x_2)$. Now consider the property $$\begin{aligned}
\label{eq:max-orig}
P[f, \vec x] :=
f( \vec x ) \geq x_1 \land f( \vec x ) \geq x_2 \land
( f( \vec x ) {\approx}x_1 \lor f( \vec x ) {\approx}x_2 )\end{aligned}$$ with $f$ of type ${{\mathsf{Int}}}\times {{\mathsf{Int}}}\rightarrow {{\mathsf{Int}}}$ and $x_1, x_2$ of type ${{\mathsf{Int}}}$. The synthesis problem $\exists f\, \forall \vec x\, P[f, \vec x]$ is solved exactly by the function that returns the maximum of its two inputs. Since $P$ is a single-invocation property, we can solve that problem by proving the $T$-unsatisfiability of the conjecture $\exists \vec x\, \forall y\, \lnot Q[\vec x, y]$ where $$\begin{aligned}
\label{eq:max}
Q[\vec x, y] & := &
y \geq x_1 \land y \geq x_2 \land ( y {\approx}x_1 \lor y {\approx}x_2 )\end{aligned}$$ After Skolemization the conjecture becomes $\forall y\, \lnot Q[\vec{{\mathsf{a}}}, y]$ for fresh constants $\vec{{\mathsf{a}}} = ({\mathsf{a}}_1, {\mathsf{a}}_2)$. When asked to determine the satisfiability of that conjecture an SMT solver may, for instance, instantiate it with ${\mathsf{a}}_1$ and then ${\mathsf{a}}_2$ for $y$, producing the $T$-unsatisfiable set $\{\lnot Q[\vec{{\mathsf{a}}}, {\mathsf{a}}_1], \lnot Q[\vec{{\mathsf{a}}}, {\mathsf{a}}_2]\}$. By Proposition \[prop:ite-form\], one solution for $\forall \vec x\, P[f, \vec x]$ is $f = \lambda \vec x.\, {{\mathsf{ite}}}( Q[\vec x, x_2], x_2, x_1 )$, which simplifies to $\lambda \vec x.\, {{\mathsf{ite}}}( x_2 \geq x_1, x_2, x_1 )$, representing the desired maximum function.
1. $\Gamma := \{{\mathsf{G}} \Rightarrow Q[\vec{{\mathsf{k}}}, {\mathsf{e}}]\}$ where $\vec{{\mathsf{k}}}$ consists of distinct fresh constants
2. Repeat
- If there is a model ${\mathcal{I}}$ of $T$ satisfying $\Gamma$ and ${\mathsf{G}}$\
then let $\Gamma := \Gamma \cup \{ \lnot Q[\vec{{\mathsf{k}}},t[\vec{{\mathsf{k}}}]] \}$ for some $\Sigma$-term $t[\vec x]$ such that $t[\vec{{\mathsf{k}}}]^{\mathcal{I}}= {{\mathsf{e}}}^{\mathcal{I}}$;\
otherwise, return “no solution found”
until $\Gamma$ contains a $T$-unsatisfiable set $\{\lnot Q[\vec{{\mathsf{k}}}, t_1[\vec{{\mathsf{k}}}]], \ldots, \lnot Q[\vec{{\mathsf{k}}}, t_p[\vec{{\mathsf{k}}}]] \}$
3. Return $\lambda \vec x.\, {{\mathsf{ite}}}( Q[\vec x, t_p[\vec x]], t_p[\vec x],\ (\,\cdots\, {{\mathsf{ite}}}( Q[\vec x, t_2[\vec x]], t_2[\vec x], t_1[\vec x] ) \,\cdots\, ))$ for $f$
Given Proposition \[prop:ite-form\], the main question is how to get the SMT solver to generate the necessary ground instances from $\forall y\, \lnot Q[\vec{{\mathsf{k}}}, y]$. Typically, SMT solvers that reason about quantified formulas use heuristic quantifier instantiation techniques based on E-matching [@MouraBjoerner07EfficientEmatchingSmtSolvers], which instantiates universal quantifiers with terms occurring in some current set of ground terms built incrementally from the input formula. Using E-matching-based heuristic instantiation alone is unlikely to be effective in synthesis, where required terms need to be synthesized based on the semantics of the input specification. This is confirmed by our preliminary experiments, even for simple conjectures. We have developed instead a specialized new technique, which we refer to as *counterexample-guided quantifier instantiation*, that allows the SMT solver to quickly converge in many cases to the instantiations that refute the negated synthesis conjecture (\[eqn:neg\_syn\_conj\_no\_syntax\]).
The new technique is similar to a popular scheme for synthesis known as counterexample-guided inductive synthesis, implemented in various synthesis approaches (e.g., [@SolarLezamaETAL06CombinatorialSketchingFinitePrograms; @JhaETAL10OracleguidedComponentbasedProgramSynthesis]), but with the major difference of being built-in directly into the SMT solver. The technique is illustrated by the procedure in Figure \[fig:proc1\], which grows a set $\Gamma$ of ground instances of $\lnot Q[\vec{{\mathsf{k}}}, y]$ starting with the formula ${\mathsf{G}} \Rightarrow Q[\vec{{\mathsf{k}}}, {\mathsf{e}}]$ where ${\mathsf{G}}$ and ${\mathsf{e}}$ are fresh constants of sort ${{\mathsf{Bool}}}$ and $\sigma$, respectively. Intuitively, ${\mathsf{e}}$ represents a current, partial solution for the original synthesis conjecture $\exists f\, \forall \vec x\, Q[\vec x, f(\vec x)]$, while ${\mathsf{G}}$ represents the possibility that the conjecture has a (syntactic) solution in the first place.
The procedure, which may not terminate in general, terminates either when $\Gamma$ becomes unsatisfiable, in which case it has found a solution, or when $\Gamma$ is still satisfiable but all of its models falsify ${\mathsf{G}}$, in which case the search for a solution was inconclusive. The procedure is not [*solution-complete*]{}, that is, it is not guaranteed to return a solution whenever there is one. However, thanks to Proposition \[prop:ite-form\], it is [*solution-sound*]{}: every $\lambda$-term it returns is indeed a solution of the original synthesis problem.
The choice of the term $t$ in Step 2 of the procedure is intentionally left underspecified because it can be done in a number of ways. Having a good heuristic for such instantiations is, however, critical to the effectiveness of the procedure in practice. In a $\Sigma$-theory $T$, like integer arithmetic, with a fixed interpretation for symbols in $\Sigma$ and a distinguished set of ground $\Sigma$-terms denoting the elements of a sort, a simple, if naive, choice for $t$ in Figure \[fig:proc1\] is the distinguished term denoting the element ${{\mathsf{e}}}^{\mathcal{I}}$. For instance, if $\sigma$ is ${{\mathsf{Int}}}$ in integer arithmetic, $t$ could be a concrete integer constant ($0,\pm 1, \pm 2, \ldots$). This choice amounts to testing whether points in the codomain of the sought function $f$ satisfy the original specification $P$.
More sophisticated choices for $t$, in particular where $t$ contains the variables $\vec x$, may increase the generalization power of this procedure and hence its ability to find a solution. For instance, our present implementation in the [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}solver relies on the fact that the model ${\mathcal{I}}$ in Step 2 is constructed from a set of equivalence classes over terms computed by the solver during its search. The procedure selects the term $t$ among those in the equivalence class of $e$, other than $e$ itself. For instance, consider formula (\[eq:max\]) from the previous example that encodes the single-invocation form of the specification for the max function. The DPLL(T) architecture, on which [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}is based, finds a model for $Q[\vec{{\mathsf{a}}}, {\mathsf{e}} ]$ with $\vec{{\mathsf{a}}} = ({\mathsf{a}}_1, {\mathsf{a}}_2)$ only if it can first find a subset $M$ of that formula’s literals that collectively entail $Q[ \vec{{\mathsf{a}}}, {\mathsf{e}} ]$ at the propositional level. Due to the last conjunct of (\[eq:max\]), $M$ must include either ${\mathsf{e}} {\approx}{\mathsf{a}}_1$ or ${\mathsf{e}} {\approx}{\mathsf{a}}_2$. Hence, whenever a model can be constructed for $Q[ \vec{{\mathsf{a}}}, e ]$, the equivalence class containing $e$ must contain either ${\mathsf{a}}_1$ or ${\mathsf{a}}_2$. Thus using the above selection heuristic, the procedure in Figure \[fig:proc1\] will, after at most two iterations of the loop in Step 2, add the instances $\neg Q[ \vec{{\mathsf{a}}}, {\mathsf{a}}_1 ]$ and $\neg Q[ \vec{{\mathsf{a}}}, {\mathsf{a}}_2 ]$ to $\Gamma$. As noted in Example \[ex:max\], these two instances are jointly $T$-unsatisfiable. We expect that more sophisticated instantiation techniques can be incorporated. In particular, both quantifier elimination techniques [@Bjoerner10LinearQuantifierEliminationAsAbstractDecision; @Monniaux10QuantifierEliminationLazyModelEnumeration] and approaches currently used to infer invariants from templates [@MadhavanKuncak14SymbolicResourceBoundInferenceFunctionalPrograms; @Cousot05ProvingProgramInvarianceTerminationParametricAbstraction] are likely to be beneficial for certain classes of synthesis problems. The advantage of developing these techniques within an SMT solver is that they directly benefit both synthesis and verification in the presence of quantified conjectures, thus fostering cross-fertilization between different fields.
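The following sketch illustrates the shape of the loop in Figure \[fig:proc1\] on the max example, with a brute-force search over a bounded integer grid standing in for the solver's model construction and a small fixed pool of candidate instantiation terms standing in for the equivalence-class heuristic just described. The grid, the term pool, and the Python rendering are illustrative choices, not part of the actual implementation; in particular, failure to find a model over the finite grid is optimistically treated here as $T$-unsatisfiability of the accumulated instances.

```python
from itertools import product

# Single-invocation specification Q[x1, x2, y] for the max example.
def Q(x1, x2, y):
    return y >= x1 and y >= x2 and (y == x1 or y == x2)

# Pool of candidate instantiation terms t[x]; the real solver draws these from
# the equivalence class of e in the model it builds.
TERMS = [("x1", lambda x1, x2: x1),
         ("x2", lambda x1, x2: x2),
         ("0",  lambda x1, x2: 0),
         ("1",  lambda x1, x2: 1)]

GRID = range(-4, 5)     # brute-force stand-in for the solver's model search

def find_model(instances):
    """Look for values (k1, k2, e) satisfying Q[k, e] together with all
    accumulated instances not Q[k, t_i[k]], i.e. a model of Gamma and G."""
    for k1, k2, e in product(GRID, GRID, GRID):
        if Q(k1, k2, e) and all(not Q(k1, k2, t(k1, k2)) for _, t in instances):
            return k1, k2, e
    return None

instances = []                       # the terms t_1, ..., t_p chosen so far
while (m := find_model(instances)) is not None:
    k1, k2, e = m
    # Heuristic choice of t: a pool term whose value at (k1, k2) equals e.
    # For this specification e is always k1 or k2, so a match always exists.
    name, t = next((nm, tt) for nm, tt in TERMS if tt(k1, k2) == e)
    instances.append((name, t))

# Read off the solution of Proposition 1 as a nested ite over the chosen terms.
names = [nm for nm, _ in instances]
solution = names[0]
for nm in names[1:]:
    solution = f"ite(Q[x, {nm}], {nm}, {solution})"
print(solution)       # e.g. ite(Q[x, x2], x2, x1), i.e. the max function
```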
Refutation-Based Syntax-Guided Synthesis {#sec:syntax-guided}
========================================
In syntax-guided synthesis, the functional specification is strengthened by an accompanying set of syntactic restrictions on the form of the expected solutions. In a recent line of work [@AlurETAL13SyntaxguidedSynthesis; @AlurETAL2014SyGuSMarktoberdorf; @DBLP:journals/corr/RaghothamanU14] these restrictions are expressed by a grammar $R$ (augmented with a kind of *let* binder) defining the language of solution terms, or [*programs*]{}, for the synthesis problem. In this section, we present a variant of the approach in the previous section that incorporates the syntactic restriction directly into the SMT solver via a deep embedding of the syntactic restriction $R$ into the solver’s logic. The main idea is to represent $R$ as a set of algebraic datatypes and build into the solver an interpretation of these datatypes in terms of the original theory $T$.
While our approach is parametric in the background theory $T$ and the restriction $R$, it is best explained here with a concrete example.
$
\begin{array}{l@{\qquad}l}
\forall x\,y\: {{\mathsf{ev}}}( {\mathsf{x}}_1, x, y ) {\approx}x &
\forall s_1\, s_2\, x\,y\:
{{\mathsf{ev}}}( {\mathsf{leq}}(s_1, s_2), x, y ) {\approx}({{\mathsf{ev}}}( s_1, x, y ) \leq {{\mathsf{ev}}}( s_2, x, y ))
\\[.2ex]
\forall x\,y\: {{\mathsf{ev}}}( {\mathsf{x}}_2, x, y ) {\approx}y &
\forall s_1\, s_2\, x\,y\:
{{\mathsf{ev}}}( {\mathsf{eq}}(s_1, s_2), x, y ) {\approx}({{\mathsf{ev}}}( s_1, x, y ) {\approx}{{\mathsf{ev}}}( s_2, x, y ))
\\[.2ex]
\forall x\,y\: {{\mathsf{ev}}}( {\mathsf{zero}}, x, y ) {\approx}0 &
\forall c_1\, c_2\, x\, y\:
{{\mathsf{ev}}}( {\mathsf{and}}(c_1, c_2), x, y ) {\approx}({{\mathsf{ev}}}( c_1, x, y ) \land {{\mathsf{ev}}}( c_2, x, y ))
\\[.2ex]
\forall x\,y\: {{\mathsf{ev}}}( {\mathsf{one}}, x, y ) {\approx}1 &
\forall c\,x\,y\: {{\mathsf{ev}}}( {\mathsf{not}}(c), x, y ) {\approx}\lnot {{\mathsf{ev}}}( c, x, y )
\\[.2ex]
\multicolumn{2}{l}{
\forall s_1\, s_2\, x\,y\:
{{\mathsf{ev}}}( {\mathsf{plus}}(s_1, s_2), x, y ) {\approx}{{\mathsf{ev}}}( s_1, x, y ) + {{\mathsf{ev}}}( s_2, x, y )}
\\[.2ex]
\multicolumn{2}{l}{
\forall s_1\, s_2\, x\,y\:
{{\mathsf{ev}}}( {\mathsf{minus}}(s_1, s_2), x, y ) {\approx}{{\mathsf{ev}}}( s_1, x, y ) - {{\mathsf{ev}}}( s_2, x, y )}
\\[.2ex]
\multicolumn{2}{l}{
\forall c\, s_1\, s_2\, x\,y\:
{{\mathsf{ev}}}( {\mathsf{if}}( c, s_1, s_2 ), x, y ) {\approx}{{\mathsf{ite}}}( {{\mathsf{ev}}}( c, x, y ), {{\mathsf{ev}}}( s_1, x, y ), {{\mathsf{ev}}}( s_2, x, y ) )}
\end{array}
$
\[ex:max-sygus\] Consider again the synthesis conjecture (\[eq:max\]) from Example \[ex:max\] but now with a syntactic restriction $R$ for the solution space expressed by these algebraic datatypes: $$\begin{array}{l@{\quad}l@{\quad}l}
{\mathsf{S}} & := & {\mathsf{x}}_1 \mid {\mathsf{x}}_2 \mid {\mathsf{zero}} \mid {\mathsf{one}} \mid
{\mathsf{plus}}({\mathsf{S}}, {\mathsf{S}}) \mid {\mathsf{minus}}({\mathsf{S}}, {\mathsf{S}}) \mid
{\mathsf{if}}( {\mathsf{C}}, {\mathsf{S}}, {\mathsf{S}} )
\\[1ex]
{\mathsf{C}} & := & {\mathsf{leq}}({\mathsf{S}}, {\mathsf{S}}) \mid {\mathsf{eq}}({\mathsf{S}}, {\mathsf{S}}) \mid
{\mathsf{and}}({\mathsf{C}}, {\mathsf{C}}) \mid {\mathsf{not}}({\mathsf{C}})
\end{array}$$ The datatypes are meant to encode a term signature that includes nullary constructors for the variables $x_1$ and $x_2$ of (\[eq:max\]), and constructors for the symbols of the arithmetic theory $T$. Terms of sort ${\mathsf{S}}$ (resp., ${\mathsf{C}}$) refer to theory terms of sort ${{\mathsf{Int}}}$ (resp., ${{\mathsf{Bool}}}$). Instead of the theory of linear integer arithmetic, we now consider its combination ${T_\mathrm{D}}$ with the theory of the datatypes above extended with two [*evaluation operators*]{}, that is, two function symbols ${{\mathsf{ev}}}^{{\mathsf{S}} \times {{\mathsf{Int}}}\times {{\mathsf{Int}}}\to {{\mathsf{Int}}}}$ and ${{\mathsf{ev}}}^{{\mathsf{C}} \times {{\mathsf{Int}}}\times {{\mathsf{Int}}}\to {{\mathsf{Bool}}}}$ respectively embedding ${\mathsf{S}}$ in ${{\mathsf{Int}}}$ and ${\mathsf{C}}$ in ${{\mathsf{Bool}}}$. We define ${T_\mathrm{D}}$ so that all of its models satisfy the formulas in Figure \[fig:ev\]. The evaluation operators effectively define an interpreter for programs (i.e., terms of sort ${\mathsf{S}}$ and ${\mathsf{C}}$) with input parameters $x_1$ and $x_2$.
It is possible to instrument an SMT solver that supports user-defined datatypes, quantifiers and linear arithmetic so that it constructs automatically from the syntactic restriction $R$ both the datatypes ${\mathsf{S}}$ and ${\mathsf{C}}$ and the two evaluation operators. Reasoning about ${\mathsf{S}}$ and ${\mathsf{C}}$ is done by the built-in subsolver for datatypes. Reasoning about the evaluation operators is achieved by reducing ground terms of the form ${{\mathsf{ev}}}(d, t_1, t_2)$ to smaller terms by means of selected instantiations of the axioms from Figure \[fig:ev\], with a number of instances proportional to the size of term $d$. It is also possible to show that ${T_\mathrm{D}}$ is satisfaction complete with respect to the class $$\begin{aligned}
{\mathbf{L}}_2 & := & \{
\exists g\, \forall \vec z\, P[\lambda \vec z.\, {{\mathsf{ev}}}(g, \vec z),\, \vec x]
\mid
P[f, \vec x] \in {\mathbf{P}}\}\end{aligned}$$ where instead of terms of the form $f(t_1, t_2)$ in $P$ we have, modulo $\beta$-reductions, terms of the form ${{\mathsf{ev}}}(g, t_1, t_2)$.[^5] For instance, the formula $P[f, \vec x]$ in Equation (\[eq:max-orig\]) from Example \[ex:max\] can be restated in ${T_\mathrm{D}}$ as the formula below where $g$ is a variable of type ${\mathsf{S}}$: $$\begin{aligned}
P_{{\mathsf{ev}}}[ g, \vec x ] & := &
{{\mathsf{ev}}}( g, \vec x ) \geq x_1 \land {{\mathsf{ev}}}( g,\vec x ) \geq x_2 \land
( {{\mathsf{ev}}}( g, \vec x ) {\approx}x_1 \lor {{\mathsf{ev}}}( g,\vec x ) {\approx}x_2 )\end{aligned}$$ In contrast to $P[f, \vec x]$, the new formula $P_{{\mathsf{ev}}}[ g, \vec x ]$ is first-order, with the role of the second-order variable $f$ now played by the first-order variable $g$. When asked for a solution for (\[eq:max-orig\]) under the restriction $R$, the instrumented SMT solver will try to determine instead the ${T_\mathrm{D}}$-unsatisfiability of $\forall g\, \exists \vec x\, \lnot P_{{\mathsf{ev}}}[g, \vec x]$. Instantiating $g$ in the latter formula with $s := {\mathsf{if}}( {\mathsf{leq}}({\mathsf{x}}_1, {\mathsf{x}}_2), {\mathsf{x}}_2, {\mathsf{x}}_1 )$, say, produces a formula that the solver can prove to be ${T_\mathrm{D}}$-unsatisfiable. This suffices to show that the program ${{\mathsf{ite}}}(x_1 \leq x_2, x_2, x_1)$, the analogue of $s$ in the language of $T$, is a solution of the synthesis conjecture (\[eq:max-orig\]) under the syntactic restriction $R$.
1. $\Gamma := \emptyset$
2. Repeat
1. \[it:model-i\] Let $\vec{{\mathsf{k}}}$ be a tuple of distinct fresh constants.\
If there is a model ${\mathcal{I}}$ of ${T_\mathrm{D}}$ satisfying $\Gamma$ *and* ${\mathsf{G}}$, then $\Gamma := \Gamma \cup \{ \lnot P_{{\mathsf{ev}}}[{{\mathsf{e}}}^{\mathcal{I}}, \vec{{\mathsf{k}}}] \}$ ;\
otherwise, return “no solution found”
2. \[it:model-j\] If there is a model $\mathcal J$ of ${T_\mathrm{D}}$ satisfying $\Gamma$, then $\Gamma := \Gamma \cup \{ {\mathsf{G}} \Rightarrow P_{{\mathsf{ev}}}[{\mathsf{e}}, \vec{{\mathsf{k}}}^{\mathcal J}] \}$ ;\
otherwise, return ${{\mathsf{e}}}^{\mathcal{I}}$ as a solution
To prove the unsatisfiability of formulas like $\forall g\, \exists \vec x\, \lnot P_{{\mathsf{ev}}}[g, \vec x]$ in the example above we use a procedure similar to that in Section \[sec:refutation-based\], but specialized to the extended theory ${T_\mathrm{D}}$. The procedure is described in Figure \[fig:proc2\]. Like the one in Figure \[fig:proc1\], it uses an uninterpreted constant ${\mathsf{e}}$ representing a solution candidate, and a Boolean variable ${\mathsf{G}}$ representing the existence of a solution. The main difference, of course, is that now ${\mathsf{e}}$ ranges over the datatype representing the restricted solution space. In any model of ${T_\mathrm{D}}$, a term of datatype sort evaluates to a term built exclusively with constructor symbols. This is why the procedure returns in Step \[it:model-j\] the value of ${\mathsf{e}}$ in the model ${\mathcal{I}}$ found in Step \[it:model-i\]. As we showed in the previous example, a program that solves the original problem can then be reconstructed from the returned datatype term.
$$\begin{array}{c@{\hspace{1em}}l@{\hspace{1em}}l}
\hline
\text{Step} & \text{Model} & \text{Added Formula} \\
\hline
\ref{it:model-i} & \{ {\mathsf{e}} \mapsto {\mathsf{x}}_1, \ldots \} &
\lnot P_{{\mathsf{ev}}}[ {\mathsf{x}}_1, {\mathsf{a}}_1, {\mathsf{b}}_1 ]
\\
\ref{it:model-j} & \{ {\mathsf{a}}_1 \mapsto 0, {\mathsf{b}}_1 \mapsto 1, \ldots \} &
{\mathsf{G}} \Rightarrow P_{{\mathsf{ev}}}[ {\mathsf{e}}, 0, 1 ]
\\
\ref{it:model-i} & \{ {\mathsf{e}} \mapsto {\mathsf{x}}_2, \ldots \} &
\lnot P_{{\mathsf{ev}}}[ {\mathsf{x}}_2, {\mathsf{a}}_2, {\mathsf{b}}_2 ]
\\
\ref{it:model-j} & \{ {\mathsf{a}}_2 \mapsto 1, {\mathsf{b}}_2 \mapsto 0, \ldots \} &
{\mathsf{G}} \Rightarrow P_{{\mathsf{ev}}}[ {\mathsf{e}}, 1, 0 ]
\\
\ref{it:model-i} & \{ {\mathsf{e}} \mapsto {\mathsf{one}}, \ldots \} &
\lnot P_{{\mathsf{ev}}}[ {\mathsf{one}}, {\mathsf{a}}_3, {\mathsf{b}}_3 ]
\\
\ref{it:model-j} & \{ {\mathsf{a}}_3 \mapsto 2, {\mathsf{b}}_3 \mapsto 0, \ldots \} &
{\mathsf{G}} \Rightarrow P_{{\mathsf{ev}}}[ {\mathsf{e}}, 2, 0 ]
\\
\ref{it:model-i} & \{ {\mathsf{e}} \mapsto {\mathsf{plus}}({\mathsf{x}}_1, {\mathsf{x}}_2), \ldots \} &
\lnot P_{{\mathsf{ev}}}[ {\mathsf{plus}}({\mathsf{x}}_1, {\mathsf{x}}_2), {\mathsf{a}}_4, {\mathsf{b}}_4 ]
\\
\ref{it:model-j} & \{ {\mathsf{a}}_4 \mapsto 1, {\mathsf{b}}_4 \mapsto 1, \ldots \} &
{\mathsf{G}} \Rightarrow P_{{\mathsf{ev}}}[ {\mathsf{e}}, 1, 1 ]
\\
\ref{it:model-i} & \{ {\mathsf{e}} \mapsto {\mathsf{if}}( {\mathsf{leq}}({\mathsf{x}}_1, {\mathsf{one}}), {\mathsf{one}}, {\mathsf{x}}_1 ), \ldots \} &
\lnot P_{{\mathsf{ev}}}[ {\mathsf{if}}( {\mathsf{leq}}({\mathsf{x}}_1, {\mathsf{one}}), {\mathsf{one}}, {\mathsf{x}}_1 ), {\mathsf{a}}_5, {\mathsf{b}}_5 ]
\\
\ref{it:model-j} & \{ {\mathsf{a}}_5 \mapsto 1, {\mathsf{b}}_5 \mapsto 2, \ldots \} &
{\mathsf{G}} \Rightarrow P_{{\mathsf{ev}}}[ {\mathsf{e}}, 1, 2 ]
\\
\ref{it:model-i} & \{ {\mathsf{e}} \mapsto {\mathsf{if}}( {\mathsf{leq}}({\mathsf{x}}_1, {\mathsf{x}}_2), {\mathsf{x}}_2, {\mathsf{x}}_1 ), \ldots \} &
\lnot P_{{\mathsf{ev}}}[ {\mathsf{if}}( {\mathsf{leq}}({\mathsf{x}}_1, {\mathsf{x}}_2), {\mathsf{x}}_2, {\mathsf{x}}_1 ), {\mathsf{a}}_6, {\mathsf{b}}_6 ]
\\
\ref{it:model-j} & \text{none} & \\
\hline
\end{array}$$
For $i=1,\ldots,6$, ${\mathsf{a}}_i$ and ${\mathsf{b}}_i$ are fresh constants of type ${{\mathsf{Int}}}$.
We implemented the procedure in the [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}solver. Figure \[fig:run\] shows a run of that implementation over the conjecture from Example \[ex:max-sygus\]. In this run, note that each model found for ${\mathsf{e}}$ satisfies all values of counterexamples found for previous candidates. After the sixth iteration of Step \[it:model-i\], the procedure finds the candidate ${\mathsf{if}}( {\mathsf{leq}}({\mathsf{x}}_1, {\mathsf{x}}_2), {\mathsf{x}}_2, {\mathsf{x}}_1 )$, for which no counterexample exists, indicating that the procedure has found a solution for the synthesis conjecture. Currently, this problem can be solved in about $0.5$ seconds in the latest development version of [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}.
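To convey the alternation of the two steps outside of the solver, here is a brute-force rendering of the same candidate/counterexample loop (again purely illustrative, not the actual implementation): candidate terms of the datatype ${\mathsf{S}}$ are enumerated in order of increasing size, a candidate is retained only if it satisfies the specification on every counterexample input collected so far, and counterexamples are sought over a small integer grid rather than by a model-finding call. The size-ordered enumeration anticipates the search strategy described next.

```python
import operator
from itertools import product

# Compact evaluation of datatype terms, mirroring the axioms of Figure [fig:ev].
LEAVES = {"x1": lambda x1, x2: x1, "x2": lambda x1, x2: x2,
          "zero": lambda x1, x2: 0, "one": lambda x1, x2: 1}
OPS = {"plus": operator.add, "minus": operator.sub, "leq": operator.le,
       "eq": operator.eq, "and": lambda p, q: p and q}

def ev(d, x1, x2):
    if d in LEAVES:
        return LEAVES[d](x1, x2)
    op, *args = d
    vals = [ev(a, x1, x2) for a in args]
    if op == "if":
        return vals[1] if vals[0] else vals[2]
    if op == "not":
        return not vals[0]
    return OPS[op](*vals)

def int_terms(size):
    # S-terms with exactly `size` non-nullary constructor applications.
    if size == 0:
        yield from ("x1", "x2", "zero", "one")
        return
    for ls in range(size):                      # plus / minus: children share size-1
        rs = size - 1 - ls
        for a, b in product(int_terms(ls), int_terms(rs)):
            yield ("plus", a, b)
            yield ("minus", a, b)
    for cs in range(1, size):                   # if: condition and branches share size-1
        for ls in range(size - cs):
            rs = size - 1 - cs - ls
            for c, a, b in product(bool_terms(cs), int_terms(ls), int_terms(rs)):
                yield ("if", c, a, b)

def bool_terms(size):
    # C-terms with exactly `size` non-nullary constructor applications.
    if size == 0:
        return
    for ls in range(size):
        rs = size - 1 - ls
        for a, b in product(int_terms(ls), int_terms(rs)):
            yield ("leq", a, b)
            yield ("eq", a, b)
        for p, q in product(bool_terms(ls), bool_terms(rs)):
            yield ("and", p, q)
    yield from (("not", p) for p in bool_terms(size - 1))

def P(prog, x1, x2):                            # P_ev[prog, x] for the max example
    v = ev(prog, x1, x2)
    return v >= x1 and v >= x2 and (v == x1 or v == x2)

GRID = [(a, b) for a in range(-3, 4) for b in range(-3, 4)]
cexs = []                                       # counterexample inputs found so far
size, solution = 0, None
while solution is None:
    for prog in int_terms(size):                # candidate step: e consistent with cexs
        if all(P(prog, a, b) for a, b in cexs):
            cex = next(((a, b) for a, b in GRID if not P(prog, a, b)), None)
            if cex is None:                     # counterexample step finds no refutation
                solution = prog
                break
            cexs.append(cex)
    size += 1
print(solution)   # ('if', ('leq', 'x1', 'x2'), 'x2', 'x1')
```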
To make the procedure practical it is necessary to look for *small* solutions to synthesis conjectures. A simple way to limit the size of the candidate solutions is to consider smaller programs before larger ones. Adapting techniques for finding finite models of minimal size [@reynolds2013finite], we use a strategy that, starting from $n = 0$, searches for programs of size $n+1$ only after it has exhausted the search for programs of size $n$. In solvers based on the DPLL($T$) architecture, like [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}, this can be accomplished by introducing a splitting lemma of the form $( {{\mathsf{size}}}( {\mathsf{e}} ) \leq 0 \lor \lnot {{\mathsf{size}}}( {\mathsf{e}} ) \leq 0 )$ and asserting ${{\mathsf{size}}}( {\mathsf{e}} ) \leq 0$ as the first decision literal, where ${{\mathsf{size}}}$ is a function symbol of type $\sigma \to {{\mathsf{Int}}}$ for every datatype sort $\sigma$ and stands for the function that maps each datatype value to its term size (i.e., the number of non-nullary constructor applications in the term). We do the same for ${{\mathsf{size}}}( {\mathsf{e}} ) \leq 1$ if and when $\lnot {{\mathsf{size}}}( {\mathsf{e}} ) \leq 0$ becomes asserted. We extended the procedure for algebraic datatypes in [<span style="font-variant:small-caps;">cvc</span>[4]{}]{} [@BarST-JSAT-07] to handle constraints involving ${{\mathsf{size}}}$. The extended procedure remains a decision procedure for input problems with a concrete upper bound on terms of the form ${{\mathsf{size}}}(u)$, for each variable or uninterpreted constant $u$ of datatype sort in the problem. This is enough for our purposes since the only term $u$ like that in our synthesis procedure is ${\mathsf{e}}$.
*\[prop:sygus-sound-complete\] With the search strategy above, the procedure in Figure \[fig:proc2\] has the following properties:*
1. (Solution Soundness) Every term it returns can be mapped to a solution of the original synthesis conjecture $\exists f\,\forall \vec x\, P[f, \vec x]$ under the restriction $R$.
2. (Refutation Soundness) If it answers “no solution found”, the original conjecture has no solutions under the restriction $R$.
3. (Solution Completeness) If the original conjecture has a solution under $R$, the procedure will find one.
To show solution soundness, consider the case when the procedure returns $e^{\mathcal{I}}$ as a solution. Then, $\Gamma \cup \neg P_{{\mathsf{ev}}}[ e^{\mathcal{I}}, \vec{{\mathsf{k}}}]$ is ${T_\mathrm{D}}$-unsatisfiable for some $\Gamma, \vec{{\mathsf{k}}}$, where $\Gamma$ is ${T_\mathrm{D}}$-satisfiable and $\vec{{\mathsf{k}}}$ is a tuple of distinct fresh constants. Since $\vec{{\mathsf{k}}}$ are fresh, $\Gamma \cup \exists \vec{x}\, \neg P_{{\mathsf{ev}}}[ e^{\mathcal{I}}, \vec{x}]$ is ${T_\mathrm{D}}$-unsatisfiable. Since $\Gamma$ is ${T_\mathrm{D}}$-satisfiable and $\Gamma \cup \exists \vec{x}\, \neg P_{{\mathsf{ev}}}[ e^{\mathcal{I}}, \vec{x}]$ is not, then at least one model of ${T_\mathrm{D}}$ (namely, one for $\Gamma$) does not satisfy $\exists \vec{x}\, \neg P_{{\mathsf{ev}}}[ e^{\mathcal{I}}, \vec{x}]$. Thus, since ${T_\mathrm{D}}$ is satisfaction complete, no models of ${T_\mathrm{D}}$ satisfy $\exists \vec{x}\, \neg P_{{\mathsf{ev}}}[ e^{\mathcal{I}}, \vec{x}]$, and thus all models of ${T_\mathrm{D}}$ satisfy $\forall \vec{x}\, P_{{\mathsf{ev}}}[ e^{\mathcal{I}}, \vec{x}]$. Assuming our translation from $P$ to $P_{{\mathsf{ev}}}$ is faithful, the analogue of $e^{\mathcal{I}}$ in the language of $T$ is a solution for the conjecture $\exists f\,\forall \vec x\, P[f, \vec x]$.
To show refutation soundness, consider the case when the procedure returns “no solution found". Then, there exists a $\Gamma = ( \Gamma' \cup G \Rightarrow P_{{\mathsf{ev}}}[ e, \vec{{\mathsf{k}}}^{\mathcal J}] )$ such that $\Gamma'$ is ${T_\mathrm{D}}$-satisfiable, and $\Gamma \cup G$ is ${T_\mathrm{D}}$-unsatisfiable. Clearly based on the clauses added by the procedure, we have that $\Gamma$ is equivalent to $\Gamma'' \cup G \Rightarrow ( P_{{\mathsf{ev}}}[ {\mathsf{e}}, \vec{{\mathsf{u}}}_1] \wedge \ldots \wedge P_{{\mathsf{ev}}}[ {\mathsf{e}}, \vec{{\mathsf{u}}}_n] )$, for some $\vec{{\mathsf{u}}}_1 \ldots \vec{{\mathsf{u}}}_n$ where $\Gamma'' \subseteq \Gamma'$ is ${T_\mathrm{D}}$-satisfiable and does not contain $G$ or ${\mathsf{e}}$. Since $\Gamma \cup G$ is ${T_\mathrm{D}}$-unsatisfiable, we have that $\Gamma'' \cup P_{{\mathsf{ev}}}[ {\mathsf{e}}, \vec{{\mathsf{u}}}_1] \wedge \ldots \wedge P_{{\mathsf{ev}}}[ {\mathsf{e}}, \vec{{\mathsf{u}}}_n]$ is ${T_\mathrm{D}}$-unsatisfiable. Since $\Gamma''$ does not contain ${\mathsf{e}}$, $\Gamma'' \cup \exists y\, ( P_{{\mathsf{ev}}}[ y, \vec{{\mathsf{u}}}_1] \wedge \ldots \wedge P_{{\mathsf{ev}}}[ y, \vec{{\mathsf{u}}}_n] )$ is ${T_\mathrm{D}}$-unsatisfiable. Since ${T_\mathrm{D}}$ is satisfaction complete and $\Gamma''$ is ${T_\mathrm{D}}$-satisfiable, $\exists y\, ( P_{{\mathsf{ev}}}[ y, \vec{{\mathsf{u}}}_1] \wedge \ldots \wedge P_{{\mathsf{ev}}}[ y, \vec{{\mathsf{u}}}_n] )$ is ${T_\mathrm{D}}$-unsatisfiable. Thus, $\exists y\, ( P_{{\mathsf{ev}}}[ y, \vec{{\mathsf{u}}}_1] \wedge \ldots \wedge P_{{\mathsf{ev}}}[ y, \vec{{\mathsf{u}}}_n] )$ is ${T_\mathrm{D}}$-unsatisfiable, and thus $\exists y\, \forall \vec{x}\, P_{{\mathsf{ev}}}[ y, \vec{x}]$ is ${T_\mathrm{D}}$-unsatisfiable. Assuming our translation from $P$ to $P_{{\mathsf{ev}}}$ is faithful, this implies there is no solution for the conjecture $\exists f\,\forall \vec x\, P[f, \vec x]$.
Given solution and refutation soundness of the procedure, to show the procedure is solution complete, it suffices to show that the procedure terminates when the original conjecture has a solution under $R$. Let $\lambda \vec x.\ t$ be such a solution, and let $d$ be the analogue of $t$ in the language of ${T_\mathrm{D}}$. Let $n$ be the number of datatype values of the same type as $d$ whose size is at most that of $d$; this number is finite. For $i = 1, 2, \ldots$, let ${\mathcal{I}}_i$ and $\mathcal J_i$ be the models found on the $i^{th}$ iteration of Steps \[it:model-i\] and \[it:model-j\], respectively. Assume the procedure runs at least $k$ iterations, and let $1 \leq j < k$. Since $\mathcal J_j$ satisfies $\neg P_{{\mathsf{ev}}}[e^{{\mathcal{I}}_j}, \vec{{\mathsf{k}}}]$, all models of ${T_\mathrm{D}}$ satisfy $\neg P_{{\mathsf{ev}}}[{\mathsf{e}}^{{\mathcal{I}}_j}, \vec{{\mathsf{k}}}^{\mathcal J_j}]$ since ${T_\mathrm{D}}$ is satisfaction complete. Since ${\mathcal{I}}_k$ satisfies $G$ and the clause $G \Rightarrow P_{{\mathsf{ev}}}[{\mathsf{e}}, \vec{{\mathsf{k}}}^{\mathcal J_j}]$ added at iteration $j$, it must also satisfy $P_{{\mathsf{ev}}}[{\mathsf{e}}, \vec{{\mathsf{k}}}^{\mathcal J_j}]$, and thus ${\mathsf{e}}^{{\mathcal{I}}_k} \neq {\mathsf{e}}^{{\mathcal{I}}_j}$. Thus, the values ${\mathsf{e}}^{{\mathcal{I}}_1}, {\mathsf{e}}^{{\mathcal{I}}_2}, \ldots$ are pairwise distinct, and the procedure in Figure \[fig:proc2\] executes at most $n$ iterations of Step \[it:model-i\]. Since the background theory ${T_\mathrm{D}}$ is decidable, Steps \[it:model-i\] and \[it:model-j\] are terminating, and thus the procedure is terminating when a solution exists.
Note that by this proposition the procedure can diverge only if the input synthesis conjecture has no solution.
Single Invocation Techniques for Syntax-Guided Problems {#sec:si-syntax-guided}
=======================================================
In this section, we consider the combined case of *single-invocation synthesis conjectures with syntactic restrictions*. Given a set $R$ of syntactic restrictions expressed by a datatype ${\mathsf{S}}$ for programs and a datatype ${\mathsf{C}}$ for Boolean expressions, consider the case where $(i)$ ${\mathsf{S}}$ contains the constructor $\mathsf{if} : {\mathsf{C}} \times {\mathsf{S}} \times {\mathsf{S}} \rightarrow {\mathsf{S}}$ (with the expected meaning) and $(ii)$ the function to be synthesized is specified by a single-invocation property that can be expressed as a term of sort ${\mathsf{C}}$. This is the case for the conjecture from Example \[ex:max-sygus\], where the property $P_{{\mathsf{ev}}}[g, \vec x]$ can be rephrased as: $$\begin{aligned}
\label{eqn:gen-sygus}
P_{{\mathsf{C}}}[g, \vec x] & := & {{\mathsf{ev}}}( {\mathsf{and}}( {\mathsf{leq}}( {\mathsf{x}}_1, g ), {\mathsf{and}}( {\mathsf{leq}}( {\mathsf{x}}_2, g ), {\mathsf{or}}( {\mathsf{eq}}( g, {\mathsf{x}}_1 ), {\mathsf{eq}}( g, {\mathsf{x}}_2 ) ) ) ), \vec x)\end{aligned}$$ where again $g$ has type ${\mathsf{S}}$, $\vec x = (x_1, x_2)$, and $x_1$ and $x_2$ have type ${{\mathsf{Int}}}$. The procedure in Figure \[fig:proc1\] can be readily modified to apply to this formula, with $P_{{\mathsf{C}}}[g, \vec{{\mathsf{k}}}]$ and $g$ taking the roles of $Q[\vec{{\mathsf{k}}}, y]$ and $y$ in that figure, respectively, since the solutions it generates then meet our syntactic requirements. Running this modified procedure instead of the one in Figure \[fig:proc2\] has the advantage that only the outputs of a solution need to be synthesized, not the conditions in $\mathsf{ite}$-terms. However, in our experimental evaluation we found that the overhead of using an embedding into datatypes for syntax-guided problems is significant with respect to the performance of the solver on problems with no syntactic restrictions. For this reason, we advocate an approach for single-invocation synthesis conjectures with syntactic restrictions that runs the procedure from Figure \[fig:proc1\] as is, ignoring the syntactic restrictions $R$, and subsequently reconstructs from its returned solution one satisfying the restrictions. For that, it is useful to assume that terms $t$ in $T$ can be effectively reduced to some ($T$-equivalent and unique) *normal form*, which we denote by ${t\!\downarrow}$.
Say the procedure from Figure \[fig:proc1\] returns a solution $\lambda \vec x.\, t$ for a function $f$. To construct from that a solution that meets the syntactic restrictions specified by datatype ${\mathsf{S}}$, we run the iterative procedure described in Figure \[fig:proc3\]. This procedure maintains an evolving set $A$ of triples of the form $( t, s, D )$, where $D$ is a datatype, $t$ is a term in normal form, and $s$ is a term, equivalent to $t$, that satisfies the restrictions specified by $D$. The procedure incrementally makes calls to the subprocedure [$\mathrm{rcon}$]{}, which takes a normal-form term $t$, a datatype $D$, and the set $A$ above, and returns a pair $( s, U )$ where $s$ is a term equivalent to $t$ in $T$, and $U$ is a set of pairs $(s', D')$ where $s'$ is a subterm of $s$ that fails to satisfy the syntactic restriction expressed by datatype $D'$. Overall, the procedure alternates between calling [$\mathrm{rcon}$]{} and adding triples to $A$ until the call ${\ensuremath{\mathrm{rcon}}}( {t\!\downarrow}, {\mathsf{S}}, A )$ returns a pair of the form $( s, \emptyset )$, in which case $s$ is a solution satisfying the syntactic restrictions specified by ${\mathsf{S}}$.
1. $A : = \emptyset$ ; $t' := {t\!\downarrow}$
2. for $i = 1, 2, \ldots$
1. $( s, U ) := {\ensuremath{\mathrm{rcon}}}( t', {\mathsf{S}}, A )$;
2. \[it:enum\] if $U$ is empty, return $s$; otherwise, for each datatype $D_j$ occurring in $U$
- let $d_i$ be the $i^{th}$ term in a fair enumeration of the elements of $D_j$
- let $t_i$ be the analogue of $d_i$ in the background theory $T$
- add $( {t_i\!\downarrow}, t_i, D_j )$ to $A$
[$\mathrm{rcon}$]{}$( t, D, A )$
-   if $(t, s, D) \in A$ for some $s$, return $( s, \emptyset )$; otherwise, do one of the following:

    (1) choose a term $f( t_1, \ldots, t_n )$ such that ${ f( t_1, \ldots, t_n ) \!\downarrow}\ = t$ and $f$ has an analogue $c^{D_1 \ldots D_n D}$ in $D$;
        let $( s_i, U_i ) = {\ensuremath{\mathrm{rcon}}}( {t_i\!\downarrow}, D_i, A )$ for $i = 1, \ldots, n$;
        return $( f( s_1, \ldots, s_n ), U_1 \cup \ldots \cup U_n )$

    (2) return $( t, \{ ( t, D ) \} )$
Say we wish to construct a solution equivalent to $\lambda x_1\,x_2.\: x_1+(2*x_2)$ that meets the restrictions specified by datatype ${\mathsf{S}}$ from Example \[ex:max-sygus\]. To do so, we let $A = \emptyset$, and call ${\ensuremath{\mathrm{rcon}}}({(x_1+(2*x_2))\!\downarrow}, {\mathsf{S}}, A )$. Since $A$ is empty and $+$ is the analogue of constructor ${\mathsf{plus}}^{ {\mathsf{S}} {\mathsf{S}} {\mathsf{S}}}$ of ${\mathsf{S}}$, assuming ${(x_1+(2*x_2))\!\downarrow}\ = x_1+(2*x_2)$, we may choose to return a pair based on the result of calling ${\ensuremath{\mathrm{rcon}}}$ on ${x_1\!\downarrow}$ and ${(2*x_2)\!\downarrow}$. Since ${\mathsf{x}}_1^{{\mathsf{S}}}$ is a constructor of ${\mathsf{S}}$ and ${x_1\!\downarrow}\ = x_1$, ${\ensuremath{\mathrm{rcon}}}( x_1, {\mathsf{S}}, A )$ returns $( x_1, \emptyset )$. Since ${\mathsf{S}}$ does not have a constructor for $*$, we must either choose a term $t$ such that ${t\!\downarrow}\ = {(2*x_2)\!\downarrow}$ where the topmost symbol of $t$ is the analogue of a constructor in ${\mathsf{S}}$, or otherwise return the pair $( 2*x_2, \{ (2*x_2, {\mathsf{S}} ) \} )$. Suppose we do the latter, and thus ${\ensuremath{\mathrm{rcon}}}( x_1+(2*x_2), {\mathsf{S}}, A )$ returns $( x_1+(2*x_2), \{ (2*x_2, {\mathsf{S}} ) \} )$. Since the second component of this pair is not empty, we pick in Step \[it:enum\] the first element of ${\mathsf{S}}$, ${\mathsf{x}}_1$ say, and add $( x_1, x_1, {\mathsf{S}} )$ to $A$. We then call ${\ensuremath{\mathrm{rcon}}}( {(x_1+(2*x_2))\!\downarrow}, {\mathsf{S}}, A )$, which by the same strategy as above returns $( x_1+(2*x_2), \{ (2*x_2, {\mathsf{S}} ) \} )$. This process continues until we pick, say, the term ${\mathsf{plus}}( {\mathsf{x_2}}, {\mathsf{x_2}} )$, whose analogue is $x_2+x_2$. Assuming ${(x_2+x_2)\!\downarrow}\ = {(2*x_2)\!\downarrow}$, after adding the triple $( 2*x_2, x_2+x_2, {\mathsf{S}} )$ to $A$, ${\ensuremath{\mathrm{rcon}}}( {(x_1+(2*x_2))\!\downarrow}, {\mathsf{S}}, A )$ returns the pair $( x_1+(x_2+x_2), \emptyset )$, indicating that $\lambda x_1\,x_2.\, x_1+(x_2+x_2)$ is equivalent to $\lambda x_1\,x_2.\, x_1+(2*x_2)$, and meets the restrictions specified by ${\mathsf{S}}$.
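To make the control flow of Figure \[fig:proc3\] concrete, the following minimal sketch instantiates it for linear integer arithmetic with a grammar offering ${\mathsf{plus}}$, ${\mathsf{x}}_1$, ${\mathsf{x}}_2$ and the constants $0$ and $1$. The term representation, the normal form (a sorted tuple of monomials), and the enumeration strategy are illustrative assumptions made for this sketch and do not reflect the actual data structures of [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}.

```python
from itertools import count

# Grammar S of the sketch: plus(S, S) | x1 | x2 | 0 | 1 (an assumption for
# illustration; the real benchmarks use richer grammars).
LEAVES = [('var', 'x1'), ('var', 'x2'), ('const', 0), ('const', 1)]

def normalize(t):
    """Normal form t|down: a sorted tuple of (monomial, coefficient) pairs."""
    if t[0] == 'const':
        acc = {None: t[1]}
    elif t[0] == 'var':
        acc = {t[1]: 1}
    elif t[0] == '*':                 # multiplication by an integer constant
        _, c, u = t
        acc = {m: c * k for m, k in dict(normalize(u)).items()}
    else:                             # ('+', a, b)
        _, a, b = t
        acc = dict(normalize(a))
        for m, k in dict(normalize(b)).items():
            acc[m] = acc.get(m, 0) + k
    return tuple(sorted([(m, k) for m, k in acc.items() if k != 0], key=str))

def rcon(t, A):
    """Return (s, U): s is T-equivalent to t; U collects normal forms of
    subterms that could not yet be matched against the grammar."""
    nf = normalize(t)
    if nf in A:                       # case: a matching triple is already in A
        return A[nf], set()
    if t in LEAVES:                   # t is itself a grammar leaf
        return t, set()
    if t[0] == '+':                   # '+' has an analogue (plus) in S
        s1, u1 = rcon(t[1], A)
        s2, u2 = rcon(t[2], A)
        return ('+', s1, s2), u1 | u2
    return t, {nf}                    # case (2): give up on this subterm for now

def terms_of_size(n):
    """Fair enumeration: all grammar terms with exactly n nodes."""
    if n == 1:
        return list(LEAVES)
    out = []
    for i in range(1, n - 1):
        for a in terms_of_size(i):
            for b in terms_of_size(n - 1 - i):
                out.append(('+', a, b))
    return out

def reconstruct(t, max_size=8):
    A = {}                            # normal form -> equivalent grammar term
    for n in count(1):
        s, U = rcon(t, A)
        if not U:
            return s
        if n > max_size:
            raise RuntimeError('no reconstruction found within the size bound')
        for cand in terms_of_size(n):
            A.setdefault(normalize(cand), cand)

# x1 + 2*x2 is reconstructed as x1 + (x2 + x2), as in the example above.
print(reconstruct(('+', ('var', 'x1'), ('*', 2, ('var', 'x2')))))
```

Running the sketch on the term $x_1+(2*x_2)$ from the example above prints the nested tuple representing $x_1+(x_2+x_2)$.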
This procedure depends upon the use of normal forms for terms. It should be noted that, since the top symbol of $t$ is generally ${{\mathsf{ite}}}$, this normalization includes both low-level rewriting of literals within $t$ and high-level rewriting techniques such as ${{\mathsf{ite}}}$ simplification, redundant subterm elimination, and destructive equality resolution. Also, notice that we are not assuming that ${t\!\downarrow}\ = {s\!\downarrow}$ if and only if $t$ is equivalent to $s$; thus equality of normal forms only underapproximates $T$-equivalence of terms. Having a (more) consistent normal form for terms allows us to compute a (tighter) underapproximation, thus improving the performance of the reconstruction. In this procedure, we use the same normal form for terms that is used by the individual decision procedures of [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}. This is unproblematic for theories such as linear arithmetic, whose normal form for terms is a sorted list of monomials, but it can be problematic for theories such as bitvectors. As a consequence, we use several optimizations, omitted in the description of the procedure in Figure \[fig:proc3\], to increase the likelihood that the procedure terminates in a reasonable amount of time. For instance, in our implementation the return value of ${\ensuremath{\mathrm{rcon}}}$ is not recomputed every time $A$ is updated. Instead, we maintain an evolving directed acyclic graph (dag), whose nodes are pairs $( t, S )$ for term $t$ and datatype $S$ (the terms we have yet to reconstruct), and whose edges connect each node to the direct subterms of its term. Datatype terms are enumerated for all datatypes in this dag, which is incrementally pruned as triples are added to $A$ until it becomes empty. Another optimization is that the procedure [$\mathrm{rcon}$]{} may choose to try to reconstruct *multiple* terms of the form $f( t_1, \ldots, t_n )$ simultaneously when matching a term $t$ to a syntactic specification $S$, reconstructing $t$ when any such term can be reconstructed.
Although the overhead of this procedure can be significant when large subterms do not meet the syntactic restrictions, we found that in practice it quickly terminates successfully for a majority of the solutions we considered where reconstruction was possible, as we discuss in the next section. Furthermore, it makes our implementation more robust, since it effectively treats in the same way different properties that are equal modulo normalization (which is parametric in the built-in theories we consider).
Experimental Evaluation
=======================
We implemented the techniques from the previous sections in the SMT solver [<span style="font-variant:small-caps;">cvc</span>[4]{}]{} [@CVC4-CAV-11], which has support for quantified formulas and a wide range of theories including arithmetic, bitvectors, and algebraic datatypes. We evaluated our implementation on 243 benchmarks used in the SyGuS 2014 competition [@AlurETAL2014SyGuSMarktoberdorf] that were publicly available on the StarExec execution service [@StuST-IJCAR-14]. The benchmarks are in a new format for specifying syntax-guided synthesis problems [@DBLP:journals/corr/RaghothamanU14]. We added parsing support to [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}for most features of this format. All SyGuS benchmarks considered contain synthesis conjectures whose background theory is either linear integer arithmetic or bitvectors. We made some minor modifications to benchmarks to avoid naming conflicts, and to explicitly define several bitvector operators that are not supported natively by [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}.
We considered multiple configurations of [<span style="font-variant:small-caps;">cvc</span>[4]{}]{} corresponding to the techniques mentioned in this paper. Configuration [**cvc4+sg**]{} executes the syntax-guided procedure from Section \[sec:syntax-guided\], even in cases where the synthesis conjecture is single-invocation. Configuration [**cvc4+si-r**]{} executes the procedure from Section \[sec:refutation-based\] on all benchmarks whose conjectures it can deduce to be single-invocation. In total, it discovered that 176 of the 243 benchmarks could be rewritten into a form that was single-invocation. This configuration simply ignores any syntax restrictions on the expected solution. Finally, configuration [**cvc4+si**]{} uses the same procedure as [**cvc4+si-r**]{} but then attempts to reconstruct any found solution as a term in the required syntax, as described in Section \[sec:si-syntax-guided\].
We ran all configurations on all benchmarks on the StarExec cluster.[^6] We provide comparative results here primarily against the enumerative CEGIS solver [<span style="font-variant:small-caps;">ESolver</span>]{} [@Udupa2013], the winner of the SyGuS 2014 competition. In our tests, we found that [<span style="font-variant:small-caps;">ESolver</span>]{}performed significantly better than the other entrants of that competition.
The results for benchmarks with single-invocation properties are shown in Figure \[fig:results-solved-si\]. Configuration [**cvc4+si-r**]{} found a solution (although not necessarily in the required language) very quickly for a majority of benchmarks. It terminated successfully for 168 of 176 benchmarks, and in less than a second for 159 of those. Not all solutions found using this method met the syntactic restrictions. Nevertheless, our methods for reconstructing these solutions into the required grammar, implemented in configuration [**cvc4+si**]{}, succeeded in 102 cases, or 61% of the total. This is 32 more benchmarks than the 70 solved by [<span style="font-variant:small-caps;">ESolver</span>]{}, the best known solver for these benchmarks so far. In total, [**cvc4+si**]{} solved 34 benchmarks that [<span style="font-variant:small-caps;">ESolver</span>]{}did not, while [<span style="font-variant:small-caps;">ESolver</span>]{}solved 2 that [**cvc4+si**]{} did not.
The solutions returned by [**cvc4+si-r**]{} were often large, with on the order of 10K subterms for harder benchmarks. However, after exhaustively applying simplification techniques during reconstruction with configuration [**cvc4+si**]{}, we found that the size of those solutions is comparable to that of the solutions produced by other solvers, and in some cases even smaller. For instance, among the 68 benchmarks solved by both [<span style="font-variant:small-caps;">ESolver</span>]{} and [**cvc4+si**]{}, the former produced a smaller solution in 15 cases and the latter in 9. Only in 2 cases did [**cvc4+si**]{} produce a solution that had 10 more subterms than the solution produced by [<span style="font-variant:small-caps;">ESolver</span>]{}. This indicates that, in addition to having a high precision, the techniques from Section \[sec:si-syntax-guided\] used for solution reconstruction are also effective at producing succinct solutions for this benchmark library.
Configuration [**cvc4+sg**]{} does not take advantage of the fact that a synthesis conjecture is single-invocation. However, it was able to solve 48 of these benchmarks, including a small number not solved by any other configuration, like one from the [**icfp**]{} class whose solution was a single argument function over bitvectors that shifted its input right by four bits. In addition to being solution complete, [**cvc4+sg**]{} always produces solutions of minimal term size, something not guaranteed by the other solvers and [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}configurations. Of the 47 benchmarks solved by both [**cvc4+sg**]{} and [<span style="font-variant:small-caps;">ESolver</span>]{}, the solution returned by [**cvc4+sg**]{} was smaller than the one returned by [<span style="font-variant:small-caps;">ESolver</span>]{}in 6 cases, and had the same size in the others. This provides an experimental confirmation that the fairness techniques for term size described in Section \[sec:syntax-guided\] ensure minimal size solutions.
Configuration [**cvc4+sg**]{} is the only [<span style="font-variant:small-caps;">cvc</span>[4]{}]{} configuration that can process benchmarks with synthesis conjectures that are not single-invocation. The results for [<span style="font-variant:small-caps;">ESolver</span>]{} and [**cvc4+sg**]{} on such benchmarks from SyGuS 2014 are shown in Figure \[fig:results-solved\]. Configuration [**cvc4+sg**]{} solved 53 out of a total of 67. [<span style="font-variant:small-caps;">ESolver</span>]{} solved 58 and additionally reported that 6 had no solution. In more detail, [<span style="font-variant:small-caps;">ESolver</span>]{} solved 7 benchmarks that [**cvc4+sg**]{} did not, while [**cvc4+sg**]{} solved 2 benchmarks (from the [**vctrl**]{} class) that [<span style="font-variant:small-caps;">ESolver</span>]{} could not solve. In terms of precision, [**cvc4+sg**]{} is quite competitive with the state of the art on these benchmarks. To give other points of comparison, at the SyGuS 2014 competition [@AlurETAL2014SyGuSMarktoberdorf] the second-best solver (the Stochastic solver) solved 40 of these benchmarks within a one-hour limit and Sketch solved 23.
In total, over the entire SyGuS 2014 benchmark set, 155 benchmarks can be solved by a configuration of [<span style="font-variant:small-caps;">cvc</span>[4]{}]{} that, whenever possible, runs the methods for single-invocation properties described in Section \[sec:refutation-based\], and otherwise runs the method described in Section \[sec:syntax-guided\]. This number is 27 higher than the 128 benchmarks solved in total by [<span style="font-variant:small-caps;">ESolver</span>]{}. Running configurations [**cvc4+sg**]{} and [**cvc4+si**]{} in parallel[^7] solves 156 benchmarks, indicating that [<span style="font-variant:small-caps;">cvc</span>[4]{}]{} is highly competitive with state-of-the-art tools for syntax-guided synthesis. [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}’s performance is noticeably better than [<span style="font-variant:small-caps;">ESolver</span>]{}’s on single-invocation properties, where our new quantifier instantiation techniques give it a distinct advantage.
We conclude by observing that for certain classes of benchmarks, configuration [**cvc4+si**]{} scales significantly better than state-of-the-art synthesis tools. Figure \[fig:results-max\] shows this in comparison with [<span style="font-variant:small-caps;">ESolver</span>]{} for the problem of synthesizing a function that computes the maximum of $n$ integer inputs. As reported by Alur et al. [@AlurETAL2014SyGuSMarktoberdorf], no solver in the SyGuS 2014 competition was able to synthesize such a function for $n = 5$ within one hour.
For benchmarks from the [**array**]{} class, whose solutions are loop-free programs that compute the first instance of an element in a sorted array, the best solver reported in [@AlurETAL2014SyGuSMarktoberdorf] was Sketch, which solved a problem for an array of length 7 in approximately 30 minutes.[^8] In contrast, [**cvc4+si**]{} was able to reconstruct solutions for arrays of size 15 (the largest benchmark in the class) in 0.3 seconds, and solved all but 8 of the benchmarks in the class within 1 second.
$n$ 2 3 4 5 6 7 8 9 10
----------------- ------ --------- ------ ------ ----- ----- ----- ----- ------
[**esolver**]{} 0.01 1377.10 – – – – – – –
[**cvc4+si**]{} 0.01 0.02 0.03 0.05 0.1 0.3 1.6 8.9 81.5
Conclusion
==========
We have shown that SMT solvers, instead of just acting as subroutines for automated software synthesis tasks, can be instrumented to perform synthesis themselves. We have presented a few approaches for enabling SMT solvers to construct solutions for the broad class of syntax-guided synthesis problems and discussed their implementation in [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}. This is, to the best of our knowledge, the first implementation of synthesis inside an SMT solver and it already shows considerable promise. Using a novel quantifier instantiation technique and a solution enumeration technique for the theory of algebraic datatypes, our implementation is competitive with the state of the art represented by the systems that participated in the 2014 syntax-guided synthesis competition. Moreover, for the important class of single-invocation problems when syntax restrictions permit the if-then-else operator, our implementation significantly outperforms those systems.
We would like to thank Liana Hadarean for helpful discussions on the normal form used in [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}for bit vector terms.
[^1]: This work is supported in part by the European Research Council (ERC) Project *Implicit Programming* and Swiss National Science Foundation Grant *Constraint Solving Infrastructure for Program Analysis*.
[^2]: This paper is dedicated to the memory of Morgan Deters who died unexpectedly in Jan 2015.
[^3]: Other approaches in the verification and synthesis literature also rely implicitly, and in some cases unwittingly, on this restriction or stronger ones. We make satisfaction completeness explicit here as a sufficient condition for reducing satisfiability problems to unsatisfiability ones.
[^4]: An example of a property that is *not* single-invocation is $\forall x_1\,x_2\, f( x_1, x_2 ) {\approx}f( x_2, x_1 )$.
[^5]: We stress again, that both the instrumentation of the solver and the satisfaction completeness argument for the extended theory are generic with respect to the syntactic restriction on the synthesis problem and the original satisfaction complete theory $T$.
[^6]: A detailed summary can be found at [<http://lara.epfl.ch/w/cvc4-synthesis>.]{}
[^7]: [<span style="font-variant:small-caps;">cvc</span>[4]{}]{}has a *portfolio* mode that allows it to run multiple configurations at the same time.
[^8]: These benchmarks, as contributed to the SyGuS benchmark set, use integer variables only; they were generated by expanding fixed-size arrays and contain no operations on arrays.
---
abstract: 'This paper concerns regular connections on trivial algebraic $G$-principal fiber bundles over the infinitesimal punctured disc, where $G$ is a connected reductive linear algebraic group over an algebraically closed field of characteristic zero. We show that the pull-back of every regular connection to an appropriate covering of the infinitesimal punctured disc is gauge equivalent to a connection of the form $X z^{-1}\operatorname{d}\! z$ for some $X$ in the Lie algebra of $G$. We may even arrange that the only rational eigenvalue of ${{\operatorname}{ad}}X$ is zero. Our results allow a classification of regular ${{\operatorname}{SL}}_n$-connections up to gauge equivalence.'
address: 'Mathematisches Institut, Universit[ä]{}t Freiburg, Eckerstra[ß]{}e 1, D-79104 Freiburg, Germany'
author:
- 'Olaf M. Schn[ü]{}rer'
title: Regular Connections on Principal Fiber Bundles over the Infinitesimal Punctured Disc
---
Introduction {#S:introduction}
============
Let $G$ be a linear algebraic group over an algebraically closed field $k$ of characteristic zero, and let ${\mathfrak{g}}$ be its Lie algebra. The loop group $G((z))=G(k((z)))$ acts as a gauge group on the set ${\cal{G}}= {\mathfrak{g}}\otimes_k k((z)) {{\,{\operatorname}{d}\!z}}$ of connections on the trivial algebraic $G$-principal fiber bundle over the infinitesimal punctured disc. For $G={{\operatorname}{GL}}_n$, this action is given by $$g\left[A {{\,{\operatorname}{d}\!z}}\right]
=\left(g A g{^{-1}}+ \tfrac{\partial}{\partial z}(g)g{^{-1}}\right)
{{\,{\operatorname}{d}\!z}},$$ where $g \in {{\operatorname}{GL}}_n((z))$, $A \in {\mathfrak{gl}}_n \otimes_k k((z))$, and $\tfrac{\partial}{\partial z}$ acts on each entry of the matrix $g$. If $G \subset {{\operatorname}{GL}}_n$ is a closed subgroup, this action induces an action of $G((z))$ on ${\cal{G}}$. According to [@BV §8.2, Definition], a connection is regular if it is gauge equivalent to an element of ${\mathfrak{g}}\otimes_k k[[z]]\, z{^{-1}}{{\,{\operatorname}{d}\!z}}$. For a positive integer $m$, we define the inclusion ${{{m}^\ast}}: k((z)) {\hookrightarrow}k((z))$ by ${{{m}^\ast}}(f(z)) = f(z^m)$. Geometrically, it corresponds to an $m$-fold covering of the infinitesimal punctured disc. We can pull back every connection $A$ to a connection ${{{m}^\ast}}(A)$. Now the main results of this article can be formulated.
Let $G$ be a connected reductive linear algebraic group. There exists a positive integer $m$, such that for every regular connection $A$ the pull-back connection ${{{m}^\ast}}(A)$ is gauge equivalent to $X z{^{-1}}{{\,{\operatorname}{d}\!z}}$ for a suitable $X \in {\mathfrak{g}}$.
In [@BV §8.4 (c)], a similar statement is given for any affine algebraic group over the complex numbers. Its proof, however, uses analytic methods. Here, we give a purely algebraic proof for a connected reductive group.
Let $G$ be a connected reductive linear algebraic group. For every regular connection $A$, there exists a positive integer $m$ and an element $X \in {\mathfrak{g}}$, such that the only rational eigenvalue of ${{\operatorname}{ad}}X$ is zero and the pull-back connection ${{{m}^\ast}}(A)$ is gauge equivalent to $X z{^{-1}}{{\,{\operatorname}{d}\!z}}$.
The proofs of Theorems \[T:standard\] and \[T:standardnull\] are mainly based on the structure theory of the group $G$ and its Lie algebra ${\mathfrak{g}}$. We use these theorems and Galois cohomology in order to get a classification of regular ${{\operatorname}{SL}}_n$-connections up to ${{\operatorname}{SL}}_n((z))$-equivalence, see Theorem \[S:slnklassifik\] and Remarks \[rem:slnklassifik\], \[rem:nice-class\]. Our classification strategy is motivated by [@BV]. Some steps of this strategy can also be applied to other connected reductive linear algebraic groups, see [@OSdiplom].
This paper is organized as follows. Using results from [@DG], we define in Section \[S:conngauge\] the action of the gauge group on the space of connections intrinsically, [[i.e., ]{}]{}without choosing a closed embedding of $G$ into some ${{\operatorname}{GL}}_n$. We repeat the definitions of regular and aligned connections given in [@BV] and recall that every regular connection is gauge equivalent to an aligned connection. In Section \[S:pullback\], we explain how to pull back connections. We call two connections related, if they become gauge equivalent in some covering. Using Steinberg’s theorem (cf. [@Steinberg]), we show that for a connected group $G$ the relatives of a connection $A$ up to gauge equivalence correspond bijectively to a set ${{\operatorname}{H}}^1(K; A)$ defined via Galois cohomology. We prove our main Theorems \[T:standard\] and \[T:standardnull\] in Section \[S:standard\]. Section \[S:dmoduln\] contains the classification of regular ${{\operatorname}{GL}}_n$-connections up to ${{\operatorname}{GL}}_n((z))$-equivalence. This is classical, see for example [@BV §3]. We explain the relation between $n$-dimensional (Fuchsian) $D$-modules (see [@Manin]) and (regular) ${{\operatorname}{GL}}_n$-connections. If we translate the classification into the language of $D$-modules, we obtain a result of Manin ([@Manin]). The results of the previous sections are used in Section \[S:klassifikation\] in order to classify regular ${{\operatorname}{SL}}_n$-connections up to relationship and up to gauge equivalence. These classifications use an explicit description of the semisimple conjugacy classes in the centralizer ${{\operatorname}{Z}}_{{{\operatorname}{GL}}_n}(X)$, where $X$ is an element of the Lie algebra ${\mathfrak{gl}}_n$. We include this description in Appendix \[App:cent\]. The definition of a Fuchsian connection in Section \[S:fuchszshg\] sounds more natural than the definition of a regular connection. It is based upon the notion of a Fuchsian $D$-module given in [@Manin]. We show that regular connections are Fuchsian, and that for $G={{\operatorname}{GL}}_n$ and $G={{\operatorname}{SL}}_n$, Fuchsian connections are regular.
This paper is a condensed version of my diploma thesis [@OSdiplom], written in Freiburg in 2002/2003. I would like to thank Wolfgang Soergel. He taught me a lot of the mathematics I know.
Connections and Gauge Group {#S:conngauge}
===========================
Conventions {#SS:conventions}
-----------
We fix an algebraically closed field $k$ of characteristic zero and write $\otimes=\otimes_k$, ${{\operatorname}{Hom}}={{\operatorname}{Hom}}_k$, and so on. If we discuss vector spaces, Lie algebras, linear algebraic groups, or the like, we always mean the corresponding structures defined over $k$. We denote the positive integers by ${{{{\mathbb{N}}}^+}}$ and define ${{{\mathbb{N}}}}= {{{{\mathbb{N}}}^+}}\cup \{0\}$.
Let ${\cal{O}}=k[[z]]$ be the ring of formal power series over $k$, with maximal ideal ${\mathfrak{m}}= zk[[z]]$ and the induced ${\mathfrak{m}}$-topology. The quotient field of ${\cal{O}}$ is the field $K=k((z))$ of Laurent series over $k$. Let $D= k((z)) {{\,\partial_z}}$ denote the continuous $k$-linear derivations from $K$ to $K$, where we abbreviate ${{\,\partial_z}}= \tfrac{\partial}{\partial z}$. Define $\Omega={{\operatorname}{Hom}}_K(D,K) = k((z)){{\,{\operatorname}{d}\!z}}$, where ${{\,{\operatorname}{d}\!z}}$ is dual to ${{\,\partial_z}}$, [[i.e., ]{}]{}${{\,{\operatorname}{d}\!z}}({{\,\partial_z}})=1$.
Action of the Gauge Group on the Space of Connections {#SS:opergauge}
-----------------------------------------------------
We collect some results from [@DG II, §4], in particular a definition of the Lie algebra. These will enable us to define the action of the gauge group on the space of connections in an intrinsic way. The reader who is not interested in this intrinsic definition may use Equation \[Eq:actionGLn\] in Example \[Bsp:GLnOp\] as the definition for closed subgroups of ${{\operatorname}{GL}}_n$. One may check that this is well defined and does not depend on the closed embedding.
Let $G$ be a linear algebraic group. We consider $G$ as an affine algebraic group scheme. Suppose that $R$ is a $k$-algebra. We denote the algebra of dual numbers by $R[{\varepsilon}]=R\oplus R{\varepsilon}$. By applying the group functor $G$ to the unique $R$-algebra homomorphism from $R$ to $R[{\varepsilon}]$, we consider $G(R)$ as a subgroup of $G(R[{\varepsilon}])$. Define the $R$-algebra homomorphism $p:R[{\varepsilon}]{\rightarrow}R$ by $p({\varepsilon}) = 0$. As explained in [@DG], the kernel ${\mathfrak{g}}(R)$ of the group homomorphism $G(p):G(R[{\varepsilon}]) {\rightarrow}G(R)$ is endowed with a structure of Lie algebra over $R$. The Lie algebra ${\mathfrak{g}}= {\mathfrak{g}}(k)$ is canonically isomorphic to the standard Lie algebra of the linear algebraic group $G$. The obvious inclusion $k {\hookrightarrow}R$ induces a homomorphism of Lie algebras ${\mathfrak{g}}{\rightarrow}{\mathfrak{g}}(R)$. Tensoring with $R$ gives a canonical homomorphism $R\otimes{\mathfrak{g}}{\rightarrow}{\mathfrak{g}}(R)$ of Lie algebras over $R$. This is an isomorphism, as our group scheme $G$ is locally algebraic. Hence we identify $R\otimes{\mathfrak{g}}= {\mathfrak{g}}(R)$.
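To make the preceding construction concrete, here is the standard computation for $G = {{\operatorname}{GL}}_n$ (included only as an illustration). An element of ${{\operatorname}{GL}}_n(R[{\varepsilon}])$ lying in the kernel of ${{\operatorname}{GL}}_n(p)$ is of the form $1 + X{\varepsilon}$ with $X \in {{\operatorname}{Mat}}_n(R)$; such an element is indeed invertible, since ${\varepsilon}^2=0$ gives $(1+X{\varepsilon})(1-X{\varepsilon})=1$. Hence $${\mathfrak{gl}}_n(R) = \ker\big({{\operatorname}{GL}}_n(p)\big) = \{1 + X{\varepsilon}\mid X \in {{\operatorname}{Mat}}_n(R)\} \cong {{\operatorname}{Mat}}_n(R),$$ and under this identification the Lie bracket is the usual commutator of matrices.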
We define an action of the gauge group $G(K)$ on the set $${\cal{G}}= {{\operatorname}{Hom}}_K(D, {\mathfrak{g}}(K))$$ of connections on the trivial algebraic $G$-principal fiber bundle over the infinitesimal punctured disc. Every derivation $s\in D$ gives rise to a $k$-algebra homomorphism $\hat{s}: K {\rightarrow}K[{\varepsilon}]$ defined by $\hat{s}(f) = f + s(f){\varepsilon}$. We get a group homomorphism $ \hat{s}=G(\hat{s}) : G(K) {\rightarrow}G(K[{\varepsilon}])$. For $g\in G(K)$, define $\dot{g}: D {\rightarrow}G\left(K[{\varepsilon}]\right)$ by $\dot{g}(s)= \hat{s}(g)$. For all $g\in G(K)$ and $s\in D$, we deduce from $p\circ \hat{s}={{\operatorname}{id}}_K$ that $\dot{g}(s)g{^{-1}}$ is an element of the kernel of $G(p)$, [[i.e., ]{}]{}an element of ${\mathfrak{g}}(K)$.
\[P:gaugeaction\] The map $$G(K) \times {\cal{G}}{\rightarrow}{\cal{G}},\quad
(g,A) \mapsto g[A] = ({{\operatorname}{Ad}}g) \circ A + (\cdot {g{^{-1}}})\circ \dot{g},$$ defines an action of the gauge group $G(K)$ on the space of connections ${\cal{G}}$. Here, $(\cdot h)$ denotes right multiplication by a group element $h$.
If $f:G {\rightarrow}H$ is a homomorphism of linear algebraic groups, we have $f_{\ast}(g[A]) = f(g)[f_{\ast}(A)]$, where $f_{\ast}:{\cal{G}}{\rightarrow}{\cal{H}}$ is the obvious map.
We have to show that, for $g \in G(K)$ and $A\in{\cal{G}}$, the map $g[A]$ from $D$ to ${\mathfrak{g}}(K)$ is $K$-linear. In order to do this, use the definition in [@DG] of the $K$-vector space structure on ${\mathfrak{g}}(K)$. The explicit arguments can be found in [@OSdiplom]. The remaining claims are obvious.
There is another way of proving the $K$-linearity of the map $g[A]$: by choosing a closed embedding $G {\hookrightarrow}{{\operatorname}{GL}}_n$, we reduce to the case $G={{\operatorname}{GL}}_n$. Then an easy calculation based on Equation \[Eq:actionGLn\] in Example \[Bsp:GLnOp\] shows the $K$-linearity.
Let $H\subset G(K)$ be a subgroup. Two connections $A$, $B \in {\cal{G}}$ are [**$H$-equivalent**]{}, if there is an element $h \in H$ such that $h[A] = B$. They are [**gauge equivalent**]{}, if they are $G(K)$-equivalent.
Note that canonically $${\cal{G}}={{\operatorname}{Hom}}_K(D, {\mathfrak{g}}(K)) = {\mathfrak{g}}(K)\otimes_K {{\operatorname}{Hom}}_K(D, K) =
{\mathfrak{g}}(K) \otimes_K \Omega.
$$ Since $\Omega =K {{\,{\operatorname}{d}\!z}}$, each element of ${\mathfrak{g}}(K) \otimes_K \Omega$ can be written uniquely in the form $A \otimes {{\,{\operatorname}{d}\!z}}$ with $A \in {\mathfrak{g}}(K)$. We abbreviate $A {{\,{\operatorname}{d}\!z}}= A
\otimes {{\,{\operatorname}{d}\!z}}$ and write ${\cal{G}}= {\mathfrak{g}}(K){{\,{\operatorname}{d}\!z}}$ accordingly. We define ${{\,{\operatorname}{d}\!\log z}}= z{^{-1}}{{\,{\operatorname}{d}\!z}}\in \Omega$ and write similarly $A {{\,{\operatorname}{d}\!\log z}}$ and ${\cal{G}}= {\mathfrak{g}}(K){{\,{\operatorname}{d}\!\log z}}$. If $B \in {\cal{G}}= {{\operatorname}{Hom}}_K(D, {\mathfrak{g}}(K))$ is a connection, we have $B = B({{\,\partial_z}}){{\,{\operatorname}{d}\!z}}= B(z{{\,\partial_z}}) {{\,{\operatorname}{d}\!\log z}}$.
We use the abbreviations $G((z))$ (resp. ${\mathfrak{g}}((z))$, ${\mathfrak{g}}[z]$) for $G\left(k((z))\right)$ (resp. ${\mathfrak{g}}\left(k((z))\right)$, ${\mathfrak{g}}\left(k[z]\right)$), and so on.
For a connection $A {{\,{\operatorname}{d}\!\log z}}\in {\mathfrak{g}}((z)){{\,{\operatorname}{d}\!\log z}}$ and a gauge transformation $g \in G((z))$, we get $$\label{Eq:actionG}
g[A {{\,{\operatorname}{d}\!\log z}}] = \left(({{\operatorname}{Ad}}g)(A) + \widehat{z{{\,\partial_z}}}(g)g{^{-1}}\right){{\,{\operatorname}{d}\!\log z}}.$$
\[Bsp:GLnOp\] Consider the case $G={{\operatorname}{GL}}_n$. Let $A \in {{\operatorname}{Mat}}_n\left(k((z))\right)={\mathfrak{gl}}_n((z))$ and $g \in {{\operatorname}{GL}}_n((z))$ be given. As $\widehat{z{{\,\partial_z}}}(g)=g + z{{\,\partial_z}}(g){\varepsilon}$ is satisfied in ${{\operatorname}{GL}}_n(k((z))[{\varepsilon}])$, we get $$\begin{aligned}
\label{Eq:actionGLn}
g\left[A {{\,{\operatorname}{d}\!\log z}}\right]
=& \left(g A g{^{-1}}+ z{{\,\partial_z}}(g)g{^{-1}}\right) {{\,{\operatorname}{d}\!\log z}}.\end{aligned}$$
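As a quick sanity check of Equation \[Eq:actionGLn\], included here purely as an illustration, take $n=1$ and $g = z^j$ with $j \in {{{\mathbb{Z}}}}$. Then $gag{^{-1}}=a$ and $z{{\,\partial_z}}(z^j)z^{-j} = j$, so $$z^j\left[a {{\,{\operatorname}{d}\!\log z}}\right] = (a + j) {{\,{\operatorname}{d}\!\log z}}\qquad \text{for all $a \in k((z))$};$$ gauge transformations by integer powers of $z$ thus shift the coefficient by integers.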
Alignment of Regular Connections {#SS:alignregconn}
---------------------------------
Let $X\in{\mathfrak{g}}$ and $\lambda \in k$. We denote the eigenspace of ${{\operatorname}{ad}}X$ corresponding to $\lambda$ by ${{\operatorname}{Eig}}({{\operatorname}{ad}}X; \lambda)=\ker({{\operatorname}{ad}}X - \lambda)$. Let $X=
X_{{{\operatorname}{s}}}+X_{{\operatorname}{n}}$ be the Jordan decomposition in ${\mathfrak{g}}= {{\operatorname}{Lie}}G$.
(cf. [@BV §8.2, Definition, and §8.5, Definition]) The elements of ${\mathfrak{g}}[[z]]{{\,{\operatorname}{d}\!\log z}}$ are called [**connections of the first kind**]{}. A connection is [**regular**]{}, if it is gauge equivalent to a connection of the first kind. A connection of the first kind $A = \sum_{r \in {{{\mathbb{N}}}}}A_rz^r {{\,{\operatorname}{d}\!\log z}}$ is [**aligned**]{}, if $A_r \in {{\operatorname}{Eig}}({{\operatorname}{ad}}{A_{0,{{\operatorname}{s}}}};r)$ for all $r \in {{{\mathbb{N}}}}$.
\[T:regaus\] Every regular connection is gauge equivalent to an aligned connection.
For a worked-out proof, which uses the exponential map as defined in [@DG II, §6, 3], see [@OSdiplom].
Relatives and Galois Cohomology {#S:pullback}
===============================
Pull-Back of Gauge Transformations and Connections {#SS:pullbackdiffformsandconn}
--------------------------------------------------
Let $E$ and $F$ be fields, $\phi:E{\rightarrow}F$ a map, $V$ an $E$-vector space, and $W$ an $F$-vector space. By a $\phi$-[**linear**]{} map $f:V{\rightarrow}W$ we mean a group homomorphism $f:V{\rightarrow}W$ such that $f(ev) = \phi(e)f(v)$ for all $e \in E$, $v \in V$.
For $m \in {{{{\mathbb{N}}}^+}}$ we define the field extension ${{{m}^\ast}}: K {\hookrightarrow}K=M$ by ${{{m}^\ast}}(f(z)) = f(z^m)$.
Let $G$ be a linear algebraic group, let $l$, $m \in {{{{\mathbb{N}}}^+}}$, and let $\phi:{{{l}^\ast}} {\rightarrow}{{{m}^\ast}}$ be a morphism of field extensions, [[i.e., ]{}]{}$\phi:K{\rightarrow}K$ is a ring homomorphism satisfying $\phi \circ {{{l}^\ast}} = {{{m}^\ast}}$. The map $\phi=G(\phi): G(K) {\hookrightarrow}G(K)$ is called $\phi$-pull-back of gauge transformations. There is a unique $\phi$-linear map $\Phi: \Omega {\rightarrow}\Omega$ such that ${{\,{\operatorname}{d}}}\circ \phi = \Phi \circ {{\,{\operatorname}{d}}}$, where the derivation ${{\,{\operatorname}{d}}}: K {\rightarrow}\Omega$ is defined by ${{\,{\operatorname}{d}}}x (\delta) = \delta (x)$, for $x \in K$, $\delta \in D$. We denote this injective map $\Phi$ also by $\phi$ and call it $\phi$-pull-back of $1$-forms. This map $\Phi$ and the $\phi$-linear homomorphism of Lie algebras ${\mathfrak{g}}(\phi):{\mathfrak{g}}(K) {\hookrightarrow}{\mathfrak{g}}(K)$ induce an injective $\phi$-linear map $${\mathfrak{g}}(\phi) \otimes \Phi:{\cal{G}}={\mathfrak{g}}(K)\otimes_K \Omega {\hookrightarrow}{\cal{G}}={\mathfrak{g}}(K)\otimes_K \Omega.$$ We denote this map simply by $\phi$ and call it $\phi$-pull-back of connections.
In particular, for $l=1$ and $\phi= {{{m}^\ast}}$, we have ${{{m}^\ast}}({{\,{\operatorname}{d}\!\log z}})=m{{\,{\operatorname}{d}\!\log z}}$, and therefore, for $A(z) \in {\mathfrak{g}}((z))$, $$\label{Eq:pullbacki}
{{{m}^\ast}}(A(z) {{\,{\operatorname}{d}\!\log z}})= m A(z^m){{\,{\operatorname}{d}\!\log z}}.$$
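For instance, a connection with constant coefficient $X \in {\mathfrak{g}}$ pulls back to $${{{m}^\ast}}(X {{\,{\operatorname}{d}\!\log z}})= mX {{\,{\operatorname}{d}\!\log z}},$$ so that, conversely, $X {{\,{\operatorname}{d}\!\log z}}= {{{m}^\ast}}(m{^{-1}}X {{\,{\operatorname}{d}\!\log z}})$; we record this special case here because it reappears in the proof of Corollary \[c:regular-related-standard\] below.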
\[P:phiswitch\] Under the above assumptions we have $\phi(g)[\phi(A)]=\phi(g[A])$ for all $g \in G(K)$, $A \in {\cal{G}}$.
Use a closed embedding $G \subset {{\operatorname}{GL}}_n$ and Equation \[Eq:actionGLn\] in Example \[Bsp:GLnOp\]. For a more intrinsic proof, we refer to [@OSdiplom].
Connections and Galois Cohomology {#SS:formsandgalois}
---------------------------------
For $m \in {{{{\mathbb{N}}}^+}}$ let $\Gamma_m= {{\operatorname}{Gal}}({{{m}^\ast}})$ be the Galois group of the field extension ${{{m}^\ast}}:K {\hookrightarrow}K=M$. The map $\Gamma_m \times {\cal{G}}{\rightarrow}{\cal{G}}$, $(\sigma, A) \mapsto \sigma(A)$, given by the pull-back of connections, is an action of $\Gamma_m$ on ${\cal{G}}$.
\[L:invariantzshg\] Let $m \in {{{{\mathbb{N}}}^+}}$. A connection $C$ is $\Gamma_m$-invariant if and only if there is a connection $D$ such that $C = {{{m}^\ast}}(D)$.
From $\sigma \circ {{{m}^\ast}} = {{{m}^\ast}}$, we deduce that $\sigma({{{m}^\ast}}(D)) = {{{m}^\ast}}(D)$ for all $\sigma \in \Gamma_m$ and all connections $D$. Conversely, suppose that $C(z){{\,{\operatorname}{d}\!\log z}}\in
{\cal{G}}$ is $\Gamma_m$-invariant. For $\omega$ a primitive $m$-th root of unity, we define ${{{(\omega\cdot)}^\ast}} \in \Gamma_m$ by ${{{(\omega\cdot)}^\ast}}(f(z))= f(\omega z)$. From ${{{(\omega\cdot)}^\ast}}(C(z){{\,{\operatorname}{d}\!\log z}}) = C(\omega z) {{\,{\operatorname}{d}\!\log z}}$ and the $\Gamma_m$-invariance of $C(z){{\,{\operatorname}{d}\!\log z}}$, we obtain $C(\omega z) = C(z)$ and hence $C(z) = B(z^m)$ for some $B \in {\mathfrak{g}}((z))$. Equation \[Eq:pullbacki\] implies that the connection $D=m{^{-1}}B (z){{\,{\operatorname}{d}\!\log z}}$ satisfies ${{{m}^\ast}}(D) = C (z){{\,{\operatorname}{d}\!\log z}}$.
Let $C$ be a connection, $m \in {{{{\mathbb{N}}}^+}}$. An $m$-[**form**]{} of $C$ is a connection $D$ such that $ {{{m}^\ast}}(D)$ and $C$ are gauge equivalent.
Fix a $\Gamma_m$-invariant connection $A$. The action of the Galois group $\Gamma_m$ on $K=M$ defines an action on $G(M)$ by group automorphisms. By Proposition \[P:phiswitch\], this action restricts to an action on the stabilizer $G(M)_A$ of the $\Gamma_m$-invariant connection $A$. We define a map $$p=p(A):\left\{\text{$m$-forms of $A$} \right\} {\rightarrow}{{\operatorname}{H}}^1(\Gamma_m;
G(M)_A), \quad B \mapsto p^B,$$ as follows. Given an $m$-form $B$ of $A$, choose $b \in G(M)$ such that $b[{{{m}^\ast}}(B)]=A$. Then the map $p^b:\Gamma_m {\rightarrow}G(M)_A$ defined by $p_\sigma^b = b\sigma(b{^{-1}})$ is a $1$-cocycle, and its cohomology class $p^B= [p^b]$ does not depend on the choice of $b$.
\[T:H1Kform\] For $m \in {{{{\mathbb{N}}}^+}}$, consider the field extension ${{{m}^\ast}}:K{\hookrightarrow}K=M$. If $A$ is a $\Gamma_m$-invariant connection, the map $p=p(A)$ defined above descends to an injection $${{\overline{p}}}={{\overline{p(A)}}}:\left\{\text{$m$-forms of $A$} \right\}/G(K) {\hookrightarrow}{{\operatorname}{H}}^1(\Gamma_m; G(M)_A).$$ If ${{\operatorname}{H}}^1(\Gamma_m; G(M)) = \{1\}$ this map ${{\overline{p}}}$ is bijective.
Let $\Gamma = \Gamma_m$. Let $B$ and $C$ be gauge equivalent $m$-forms of $A$. Let $g \in G(K)$ with $B=g[C]$, and let $b \in G(M)$ with $b[{{{m}^\ast}}(B)] =
A$. Consequently, we have $(b{{{m}^\ast}}(g))[{{{m}^\ast}}(C)]= A$. Now $$p_{\sigma}^{b{{{m}^\ast}}(g)}=b{{{m}^\ast}}(g)\sigma({{{m}^\ast}}(g){^{-1}}b{^{-1}}) = b \sigma(b{^{-1}}) = p_\sigma^b$$ implies that $p^B=p^C$. Thus our map $p$ descends.
Let $B$ and $C$ be $m$-forms of $A$. Choose $b$, $c \in G(M)$ with $b[{{{m}^\ast}}(B)]= c[{{{m}^\ast}}(C)]= A$. If $p^b$ and $p^c$ become equal in ${{\operatorname}{H}}^1(\Gamma, G(M)_A)$, there is an element $f\in G(M)_A$ such that $b\sigma(b{^{-1}})= fc\sigma(c{^{-1}}f{^{-1}})$ for all $\sigma \in
\Gamma$. We deduce that the element $c{^{-1}}f{^{-1}}b$ is $\Gamma$-invariant and hence equal to ${{{m}^\ast}}(h)$ for some $h \in
G(K)$. But then $${{{m}^\ast}}(h[B]) = {{{m}^\ast}}(h)[{{{m}^\ast}}(B)] = c{^{-1}}f{^{-1}}b
[{{{m}^\ast}}(B)]
= {{{m}^\ast}}(C)$$ and $h[B]=C$ since ${{{m}^\ast}}$ is injective. So ${{\overline{p}}}$ is injective.
Suppose that ${{\operatorname}{H}}^1(\Gamma; G(M)) = \{1\}$. We now prove that $p$ is surjective. Let $a\in {{\operatorname}{Z}}^1(\Gamma; G(M)_A)$ be a $1$-cocycle. By assumption, $a$ considered as an element of ${{\operatorname}{Z}}^1(\Gamma; G(M))$ is cohomologous to the trivial $1$-cocycle, [[i.e., ]{}]{}there is $g \in
G(M)$ such that $a_\sigma = g \sigma(g{^{-1}})$ for all $\sigma \in \Gamma$. Let $C = g{^{-1}}[A]$. For any $\sigma \in \Gamma$, we get $$\sigma(C) =\sigma(g{^{-1}})[A] = (g{^{-1}}a_\sigma)[A] = C.$$ According to Lemma \[L:invariantzshg\], there is a connection $D$ such that ${{{m}^\ast}}(D) =C$. As $g[{{{m}^\ast}}(D)]=A$, we have $p^D=[p^g] = [a]$.
\[R:H1triv\] In the cases $G={{\operatorname}{GL}}_n$, $G={{\operatorname}{SL}}_n$ or $G={{\operatorname}{Sp}}_{2n}$ we know that ${{\operatorname}{H}}^1(\Gamma_m; G(M)) = \{1\}$, according to [@Serre X, §1 and §2].
We now explain that, more generally, for every connected linear algebraic group $G$ we have ${{\operatorname}{H}}^1(\Gamma_m; G(M)) = \{1\}$. It is well known [@Serre IV, §2, Proposition 8 and Corollary] that the union ${{\overline{K}}}=\bigcup_{m\in{{{{\mathbb{N}}}^+}}}k((z^{1/m}))$ is an algebraic closure of $K=k((z))$. Thus, every algebraic extension of $K$ ramifies, [[i.e., ]{}]{}$K$ itself is the maximal unramified extension of $K$. According to [@Lang Theorem 12] and [@Serregaloiscoho II, §3.2 Corollary], the maximal unramified extension of a field that is complete with respect to a discrete valuation and that has perfect residue class field has dimension $\leq 1$. Consequently, we have $\dim K \leq 1$.
Let $G$ be a connected group. Since ${{\overline{K}}} \otimes k[G]$ is an integral domain, $G({{\overline{K}}})$ is a connected linear algebraic group over ${{\overline{K}}}$, and $K \otimes k[G]$ defines a $K$-structure on $G({{\overline{K}}})$ in the sense of [@Springerneu]. By [@Springerneu Theorem 17.10.2] (cf. [@Steinberg] and [@Serregaloiscoho III, §2.3, Theorem 1’]), we know that $${{\operatorname}{H}}^1({{\operatorname}{Gal}}({{\overline{K}}}/K); G({{\overline{K}}})) = \{1\}.$$
For $m \in {{{{\mathbb{N}}}^+}}$, we view the field extension ${{{m}^\ast}}: K {\hookrightarrow}K=M$ as a subextension of $K \subset {{\overline{K}}}$ via the embedding $K=M {\hookrightarrow}{{\overline{K}}}$, $f(z) \mapsto f(z^{1/m})$. As the canonical map from ${{\operatorname}{H}}^1(\Gamma_m; G(M))$ to the direct limit $${{\operatorname}{H}}^1({{\operatorname}{Gal}}({{\overline{K}}}/K); G({{\overline{K}}})) =
\varinjlim_{m\in {{{{\mathbb{N}}}^+}}}\, {{\operatorname}{H}}^1(\Gamma_m; G(M))$$ is injective, we deduce that ${{\operatorname}{H}}^1(\Gamma_m; G(M)) = \{1\}$.
An easy calculation, which we leave to the reader, together with Theorem \[T:H1Kform\] and Remark \[R:H1triv\] yields
\[P:H1fieldinclusion\] Let $l$, $m \in {{{{\mathbb{N}}}^+}}$ with $d=m/l \in {{{{\mathbb{N}}}^+}}$. Consider the commutative diagram of field extensions $$\xymatrix@C10pt{
K=L \ar[rr]^-{{{{d}^\ast}}} && K=M\\
& K \ar[lu]^-{{{{l}^\ast}}} \ar[ru]_-{{{{m}^\ast}}}
}$$ If $A$ is a connection, the injection $${{{d}^\ast}}:{{\operatorname}{Z}}^1(\Gamma_l; G(L)_{{{{l}^\ast}}(A)}) {\hookrightarrow}{{\operatorname}{Z}}^1(\Gamma_m; G(M)_{{{{m}^\ast}}(A)}),$$ defined by $({{{d}^\ast}}(p): \sigma \mapsto {{{d}^\ast}}(p_{\sigma|_{L}})$, induces an injection ${{{d}^\ast}}={{{(m/l)}^\ast}}$ on cohomology. Furthermore, the diagram $$\xymatrix@C+10pt@R-10pt{
{\left\{\text{$l$-forms of ${{{l}^\ast}}(A)$} \right\}/G(K)
\ar@{^{(}->}[r]^-{{{\overline{p({{{l}^\ast}}(A))}}}}} \ar@{^{(}->}[d] & {{\operatorname}{H}}^1(\Gamma_l; G(L)_{{{{l}^\ast}}(A)})
\ar@{^{(}->}[d]^-{{{{(m/l)}^\ast}}} \\
{\left\{\text{$m$-forms of ${{{m}^\ast}}(A)$} \right\}/G(K)
\ar@{^{(}->}[r]^-{{{\overline{p({{{m}^\ast}}(A))}}}}} & {{\operatorname}{H}}^1(\Gamma_m; G(M)_{{{{m}^\ast}}(A)})
}$$ commutes. If $G$ is connected, the horizontal maps in this diagram are bijective.
Let $A$ be a connection. The punctured sets ${{\operatorname}{H}}^1(\Gamma_m; G(M)_{{{{m}^\ast}}(A)})$, for $m \in {{{{\mathbb{N}}}^+}}$, together with the maps ${{{(m/l)}^\ast}}$, for $l$, $m \in {{{{\mathbb{N}}}^+}}$ with $l$ divides $m$, form a directed system. We denote its direct limit by ${{\operatorname}{H}}^1(K; A)$.
\[d:relatives\] Two connections $A$ and $B$ are [**related**]{} if there is $m \in
{{{{\mathbb{N}}}^+}}$ such that ${{{m}^\ast}}(A)$ and ${{{m}^\ast}}(B)$ are gauge equivalent. This defines an equivalence relation on the set of connections, and we define ${{\operatorname}{Rel}}(A)$ to be the equivalence class containing $A$. The elements of ${{\operatorname}{Rel}}(A)$ are called the [**relatives**]{} of $A$.
\[P:verwandteh1\] Let $A$ be a connection. The map ${{\operatorname}{Rel}}(A)/G(K) {\hookrightarrow}{{\operatorname}{H}}^1(K; A)$ that is induced by the maps ${{\overline{p({{{m}^\ast}}(A))}}}$, for $m \in {{{{\mathbb{N}}}^+}}$, is injective. It is bijective if $G$ is a connected group.
The set of relatives of $A$ is the union, [[i.e., ]{}]{}the direct limit, of all $m$-forms of all connections ${{{m}^\ast}}(A)$ for $m \in {{{{\mathbb{N}}}^+}}$. Now use Proposition \[P:H1fieldinclusion\].
Transforming Regular Connections {#S:standard}
================================
Transforming Regular Connections to Standard Form {#SS:standard}
-------------------------------------------------
Let $G$ be a linear algebraic group. Elements of ${\mathfrak{g}}{{\,{\operatorname}{d}\!\log z}}$ are called [**connections in standard form**]{}.
\[T:standard\] Let $G$ be a connected reductive linear algebraic group. There exists a positive integer $m \in {{{{\mathbb{N}}}^+}}$, such that for every regular connection $A$ the pull-back connection ${{{m}^\ast}}(A)$ is gauge equivalent to a connection in standard form.
\[c:regular-related-standard\] If $G$ is connected reductive, each regular connection is related to a connection in standard form.
If the pull-back ${{{m}^\ast}}(A)$ of a connection $A$ is gauge equivalent to $X {{\,{\operatorname}{d}\!\log z}}$ for some $X \in {\mathfrak{g}}$, then Equation \[Eq:pullbacki\] shows that $A$ is related to $m{^{-1}}X {{\,{\operatorname}{d}\!\log z}}$.
\[Ex:standard\] 1. For $G={{\operatorname}{GL}}_n$, we can choose $m=1$, as one can see from the choice of $m$ in the proof of Theorem \[T:standard\].
2\. Let $k={{{\mathbb{C}}}}\subset K = {{{\mathbb{C}}}}((z))$ and $G={{\operatorname}{SL}}_2$. By the proof of Theorem \[T:standard\], we are able to choose $m = 2$. But one has to choose $m>1$. Given $n \in {{{{\mathbb{N}}}^+}}$, we define the regular aligned ${{\operatorname}{SL}}_2$-connection $$A_{n}=
\begin{bmatrix}
\tfrac{n}{2} & z^n \\
0 & -\tfrac{n}{2}
\end{bmatrix}
{{\,{\operatorname}{d}\!\log z}}.$$ Thanks to [@BV §8.2 Example] we know that $A_{n}$ is ${{\operatorname}{SL}}_2(K)$-equivalent to a connection in standard form if and only if $n$ is even. If $k$ is algebraically closed of characteristic $\not=2$, the same statement can be verified by direct computation.
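For $n=2$, the required gauge transformation can be written down explicitly; the following direct computation, which is not taken from [@BV], illustrates the statement. With $g = \operatorname{diag}(z{^{-1}}, z) \in {{\operatorname}{SL}}_2(K)$, Equation \[Eq:actionGLn\] gives $$g\left[A_{2}\right] =
\left(
\begin{bmatrix}
1 & 1 \\
0 & -1
\end{bmatrix}
+
\begin{bmatrix}
-1 & 0 \\
0 & 1
\end{bmatrix}
\right)
{{\,{\operatorname}{d}\!\log z}}=
\begin{bmatrix}
0 & 1 \\
0 & 0
\end{bmatrix}
{{\,{\operatorname}{d}\!\log z}},$$ which is a connection in standard form.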
We now recall some results from [@Springerneu] and [@SpringerAMS]. Let $G$ be a connected reductive linear algebraic group, $T \subset G$ a maximal torus and $(\mathcal{X}(T), \mathcal{R}, \mathcal{X}^{\vee}(T),
\mathcal{R}^{\vee})$ the associated root datum. The derived group $G'=(G,G)$ is semisimple. Let $T'$ be the subgroup of $T$ generated by the images of all $\alpha^{\vee}$, $\alpha \in
\mathcal{R}$. To the maximal torus $T'$ in $G'$ we associate the root datum $(X(T'), R, X^{\vee}(T'), R^{\vee})$. The restriction map $\mathcal{X}(T) {\twoheadrightarrow}X(T')$ induces a canonical identification $\mathcal{R}=R$.
On the Lie algebra level, the reductive Lie algebra ${\mathfrak{g}}$ is the direct sum of its center ${\mathfrak{z}}$ and the semisimple Lie algebra $[{\mathfrak{g}},{\mathfrak{g}}]={\mathfrak{g}}'$. We have ${\mathfrak{t}}= {\mathfrak{z}}\oplus {\mathfrak{t}}'$. We associate to the Cartan subalgebra ${\mathfrak{t}}'$ in ${\mathfrak{g}}'$ the roots ${{\underline{R}}}$ in ${{\mathfrak{t}}'}^\ast={{\operatorname}{Hom}}({\mathfrak{t}}',k)$. Let ${{\underline{R}}}^{\vee}\subset {\mathfrak{t}}'$ denote the coroots. There are canonical identifications $R={{\underline{R}}}$ and $R^{\vee}={{\underline{R}}}^{\vee}$.
By taking the derivative at the unit element, every root $\alpha\in R
=\mathcal{R}$ can be considered as an element of ${\mathfrak{t}}^\ast$. For $H\in{\mathfrak{t}}$ and $\alpha\in R$, we have $\langle\alpha,H\rangle=\langle\alpha,H'\rangle$ if we decompose $H$ as $H=H''+H'\in {\mathfrak{z}}\oplus {\mathfrak{t}}'={\mathfrak{t}}$.
Suppose that $H\in{\mathfrak{t}}$ is arbitrary. Let $$R_H^{{{\mathbb{Z}}}}=\{\alpha \in R \mid \langle \alpha, H \rangle
\in {{{\mathbb{Z}}}}\} \subset R$$ denote the roots integral on $H$, and let $${R^{{{{\mathbb{Z}}}}{\vee}}_{H}}=\{\alpha^{{\vee}} \mid \alpha \in R_H^{{{\mathbb{Z}}}}\} \subset R^{{\vee}}$$ denote the corresponding coroots. Define $$V_H={{{\mathbb{Q}}}}R_H^{{{\mathbb{Z}}}}\subset {{\mathfrak{t}}'}^\ast
\quad \text{and}\quad
V_H^{\vee}={{{\mathbb{Q}}}}{R^{{{{\mathbb{Z}}}}{\vee}}_{H}}\subset {{\mathfrak{t}}'}.$$ The roots integral on $H$ are a root system $R_H^{{{\mathbb{Z}}}}$ in $V_H$. The canonical map $$V_H^{\vee}{\stackrel{\sim}{\rightarrow}}{{\operatorname}{Hom}}_{{{\mathbb{Q}}}}(V_H, {{{\mathbb{Q}}}}), \quad
\lambda \mapsto \lambda|_{V_H},$$ is an isomorphism of ${{{\mathbb{Q}}}}$-vector spaces. Therefore, we identify $V_H^{\vee}= {{\operatorname}{Hom}}_{{{\mathbb{Q}}}}(V_H, {{{\mathbb{Q}}}})$.
To the dual root system ${R^{{{{\mathbb{Z}}}}{\vee}}_{H}}$ in $V_H^{\vee}$, we associate the root lattice $Q({R^{{{{\mathbb{Z}}}}{\vee}}_{H}}) = {{{\mathbb{Z}}}}{R^{{{{\mathbb{Z}}}}{\vee}}_{H}}$. It is a subgroup of finite index in the weight lattice $$P({R^{{{{\mathbb{Z}}}}{\vee}}_{H}}) = \{x\in V_H^{\vee}\mid \langle \alpha, x \rangle \in {{{\mathbb{Z}}}}\quad \text{for all $\alpha \in R_H^{{{\mathbb{Z}}}}$}\}.$$ From $Q({R^{{{{\mathbb{Z}}}}{\vee}}_{H}}) \subset X^{\vee}(T') \subset
\mathcal{X}^{\vee}(T)$, we obtain that $$\left| P({R^{{{{\mathbb{Z}}}}{\vee}}_{H}}) / Q({R^{{{{\mathbb{Z}}}}{\vee}}_{H}})\right| \in
\{ m \in {{{{\mathbb{N}}}^+}}\mid m \cdot P ({R^{{{{\mathbb{Z}}}}{\vee}}_{H}}) \subset
\mathcal{X}^{\vee}(T)\}.$$ In particular, the set on the right hand side is not empty. We denote its minimum by $m_H$. Note that the sets $\{{R^{{{{\mathbb{Z}}}}{\vee}}_{H}}\mid H \in {\mathfrak{t}}\}$ and $\{m_H \mid H \in {\mathfrak{t}}\}$ are finite.
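As a small worked example, which is included for orientation only, consider $G={{\operatorname}{SL}}_2$ with its diagonal maximal torus $T$. Here $R=\{\pm\alpha\}$, and since ${{\operatorname}{SL}}_2$ is simply connected, $\mathcal{X}^{\vee}(T)={{{\mathbb{Z}}}}\alpha^{\vee}$. If $\langle \alpha, H \rangle \in {{{\mathbb{Z}}}}$, then $R_H^{{{\mathbb{Z}}}}=R$, $Q({R^{{{{\mathbb{Z}}}}{\vee}}_{H}})={{{\mathbb{Z}}}}\alpha^{\vee}$ and $P({R^{{{{\mathbb{Z}}}}{\vee}}_{H}})={{{\mathbb{Z}}}}\tfrac{1}{2}\alpha^{\vee}$, so $m_H=2$; otherwise $R_H^{{{\mathbb{Z}}}}=\emptyset$ and $m_H=1$. Hence ${\operatorname}{lcm}\{m_H \mid H \in {\mathfrak{t}}\}=2$, in accordance with Example \[Ex:standard\].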
\[P:olmexist\] Let $m= {\operatorname}{lcm}\{m_H \mid H \in {\mathfrak{t}}\}$ be the least common multiple of all $m_H$. For every $H\in {\mathfrak{t}}$, there is a cocharacter $\psi \in
\mathcal{X}^{\vee}(T)$, such that $ \langle \alpha, \psi \rangle = m \langle \alpha, H \rangle$ for all $\alpha \in R_H^{{{\mathbb{Z}}}}$.
This proposition is a consequence of
\[L:mHtuts\] Let $H \in {\mathfrak{t}}$. There is a cocharacter $\phi \in \mathcal{X}^{\vee}(T)$, such that $\langle \alpha, \phi \rangle = m_H \langle \alpha, H \rangle$ for all $\alpha \in R_H^{{{\mathbb{Z}}}}$.
Let $B_H = \{\alpha_1,\dots,\alpha_l\}$, where $\alpha_i \neq \alpha_j$ for $i\neq j$, be a basis of the root system $R_H^{{{\mathbb{Z}}}}$ in $V_H$. Let $\{\varpi_1^{\vee},\dots, \varpi_l^{\vee}\}$ be the basis dual to $B_H$ in $V_H^{\vee}$, characterized by $\langle \alpha_i,\varpi_j^{\vee}\rangle =
\delta_{ij}$ for all $i$, $j \in \{1,\dots,l\}$. Define $\tau = \sum_{i=1}^l \langle \alpha_i, H\rangle \varpi_i^{\vee}$. As $\tau$ is an element of $P({R^{{{{\mathbb{Z}}}}{\vee}}_{H}})=\bigoplus_{i=1}^l {{{\mathbb{Z}}}}\varpi_i^{\vee}$, the definition of $m_H$ shows that $\phi = m_H \tau$ is an element of $\mathcal{X}^{\vee}(T).$ We have $\langle \alpha_j, \phi \rangle = m_H \langle \alpha_j,
H\rangle$ for all $j \in \{1,\dots,l\}$. As $B_H$ is a basis of $R_H^{{{\mathbb{Z}}}}$, the claim follows.
Let $m \in {{{{\mathbb{N}}}^+}}$ be the smallest positive integer that satisfies the following condition. $$\text{For all $H\in {\mathfrak{t}}$, there is
$\psi \in \mathcal{X}^{\vee}(T)$, such that
$\langle \alpha, \psi\rangle = m \langle\alpha, H\rangle$
for all
$\alpha \in R_H^{{{\mathbb{Z}}}}$.}$$ By Proposition \[P:olmexist\], this is well defined. Let $A$ be a regular connection. By Theorem \[T:regaus\], we may assume that $A$ is aligned. Therefore, there are elements $N\in{{{\mathbb{N}}}}$ and $A_0,\dots,A_N\in{\mathfrak{g}}$ such that $$\begin{aligned}
A =& \left( A_0+A_1z+\dots+A_Nz^N \right){{\,{\operatorname}{d}\!\log z}}\quad\text{and}\\
A_r \in& \;{{\operatorname}{Eig}}({{\operatorname}{ad}}{A_{0,{\operatorname}{s}}}; r) \quad\text{for all $r
\in\{0,\dots,N\}$.}\end{aligned}$$ Decompose $A_r={A''}_r+A{'}_r \in {\mathfrak{z}}\oplus{\mathfrak{g}}'={\mathfrak{g}}$ for $r
\in\{0,\dots,N\}$. Since all Cartan subalgebras of a semisimple Lie algebra are conjugate under the adjoint group, we find $x\in G'$ such that $({{\operatorname}{Ad}}x) ({A'}_{0,{{\operatorname}{s}}}) \in {\mathfrak{t}}'$. As $x \in G'(k) \subset G(K)$, we have $\widehat{z{{\,\partial_z}}}(x)=x$, [[i.e., ]{}]{}$\widehat{z{{\,\partial_z}}}(x)x{^{-1}}=0$ in ${\mathfrak{g}}(K)$. Using Equation \[Eq:actionG\], we see that $$x[A] = \big(({{\operatorname}{Ad}}x)(A_0)+({{\operatorname}{Ad}}x)(A_1)z+\dots+({{\operatorname}{Ad}}x)(A_N)z^N\big) {{\,{\operatorname}{d}\!\log z}}$$ is also aligned. Thus, by replacing $A$ by $x[A]$, we may assume that ${A'}_{0,{{\operatorname}{s}}}\in{\mathfrak{t}}'$. We define $H = A_{0,{{\operatorname}{s}}}$ and have $H={A''}_0+{A'}_{0,{{\operatorname}{s}}} \in {\mathfrak{z}}\oplus {\mathfrak{t}}'={\mathfrak{t}}.$ As $A$ is aligned, we know that $A_r \in {{\operatorname}{Eig}}({{\operatorname}{ad}}H; r)$ for all $r \in \{0,\dots,N\}$.
By the definition of $m$, we find $\psi \in \mathcal{X}^{\vee}(T)$ such that $$\langle \alpha, \psi \rangle = m \langle\alpha, H \rangle
\quad\text{for all $\alpha \in R_H^{{{\mathbb{Z}}}}$.}$$ Note that $\psi:G_{\text{m}} {\rightarrow}T$ is a homomorphism of group schemes from the multiplicative group to our torus. Defining $t = \psi(z{^{-1}}) \in T(K)$, we get $$\alpha(t)=z^{-\langle \alpha, \psi \rangle}
= z^{-m\langle\alpha, H\rangle}
\quad\text{for all
$\alpha \in R_H^{{{\mathbb{Z}}}}$}.$$
We claim that $t[{{{m}^\ast}}(A)]$ is an element of ${\mathfrak{g}}{{\,{\operatorname}{d}\!\log z}}$. Equations and yield that $$t[{{{m}^\ast}}(A)]
=\Big(m({{\operatorname}{Ad}}t) \left(A_0\right) +\dots
+ m({{\operatorname}{Ad}}t) \left(A_N z^{mN}\right) + \widehat{z{{\,\partial_z}}}(t)t{^{-1}}\Big)
{{\,{\operatorname}{d}\!\log z}}.$$ Lemma \[L:Adt\] implies that we have $({{\operatorname}{Ad}}t) (A_rz^{mr}) = A_r$ for all $r \in \{0, \dots, N\}$. As $\widehat{z{{\,\partial_z}}}(t)t{^{-1}}$ is an element of ${\mathfrak{t}}$, by Lemma \[L:torustrafo\], our claim follows.
\[L:Adt\] Let $H \in {\mathfrak{t}}$, $m \in {{{{\mathbb{N}}}^+}}$, and $t\in T(K)$ be such that $\alpha(t)=z^{-m\langle \alpha, H \rangle}$ for all $\alpha \in R_H^{{{\mathbb{Z}}}}$. Then, for all $r \in {{{\mathbb{N}}}}$ and all $B \in {{\operatorname}{Eig}}({{\operatorname}{ad}}H; r)$, we have $({{\operatorname}{Ad}}t) (Bz^{mr}) = B$.
We decompose $B = B_0+\sum_{\alpha\in R}B_{\alpha}\in
{\mathfrak{t}}\oplus\bigoplus_{\alpha\in R}{{\mathfrak{g}}'}_{\alpha}$. Applying ${{\operatorname}{ad}}H$ gives $ ({{\operatorname}{ad}}H) (B) = \sum_{\alpha\in R} \langle \alpha,H\rangle
B_{\alpha}$. Combined with $B \in {{\operatorname}{Eig}}({{\operatorname}{ad}}H; r)$, this implies that $$B=B_0\delta_{r0}+\sum_{\alpha\in R \atop \langle
\alpha,H \rangle=r}
B_\alpha.$$ Now $({{\operatorname}{Ad}}t) (Bz^{mr})$ is equal to $$\Big( B_0\delta_{r0}
+\sum_{\alpha\in R \atop \langle \alpha,H\rangle=r}
\alpha(t) B_\alpha \Big) z^{mr}
= B_0\delta_{r0}z^{mr}+\sum_{\alpha\in R \atop \langle
\alpha,H\rangle=r} B_\alpha =B.$$
\[L:torustrafo\] Let $m\in{{{{\mathbb{N}}}^+}}$, $H \in {\mathfrak{t}}$, and $\psi \in
\mathcal{X}^{\vee}(T)$ be such that $\langle \alpha, \psi \rangle = m \langle \alpha, H \rangle$ for all $\alpha \in R_H^{{{\mathbb{Z}}}}$. Let $t=\psi (z{^{-1}}) \in T(K)$ and $Y = \widehat{z{{\,\partial_z}}}(t)t{^{-1}}\in {\mathfrak{t}}(K)$. Then we have $Y \in {\mathfrak{t}}$, $\langle \alpha, Y \rangle \in {{{\mathbb{Z}}}}$ for all $\alpha \in R$, and $\langle \alpha, Y \rangle = -m \langle \alpha, H \rangle$ for all $\alpha \in R_H^{{{\mathbb{Z}}}}$.
Let $d$ be the dimension of the torus $T$. We choose an isomorphism from $T$ to $(G_\text{m})^d$ and identify $T$ with $(G_\text{m})^d$. For $i \in \{1,\dots, d\}$, we define the cocharacter $\delta_i:
G_\text{m} {\rightarrow}T$ by $\delta_i(x)= (1,\dots, 1,x,1,\dots, 1)$. Let the character $\epsilon_i:T {\rightarrow}G_\text{m}$ be defined by $\epsilon_i(x_1,\dots, x_d) = x_i$. We have $\mathcal{X}^{\vee}(T)= \bigoplus_{i=1}^d {{{\mathbb{Z}}}}\delta_i$ and $\mathcal{X}(T)= \bigoplus_{i=1}^d {{{\mathbb{Z}}}}\epsilon_i$.
Let $\psi =\sum f_i\delta_i$ for suitable $f_i \in {{{\mathbb{Z}}}}$. From $ t = \psi(z{^{-1}})
= (z^{-f_1},\dots, z^{-f_d})$, we deduce that $Y = z {{\,\partial_z}}(t)t{^{-1}}= (-f_1,\dots, -f_d) \in {\mathfrak{t}}$. An arbitrary $\alpha \in R$ can be written as $\alpha=\sum a_i\epsilon_i$ for suitable $a_i \in {{{\mathbb{Z}}}}$. Then we have $\langle\alpha, Y\rangle = -\sum a_if_i \in {{{\mathbb{Z}}}}$. If $\alpha \in R_H^{{{\mathbb{Z}}}}$, we obtain $m \langle \alpha, H \rangle = \langle \alpha, \psi \rangle = \sum
a_if_i = - \langle \alpha, Y \rangle$.
Transforming Regular Connections to Zero Standard Form {#SS:nullstandard}
------------------------------------------------------
Let $G$ be a linear algebraic group. We denote by ${{\mathfrak{g}}^{{\operatorname}{zero}}}$ the set of all $X \in {\mathfrak{g}}$ such that zero is the only rational eigenvalue of ${{\operatorname}{ad}}X$. The elements of ${{\mathfrak{g}}^{{\operatorname}{zero}}}{{\,{\operatorname}{d}\!\log z}}$ are called [**connections in zero standard form**]{}.
\[T:standardnull\] Let $G$ be a connected reductive linear algebraic group. For every regular connection $A$, there exists a positive integer $n \in {{{{\mathbb{N}}}^+}}$, such that the pull-back connection ${{{n}^\ast}}(A)$ is gauge equivalent to a connection in zero standard form.
\[c:regular-related-zero-standard\] If $G$ is connected reductive, each regular connection is related to a connection in zero standard form.
Similar to the proof of Corollary \[c:regular-related-standard\]. Note that ${{\mathfrak{g}}^{{\operatorname}{zero}}}$ is stable under multiplication by rational numbers.
Let $G={{\operatorname}{SL}}_2$. For $n \in
{{{{\mathbb{N}}}^+}}$, we define the regular ${{\operatorname}{SL}}_2$-connection $$B_n = \tfrac1{2n}
\begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix}
{{\,{\operatorname}{d}\!\log z}}.$$ We claim that ${{{n}^\ast}}(B_n)$ is not gauge equivalent to a connection in zero standard form. Suppose, to the contrary, that $${{{n}^\ast}}(B_n)
=
\begin{bmatrix}
\tfrac12 & 0 \\
0 & -\tfrac12
\end{bmatrix}
{{\,{\operatorname}{d}\!\log z}}$$ is ${{\operatorname}{SL}}_2(K)$-equivalent to $X {{\,{\operatorname}{d}\!\log z}}$ for some $X \in {{\mathfrak{sl}}_2^{{\operatorname}{zero}}}$. We may assume that $X$ has Jordan normal form. According to Proposition \[p:slngleichglnaequi\], there is $l \in {{{\mathbb{Z}}}}$ such that $$X =
\begin{bmatrix}
\tfrac12 + l & 0 \\
0 & -\tfrac12 -l
\end{bmatrix}.$$ Then $1+2l \in {{{\mathbb{Q}}}}-\{0\}$ is a rational eigenvalue of ${{\operatorname}{ad}}X$. This contradicts our assumption $X \in {{\mathfrak{sl}}_2^{{\operatorname}{zero}}}$.
This example shows that there is no $n \in {{{{\mathbb{N}}}^+}}$, such that for every regular ${{\operatorname}{SL}}_2$-connection $B$, the connection ${{{n}^\ast}}(B)$ is gauge equivalent to a connection in zero standard form.
We prepare for the proof of Theorem \[T:standardnull\] by showing some results for semisimple linear algebraic groups and semisimple Lie algebras.
\[P:jordancartan\] Let $G$ be a semisimple linear algebraic group, $B \subset G$ a Borel subgroup, and $T \subset B$ a maximal torus in $G$. The set of semisimple elements in ${\mathfrak{b}}$ is equal to ${{\operatorname}{Ad}}(B)({\mathfrak{t}})$.
Suppose that $H\in{\mathfrak{b}}$ is a semisimple element. Let ${\mathfrak{t}}'$ be a maximal toral subalgebra of ${\mathfrak{g}}$, containing $H$. As ${\mathfrak{g}}$ is semisimple, ${\mathfrak{t}}'$ is a Cartan subalgebra of ${\mathfrak{g}}$. All Cartan subalgebras are conjugate under the adjoint group of ${\mathfrak{g}}$. This adjoint group is the identity component of ${{\operatorname}{Ad}}(G)$. Therefore, we find $g \in G$ such that $({{\operatorname}{Ad}}g) ({\mathfrak{t}})={\mathfrak{t}}'$. Obviously, ${\mathfrak{t}}'$ is the Lie algebra of $T'=gTg{^{-1}}$. The identity component $D$ of $T'\cap B$ is a torus. We find $H \in {\mathfrak{t}}' \cap {\mathfrak{b}}= {{\operatorname}{Lie}}(T'\cap B) ={\mathfrak{d}}$. Let $S$ be a maximal torus in $B$ that contains $D$. The maximal tori $T$ and $S$ in $B$ are conjugate under $B$, so we find $b\in B$ such that $bTb{^{-1}}=S$. Now we see that $({{\operatorname}{Ad}}b)({\mathfrak{t}})={\mathfrak{s}}\supset{\mathfrak{d}}\ni H$.
\[K:jordancartan\] For every $X\in {\mathfrak{g}}$, there is a group element $g\in
G$ such that $({{\operatorname}{Ad}}g) (X) \in {\mathfrak{b}}$ and $({{\operatorname}{Ad}}g) (X_{{{\operatorname}{s}}}) \in
{\mathfrak{t}}$.
Given $X \in {\mathfrak{g}}$, let ${\mathfrak{b}}'$ be a Borel subalgebra containing $X_{{\operatorname}{s}}$ and $X_{{\operatorname}{n}}$. Since all Borel subalgebras are conjugate under the adjoint group, we find an element $h \in G$ with $({{\operatorname}{Ad}}h)({\mathfrak{b}}')={\mathfrak{b}}$. By Proposition \[P:jordancartan\], there is $b \in B$ such that $({{\operatorname}{Ad}}bh)(X_{{\operatorname}{s}}) \in {\mathfrak{t}}.$ We have $({{\operatorname}{Ad}}bh)(X) \in {\mathfrak{b}}$.
\[L:jordannilpotent\] Suppose that ${\mathfrak{t}}$ is a Cartan subalgebra of a semisimple Lie algebra ${\mathfrak{g}}$, let $R=R({\mathfrak{g}},{\mathfrak{t}})$ be the roots and choose a system of positive roots $R^+ \subset R$. Define ${\mathfrak{u}}^+ = \bigoplus_{\alpha \in R^+}{\mathfrak{g}}_\alpha$ and ${\mathfrak{b}}=
{\mathfrak{t}}\oplus {\mathfrak{u}}^+$. Let $X \in {\mathfrak{b}}$ and $X=X_{{\operatorname}{s}}+X_{{\operatorname}{n}}$ be its Jordan decomposition in ${\mathfrak{g}}$. Then $X_{{\operatorname}{s}} \in {\mathfrak{t}}$ implies that $X_{{\operatorname}{n}} \in {\mathfrak{u}}^+$.
This follows from the root space decomposition, the nilpotency of ${{\operatorname}{ad}}X_{{\operatorname}{n}}$, and ${\mathfrak{t}}^\ast= kR^+$.
Once again, we use the notation introduced in Subsection \[SS:standard\]. In particular, $G$ is connected reductive and $G'=(G,G)$ is semisimple. Let $B'\subset G'$ be a Borel subgroup containing $T'$, and let $R^+=R^+(B')\subset R$ be the corresponding positive roots. The set ${U'}^+ \subset B'$ of all unipotent elements of $B'$ is a closed nilpotent connected subgroup. For the corresponding Lie algebras, we have ${{\mathfrak{u}}'}^+ = \bigoplus_{\alpha \in R^+}{{\mathfrak{g}}'}_\alpha$ and ${\mathfrak{b}}' = {{\mathfrak{t}}'} \oplus {{\mathfrak{u}}'}^+$.
\[P:jordananpassenred\] For every $X \in {\mathfrak{g}}$, there is $g \in G$ such that $({{\operatorname}{Ad}}g)(X_{{\operatorname}{s}}) \in {\mathfrak{t}}$ and $({{\operatorname}{Ad}}g)(X_{{\operatorname}{n}}) \in {{\mathfrak{u}}'}^+$.
This follows from Corollary \[K:jordancartan\], Lemma \[L:jordannilpotent\] and the adaptation to the reductive situation.
\[L:eigenwertered\] Let $H \in {\mathfrak{t}}$ and $N\in {{\mathfrak{u}}'}^+$. The eigenvalues of ${{\operatorname}{ad}}(H+N)$ are given by $\{\langle\alpha,H\rangle\mid \alpha \in R\cup\{0\}\}$.
The endomorphisms ${{\operatorname}{ad}}H$ and ${{\operatorname}{ad}}(H + N)$ have the same characteristic polynomial.
Let $A$ be a regular connection. According to Theorem \[T:standard\], there are a positive integer $m
\in {{{{\mathbb{N}}}^+}}$ and an element $X \in {\mathfrak{g}}$, such that ${{{m}^\ast}}(A)$ is gauge equivalent to the connection $C=X {{\,{\operatorname}{d}\!\log z}}$.
By Proposition \[P:jordananpassenred\], we may assume that $X_{{\operatorname}{s}} \in {\mathfrak{t}}$ and $X_{{\operatorname}{n}} \in {{\mathfrak{u}}'}^+$. Let $$R_{X_{{\operatorname}{s}}}^{{{\mathbb{Q}}}}= \{\alpha \in R \mid \langle \alpha, X_{{\operatorname}{s}}
\rangle \in {{{\mathbb{Q}}}}\}$$ denote the roots rational on $X_{{\operatorname}{s}}$. Choose $l \in {{{{\mathbb{N}}}^+}}$ such that $\langle \alpha, lX_{{\operatorname}{s}} \rangle \in {{{\mathbb{Z}}}}$ for all $\alpha \in R_{X_{{\operatorname}{s}}}^{{{\mathbb{Q}}}}$. Define $H=lX_{{\operatorname}{s}}$ and $N=lX_{{\operatorname}{n}}$. If a root attains a rational value on $H$, this value is already integral, which means $R_{H}^{{{\mathbb{Q}}}}= R_{H}^{{{\mathbb{Z}}}}$.
By Lemma \[L:mHtuts\], there is $\phi \in
\mathcal{X}^{\vee}(T)$, such that $$\langle \alpha, \phi\rangle = m_H \langle\alpha, H\rangle
\quad\text{for all
$\alpha \in R_H^{{{\mathbb{Z}}}}$}.$$ Define $t = \phi(z{^{-1}}) \in T(K)$. Because $$\alpha(t)=z^{-\langle\alpha,\phi\rangle}
=z^{-m_H\langle\alpha,H\rangle}
\quad\text{for all
$\alpha \in R_H^{{{\mathbb{Z}}}}$},$$ and $m_HlX \in {{\operatorname}{Eig}}({{\operatorname}{ad}}H; 0)$, Lemma \[L:Adt\] implies that $({{\operatorname}{Ad}}t) (m_HlX)= m_HlX$.
Set $Y= \widehat{z {{\,\partial_z}}}(t)t{^{-1}}$. Lemma \[L:torustrafo\] gives $Y \in {\mathfrak{t}}$, $$\begin{aligned}
\langle \alpha, Y \rangle \in& \;{{{\mathbb{Z}}}}\quad\text{for all $\alpha \in R$, and}\label{Eq:Ywurzel}\\
\langle \alpha, Y \rangle =& -m_H \langle \alpha, H \rangle
\quad\text{for all $\alpha \in R_H^{{{\mathbb{Z}}}}$.}\label{Eq:Yganzewurzel}\end{aligned}$$ We apply the gauge transformation $t$ to ${{{(m_Hl)}^\ast}}(C) = m_Hl X{{\,{\operatorname}{d}\!\log z}}$ and get $$t\left[ {{{(m_Hl)}^\ast}}(C) \right]
= (\underbrace{m_HH + Y}_{\in {\mathfrak{t}}} + \underbrace{m_HN}_{\in {{\mathfrak{u}}'}^+})
{{\,{\operatorname}{d}\!\log z}}.$$ Let $\alpha \in R$ with $\langle \alpha, m_HH + Y\rangle \in {{{\mathbb{Q}}}}$. We claim that $\langle\alpha, m_HH + Y\rangle = 0$. Equation shows that $\alpha \in R_H^{{{\mathbb{Q}}}}=R_H^{{{\mathbb{Z}}}}$. Then Equation proves our claim. Thus, by Lemma \[L:eigenwertered\], $(m_HH+Y)+m_HN$ is an element of ${{\mathfrak{g}}^{{\operatorname}{zero}}}$.
\[c:standardnull-semisimple-related\] Let $G$ be connected reductive with maximal torus $T \subset G$. Then every connection of the form $X {{\,{\operatorname}{d}\!\log z}}$ with $X \in {\mathfrak{t}}$ is related to a connection in ${\mathfrak{t}}\cap {{\mathfrak{g}}^{{\operatorname}{zero}}}{{\,{\operatorname}{d}\!\log z}}$.
If $X \in {\mathfrak{t}}$, it is obvious from the proof of Theorem \[T:standardnull\] that there exists $n \in {{{{\mathbb{N}}}^+}}$ such that ${{{n}^\ast}}(X {{\,{\operatorname}{d}\!\log z}})$ is gauge equivalent to $X' {{\,{\operatorname}{d}\!\log z}}$ for some $X' \in {\mathfrak{t}}\cap {{\mathfrak{g}}^{{\operatorname}{zero}}}$. So $X {{\,{\operatorname}{d}\!\log z}}$ is related to $n{^{-1}}X' {{\,{\operatorname}{d}\!\log z}}$.
Regular GLn-Connections {#S:dmoduln}
=======================
Classification of Regular ${{\operatorname}{GL}}_n$-Connections
---------------------------------------------------------------
For $a \in{{{{\mathbb{N}}}^+}}$ and $x \in k$, we denote by ${{\operatorname}{E}}_a \in {{\operatorname}{End}}(k^a) = {{\operatorname}{Mat}}_{a}(k)$ the identity matrix and by ${{\operatorname}{J}}(x, a) \in {{\operatorname}{Mat}}_{a}(k)$ the $a \times a$-Jordan block with diagonal entries equal to $x$. For example, we have $${{\operatorname}{J}}(x,3)=
\begin{bmatrix}
x & 1 & 0 \\
0 & x & 1 \\
0 & 0 & x \\
\end{bmatrix}.$$ Let $n \in {{{\mathbb{N}}}}$. We denote the set of all $n\times n$-matrices in Jordan normal form by ${\mathcal{J}_n}$.
Let $X$, $Y \in {\mathcal{J}_n}$ be given, $$\begin{aligned}
\label{eq:XYjordan}
\begin{split}
X = & {{\operatorname}{blockdiag}\;}({{\operatorname}{J}}(x_1, a_1),\dots , {{\operatorname}{J}}(x_r, a_r)),\\
Y = & {{\operatorname}{blockdiag}\;}({{\operatorname}{J}}(y_1, b_1),\dots , {{\operatorname}{J}}(y_s, b_s)).
\end{split}\end{aligned}$$ The matrices $X$ and $Y$ [**differ integrally**]{} (resp. [**rationally**]{}) [**after block permutation**]{}, if $s=r$ and there is a permutation $\tau \in {{\operatorname}{Sym}}_r$ such that $$a_i = b_{\tau(i)} \quad \text{and}\quad
x_i \equiv y_{\tau(i)} \mod {{{\mathbb{Z}}}}\quad\text{(resp. \negthickspace\negthickspace\negthickspace $\mod {{{\mathbb{Q}}}}$)}\quad\text{
for all $i \in \{1, \dots, r\}$.}$$
Now we can classify regular ${{\operatorname}{GL}}_n$-connections up to gauge equivalence. Our proof of the following theorem is more or less the same as that in [@BV §3].
\[T:GLnklass\] The map $${\mathcal{J}_n} {\twoheadrightarrow}\{\text{regular
${{\operatorname}{GL}}_n$-connections}\}/{{\operatorname}{GL}}_n(K), \quad X \mapsto [X {{\,{\operatorname}{d}\!\log z}}],$$ is a surjection. For $X$, $Y \in {\mathcal{J}_n}$, the connections $X {{\,{\operatorname}{d}\!\log z}}$ and $Y {{\,{\operatorname}{d}\!\log z}}$ are ${{\operatorname}{GL}}_n(K)$-equivalent if and only if $X$ and $Y$ differ integrally after block permutation.
The surjectivity follows from Theorem \[T:standard\] and the example $G={{\operatorname}{GL}}_n$ in Examples \[Ex:standard\]. Let $X$, $Y \in \mathcal{J}_n$ be given in the form .
Assume that $X$ and $Y$ differ integrally after block permutation. Performing the block permutation by an element of ${{\operatorname}{GL}}_n(k)$ (or ${{\operatorname}{SL}}_n(k)$), we may assume that $Y$ has the form $$Y = {{\operatorname}{blockdiag}\;}({{\operatorname}{J}}(x_1+n_1, a_1),\dots , {{\operatorname}{J}}(x_r+n_r, a_r))$$ for suitable $n_1,\dots, n_r \in {{{\mathbb{Z}}}}$. We define $$\label{eq:gform}
g = {{\operatorname}{blockdiag}\;}\big(z^{n_1} {{\operatorname}{E}}_{a_1}, \dots, z^{n_r}
{{\operatorname}{E}}_{a_r}\big) \in {{\operatorname}{GL}}_n(K)$$ and deduce from Equation that $g\left[X
{{\,{\operatorname}{d}\!\log z}}\right] = Y {{\,{\operatorname}{d}\!\log z}}$.
Assume now that $X {{\,{\operatorname}{d}\!\log z}}$ and $Y {{\,{\operatorname}{d}\!\log z}}$ are gauge equivalent. We transform $X {{\,{\operatorname}{d}\!\log z}}$ by a gauge transformation $g$ of the form  for suitable $n_1, \dots, n_r \in {{{\mathbb{Z}}}}$ and deal with $Y {{\,{\operatorname}{d}\!\log z}}$ similarly, and may thus assume that $$\label{eq:eigenvalue-cond}
\lambda -\mu \in {{{\mathbb{Z}}}}\Rightarrow \lambda =\mu \quad \text{for all
$\lambda$, $\mu \in \{x_1,\dots, x_r, y_1, \dots, y_s\}$.}$$ Let $h \in {{\operatorname}{GL}}_n(K)$ with $h[Y {{\,{\operatorname}{d}\!\log z}}]=X {{\,{\operatorname}{d}\!\log z}}$. This implies $Xh-hY=z{{\,\partial_z}}(h)$. We write $h=\sum_{l \geq N} h_lz^l$ with $N \in {{{\mathbb{Z}}}}$ and $h_l \in {{\operatorname}{Mat}}_n(k)$ and get $Xh_l-h_lY = lh_l$ for all $l\geq N$. Since the eigenvalues of the linear map ${{\operatorname}{Mat}}_n(k)
{\rightarrow}{{\operatorname}{Mat}}_n(k)$, $A \mapsto XA-AY$, are given by $x_i-y_j$, for $1\leq i \leq r$, $1 \leq j \leq s$, the above condition implies that $h
=h_0 \in {{\operatorname}{GL}}_n(k)$. But then $h_0 Y h_0{^{-1}}=X$, so $X$ and $Y$ are conjugate and hence differ integrally (in fact, by $0$) after block permutation.
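For illustration (a small worked example, not needed in the sequel), take $n=3$ and $$X = {{\operatorname}{blockdiag}\;}\big({{\operatorname}{J}}(\tfrac13, 2), {{\operatorname}{J}}(0, 1)\big), \qquad Y = {{\operatorname}{blockdiag}\;}\big({{\operatorname}{J}}(\tfrac43, 2), {{\operatorname}{J}}(2, 1)\big).$$ These matrices differ integrally after block permutation (with $\tau={{\operatorname}{id}}$, $n_1=1$ and $n_2=2$), and the gauge transformation $g = {{\operatorname}{blockdiag}\;}\big(z\,{{\operatorname}{E}}_{2}, z^2\big) \in {{\operatorname}{GL}}_3(K)$ satisfies $g\left[X {{\,{\operatorname}{d}\!\log z}}\right] = Y {{\,{\operatorname}{d}\!\log z}}$, as in the first part of the proof.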
$D$-Modules and ${{\operatorname}{GL}}_n$-Connections
-----------------------------------------------------
We denote by $D_0= k[[z]]z{{\,\partial_z}}$ the subspace of derivations $\delta \in D$ with $\delta({\mathfrak{m}}) \subset {\mathfrak{m}}$.
A [**$D$-module**]{} is a $K$-vector space $M$ together with a map $\alpha: D \times M {\rightarrow}M$, $(\delta, m) \mapsto \delta m =
\alpha(\delta, m)$, that is $K$-linear in the first argument and additive in the second one, and that satisfies $\delta(xm) = (\delta x) m + x (\delta m)$ for all $\delta \in D$, $x
\in K$, and $m \in M$. A map $\alpha$ as above is a [**$D$-module structure**]{} on $M$. A [**morphism of $D$-modules**]{} $f:(M, \alpha) {\rightarrow}(N, \beta)$ is a $K$-linear map $f: M
{\rightarrow}N$ satisfying $f(\alpha(\delta, m)) = \beta (\delta, f(m))$ for all $\delta \in D$, $m \in M$.
Let $M$ be a $D$-module. For $m \in M$, let $E(m)$ be the smallest ${\cal{O}}$-submodule of $M$ that contains $m$ and is $D_0$-stable. A $D$-module $M$ is [**Fuchsian**]{}, if $E(m)$ is finitely generated as an ${\cal{O}}$-module for all $m \in M$.
Let $a \in {{{{\mathbb{N}}}^+}}$ and $x \in k$. There is a unique $D$-module structure $\alpha$ on $M = K^a=\bigoplus_{i=1}^{a}K e_i$ such that $$\alpha(z{{\,\partial_z}}, e_i)
= x e_i + e_{i-1} \quad
\text{for all $i \in \{1,\dots a\}$},$$ where $e_0=0$. We denote this $D$-module by $M^{x,a}$. It is easy to see that $M^{x,a}$ is Fuchsian and indecomposable.
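Indeed, the Fuchsian property can be checked directly from the definitions (a short verification added for orientation). One has $E(e_a)=\bigoplus_{i=1}^{a} {\cal{O}} e_i$: the relation $\alpha(z{{\,\partial_z}}, e_i) = x e_i + e_{i-1}$ shows that this ${\cal{O}}$-module is $D_0$-stable, and, conversely, every $D_0$-stable ${\cal{O}}$-submodule containing $e_a$ contains all $e_i$. An arbitrary $m \in M^{x,a}$ lies in $z^{-N}\bigoplus_{i=1}^{a} {\cal{O}} e_i$ for some $N \in {{{\mathbb{N}}}}$, which is again a $D_0$-stable finitely generated ${\cal{O}}$-module; hence $E(m)$ is a submodule of a finitely generated module over the noetherian ring ${\cal{O}}$ and therefore finitely generated.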
Fix $n \in {{{\mathbb{N}}}}$. The group ${{\operatorname}{GL}}_n(K)$ acts on the set of all $D$-module structures on ${K^n}$ as follows: Given $g \in {{\operatorname}{GL}}_n(K)$ and a $D$-module structure $\alpha$, we define $g.\alpha$ by $(\delta,
w) \mapsto g(\alpha(\delta, g{^{-1}}(w)))$. Two $D$-module structures $\alpha$ and $\beta$ are in the same orbit if and only if $({K^n}, \alpha)$ and $({K^n}, \beta)$ are isomorphic.
Let $A$ be a ${{\operatorname}{GL}}_n$-connection. If we evaluate $A \in {\cal{GL}_n}= {{\operatorname}{Hom}}_{K}(D, {\mathfrak{gl}}_n(K))$ at $\delta \in D$, we get an element $A(\delta) \in {{\operatorname}{End}}_K({K^n}) = {{\operatorname}{Hom}}({k^n},{K^n})$. Then $$\alpha_A (\delta, x\otimes v) = \delta(x)\otimes v - x
A(\delta)(v),$$ where $x \in K$, $v \in k^n$, defines a $D$-module structure $\alpha_A$ on ${K^n}=K\otimes {k^n}$.
We omit the easy proof of
\[P:dmodulGLnzshg\] The map $$\{\text{${{\operatorname}{GL}}_n$-connections}\} {\stackrel{\sim}{\rightarrow}}\{\text{$D$-module structures on ${K^n}$}\}, \quad
A \mapsto \alpha_A,$$ is a ${{\operatorname}{GL}}_n(K)$-equivariant bijection and induces a bijection between regular connections and Fuchsian $D$-module structures. The regular ${{\operatorname}{GL}}_n$-connection $${{\operatorname}{blockdiag}\;}(-{{\operatorname}{J}}(x_1, a_1),\dots, -{{\operatorname}{J}}(x_r, a_r)) {{\,{\operatorname}{d}\!\log z}}$$ corresponds to the Fuchsian $D$-module $M^{x_1,a_1}\oplus\dots \oplus M^{x_r,a_r}$.
We use Proposition \[P:dmodulGLnzshg\] in order to translate Theorem \[T:GLnklass\] in the language of $D$-modules and obtain
\[T:fuchsklass\] Every finite-dimensional Fuchsian $D$-module is a direct sum of indecomposable Fuchsian $D$-modules. The summands are unique up to permutation and isomorphism.
The map $(x,a) \mapsto M^{x,a}$ induces a bijection from $k/{{{\mathbb{Z}}}}\times {{{{\mathbb{N}}}^+}}$ to the set of isomorphism classes of finite-dimensional indecomposable Fuchsian $D$-modules.
Regular SLn-Connections {#S:klassifikation}
=======================
Let $\mathcal{J}({\mathfrak{sl}}_n)={\mathfrak{sl}}_n \cap \mathcal{J}_n$ and $\mathcal{J}({{\mathfrak{sl}}_n^{{\operatorname}{zero}}})={{\mathfrak{sl}}_n^{{\operatorname}{zero}}}\cap \mathcal{J}_n$. This section is organized as follows. First we describe the set ${{\operatorname}{Rel}}(X{{\,{\operatorname}{d}\!\log z}})/{{\operatorname}{SL}}_n(K)$ of relatives up to gauge equivalence, for $X \in
\mathcal{J}({{\mathfrak{sl}}_n^{{\operatorname}{zero}}})$ (Bijection ). We deduce from this that the set of regular ${{\operatorname}{SL}}_n$-connections is $\bigcup{{\operatorname}{Rel}}(X {{\,{\operatorname}{d}\!\log z}})$, where $X$ ranges over $\mathcal{J}({{\mathfrak{sl}}_n^{{\operatorname}{zero}}})$ (Proposition \[p:alleregulaer\], Corollary \[K:alleverwandtenregulaer\]). Then we establish the classification up to relationship (Theorem \[t:sln-class-rel\]) and up to gauge equivalence (Theorem \[S:slnklassifik\]). We conclude with a slightly different view on this classification (Remark \[rem:nice-class\]) and explain the example ${{\operatorname}{SL}}_2$ (Example \[ex:SL2\]).
Let $X \in {{\mathfrak{sl}}_n^{{\operatorname}{zero}}}$. Recall from Proposition \[P:verwandteh1\] the bijection $$\label{eq:rel-H1}
{{\operatorname}{Rel}}(X{{\,{\operatorname}{d}\!\log z}})/{{\operatorname}{SL}}_n(K) {\stackrel{\sim}{\rightarrow}}{{\operatorname}{H}}^1(K; X{{\,{\operatorname}{d}\!\log z}}).$$ Since ${{\operatorname}{H}}^1(K; X{{\,{\operatorname}{d}\!\log z}})$ is the direct limit of the ${{\operatorname}{H}}^1(\Gamma_l; {{\operatorname}{SL}}_n(L)_{{{{l}^\ast}}(X {{\,{\operatorname}{d}\!\log z}})})$, for $l \in
{{{{\mathbb{N}}}^+}}$ and ${{{l}^\ast}}:K {\hookrightarrow}K=L$, we are interested in the stabilizer of ${{{l}^\ast}}(X {{\,{\operatorname}{d}\!\log z}})=lX{{\,{\operatorname}{d}\!\log z}}$ in ${{\operatorname}{SL}}_n(L)$. As ${{\mathfrak{sl}}_n^{{\operatorname}{zero}}}$ is stable under multiplication by rational numbers, Proposition \[P:stabisln\] shows that $${{\operatorname}{SL}}_n(L)_{lX {{\,{\operatorname}{d}\!\log z}}}={{\operatorname}{Z}}_{{{\operatorname}{SL}}_n}(lX)={{\operatorname}{Z}}_{{{\operatorname}{SL}}_n}(X) \subset {{\operatorname}{SL}}_n(k).$$ In particular, the action of the Galois group $\Gamma_l$ on ${{\operatorname}{SL}}_n(L)_{lX {{\,{\operatorname}{d}\!\log z}}}$ is trivial.
\[P:stabisln\] Let $X \in {{\mathfrak{sl}}_n^{{\operatorname}{zero}}}$. Then ${{\operatorname}{SL}}_n(K)_{X {{\,{\operatorname}{d}\!\log z}}}={{\operatorname}{Z}}_{{{\operatorname}{SL}}_n}(X)$, where ${{\operatorname}{Z}}_{{{\operatorname}{SL}}_n}(X)$ is the centralizer of $X$ in ${{{\operatorname}{SL}}_n}={{{\operatorname}{SL}}_n}(k)$ under the adjoint action.
The inclusion ${{\operatorname}{Z}}_{{{\operatorname}{SL}}_n}(X) \subset {{{\operatorname}{SL}}_n}(K)_{X {{\,{\operatorname}{d}\!\log z}}}$ is obvious. Let $g \in {{{\operatorname}{SL}}_n}(K)_{X {{\,{\operatorname}{d}\!\log z}}}$. As ${{\operatorname}{SL}}_n(K) \subset {{\operatorname}{GL}}_n(K) \subset {{\operatorname}{Mat}}_{n}(K)$, we find $N \in {{{\mathbb{Z}}}}$ and $g_i \in {{\operatorname}{Mat}}_{n}(k)$ such that $g = \sum_{i\ge N} g_i z^i$. From $g[X {{\,{\operatorname}{d}\!\log z}}]= X {{\,{\operatorname}{d}\!\log z}}$ and Equation  we get $ Xg-gX = z{{\,\partial_z}}(g)$, or, equivalently, $({{\operatorname}{ad}}_{{\mathfrak{gl}}_n} X) (g_i) = ig_i$ for all $i \ge N$. But then $X \in {{\mathfrak{sl}}_n^{{\operatorname}{zero}}}\subset {{\mathfrak{gl}}_n^{{\operatorname}{zero}}}$ implies that $g_i= 0$ for $i \neq 0$, in other words, $g = g_0 \in {{\operatorname}{Z}}_{{{\operatorname}{SL}}_n}(X)$.
Recall that ${{\overline{K}}}=\bigcup_{m\in{{{{\mathbb{N}}}^+}}}k((z^{1/m}))$ is an algebraic closure of $K=k((z))$. For $l \in {{{{\mathbb{N}}}^+}}$, we view the field extension ${{{l}^\ast}}: K {\hookrightarrow}K=L$ as a subextension of $K \subset {{\overline{K}}}$ via the embedding $K=L {\hookrightarrow}{{\overline{K}}}$, $f(z) \mapsto f(z^{1/l})$. The Galois group ${{\operatorname}{Gal}}({{\overline{K}}}/K)$ is isomorphic to the procyclic group $\widehat{{{{\mathbb{Z}}}}}$. For the rest of this section, we fix a procyclic generator $\gamma$ of ${{\operatorname}{Gal}}({{\overline{K}}}/K)$. For $l \in {{{{\mathbb{N}}}^+}}$, the Galois group $\Gamma_l={{\operatorname}{Gal}}({{{l}^\ast}})$ is generated by $\gamma|_{L}$.
Let $X \in {{\mathfrak{sl}}_n^{{\operatorname}{zero}}}$ and $l \in {{{{\mathbb{N}}}^+}}$. Since $\Gamma_l$ acts trivially on ${{\operatorname}{SL}}_n(L)_{lX {{\,{\operatorname}{d}\!\log z}}}$, the map $$\label{z1isotel}
{{\operatorname}{Z}}^1(\Gamma_l; {{{\operatorname}{SL}}_n}(L)_{lX {{\,{\operatorname}{d}\!\log z}}}) \xrightarrow[\gamma]{\sim}
\{\text{$l$-torsion-elements in ${{\operatorname}{Z}}_{{{\operatorname}{SL}}_n}(X)$}\}, \quad
p \mapsto p_{\gamma|_L},$$ is bijective. It depends on $\gamma$. We indicate this here and in the following by putting a $\gamma$ at the corresponding arrow. This Bijection induces a bijection $${{\operatorname}{H}}^1(\Gamma_l; {{{\operatorname}{SL}}_n}(L)_{lX {{\,{\operatorname}{d}\!\log z}}}) \xrightarrow[\gamma]{\sim}
\{\text{conj.\ classes of $l$-torsion-elts.\ in ${{\operatorname}{Z}}_{{{\operatorname}{SL}}_n}(X)$}\}.$$ We use Proposition \[P:H1fieldinclusion\] and pass to the direct limit. We obtain for $X \in {{\mathfrak{sl}}_n^{{\operatorname}{zero}}}$ a bijection $$\label{eq:h1kktorsion}
{{{\operatorname}{H}}^1(K; X {{\,{\operatorname}{d}\!\log z}})} \xrightarrow[\gamma]{\sim}
\{\text{conj.\ classes of torsion elements in ${{\operatorname}{Z}}_{{{\operatorname}{SL}}_n}(X)$}\}.$$
Suppose that $X \in \mathcal{J}({\mathfrak{sl}}_n)$. Let $T_X \subset {{\operatorname}{Z}}_{{{\operatorname}{GL}}_n}(X)$ be the diagonal maximal torus and $W_X$ be the Weyl group as in Theorem \[T:zentralisator\]. The Weyl group $W_X$ stabilizes $D_X = T_X \cap {{\operatorname}{SL}}_n$.
\[P:TEDX\] For $X \in \mathcal{J}({\mathfrak{sl}}_n)$, the inclusion $D_X {\hookrightarrow}Z_{{{\operatorname}{SL}}_n}(X)$ induces a bijection $$\{\text{torsion elements in $D_X$}\}/W_X {\stackrel{\sim}{\rightarrow}}\{\text{conj.\ classes of torsion elts.\ in ${{\operatorname}{Z}}_{{{\operatorname}{SL}}_n}(X)$}\}.$$
This follows from Theorem \[T:zentralisator\] and the fact that, in characteristic zero, every element of finite order is semisimple.
Assume now that $X \in \mathcal{J}({{\mathfrak{sl}}_n^{{\operatorname}{zero}}})$. We combine Bijections , and Proposition \[P:TEDX\] in order to get the bijection $$\label{eq:kombverwandt}
{\{\text{torsion elements in $D_X$}\}/W_X} \xrightarrow[\gamma]{\sim}
{{\operatorname}{Rel}}(X {{\,{\operatorname}{d}\!\log z}})/{{\operatorname}{SL}}_n(K).$$ We denote this map by $\delta \mapsto \left[(X {{\,{\operatorname}{d}\!\log z}})^\delta\right]$ and describe it explicitly in the proof of
\[p:alleregulaer\] All relatives of $X {{\,{\operatorname}{d}\!\log z}}$ are regular, for $X \in \mathcal{J}({{\mathfrak{sl}}_n^{{\operatorname}{zero}}})$.
Write $X \in \mathcal{J}({{\mathfrak{sl}}_n^{{\operatorname}{zero}}})$ as $$X={{\operatorname}{blockdiag}\;}({{\operatorname}{J}}(x_1, a_1),\dots, {{\operatorname}{J}}(x_r, a_r))$$ for suitable $x_i \in k$ and $a_i \in {{{{\mathbb{N}}}^+}}$. Given a torsion element $d \in D_X$ we explain now how to construct an ${{\operatorname}{SL}}_n$-connection in the orbit $\left[(X {{\,{\operatorname}{d}\!\log z}})^{W_X d}\right]$. As this connection will be regular, this proves the proposition. For $l \in {{{{\mathbb{N}}}^+}}$, we view ${{{l}^\ast}}$ as the field extension $K=k((z)) {\hookrightarrow}k((z^{1/l}))$. Let $\omega_l$ be the primitive $l$-th root of unity such that $\gamma(z^{1/l})= \omega_l z^{1/l}$.
Let $d \in D_X$ be a torsion element. We find $l \in {{{{\mathbb{N}}}^+}}$ and $j_1,\dots, j_r \in {{{\mathbb{N}}}}$ such that $$d= {{\operatorname}{blockdiag}\;}\big(\omega_l^{j_1}{{\operatorname}{E}}_{a_1}, \dots, \omega_l^{j_r}{{\operatorname}{E}}_{a_r}\big).$$ Let $\omega=\omega_l$, $\zeta=z^{1/l}$, and $\Sigma=\sum_{s=1}^r
{j_sa_s} \in {{{\mathbb{N}}}}$. As $1=\det (d)=\omega^\Sigma$, we see that $\Sigma$ is divisible by $l$. Define $$g={{\operatorname}{blockdiag}\;}\big(\zeta^{j_1}{{\operatorname}{E}}_{a_1}, \dots,
\zeta^{j_{r-1}}{{\operatorname}{E}}_{a_{r-1}},
\zeta^{j_r}, \dots, \zeta^{j_r}, \zeta^{j_r-\Sigma}\big) \in
{{\operatorname}{SL}}_n((\zeta)).$$ Now $\omega^\Sigma=1$ implies that $d=g{^{-1}}\gamma(g)$. Therefore, for any $m \in {{{{\mathbb{N}}}^+}}$, we get $$d^m = d \gamma(d) \gamma^2(d)\cdots \gamma^{m-1}(d)
= g{^{-1}}\gamma^m(g).$$ This means that $d$, regarded as an element of ${{\operatorname}{Z}}^1(\Gamma_l;
{{\operatorname}{SL}}_n((\zeta))_{lX {{\,{\operatorname}{d}\!\log\zeta}}})$ via Bijection , is cohomologous to the trivial 1-cocycle in ${{\operatorname}{Z}}^1(\Gamma_l; {{\operatorname}{SL}}_n((\zeta)))$. It follows from the proof of Theorem \[T:H1Kform\] that the connection $g\left[lX{{\,{\operatorname}{d}\!\log\zeta}}\right]$ is invariant under the Galois group $\Gamma_l$. We define $$(X {{\,{\operatorname}{d}\!\log z}})^d = {{\operatorname}{blockdiag}\;}\Big({{\operatorname}{J}}(x_1+\tfrac{j_1}{l},
a_1), \dots, {{\operatorname}{J}}(x_{r-1}+\tfrac{j_{r-1}}{l},
a_{r-1}), C\Big) {{\,{\operatorname}{d}\!\log z}},$$ where $C \in {{\operatorname}{Mat}}_{a_r}(K)$ is given by $$C=
\begin{bmatrix}
x_r+\tfrac{j_r}{l} & 1 & 0 & \dots & & 0 \\
0 & x_r+\tfrac{j_r}{l} & \ddots & \ddots & & \\
\vdots & \ddots & \ddots & 1 & 0 & \vdots \\
& & 0 & x_r+\tfrac{j_r}{l} & 1 & 0 \\
& & & 0 & x_r+\tfrac{j_r}{l} & z^{\frac{\Sigma}{l}} \\
0 & & & \dots & 0 &
x_r+\tfrac{j_r-\Sigma}{l}
\end{bmatrix}$$ If $a_r=1$, this is to be interpreted as $C=x_r+\tfrac{j_r-\Sigma}{l}$. As $\tfrac{\Sigma}{l} \in {{{\mathbb{N}}}}$ is a nonnegative integer, $(X
{{\,{\operatorname}{d}\!\log z}})^d$ is regular. It is easy to verify that ${{{l}^\ast}}\left((X {{\,{\operatorname}{d}\!\log z}})^d\right)=g\left[lX {{\,{\operatorname}{d}\!\log\zeta}}\right]$. We conclude that $(X {{\,{\operatorname}{d}\!\log z}})^d$ is a connection in $\left[(X {{\,{\operatorname}{d}\!\log z}})^{W_X d}\right]$.
\[c:diagonal-rel-equi\] If $X\in {\mathfrak{sl}}_n$ is a diagonal matrix, each connection related to $X {{\,{\operatorname}{d}\!\log z}}$ is gauge equivalent to a connection of the form $Y
{{\,{\operatorname}{d}\!\log z}}$ with $Y \in {\mathfrak{sl}}_n$ a diagonal matrix.
By Corollary \[c:standardnull-semisimple-related\] we may assume that $X \in {{\mathfrak{sl}}_n^{{\operatorname}{zero}}}$ is diagonal. Then our claim follows from the description of Bijection in the above proof.
\[K:alleverwandtenregulaer\] Every regular ${{\operatorname}{SL}}_n$-connection is related to $X {{\,{\operatorname}{d}\!\log z}}$, for some $X
\in \mathcal{J}({{\mathfrak{sl}}_n^{{\operatorname}{zero}}})$. All relatives of a regular ${{\operatorname}{SL}}_n$-connection are regular. An ${{\operatorname}{SL}}_n$-connection $A$ is regular if and only if there is $l \in {{{{\mathbb{N}}}^+}}$ such that the connection ${{{l}^\ast}}(A)$ is regular.
\[q:grelatregulaer\] Are all relatives of a regular $G$-connection regular, if $G$ is an arbitrary linear algebraic group?
The first claim follows from Corollary \[c:regular-related-zero-standard\], and then the second claim is a consequence of Proposition \[p:alleregulaer\]. If ${{{l}^\ast}}(A)$ is regular, we have just seen that it is related to $X {{\,{\operatorname}{d}\!\log z}}$ with $X \in {\mathfrak{sl}}_n$. So $A$ is related to the regular connection $l{^{-1}}X{{\,{\operatorname}{d}\!\log z}}$ and therefore regular.
\[r:corollarygln\] Very similar arguments prove that Corollary \[K:alleverwandtenregulaer\] with ${{\operatorname}{SL}}_n$ replaced by ${{\operatorname}{GL}}_n$ is true.
\[p:slngleichglnaequi\] For $X$, $Y \in \mathcal{J}({\mathfrak{sl}}_n)$, the following are equivalent:
1. \[SLequi\] $X {{\,{\operatorname}{d}\!\log z}}$ and $Y {{\,{\operatorname}{d}\!\log z}}$ are ${{\operatorname}{SL}}_n(K)$-equivalent.
2. \[GLequi\] $X {{\,{\operatorname}{d}\!\log z}}$ and $Y {{\,{\operatorname}{d}\!\log z}}$ are ${{\operatorname}{GL}}_n(K)$-equivalent.
3. \[differZ\] $X$ and $Y$ differ integrally after block permutation.
The implication \[SLequi\] $\Rightarrow$ \[GLequi\] is obvious, and \[GLequi\] $\Rightarrow$ \[differZ\] follows from Theorem \[T:GLnklass\]. In the proof of Theorem \[T:GLnklass\] we proved the implication \[differZ\] $\Rightarrow$ \[GLequi\]. But we actually showed \[differZ\] $\Rightarrow$ \[SLequi\]: If the traces of $X$ and $Y$ vanish, the element $g$ defined in Equation  is an element of ${{\operatorname}{SL}}_n(K)$.
\[t:sln-class-rel\] The map $X \mapsto X{{\,{\operatorname}{d}\!\log z}}$ induces a surjection $$\mathcal{J}({\mathfrak{sl}}_n) {\twoheadrightarrow}\left\{\text{regular
${{\operatorname}{SL}}_n$-connections}\right\}/\text{relationship}.$$ For $X$, $Y \in \mathcal{J}({\mathfrak{sl}}_n)$, the connections $X {{\,{\operatorname}{d}\!\log z}}$ and $Y {{\,{\operatorname}{d}\!\log z}}$ are related if and only if $X$ and $Y$ differ rationally after block permutation.
Our map is surjective by Corollary \[K:alleverwandtenregulaer\]. The second statement follows from Proposition \[p:slngleichglnaequi\] and the fact that the matrices $l{{\operatorname}{J}}(x,a)$ and ${{\operatorname}{J}}(lx,a)$ are ${{\operatorname}{SL}}_a(k)$-conjugate, for $l \in {{{{\mathbb{N}}}^+}}$.
Let $X {{\,{\operatorname}{d}\!\log z}}$ and $Y {{\,{\operatorname}{d}\!\log z}}$ be two related ${{\operatorname}{SL}}_n$-connections with $X$, $Y \in \mathcal{J}({{\mathfrak{sl}}_n^{{\operatorname}{zero}}})$. From Bijection , we conclude that there is a unique map ${{\operatorname}{can}}_{YX}$ such that the diagram $$\label{eq:canXY-diagram}
\xymatrix@R-10pt{
{\left\{\text{torsion elements in $D_X$} \right\}/W_X
\ar@{->}[r]^-{\sim}_-{\gamma}} \ar[d]_-{{{\operatorname}{can}}_{YX}}^-\sim &
{{\operatorname}{Rel}}(X {{\,{\operatorname}{d}\!\log z}})/{{\operatorname}{SL}}_n(K) \ar@{=}[d]\\
{\left\{\text{torsion elements in $D_Y$} \right\}/W_Y
\ar@{->}[r]^-\sim_-{\gamma}} &
{{\operatorname}{Rel}}(Y {{\,{\operatorname}{d}\!\log z}})/{{\operatorname}{SL}}_n(K)
}$$ commutes. This map ${{\operatorname}{can}}_{YX}$ can be described explicitly, see [@OSdiplom].
\[S:slnklassifik\] Let $\gamma$ be a procyclic generator of ${{\operatorname}{Gal}}({{\overline{K}}}/K)$ and $$\coprod_{X \in \mathcal{J}({{\mathfrak{sl}}_n^{{\operatorname}{zero}}})}
\{\text{torsion-elts.\ in $D_X$}\}/W_X \underset{\gamma}{\twoheadrightarrow}
\left\{\text{regular ${{\operatorname}{SL}}_n$-connections}\right\}/{{\operatorname}{SL}}_n(K)$$ be the map induced by the maps , $\delta \mapsto \left[(X {{\,{\operatorname}{d}\!\log z}})^\delta\right]$. Then this map is surjective, and we have $\left[(X {{\,{\operatorname}{d}\!\log z}})^\delta\right] =\left[(Y {{\,{\operatorname}{d}\!\log z}})^\epsilon\right]$ if and only if $X$ and $Y$ differ rationally after block permutation and ${{\operatorname}{can}}_{YX}(\delta)= \epsilon$.
\[rem:slnklassifik\] The sets $\{\text{torsion elements in $D_X$}\}/W_X$ are easy to describe. “Differing rationally after block permutation” is an equivalence relation on $\mathcal{J}({{\mathfrak{sl}}_n^{{\operatorname}{zero}}})$. By choosing a complete system of representatives for this relation, and by using the explicit description of the map $\delta \mapsto \left[(X {{\,{\operatorname}{d}\!\log z}})^\delta\right]$ given in the proof of Proposition \[p:alleregulaer\], Theorem \[S:slnklassifik\] enables us to give a list of all regular ${{\operatorname}{SL}}_n$-connections up to ${{\operatorname}{SL}}_n(K)$-equivalence.
Proposition \[p:alleregulaer\], Bijection and Corollary \[K:alleverwandtenregulaer\] show that our map is well defined and surjective. The remaining claim follows from Theorem \[t:sln-class-rel\] and Diagram .
\[rem:nice-class\] We now explain a nice partial classification of regular connections up to gauge equivalence. We associate to the standard diagonal Cartan subalgebra ${\mathfrak{t}}\subset {\mathfrak{sl}}_n$ the coroots $R^{\vee}$ and the Weyl group $W$. This Weyl group acts naturally on ${\mathfrak{t}}$ and stabilizes the subgroups ${{{\mathbb{Z}}}}R^{\vee}$ and ${{{\mathbb{Q}}}}R^{\vee}$. The groups $W^{{{\mathbb{Z}}}}= {{{\mathbb{Z}}}}R^{\vee}{\rtimes}W$ and $W^{{{\mathbb{Q}}}}= {{{\mathbb{Q}}}}R^{\vee}{\rtimes}W$ act on ${\mathfrak{t}}$ by $(a,w).H=a+wH$. Two elements of ${\mathfrak{t}}$ are in the same $W^{{{\mathbb{Z}}}}$-orbit (resp. $W^{{{\mathbb{Q}}}}$-orbit) if and only if they differ integrally (resp. rationally) after block permutation. Let $\mathcal{N}_n= \mathcal{J}({\mathfrak{sl}}_n)-{\mathfrak{t}}$. Consider the commutative diagram $$\label{eq:LieTmodW}
\xymatrix@R-10pt{
{\mathfrak{t}}/W^{{{\mathbb{Z}}}}\ar@{^{(}->}[r] \ar@{-{>>}}[d] &
\left\{\text{regular
${{\operatorname}{SL}}_n$-connections}\right\}/{{\operatorname}{SL}}_n(K) \ar@{-{>>}}[d]^\pi \\
{\mathfrak{t}}/W^{{{\mathbb{Q}}}}\ar@{^{(}->}[r] &
\left\{\text{regular
${{\operatorname}{SL}}_n$-connections}\right\}/\text{relationship} &
\mathcal{N}_n \ar[l]_-{\nu} \\
with obvious vertical maps. The horizontal maps are induced by $X \mapsto X {{\,{\operatorname}{d}\!\log z}}$. The horizontal maps on the left are well-defined and injective by Proposition \[p:slngleichglnaequi\] and Theorem \[t:sln-class-rel\]. They yield a partial classification. In the lower row of Diagram the images of the horizontal maps are complementary by Theorem \[t:sln-class-rel\]. It follows from Corollary \[c:diagonal-rel-equi\] that the image of the upper horizontal map is the complement of $\pi{^{-1}}(\nu(\mathcal{N}_n))$.
\[ex:SL2\] We restrict now to the case $n=2$. Then $\mathcal{N}_2 =
\{{{\operatorname}{J}}(0,2)\}$, and $\nu(\mathcal{N}_2)$ consists of one element, namely ${{\operatorname}{Rel}}({{\operatorname}{J}}(0,2){{\,{\operatorname}{d}\!\log z}})/\text{relationship}$. Its inverse image under $\pi$ is ${{\operatorname}{Rel}}({{\operatorname}{J}}(0,2){{\,{\operatorname}{d}\!\log z}})/{{\operatorname}{SL}}_n(K)$. From the description of the map in the proof of Proposition \[p:alleregulaer\] we see that this set has precisely two elements, namely the orbits of the two connections (cf. example $G={{\operatorname}{SL}}_2$ in Examples \[Ex:standard\]) $$\begin{bmatrix}
0 & 1\\
& 0
\end{bmatrix}
{{\,{\operatorname}{d}\!\log z}}\quad
\text{and}
\quad
\begin{bmatrix}
\tfrac 12 & z\\
& -\tfrac 12
\end{bmatrix}
{{\,{\operatorname}{d}\!\log z}}.$$
Fuchsian Connections {#S:fuchszshg}
====================
Let $G$ be a linear algebraic group and $\rho: G {\rightarrow}{{\operatorname}{GL}}(V)$ be a (rational) representation of $G$ in a finite-dimensional vector space $V$. If $A$ is a $G$-connection, $\rho(A)=\rho_{\ast}(A) \in {\cal{GL}}(V)$ is a ${{\operatorname}{GL}}(V)$-connection and corresponds to a $D$-module structure $\alpha_{\rho(A)}$ on $K\otimes V$ (cf. Proposition \[P:dmodulGLnzshg\]).
A connection $A$ is [**Fuchsian**]{} if for every finite-dimensional representation $\rho: G {\rightarrow}{{\operatorname}{GL}}(V)$ the $D$-module $(K\otimes V, \alpha_{\rho(A)})$ is Fuchsian.
\[B:fuchsaequidef\] According to Proposition \[P:dmodulGLnzshg\], a connection $A$ is Fuchsian if and only if for every finite-dimensional representation $\rho: G {\rightarrow}{{\operatorname}{GL}}(V)$ the connection $\rho(A)$ is regular.
Let $G$ be a linear algebraic group. Every regular connection is Fuchsian. For $G={{\operatorname}{GL}}_n$ or ${{\operatorname}{SL}}_n$, every Fuchsian connection is regular.
\[q:Fuchsgleichreg\] Do the notions of Fuchsian and regular connection coincide for every linear algebraic group?
Using Remarks \[r:corollarygln\] and \[B:fuchsaequidef\], it is easy to see that all relatives of a Fuchsian connection are Fuchsian. This shows that the answer “yes” to Question \[q:Fuchsgleichreg\] implies the same answer to Question \[q:grelatregulaer\].
The first claim and the second one for $G={{\operatorname}{GL}}_n$ are obvious. Let $A$ be a Fuchsian ${{\operatorname}{SL}}_n$-connection. Let $\rho$ be the standard representation of ${{\operatorname}{SL}}_n$. There are $g \in {{\operatorname}{GL}}_n(K)$ and $X(z) \in
{\mathfrak{gl}}_n[[z]]$ such that $g[\rho(A)]= X(z) {{\,{\operatorname}{d}\!\log z}}$. Consider the field extension ${{{n}^\ast}}: K {\hookrightarrow}K=N$. Let $f \in N$ be an $n$-th root of ${{{n}^\ast}}(\det(g{^{-1}}))$. Then $h = f {{{n}^\ast}}(g)$ is an element of ${{\operatorname}{SL}}_n(N)$, and we have $$\rho(h[{{{n}^\ast}}(A)]) = f{\mathbf{1}}\left[ {{{n}^\ast}}(X(z){{\,{\operatorname}{d}\!\log z}}) \right]
= \left( nX(z^n) + z{{\,\partial_z}}(f) f{^{-1}}{\mathbf{1}}\right) {{\,{\operatorname}{d}\!\log z}}.$$ It is obvious that $z{{\,\partial_z}}(f) f{^{-1}}\in k[[z]]$. But then $h[{{{n}^\ast}}(A)]$ is regular, and Corollary \[K:alleverwandtenregulaer\] shows that $A$ is regular.
Semisimple Conjugacy Classes {#App:cent}
============================
Let $n \in {{{\mathbb{N}}}}$ and $X \in {\mathfrak{gl}}_n = {{\operatorname}{End}}({k^n})$. For $\lambda \in k$ and $i \in {{{\mathbb{N}}}}$, define $$E_\lambda^i = {{\operatorname}{ker}}(X - \lambda) \cap {{\operatorname}{im}}(X-\lambda)^i.$$ Every $g \in {{\operatorname}{Z}}_{{{\operatorname}{GL}}_n}(X)$ stabilizes all $E_\lambda^i$. Thus $g$ induces maps $g|_{E_\lambda^i} \in {{\operatorname}{GL}}(E_\lambda^i)$ and ${{\overline{g|_{E_\lambda^i}}}} \in {{\operatorname}{GL}}(E_\lambda^i / E_\lambda^{i+1})$. The following theorem gives an explicit description of the semisimple conjugacy classes in ${{\operatorname}{Z}}_{{{\operatorname}{GL}}_n}(X)$.
\[T:zentralisator\] Let $n \in {{{\mathbb{N}}}}$ and $X \in {\mathfrak{gl}}_n = {{\operatorname}{Mat}}_n(k)$ be in Jordan normal form. Let $T_n \subset {{\operatorname}{GL}}_n$ be the standard diagonal torus. Then we have the following:
1. \[En:torus\] $T = T_X = T_n \cap {{\operatorname}{Z}}_{{{\operatorname}{GL}}_n}(X)$ is a maximal torus in ${{\operatorname}{Z}}_{{{\operatorname}{GL}}_n}(X)$.
2. \[En:torusiso\] The homomorphism $$\pi: {{\operatorname}{Z}}_{{{\operatorname}{GL}}_n}(X) {\twoheadrightarrow}\prod_{\lambda \in k \atop i \in {{{\mathbb{N}}}}}{{\operatorname}{GL}}(E_\lambda^i /
E_\lambda^{i+1}), \quad
g \mapsto \left({{\overline{g|_{E_\lambda^i}}}}\right)_{\lambda, i},$$ is surjective. It induces, by restriction, an isomorphism $ \pi|_{T}: T {\stackrel{\sim}{\rightarrow}}\pi(T)$, and $\pi(T)$ is a maximal torus in $ \prod {{\operatorname}{GL}}(E_\lambda^i / E_\lambda^{i+1})$.
3. \[En:heKk\] The Weyl group $W_X$ associated to the torus $\pi(T)$ in $ \prod {{\operatorname}{GL}}(E_\lambda^i / E_\lambda^{i+1})$ acts via $\pi|_T$ on $T$, and the inclusion $T_X {\hookrightarrow}{{\operatorname}{Z}}_{{{\operatorname}{GL}}_n}(X)$ induces a bijection $$T_X/W_X {\stackrel{\sim}{\rightarrow}}\{\text{semisimple conj.\ classes in
${{\operatorname}{Z}}_{{{\operatorname}{GL}}_n}(X)$}\}.$$
The proof is left to the reader. It can be found in [@OSdiplom].
[Man65]{}
Donald G. Babbitt and Veeravalli S. Varadarajan, *Formal reduction theory of meromorphic differential equations: a group theoretic view*, Pacific J. Math. **109** (1983), no. 1, 1–80. [MR ]{}[86b:34010]{}
Michel Demazure and Pierre Gabriel, *Groupes alg[é]{}briques. [T]{}ome [I]{}: [G]{}[é]{}om[é]{}trie alg[é]{}brique, g[é]{}n[é]{}ralit[é]{}s, groupes commutatifs*, Masson & Cie, [É]{}diteur, Paris, 1970. [MR ]{}[46 \#1800]{}
Serge Lang, *On quasi algebraic closure*, Ann. of Math. (2) **55** (1952), 373–390. [MR ]{}[13,726d]{}
Juri I. Manin, *Moduli fuchsiani*, Ann. Scuola Norm. Sup. Pisa (3) **19** (1965), 113–126. [MR ]{}[31 \#4815]{}
Olaf M. Schn[ü]{}rer, *[R]{}egul[ä]{}re [Z]{}usammenh[ä]{}nge in trivialen algebraischen [$G$]{}-[H]{}auptfaserb[ü]{}ndeln [ü]{}ber der infinitesimalen punktierten [K]{}reisscheibe*, Diplomarbeit, Freiburg (2003), [<http://www.freidok.uni-freiburg.de/volltexte/1477/>]{}.
Jean-Pierre Serre, *Corps locaux*, Hermann, Paris, 1968, Deuxi[è]{}me [é]{}dition, Publications de l’Universit[é]{} de Nancago, No. VIII. [MR ]{}[50 \#7096]{}
[to3em]{}, *Galois cohomology*, Springer-Verlag, Berlin, 1997, Translated from the French by Patrick Ion and revised by the author. [MR ]{}[98g:12007]{}
Tonny A. Springer, *Reductive groups*, Automorphic forms, representations and $L$-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 1, Proc. Sympos. Pure Math., XXXIII, Amer. Math. Soc., Providence, R.I., 1979, pp. 3–27. [MR ]{}[80h:20062]{}
[to3em]{}, *Linear algebraic groups*, second ed., Progress in Mathematics, vol. 9, Birkh[ä]{}user Boston Inc., Boston, MA, 1998. [MR ]{}[99h:20075]{}
Robert Steinberg, *Regular elements of semisimple algebraic groups*, Inst. Hautes [É]{}tudes Sci. Publ. Math. (1965), no. 25, 49–80. [MR ]{}[31 \#4788]{}
---
abstract: 'Oliinychenko, Bugaev and Sorin \[arXiv:1204.0103 \[hep-ph\]\] considered the role of conservation laws in discussing possible weaknesses of thermal models which are utilized in describing the hadron multiplicities measured in central nucleus-nucleus collisions. They claimed to analyse the criteria for chemical freeze-out and concluded that none of them is robust. Based on this, they suggested a new chemical freeze-out criterion. They assigned to the entropy per hadron the [*ad hoc*]{} value $7.18$ and supposed it to remain unchanged over the whole range of baryo-chemical potentials. Due to unawareness of recent literature, they overlooked that the constant entropy per hadron has already been discussed in Ref. \[Fizika B18 (2009) 141-150, Europhys.Lett. 75 (2006) 420\]. Furthermore, it has been shown that the constant entropy per hadron is equivalent to constant entropy normalized to cubic temperature, an earlier criterion for the chemical freeze-out introduced in Ref. \[Europhys.Lett. 75 (2006) 420, Nucl.Phys.A764 (2006) 387-392\]. In this comment, we list the ignored literature and compare the entropy-to-number-density ratio with the two criteria of averaged energy per averaged particle and constant entropy per cubic temperature. All these criteria are confronted with the experimental results. The physics of constant entropy per number density is elaborated. It is concluded that this ratio cannot remain constant, especially at the large chemical potentials related to AGS and SIS energies.'
author:
- 'A. Tawfik'
- 'E. Gamal'
- 'H. Magdy'
title: 'Comment on “Investigation of Hadron Multiplicity and Hadron Yield Ratios in Heavy-Ion Collisions”'
---
Introduction
============
In the preprint [@sorin], Oliinychenko, Bugaev and Sorin have considered the role of conservation laws, the values of the hard-core radii and the effects of the Lorentz contraction of the hadron eigenvolumes in discussing the weaknesses of thermal models which are utilized in describing the hadron multiplicities measured in central nucleus-nucleus collisions. Apart from the unawareness of earlier literature, the authors concluded that none of the criteria for the chemical freeze-out is robust. In doing this, they entirely disregarded the experimental results for the baryo-chemical potentials $\mu_b$ and their corresponding temperatures $T$. A systematic analysis of the four criteria describing the chemical freeze-out is introduced in [@Tawfik:2004ss; @Tawfik:2005qn; @cleymans05]. Furthermore, a comparison between these four criteria is elaborated in [@Tawfik:2004ss; @Tawfik:2005qn; @cleymans05].
Starting from phenomenological observations at SIS energy, it was found that the averaged energy per averaged particle is $\langle\epsilon\rangle/\langle n\rangle\approx 1~$GeV [@jeanRedlich], where Boltzmann approximations are applied in calculating $\langle\epsilon\rangle/\langle n\rangle$. This constant ratio is assumed to describe the whole $T-\mu_b$ diagram. For completeness, we mention that the authors assumed that the pions and rho-mesons become dominant at high $T$ and small $\mu_b$. The second criterion assumes that the total baryon number density is $\langle n_b\rangle+\langle n_{\bar{b}}\rangle\approx 0.12~$fm$^{-3}$ [@nb01]. In the framework of percolation theory, the authors of Ref. [@percl] have suggested a third criterion. As shown in Fig. 2 of [@Tawfik:2005qn], the last two criteria seem to give almost identical results. All of them stem from phenomenological observations. A fourth criterion based on lattice QCD simulations was introduced in Ref. [@Tawfik:2004ss; @Tawfik:2005qn]. Accordingly, the entropy normalized to cubic temperature is assumed to remain constant over the whole range of baryo-chemical potentials, which are related to the nucleus-nucleus center-of-mass energies $\sqrt{s_{NN}}$ [@cleymans05]. An extensive comparison between constant $\langle\epsilon\rangle/\langle n\rangle$ and constant $s/T^3$ is given in [@Tawfik:2004ss; @Tawfik:2005qn].
The thermodynamic quantities characterizing the chemical freeze-out are deduced in the framework of the hadron resonance gas [@Tawfik:2004ss; @Tawfik:2005qn]. Explicit expressions for $s/n$ at vanishing and finite temperature are introduced in [@Tawfik:2004ss; @Tawfik:2005gk]. The motivation for suggesting a constant normalized entropy is the comparison to the lattice QCD simulations with two and three flavors. We simply found that $s/T^3=5$ for two flavors and $s/T^3=7$ for three flavors. Furthermore, we confront the hadron resonance gas results with the experimental estimates of the freeze-out parameters, $T$ and $\mu_b$.
The hadron resonance gas model {#sec:hrg}
==============================
The hadron resonances treated as a free gas [@Karsch:2003vd; @Karsch:2003zq; @Redlich:2004gp; @Tawfik:2004sw; @Taw3] are conjectured to add to the thermodynamic pressure in the hadronic phase (below $T_c$). This statement is valid for free as well as strong interactions between the resonances themselves. It has been shown that the thermodynamics of a strongly interacting system can also be approximated by an ideal gas composed of hadron resonances with masses $\le 2~$GeV [@Tawfik:2004sw; @Vunog]. Such a mass cut-off is implemented to avoid the Hagedorn singularity [@hgdrn1]. Therefore, the confined phase of QCD, the hadronic phase, is modelled as a non-interacting gas of resonances. The grand canonical partition function reads $$Z(T, V) = {\rm Tr}\left[e^{-H/T}\right],$$ where $H$ is the Hamiltonian of the system and $T$ is the temperature. The Hamiltonian is given by the sum of the kinetic energies of relativistic Fermi and Bose particles. The main motivation for using this Hamiltonian is that it contains all relevant degrees of freedom of confined and strongly interacting matter. Obviously, it can be characterized by a varied, but complete, set of microscopic states, and therefore the physical properties of the quantum system become accessible in the approximation of non-correlated [*free*]{} hadron resonances. Each of them is conjectured to add to the overall thermodynamic pressure of the [*strongly*]{} interacting hadronic matter. It includes implicitly the interactions that result in resonance formation. In addition, it has been shown that this model gives a quite satisfactory description of particle production in heavy-ion collisions [@Karsch:2003vd; @Karsch:2003zq; @Redlich:2004gp; @Tawfik:2004sw; @Taw3]. With the above assumptions about the dynamics, the partition function can be calculated exactly and expressed as a sum over [*single-particle partition*]{} functions $Z_i^1$ of all hadrons and their resonances, $$\label{eq:lnz1} \ln Z(T, \mu_i ,V) = \sum_i \ln Z^1_i(T,V) = \sum_i \pm\frac{V\, g_i}{2\pi^2} \int_0^{\infty} k^2\, dk\, \ln\left[1 \pm e^{-\left(\epsilon_i(k)-\mu_i\right)/T}\right],$$ where $\epsilon_i(k)=(k^2+ m_i^2)^{1/2}$ is the $i$-th particle dispersion relation, $g_i$ is the spin-isospin degeneracy factor and $\pm$ stands for fermions and bosons, respectively.
The switching between hadron and quark chemistry is given by the relations between the [*hadronic*]{} chemical potentials and the quark constituents; $\mu_i =3\, n_b\, \mu_q + n_s\, \mu_S$, where $n_b$ ($n_s$) is the baryon (strange) quantum number. The chemical potential assigned to the light quarks is $\mu_q=(\mu_u+\mu_d)/2$ and the one assigned to the strange quark reads $\mu_S=\mu_q-\mu_s$. The strangeness chemical potential $\mu_S$ is calculated as a function of $T$ and $\mu_i $ under the assumption that the overall strange quantum number has to remain conserved in heavy-ion collisions [@Tawfik:2004sw].
The HRG calculations assume quantum statistics and overall strangeness conservation. In this regard, the strangeness chemical potential $\mu_S$ is calculated at each value of $T$ and $\mu_b$, assuring that the number of strange particles is the same as that of the anti-strange particles. It is worthwhile to mention that no statistical fitting has been applied in determining the thermodynamic quantities, including the entropy and number density derived from Eq. (\[eq:lnz1\]).
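A minimal numerical sketch of such an HRG evaluation is given below (added for illustration only; it is not the code used for the figures). It employs a small toy list of hadrons instead of the full resonance spectrum with masses up to $2~$GeV, it ignores the strangeness chemical potential $\mu_S$, and all names (`TOY_HADRONS`, `pressure`, `entropy_density`) as well as the numerical settings are our own choices.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 0.19733  # hbar*c in GeV fm

# Toy hadron list: (mass [GeV], degeneracy g_i, baryon number, +1 fermion / -1 boson).
TOY_HADRONS = [
    (0.138, 3,  0, -1),   # pi
    (0.494, 4,  0, -1),   # K, Kbar
    (0.775, 9,  0, -1),   # rho
    (0.939, 4,  1, +1),   # N (antinucleons added below via -mu_b)
    (1.232, 16, 1, +1),   # Delta
]

def ln_z_over_v(T, mu, m, g, eta):
    """Single-species ln Z^1 / V, Eq. (lnz1): eta = +1 for fermions, -1 for bosons. Result in 1/fm^3."""
    def integrand(k):  # k in GeV
        e = np.sqrt(k * k + m * m)
        return k * k * np.log(1.0 + eta * np.exp(-(e - mu) / T))
    val, _ = quad(integrand, 0.0, 10.0 * (T + m))
    return eta * g / (2.0 * np.pi ** 2) * val / HBARC ** 3

def pressure(T, mu_b):
    """Pressure of the toy resonance gas in GeV/fm^3 at temperature T and baryo-chemical potential mu_b."""
    p = 0.0
    for m, g, b, eta in TOY_HADRONS:
        signs = (+1, -1) if b != 0 else (0,)   # baryons and antibaryons
        for sgn in signs:
            p += T * ln_z_over_v(T, sgn * mu_b, m, g, eta)
    return p

def entropy_density(T, mu_b, h=1.0e-4):
    """s = (dp/dT) at fixed mu_b, in 1/fm^3, via a symmetric numerical derivative."""
    return (pressure(T + h, mu_b) - pressure(T - h, mu_b)) / (2.0 * h)

if __name__ == "__main__":
    T, mu_b = 0.160, 0.0
    s = entropy_density(T, mu_b)
    print("s/T^3 =", s * HBARC ** 3 / T ** 3)  # dimensionless
```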
Physics of constant entropy per number density {#sec:phys}
==============================================
From the entropy and equilibrium, the Gibbs condition simply leads to $$\label{eq:thrml}
\frac{s}{n} = \frac{1}{T}\left(\frac{p}{n}+\frac{\epsilon}{n}-\mu_b\right),$$ where the rhs is positive as long as $\mu_b<p/n+\epsilon/n$ and the thermodynamic quantities $p$, $\epsilon$ and $n$ are supposed to be calculated on the $T-\mu_b$ diagram of the chemical freeze-out. Fig. \[fig:tmu1\] shows the experimental estimation for the freeze-out parameters $T$ and $\mu_b$. It is obvious that increasing $\mu_b$ leads to decreasing $T$ and therefore all values of the thermodynamic quantities decrease as well. Cleymans [*et al.*]{} [@jeanRedlich] suggested an empirical $T-\mu_b$ relation, $$\label{eq:tmu}
T = a - b\, \mu_b^2 - c\, \mu_b^4,$$ where $a$, $b$ and $c$ are fitting parameters. In light of this discussion, the value given to $s/n$ can’t remain unchanged with increasing $\mu_b$. The left panel of Fig. \[fig:sn1\] presents the values of the three criteria $\langle\epsilon\rangle/\langle n\rangle$, $\langle n_b\rangle+\langle n_{\bar{b}}\rangle$ and $s/n$ calculated in HRG, section \[sec:hrg\], at $s/T^3=7$. It is obvious that all four criteria seem to remain constant, especially at high $\sqrt{s_{NN}}$. At low energies, the value assigned to $s/n$ [@sorin] is larger than the actual one, i.e. the value resulting from the other conditions. The reason is illustrated in the right panel, where the thermal evolution of $s/n$ is presented at different values of $\mu_b$. It is obvious that $s/n$ never reaches $7.18$ at $\mu_b>500~$MeV. It is essential to bear in mind that the value $7.18$ has almost no physical interpretation. It is just an [*ad hoc*]{} value. This makes it inapplicable at Alternating Gradient Synchrotron (AGS) and Schwer-Ionen-Synchrotron (SIS) energies. Almost the same kind of restriction would be valid for $\epsilon/n$. According to Eq. (\[eq:thrml\]), $$\frac{\epsilon}{n} = T\, \frac{s}{n} + \mu_b - \frac{p}{n}.$$
The physics of constant $s/T^3$ has been discussed in Refs. [@Tawfik:2004ss; @Tawfik:2005qn]. It combines the three thermodynamic quantities $p/T^4$, $\epsilon/T^4$ and $n/T^3$, $$\frac{s}{T^3} = \frac{\epsilon}{T^4} + \frac{p}{T^4} - \frac{\mu_b}{T}\, \frac{n}{T^3}.$$ At chemical equilibrium, the particle production at freeze-out is conjectured to fully fulfil the laws of thermodynamics, as in Eq. (\[eq:thrml\]). The hadronic abundances observed in the final state of heavy-ion collisions are settled when $s/T^3$ drops to $7$, i.e., the degrees of freedom drop to $7 \pi^2/4$. While the change in the particle number with the collision energy is given by $\mu_b$, the energy that produces no additional work, i.e. the stage of vanishing free energy, gives the entropy at chemical equilibrium. At the chemical freeze-out, the equilibrium entropy represents the amount of energy that can’t be used to produce additional work. In this context, the entropy is defined as the degree of sharing and spreading the energy inside the system that is in chemical equilibrium [@Tawfik:2005qn].
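As a simple orientation (an illustrative limiting case, not part of the original discussion): for a single species of massless particles obeying Boltzmann statistics at vanishing chemical potential one has $\epsilon = 3\, n\, T$ and $p = n\, T$, so that Eq. (\[eq:thrml\]) gives $$\frac{s}{n} = \frac{1}{T}\left(\frac{p}{n}+\frac{\epsilon}{n}\right) = 4.$$ The larger value $s/n\simeq 7$ obtained at small $\mu_b$ thus reflects the massive resonance spectrum, while the explicit $-\mu_b$ term in Eq. (\[eq:thrml\]) drives the ratio down when the baryo-chemical potential becomes large.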
Constant Entropy per Number in Lattice QCD Simulations and Heavy-ion Collisions
===============================================================================
Once again, related literature on lattice QCD simulations is not cited in [@sorin]. For example, Borsanyi [*et al.*]{} [@fodor12] studied the trajectories of constant $s/n$, where $s=S/V$ and $n=N/V$, on the phase diagram and thermodynamic observables along these isentropic lines. This was not the only work devoted to such lines of constant physics [@old84]. In the Stefan-Boltzmann limit, the ratio $s/n$ is assumed to remain unchanged with increasing $\mu_b$ (Appendix A of [@fodor12]). In doing this, lowest order in perturbation theory is assumed, where the strangeness chemical potential $\mu_S$ likely vanishes. For $\mu_b/T$, a limiting behavior for the isentropic lines on the phase diagram is obtained. The ratio $s/n$ has been measured at various $\sqrt{s_{NN}}$ [@expp]. It is concluded that in the limit of low temperatures, increasing the chemical potential results in an overestimation of the ratio $s/n$, even beyond the applicability region of the Taylor-expansion method, which is applied in lattice QCD simulations at finite chemical potential. Two remarks are now in order. First, the values of $s/n$ seem to depend on the chemical potential $\mu_b$ or $\sqrt{s_{NN}}$. This is confirmed in different experiments [@expp] and in lattice gauge theory [@fodor12]. Second, the ratio $s/n$ as calculated in the lattice QCD simulations [@fodor12] is suggested to characterize the QCD phase diagram [@Tawfik:2004sw]. The QCD phase diagram likely differs from the freeze-out diagram [@Tawfik:2004ss; @Tawfik:2005qn], especially at large chemical potential $\mu_b$ or small $\sqrt{s_{NN}}$, so that at fixed $\mu_b$ the critical temperature differs from the freeze-out temperature.
Results and Conclusions {#sec:resl}
=======================
In Fig. \[fig:tmu1\], the freeze-out parameters, $T$ and $\mu_b$, measured in various heavy-ion collision experiments are compared with the three criteria $\langle\epsilon\rangle/\langle n\rangle=1~$GeV (dashed line), $s/n=7.18$ (dotted line) and $s/T^3=7$ (solid line). The experimental data are taken from [@cleymans05] and the references therein. The quality of each criterion is apparent. All conditions are almost equivalent at very high energy. At very low energies, the ability of the condition $\langle\epsilon\rangle/\langle n\rangle=1~$GeV to reproduce the data is not as good as that of $s/T^3=7$. As discussed in section \[sec:phys\], $s/n=7.18$ seems to fail to reproduce the freeze-out parameters at $\mu_b>500~$MeV. To illustrate the reason for this observation, the thermal evolution of $s/n$ at very high chemical potential calculated in HRG is presented in the right panel of Fig. \[fig:sn1\]. Details on HRG are elaborated in section \[sec:hrg\]. It is obvious that the value assigned to $s/n$ would not be reached at $\mu_b>500~$MeV. In other words, it is obvious that the behavior of $s/n$ is non-monotonic.
The left panel of Fig. \[fig:sn1\] presents the energy scan for the three criteria $\langle\epsilon\rangle/\langle n\rangle$, $\langle n_b\rangle+\langle n_{\bar{b}}\rangle$ and $s/n$ calculated in HRG, section \[sec:hrg\], at $s/T^3=7$. The calculations in HRG are performed as follows. Starting with a certain $\mu_b$, the temperature is increased very slowly. At this value of $\mu_b$ and at each increase in $T$, the strangeness chemical potential $\mu_S$ is determined so as to assure strangeness conservation. Having the three values of $\mu_b$, $T$ and $\mu_S$, all thermodynamic quantities are calculated. When the ratio $s/T^3$ reaches the value $7$, the three quantities $\langle\epsilon\rangle/\langle n\rangle$, $\langle n_b\rangle+\langle n_{\bar{b}}\rangle$ and $s/n$ are registered. This procedure is repeated over all values of $\mu_b$. We find that $s/T^3=7$ assures $s/n=7.18$ and $\langle\epsilon\rangle/\langle n\rangle=1~$GeV at small $\mu_b$ (large $\sqrt{s_{NN}}$). At large $\mu_b$ (small $\sqrt{s_{NN}}$), the value of $s/n$ becomes smaller, so that the applicability of $s/n=7.18$ is limited to $\mu_b<500~$MeV. In conclusion, the robustness of $s/n=7.18$ is very much limited in comparison with the four criteria: percolation [@percl], baryon number [@nb01], energy per particle [@jeanRedlich] and normalized entropy [@Tawfik:2004ss; @Tawfik:2005qn]. That constant $s/T^3$ is accompanied by constant $s/n$ was already pointed out in Ref. [@Tawfik:2004ss]. That the authors of [@sorin] argue that $s/n=7.18$ is novel likely reflects an unawareness of the related literature. The four criteria [@Tawfik:2004ss; @Tawfik:2005qn; @jeanRedlich; @nb01; @percl] are based on physical observations, either phenomenological or theoretical (or both). The authors of [@sorin] suggest an [*ad hoc*]{} value for the ratio $s/n$. It is inapplicable at AGS and SIS energies. Its relation to $s/T^3$ is apparently overlooked. The same holds for the comparison with the other criteria (some of which are ignored completely) and for the disregard of the experimental measurements. Nevertheless, the [*ad hoc*]{} value assigned to $s/n$ is presented as more robust than any other criterion. Unawareness of the literature and underestimating or even ignoring previous work are a violation of the rules of scientific research.
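For orientation, the scan described above can be summarized by the following schematic Python loop. This is only an illustrative sketch: the helper names `hrg_thermodynamics` and `solve_strangeness_neutrality` are hypothetical stand-ins for the actual HRG routines (the resonance-gas partition-function sums and the strangeness-neutrality solver), and the temperature units (GeV) and step sizes are assumptions made for the example.

```python
def scan_freezeout(mu_b_values, hrg_thermodynamics, solve_strangeness_neutrality,
                   target=7.0, T_start=0.020, T_max=0.250, dT=0.0005):
    """For each mu_b (GeV), raise T slowly until s/T^3 reaches `target`,
    then record eps/n, n_b + n_bbar and s/n at that point."""
    results = []
    for mu_b in mu_b_values:
        T = T_start
        while T < T_max:
            # strangeness chemical potential fixed by overall strangeness neutrality
            mu_S = solve_strangeness_neutrality(T, mu_b)
            # dict with the HRG pressure, energy, particle, entropy and (anti)baryon densities
            th = hrg_thermodynamics(T, mu_b, mu_S)
            if th["s"] / T**3 >= target:
                results.append({"mu_b": mu_b, "T": T,
                                "eps_over_n": th["eps"] / th["n"],
                                "nb_plus_nbbar": th["n_b"] + th["n_bbar"],
                                "s_over_n": th["s"] / th["n"]})
                break
            T += dT
    return results
```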
[99]{} D.R. Oliinychenko, K.A. Bugaev and A.S. Sorin, arXiv:1204.0103 \[hep-ph\] (2012).
A. Tawfik, Europhys. Lett. [**75**]{}, 420 (2006).
A. Tawfik, Nucl. Phys. A [**764**]{}, 387-392 (2006).
J. Cleymans, H. Oeschler, K. Redlich, S. Wheaton, Phys. Rev. C [**73**]{}, 034905 (2006).
J. Cleymans and K. Redlich, Phys. Rev. C [**60**]{}, 054908 (1999).
P. Braun-Munzinger and J. Stachel, J. Phys. G [**28**]{}, 1971 (2002).
V. Magas and H. Satz, Eur. Phys. J. C [**32**]{}, 115 (2003).
A. Tawfik, Fizika B [**18**]{}, 141-150 (2009).
F. Karsch, K. Redlich, A. Tawfik, Eur. Phys. J. C [**29**]{}, 549 (2003).
F. Karsch, K. Redlich, A. Tawfik, Phys. Lett. B [**571**]{}, 67 (2003).
K. Redlich, F. Karsch, A. Tawfik, J. Phys. G [**30**]{}, S1271 (2004).
A. Tawfik, Phys. Rev. D [**71**]{}, 054502 (2005).
A. Tawfik, J. Phys. G [**31**]{}, S1105 (2005).
R. Venugopalan, M. Prakash, Nucl. Phys. A [**546**]{}, 718 (1992).
R. Hagedorn, Nuovo Cim. Suppl. [**6**]{}, 311-354 (1968); Nuovo Cim. A [**56**]{}, 1027-1057 (1968).
Sz. Borsanyi, G. Endrodi, Z. Fodor, S. D. Katz, S. Krieg, C. Ratti and K. K. Szabo, JHEP [**1208**]{}, 053 (2012).
R.W.P. Ardill, M. Creutz, K.J.M. Moriarty and S. Samuel, “Lines of constant physics for SU(3) lattice gauge theory in four-dimensions”, BNL-35081 (1984); S. Ejiri, F. Karsch, E. Laermann, and C. Schmidt, Phys. Rev. D [**73**]{}, 054506 (2006); C. Bernard, [*et al.*]{}, Phys. Rev. D [**77**]{}, 014503 (2008); C. DeTar, [*et al.*]{}, Phys. Rev. D [**81**]{}, 114504 (2010).
M. Bluhm, B. Kampfer, R. Schulze, D. Seipt, and U. Heinz, Phys. Rev. C [**76**]{}, 034901 (2007).
---
abstract: '[In this paper, we discuss the asymptotic stability of singular steady states of the nonlinear heat equation $u_t=\Delta u+u^p$ in weighted $L^r$– norms.]{}'
author:
- |
Dominika Pilarczyk\
\
Instytut Matematyczny, Uniwersytet Wrocławski,\
pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland\
e-mail: `[email protected]`
title: Asymptotic stability of singular solution to nonlinear heat equation
---
[**Mathematics Subject Classification (2000):**]{} 35K57, 35B33, 35B40.\
[**Keywords:**]{} [semilinear parabolic equation, asymptotics of solutions, supercritical nonlinearity]{}
Introduction
============
\[ce\] It is well-known that the behaviour for large $t$ of solutions of the Cauchy problem $$\begin{aligned}
\label{h.e.}&u_t=\Delta u+u^p,\\
\label{d.} &u(x,0)=u_0(x)\end{aligned}$$ depends on the value of the exponent $p$ of the nonlinearity. Let us first recall the critical value $p=p_F=1+\nicefrac{2}{n}$, called the Fujita exponent, which separates the case of a finite-time blow-up of all positive solutions (for $p\leqslant p_F$) from the case of the existence of some global bounded positive solutions (if $p>p_F$). It is also known that the Sobolev exponent $p_S=\frac{n+2}{n-2}$ is critical for the existence of positive steady states, that is, classical solutions $\psi \in C_0( {{{\mathbb R}}^n})$ of the elliptic equation $\Delta \psi +\psi ^p=0\quad {\rm on} \quad {{{\mathbb R}}^n} .$ Such solutions exist only if $p \geqslant p _S $ (see e.g. [@Chen], [@Gidas]). Moreover, for $p \geqslant p _S$ there is a one-parameter family of radial positive steady states $\psi _k,\ k>0$, given by $$\label{psi k1}
\psi _k(x)=k\psi_1(k^{\frac{p-1}{2}}|x|),$$ where $\psi _1$ is the unique radial stationary solution with $\psi _1(0)=1$, which is strictly decreasing in $|x|$ and satisfies $\psi _1(|x|)\rightarrow 0$ as $|x|\rightarrow \infty $ (see [@QS]).
Another important exponent $$p _{JL}=\frac{n-2\sqrt{n-1}}{n-4-2\sqrt{n-1}} \quad {\rm for }\quad n\geqslant 11,$$ appeared for the first time in [@JL] where the authors studied problems with the nonlinearities of the form $f(u)=a(1+bu)^p$ for some $a, b>0$. It is also connected with a change in the stability properties of positive steady states defined in . Indeed, Gui [*et al.*]{} [@GNW] proved that for $p <p_{JL}$, all positive stationary solutions $\psi _k$ are unstable in any reasonable sense, while for $p\geqslant p_{JL}$ they are “weakly asymptotically stable” in a weighted $L^\infty-$norm. Results on the asymptotic stability of the zero solution to - can be found in [@Q08] and in the references given there.
Let us recall that for $p \geqslant p_{JL}$ the family of the positive equilibria $\psi _k$, $k>0$, forms a simply ordered curve. Furthermore, this curve connects the trivial solution if $k \rightarrow 0$ and the singular steady state for $k \rightarrow \infty $, which exists for $p>p_{st}=\nicefrac{n}{(n-2)}$ in dimensions $n \geqslant 3$ and has the form $v_{\infty }(x)=L |x|^{\nicefrac{-2}{(p-1)}}$ with a suitably chosen constant $L$ (see below). It is also known ([@QS]) that if $p_S\leqslant p <p_{JL}$ the graphs of the steady states $\psi _k$, $0<k<\infty $, intersect the graph of $v_{\infty }$, whereas for $p \geqslant p _{JL}$ we have $\psi _k<v_{\infty }$, $(0<k<\infty ).$
Our main goal in this note is to prove asymptotic stability of the singular stationary solution $v_\infty $ in suitable weighted $L^r-$spaces using estimates of a fundamental solution to a parabolic equation with singular coefficients [@LS; @MS].
Results and comments
====================
It can be directly checked that for $p>p_{st}=\nicefrac{n}{(n-2)}$ and $n\geqslant 3$ equation has the singular stationary solution of the form $$\label{v inf}
v_{\infty }(x)=L |x|^{-\frac{2}{p-1}}=\Bigg(\frac{2}{p-1}\Big(n-2-\frac{2}{p-1}\Big)\Bigg)^\frac{1}{p-1} |x|^{-\frac{2}{p-1}},$$ which plays the central role in this paper.
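For the reader's convenience, let us recall the one-line computation behind this formula. For radial powers one has $\Delta |x|^{-\alpha }=\alpha (\alpha +2-n)|x|^{-\alpha -2}$, so with $\alpha =\nicefrac{2}{(p-1)}$ (and hence $\alpha p=\alpha +2$), $$\Delta v_\infty +v_\infty ^p=\Big(L^p-L\,\tfrac{2}{p-1}\big(n-2-\tfrac{2}{p-1}\big)\Big)|x|^{-\frac{2}{p-1}-2},$$ which vanishes precisely because $L^{p-1}=\tfrac{2}{p-1}\big(n-2-\tfrac{2}{p-1}\big)$.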
In particular, problem – with a nonnegative initial datum $u_0$, which is bounded and lies below the singular steady state $v_\infty $, has a global-in-time classical solution (see [@QS Th. 20.5 (i)] and [@PY Th. 1.1]). Moreover, following Galaktionov & Vazquez [@Gal-Vaz Th. 10.4 (ii)], we may generalize that result and prove that if $0\leqslant u_0(x)\leqslant v_\infty (x)$ and $u_0(x)\not \equiv v_\infty (x)$, then the limit function $u(x,t)=\lim_{N\rightarrow \infty }u^N(x,t),$ where $u^N=u^N(x,t)$ is the solution of the problem $$\begin{aligned}
&u_t=\Delta u+u^p, \quad u(x,0)=\min \{u_0 (x),N\},\end{aligned}$$ solves and $u(\cdot ,t)\in L^\infty ({{\mathbb R}}^n)$ for all $t>0$. For these reasons, in the theorems below we always assume that $u$ is the nonnegative solution to the initial value problem – with the initial datum $u_0$ satisfying $$\label{a.1}
0\leqslant u_0(x) \leqslant v_\infty (x).$$
In order to show the asymptotic stability of the steady state $v_\infty $ we linearize around $v_\infty $. Denoting by $u=u(x,t)$ the nonnegative solution to – and introducing $w=v_\infty -u$, we obtain $$\label{l.e. w}
w_t=\Delta w+\frac{\lambda }{|x|^2}w-\big[ (v_\infty -w)^p-v_\infty ^p+pv_\infty ^{p-1}w \big],$$ where $$\label{lambda}
\lambda =\lambda (n,p)= \frac{2p}{p-1}\Big(n-2-\frac{2}{p-1}\Big).$$ Next, we use estimates of the fundamental solution of the linear heat equation with singular potential $$\label{lin}
u_t=\Delta u +\frac{\lambda }{|x|^2}u, \quad x\in {{\mathbb R}}^n , \quad t>0$$ obtained recently by Liskevich & Sobol [@LS], Milman & Semenov [@MS] (see also Moschini & Tesei [@MT]). As the consequence of the Hardy inequality, it is crucial in that reasoning to assume that $\lambda \leqslant \frac{(n-2)^2}{4}$ in equation . Coming back to the perturbed equation and using the explicit form of $\lambda (n,p)$ in , we obtain by direct calculation (see Remark \[1: ex lin e\] for more details) that the inequality $\lambda (n,p)\leqslant \frac{(n-4)^2}{4}$ is valid if $$\label{a.2}
p\geqslant p_{JL}=\frac{n-2\sqrt{n-1}}{n-4-2\sqrt{n-1}} \quad \textrm{for }\quad n\geqslant 11.$$ For this reason, we limit ourselves to the exponent $p$ of the nonlinearity in satisfying . The exponents mentioned above are ordered as follows: $p_F < p_{st} <p_S < p_{JL}$.
We introduce the parameter $\sigma $ which plays a crucial role in our reasoning by the formula $$\label{a.3}
\sigma =\sigma(n,p)= \frac{n-2}{2}-\sqrt{\frac{(n-2)^2}{4}-\frac{2p}{p-1}\bigg(n-2-\frac{2}{p-1}\bigg)}.$$ It is worth pointing out that $\sigma (n,p) >\nicefrac{2}{(p-1)}$ if $p>p_{st}$ and $n>2$. Moreover, the number $\sigma (n,p)$ has the property $2\sigma (n,p)<n$. Let us also notice that $\sigma (n,p)$ appears in a hidden way in the papers of Poláčik, Yanagida, Fila, Winkler (see [*e.g.*]{} [@PY1], [@FWY]), because it is the sum of the constant $\nicefrac{2}{(p-1)}$ and $\lambda_1$, where $\lambda _1$ is one of the roots of the quadratic polynomial $z^2-(n-2-2L)z+2(n-2-L)$, given explicitly by the formula $$\lambda_1=\frac{1}{2}\bigg(n-2-2L-\sqrt{(n-2-2L)^2-8(n-2-L)} \bigg),$$ where $L$ is defined in .
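Let us briefly indicate, for the reader's convenience, why $\sigma (n,p)>\nicefrac{2}{(p-1)}$ in the range of exponents considered in this paper. Write $\alpha =\nicefrac{2}{(p-1)}$ and $A=n-2$, so that $\lambda (n,p)=(\alpha +2)(A-\alpha )$; since $p\geqslant p_{JL}>p_S$ we have $\alpha <A/2$ and $\sigma $ is well defined. Then $$\sigma >\alpha
\iff \frac{A^2}{4}-\lambda (n,p)<\Big(\frac{A}{2}-\alpha \Big)^2
\iff (\alpha +2)(A-\alpha )>\alpha (A-\alpha ),$$ and the last inequality holds because $A-\alpha >0$ for $p>p_{st}$.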
Now we are in a position to formulate our first result on the convergence of the solutions towards the singular steady state.
\[half l\] Assume , , . Suppose, moreover, that there exist constants $b>0$ and $\ell \in \big(\sigma ,n-\sigma \big)$ such that $$v_\infty (x)-b|x|^{-\ell} \leqslant u_0(x)$$ for all $|x|\geqslant 1$. Then $$\begin{aligned}
&\label{rel i} \sup_{|x|\leqslant \sqrt{t} }|x|^\sigma \big( v_\infty (x) - u(x,t) \big) \leqslant Ct^{-\frac{\ell -\sigma }{2}} \\
\intertext{and}
&\label{rel ii} \sup_{|x|\geqslant \sqrt{t}} \big( v_\infty (x) -u(x,t) \big) \leqslant Ct^{-\frac{\ell }{2}}.\end{aligned}$$ for a constant $C>0$ and all $t\geqslant 1$.
Poláčik & Yanagida [@PY Th. 6.1] showed that under the assumptions of Theorem \[half l\] the pointwise convergence holds true, namely, $\lim_{t \rightarrow \infty}u(x,t)=v_\infty (x)$ for every $x \in {{\mathbb R}}^n \setminus \{0\}$. More recently, Fila & Winkler [@FW] proved the uniform convergence of solutions $u=u(x,t)$ toward a singular steady state on ${{\mathbb R}}^n \setminus B_\nu (0)$, where $B_\nu (0)$ is the ball in ${{\mathbb R}}^n$ with center at the origin and radius $\nu $. Theorem \[half l\] completes those results by providing optimal weighted decay estimates in the whole ${{\mathbb R}}^n$.
[ Note that our calculations in the proof of Theorem \[half l\] are valid for any $\ell \in (\nicefrac{2}{(p-1)}, n-\sigma )$, but for $\ell \in (\nicefrac{2}{(p-1)}, \sigma ]$ the right-hand side of inequality does not decay in time. ]{}
We can improve Theorem \[half l\] for $\ell=\sigma $ as follows.
\[mth 2\] Assume that , and are satisfied. Suppose that there exists a constant $b>0$ such that $$v_\infty (x)-b|x|^{-\sigma }\leqslant u_0 (x).$$ Let, moreover, $$\lim_{|x|\rightarrow \infty }|x|^{\sigma }\big( v_\infty (x)-u_0(x) \big) = 0.$$ Then $$\begin{aligned}
&\lim_{t\rightarrow \infty }\sup_{|x|\leqslant \sqrt{t}}|x|^\sigma \big(v_\infty (x)-u(x,t)\big)=0\\
\intertext{\it and }
&\lim_{t\rightarrow \infty }t^{\frac{\sigma }{2}}\sup_{|x|\geqslant \sqrt{t}}\big(v_\infty (x)-u(x,t)\big) =0.\end{aligned}$$
\[small b\] Under the assumptions of Theorem \[half l\] and Theorem \[mth 2\], respectively, if, moreover, $b$ is sufficiently small, we obtain $$\begin{aligned}
\label{sup u} &\|u(\cdot ,t)\|_\infty \geqslant C t^{\frac{\ell -\sigma}{\sigma (p-1)-2}} \quad \textit{if } \quad \ell \in (\sigma , n-\sigma )\\
\intertext{\it for a constant $C>0$ and all $t\geqslant 1$ and }
\label{sup u 2}&\lim_{t\rightarrow \infty }\|u(\cdot ,t)\|_\infty =+\infty \quad \textit{if }\quad \ell=\sigma .\end{aligned}$$
[Estimates from below of $\|u(\cdot ,t)\|_\infty $, similar to that stated in , were obtained by Fila [*et al.*]{} in [@FWY1 Theorem 1.1.], [@FWY Theorem 1.1.] and improved in [@FKWY Theorem 1.1.] using matched asymptotics expansions. In Corollary \[small b\], we emphasize that this inequality is an immediate consequence of Theorem \[half l\]. ]{}
[For $p>p_{JL}$ estimates and seem to be optimal, because they imply the optimal lower bound , see [@FKWY]. On the other hand, for $p=p_{JL}$ the authors of [@FKWY1] obtained the logarithmic factor on the right-hand side of , which we are not able to see by our method. ]{}
Our next goal is to prove the asymptotic stability of $v_\infty $ in the Lebesgue space $L^2 ({{\mathbb R}}^n )$.
\[stab.2\] Assume that , and are valid.
- Suppose that $v_\infty -u_0 \in L^1({{\mathbb R}}^n )$ and $|\cdot |^{-\sigma } (v_\infty -u_0)\in L^1({{\mathbb R}}^n)$. Then $$\label{1: L1}
\| v_\infty (\cdot ) - u(\cdot , t) \|_2 \leqslant Ct^{-\frac{n}{4}}\| v_\infty -u_0 \|_1 + Ct^{-\frac{n-2\sigma }{4}}\| |\cdot |^{-\sigma }(v_\infty -u_0 )\|_1 .$$
- Suppose that $v_\infty -u_0 \in L^2 ({{\mathbb R}}^n )$. Then $$\lim_{t \rightarrow \infty } \|v_\infty (\cdot )-u(\cdot ,t) \|_2 =0.$$
Note that $v_\infty \in L^2_{loc}({{\mathbb R}}^n )$ for every $p>1+\nicefrac{4}{n}$. Here, this property of the singular solution $v_\infty $ is satisfied, because $p_{JL} >1+\nicefrac{4}{n}$.
Using the fact that the steady states $\psi_k$ defined in are below the singular stationary solution $v_\infty $ for $p> p_{JL}$, we may rephrase Theorem \[stab.2\] as follows.
\[c. stab.2\] Assume that , and are valid. Let $\psi_k$ be the stationary solutions for some $k>0$.
- Suppose that $\psi_k -u_0 \in L^1({{\mathbb R}}^n )$ and $|\cdot |^{-\sigma } (\psi_k -u_0)\in L^1({{\mathbb R}}^n)$. Then $$\label{1: c L1}
\| \psi_k (\cdot ) - u(\cdot , t) \|_2 \leqslant Ct^{-\frac{n}{4}}\| \psi_k -u_0 \|_1 + Ct^{-\frac{n-2\sigma }{4}}\big\| \, |\cdot |^{-\sigma }(\psi_k -u_0 )\big\|_1 .$$
- Suppose that $\psi_k -u_0 \in L^2 ({{\mathbb R}}^n )$. Then $$\label{c lim L2}
\lim_{t \rightarrow \infty } \| \psi_k (\cdot )-u(\cdot ,t) \|_2 =0.$$
[Observe that Theorem \[stab.2\] and Corollary \[c. stab.2\] complete the results by Poláčik & Yanagida, who proved in [@PY Proposition 3.5] the stability estimate $$\| \psi_k (\cdot ) -u(\cdot ,t)\|_2 \leqslant \|\psi_k -u_0\|_2 .$$ ]{}
Linear equation with a singular potential
=========================================
In this section we recall the estimate from above of the fundamental solution of the equation $u_t=\Delta u+\lambda |x|^{-2}u$ obtained by Liskevich & Sobol in [@LS] and by Milman & Semenov [@MS]. Following those arguments, we define the weights ${{\varphi}}_\sigma (x,t) \in C({{\mathbb R}}^n \setminus \{0\})$ as $$\label{weights}
{{\varphi}}_\sigma (x,t)=
\begin{cases}
\big(\frac{\sqrt{t}}{|x|}\big) ^\sigma & {\rm if}\ |x| \leqslant \sqrt{t},\\
1 & { \rm if}\ |x|\geqslant \sqrt{t} .
\end{cases}$$
[[@LS; @MS]]{}\[kernel th\] Let $Hu=\Delta u+\lambda |x|^{-2}u$. Assume that $0 \leqslant \lambda \leqslant \nicefrac{(n-2)^2}{4}$. The semigroup ${{\text{\rm{e}}}}^{-tH}$ of the linear operators generated by $H$ can be written as the integral operator with a kernel ${{\text{\rm{e}}}}^{-tH}(x,y)$, namely $${{\text{\rm{e}}}}^{-tH}u_0 (x)=\int _{{{\mathbb R}}^n}{{\text{\rm{e}}}}^{-tH}(x,y)u_0(y) {{\ \rm d}}y.$$ Moreover, there exist positive constants $C>0$ and $c > 1$, such that for all $t>0$ and all $x,y \in {{\mathbb R}}^n \setminus \{0\}$ $$\label{kernel}
{0 \leqslant {{\text{\rm{e}}}}^{-tH}(x,y) \leqslant C\varphi_\sigma (x,t)\ \varphi_\sigma (y,t)\ G(x-y, c t)},$$ where $\sigma =\frac{n-2}{2}-\sqrt{\frac{(n-2)^2}{4}-\lambda }$, the functions ${{\varphi}}_\sigma $ are defined in (see also Remark \[R Phi\] below) and $G(x,t)=\big(4\pi t\big)^{\nicefrac{-n}{2}}\exp( \nicefrac{-|x|^2}{4\pi t})$ is the heat kernel.
\[R Phi\] [ In fact, Milman & Semenov in [@MS] used the more regular weight functions $\Phi _\sigma \in C^2({{\mathbb R}}^n \setminus \{0\})$, namely $$\Phi_\sigma (x,t)=
\begin{cases}
\big(\frac{\sqrt{t}}{|x|}\big) ^\sigma & {\rm if}\ |x| \leqslant \sqrt{t},\\
\frac{1}{2} & { \rm if}\ |x|\geqslant 2\sqrt{t}
\end{cases}$$ and $\frac{1}{2}\leqslant \Phi_\sigma (x,t) \leqslant 1$ for $\sqrt{t}\leqslant |x|\leqslant 2\sqrt{t}$. It can be checked directly that there exist positive constants $c$ and $C$ for which the inequalities $$c{{\varphi}}_\sigma (x,t)\leqslant \Phi_\sigma (x,t) \leqslant C{{\varphi}}_\sigma (x,t)$$ hold true, where ${{\varphi}}_\sigma $ are defined by . By this reason we are allowed to use the weights ${{\varphi}}_\sigma $ instead of $\Phi_\sigma $. ]{}
The following theorem is the consequence of the estimates stated in .
\[w half l\] Let the assumptions of Theorem \[kernel th\] be valid. Assume that $p>1+\frac{2}{n-\sigma }$. Suppose that there exist $b>0$ and $\ell \in (\frac{2}{p-1},n-\sigma )$ such that a nonnegative function $w_0$ satisfies $$\begin{aligned}
{2}
&w_0(x)\leqslant b|x|^{-\frac{2}{p-1}} &\quad &\textit{for} \quad |x|\leqslant 1 ,\\
&w_0(x)\leqslant b|x|^{-\ell} &\quad &\textit{for}\quad |x|\geqslant 1.\end{aligned}$$ Then $$\label{sup etH}
\sup_{x\in {{\mathbb R}}^n} {{\varphi}}_\sigma ^{-1}(x,t) |{{\text{\rm{e}}}}^{-tH}w_0(x)|\leqslant Ct^{-\frac{\ell}{2}}$$ for a constant $C>0$ and all $t\geqslant 1$.
First, for every fixed $x \in {{\mathbb R}}^n $, we apply the estimate of the kernel ${{\text{\rm{e}}}}^{-tH}$ from Theorem \[kernel th\] in the following way $${{\varphi}}_\sigma ^{-1}(x,t) \big|{{\text{\rm{e}}}}^{-tH}w_0(x)\big| \leqslant C\int_{{{\mathbb R}}^n} G(x-y,ct){{\varphi}}_\sigma (y,t)w_0(y) {{\ \rm d}}y.$$ Next, we split the integral on the right-hand side into three parts $I_1(x,t)$, $I_2(x,t)$ and $I_3(x,t)$ according to the definition of the weights ${{\varphi}}_\sigma $ and the assumptions on the function $w_0$. Let us begin with $I_1(x,t)$: $$\begin{split}
I_1(x,t)&\equiv C\int_{|y|\leqslant 1} G(x-y,ct){{\varphi}}_\sigma (y,t)w_0(y) {{\ \rm d}}y \\
&\leqslant Cb t^{\frac{\sigma }{2}}\int_{|y|\leqslant 1} G(x-y, ct)|y|^{-\sigma -\frac{2}{p-1}} {{\ \rm d}}y
\leqslant Cbt^{-\frac{n-\sigma }{2}},
\end{split}$$ because $G(x-y,ct)$ is bounded by $C t^{-\frac{n}{2}}$ and the function $|y|^{-\sigma -\frac{2}{p-1}}$ is integrable for $|y|\leqslant 1$ if $p>1+\nicefrac{2}{(n-\sigma )}$.
We use the same argument to deal with $$\begin{split}
I_2(x,t)&\equiv C\int_{1\leqslant |y|\leqslant \sqrt{t}}G(x-y,ct){{\varphi}}_\sigma (y,t)w_0(y) {{\ \rm d}}y\\
&\leqslant Cbt^{\frac{\sigma }{2}} \int_{1\leqslant |y|\leqslant \sqrt{t}} G(x-y,ct) |y|^{-\sigma -\ell} {{\ \rm d}}y
\leqslant Cbt^{\frac{\sigma -n}{2}} \int_{1\leqslant |y|\leqslant \sqrt{t}}|y|^{-\sigma -\ell} {{\ \rm d}}y \\
&\leqslant Cbt^{-\frac{\ell}{2}} + Cbt^{-\frac{n-\sigma }{2}}.
\end{split}$$
Finally, we estimate $$\begin{split}
I_3(x,t)&\equiv C\int_{|y|\geqslant \sqrt{t}} G(x-y, ct){{\varphi}}_\sigma(y,t) w_0(y) {{\ \rm d}}y \\
&\leqslant Cb\int_{|y|\geqslant \sqrt{t}} G(x-y, ct)|y|^{-\ell} {{\ \rm d}}y \leqslant Cbt^{-\frac{\ell}{2}},
\end{split}$$ using the inequality $1\leqslant \big(\frac{\sqrt{t}}{|y|}\big)^\ell $ for $|y|\geqslant \sqrt{t}$ and the identity $\int_{{{\mathbb R}}^n} G(x-y,ct) {{\ \rm d}}y=1$ for $t>0$, $x\in {{\mathbb R}}^n $. Since $\ell \in (\nicefrac{2}{(p-1)},n-\sigma )$, we complete the proof of .
\[limit e -tH th\] Assume that $|\cdot |^\sigma w_0 \in L^\infty ({{\mathbb R}}^n )$ and $$\lim_{|x|\rightarrow \infty }|x|^\sigma w_0 (x)=0.$$ Then $$\label{limit e -tH}
\lim_{t\rightarrow \infty }t^{\frac{\sigma }{2}}\sup _{x\in {{\mathbb R}}^n}{{\varphi}}^{-1} _\sigma (x,t)|{{\text{\rm{e}}}}^{-tH}w_0(x)|=0.$$
For every fixed $x\in {{\mathbb R}}^n $ we use the estimate from Theorem \[kernel th\] as follows $${{\varphi}}_\sigma ^{-1}(x,t) \big| {{\text{\rm{e}}}}^{-tH}w_0(x)\big| \leqslant C\int_{{{\mathbb R}}^n} G(x-y,ct){{\varphi}}_\sigma (y,t)w_0(y) {{\ \rm d}}y.$$ We decompose the integral on the right-hand side according to the definition of ${{\varphi}}_\sigma $ and we estimate each term separately. Substituting $y=z\sqrt{t}$ we obtain $$\begin{split}
I_1(x,t)&\equiv C\int_{|y|\leqslant \sqrt{t}} G(x-y,ct) \bigg( \frac{\sqrt{t}}{|y|} \bigg)^{\sigma }w_0(y) {{\ \rm d}}y\\
&=Ct^{-\frac{\sigma }{2}}\int_{|z|\leqslant 1 }G\bigg(\frac{x}{\sqrt{t}}-z,c\bigg)|z|^{-2\sigma} |\sqrt{t}z|^{\sigma }w_0(\sqrt{t}z) {{\ \rm d}}z .
\end{split}$$ Hence, $$t^{\frac{\sigma }{2}}\sup_{x\in {{\mathbb R}}^n }I_1(x,t) \rightarrow 0 \quad \textrm{as} \quad t\rightarrow \infty$$ by the Lebesgue dominated convergence theorem, because $G\big(\frac{x}{\sqrt{t}}-z,c\big)$ is bounded and the function $|z|^{-2\sigma }$ is integrable for $|z|\leqslant 1$. By the assumption imposed on $w_0$, given ${{\varepsilon }}>0$ we may choose $t$ so large that $$\sup_{|y|\geqslant \sqrt{t} }|y|^\sigma w_0(y)<{{\varepsilon }}.$$
Now, using the inequality $1\leqslant \big( \frac{\sqrt{t}}{|y|} \big)^\sigma $ for $|y|\geqslant \sqrt{t}$, we obtain $$\begin{split}
I_2(x,t)&\equiv \int_{|y|\geqslant \sqrt{t}} G(x-y,ct) w_0(y) {{\ \rm d}}y
\leqslant t^{-\frac{\sigma }{2}}\int_{|y|\geqslant \sqrt{t}} G(x-y,ct)|y|^{\sigma }w_0(y) {{\ \rm d}}y \\
&\leqslant {{\varepsilon }}t^{-\frac{\sigma }{2}}\int_{|y|\geqslant \sqrt{t}}G(x-y, ct) {{\ \rm d}}y .
\end{split}$$ Since $\int_{{{\mathbb R}}^n } G(x-y,ct ){{\ \rm d}}y=1$ for all $t>0$, $x\in {{\mathbb R}}^n $ and since ${{\varepsilon }}>0$ is arbitrary, we get $$t^{\frac{\sigma }{2}} \sup_{x\in {{\mathbb R}}^n} I_2(x,t) \rightarrow 0 \quad \textrm{as} \quad t\rightarrow \infty .$$
Let us define the weighted $L^q$-norm as follows $$\|f\|_{q,{{\varphi}}_\sigma (t)}=\bigg( \int_{{{\mathbb R}}^n} |f(x){{\varphi}}_\sigma ^{-1} (x,t)|^q {{\varphi}}_\sigma ^2 (x,t) {{\ \rm d}}x \bigg) ^{\frac{1}{q}} \quad \textrm{ for every} \quad 1\leqslant q <\infty ,$$ and $$\|f\|_{\infty, {{\varphi}}_\sigma (t)}=\sup_{x\in {{\mathbb R}}^n} {{\varphi}}_\sigma ^{-1}(x,t)|f(x)| \quad {\rm for }\quad q=\infty .$$ Note that, in particular, for $q=2$ the norm $\| \cdot \|_{2, {{\varphi}}_\sigma (t)} $ agrees with the usual $L^2$-norm on ${{\mathbb R}}^n $.
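Indeed, for $q=2$ the weight factors cancel pointwise: $$\|f\|_{2,{{\varphi}}_\sigma (t)}^2=\int_{{{\mathbb R}}^n}|f(x)|^2\,{{\varphi}}_\sigma ^{-2}(x,t)\,{{\varphi}}_\sigma ^{2}(x,t)\,{{\ \rm d}}x=\|f\|_2^2 .$$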
Suppose that $1\leqslant q\leqslant \infty $. Then the following inequality holds true $$\label{w norm}
\|{{\text{\rm{e}}}}^{-tH}w_0\|_{q,{{\varphi}}_\sigma (t)}\leqslant Ct^{-\frac{n}{2}(\frac{1}{r}-\frac{1}{q})} \|w_0\|_{r,{{\varphi}}_\sigma (t)}$$ for every $1\leqslant r\leqslant q\leqslant \infty$ and all $t>0$.
The proof of estimate can be directly deduced from the reasoning by Milman & Semenov. Indeed, in [@MS page 381], we can find the inequality $$\| {{\varphi}}_\sigma ^{-1} {{\text{\rm{e}}}}^{-tH} {{\varphi}}_\sigma f\|_{L^2\big({{\mathbb R}}^n , {{\varphi}}_\sigma ^2 (x,t) {{\ \rm d}}x \big)} \leqslant Ct^{-\frac{n}{4}} \| f\|_{L^1\big({{\mathbb R}}^n , {{\varphi}}_\sigma ^2 (x,t) {{\ \rm d}}x \big)}.$$ Hence, substituting ${{\varphi}}_\sigma f=w_0$ and using the definitions of the norm $\| \cdot \|_{q,{{\varphi}}_\sigma (t)}$, we obtain with $q=2$ and $r=1$. This inequality together with $$\|{{\text{\rm{e}}}}^{-tH}w_0\|_{1,{{\varphi}}_\sigma (t) }\leqslant C\|w_0\|_{1,{{\varphi}}_\sigma (t)},$$ stated in [@MS page 391], imply for $r=1$ and every $1\leqslant q\leqslant 2$ by the Riesz-Thorin interpolation theorem. Moreover, the operator ${{\text{\rm{e}}}}^{-tH}$ is self-adjoint, so by duality the inequality $$\|{{\text{\rm{e}}}}^{-tH}w_0\|_{\infty , {{\varphi}}_\sigma (t)}\leqslant Ct^{-\frac{n}{4}}\|w_0\|_{2, {{\varphi}}_\sigma (t)}$$ holds true. The semigroup property ${{\text{\rm{e}}}}^{-tH}={{\text{\rm{e}}}}^{-\nicefrac{t}{2}H}{{\text{\rm{e}}}}^{-\nicefrac{t}{2}H}$ leads to with $q=\infty $ and $r=1$. Applying duality and the Riesz-Thorin interpolation theorem once more, we complete the proof of . Let us emphasize at the end of this reasoning that the inequalities are used by Milman & Semenov in [@MS] to derive the kernel estimate .
Linearization around a singular steady state
============================================
Let $u$ be a solution of with initial datum satisfying . We substitute $$w(x,t)=v_\infty (x)-u(x,t)$$ to get $$\label{h.e.1}
w_t=\Delta w+\frac{\lambda }{|x|^2}w-\big[ (v_\infty -w)^p-v_\infty ^p+pv_\infty ^{p-1}w \big],$$ where $\lambda =\lambda (n,p)=\frac{2p}{p-1}(n-2-\frac{2}{p-1})$. Let us note that the last term on the right-hand side of equation is non positive, namely $$(v_\infty -w)^p-v_\infty ^p \geqslant -pv_\infty ^{p-1}w,$$ which is the direct consequence of the convexity of the function $f(s)=s^p$. Indeed, since the graph of the function $f$ lies above all of its tangents, we have $f(s-h)-f(s)\geqslant -f'(s)h$ for all $s$ and $h$ in ${{\mathbb R}}$.
The proofs of our results are based on the following elementary observation. If $w$ is a nonnegative solution of equation with the initial condition $w_0(x)\geqslant 0$, then $$0\leqslant w(x,t)\leqslant {{\text{\rm{e}}}}^{-tH}w_0(x)$$ with $Hw=\Delta w +\lambda (n,p)|x|^{-2}\, w $. Consequently, using the condition $0\leqslant u_0(x) \leqslant v_\infty (x)$ and the just-mentioned comparison principle we can write $$\begin{aligned}
\label{v inf and e H 0} & 0\leqslant v_\infty (x) -u(x,t)\leqslant {{\text{\rm{e}}}}^{-tH}\big(v_\infty(x)-u_0(x) \big)\\
\intertext{or, equivalently, }
\label{v inf and e H} &v_\infty (x) -{{\text{\rm{e}}}}^{-tH}\big( v_\infty (x)-u_0(x) \big) \leqslant u(x,t)\leqslant v_\infty (x).\end{aligned}$$
\[1: ex lin e\] [ If $n\geqslant 11$ and either $p \geqslant p_{JL}$ or $\frac{n}{n-2} <p <\frac{n+2\sqrt{n-1}}{n-4+2\sqrt{n-1}}$, then the linearized problem $$\begin{split}
& w_t=\Delta w+\frac{\lambda }{|x|^2}w,\\
&w(x,0)=w_0(x),
\end{split}$$ with $\lambda =\lambda (n,p) =\frac{2p}{p-1}(n-2-\frac{2}{p-1})$ has a unique solution. Indeed, in view of Theorem \[kernel th\], it is sufficient to show that $$\lambda (n,p)=\frac{2p}{p-1}\bigg(n-2-\frac{2}{p-1}\bigg) \leqslant \frac{(n-2)^2}{4}.$$ Substituting $y=\nicefrac{1}{(p-1)}$, after elementary calculations, we arrive at the inequality $$16y^2+(32-8n)y+n^2-12n+20 \geqslant 0$$ which has the solution $y\in \big(-\infty , \frac{n-4-2\sqrt{n-1}}{4}\Big]\cup \Big[\frac{n-4+2\sqrt{n-1}}{4}, +\infty \big)$. Moreover, if $n \geqslant 11$, then $\frac{n-4-2\sqrt{n-1}}{4}>0$ and if $n\in (2, 10)$, then $\frac{n-4-2\sqrt{n-1}}{4}<0$ and $\frac{n-4+2\sqrt{n-1}}{4}>0$. These observations give us that $p \geqslant p_{JL}$ or $\frac{n}{n-2}<p\leqslant \frac{n+2\sqrt{n-1}}{n-4+2\sqrt{n-1}}$. ]{}
Asymptotic stability of steady states
=====================================
It suffices to use inequality and to estimate its right-hand side by Theorem \[w half l\].
As in the proof of Theorem \[half l\], it is sufficient to use together with Theorem \[limit e -tH th\] substituting $w_0(x)=v_\infty (x)- u_0(x)$.
Since we have inequality , it suffices to prove that $$\sup_{x\in {{\mathbb R}}^n}\big[v_\infty (x)-{{\text{\rm{e}}}}^{-tH}w_0(x)\big]\geqslant C(b) t^{\frac{\ell-\sigma }{\sigma (p-1)-2}}$$ for $w_0=v_\infty -u_0$. Hence, inequality from Theorem \[w half l\] enables us to write $$v_\infty (x)-{{\text{\rm{e}}}}^{-tH}\big(v_\infty (x)- u_0(x) \big)\geqslant v_\infty (x)-Cb {{\varphi}}_\sigma (x,t)t^{-\frac{\ell}{2}}$$ for all $x\in {{\mathbb R}}^n \setminus \{0\}$ and $t>0$. Next, using the explicit form of the weights ${{\varphi}}_\sigma $, we define the function $$F(|x|,t)=v_\infty (|x|)-Cb{{\varphi}}_\sigma (x,t) t^{-\frac{\ell }{2}}=
\begin{cases}
L|x|^{-\frac{2}{p-1}}-Cbt^{\frac{\sigma -\ell}{2}}|x|^{-\sigma } & {\rm for }\ |x| \leqslant \sqrt{t},\\
L|x|^{-\frac{2}{p-1}}-Cbt^{-\frac{\ell }{2}} & {\rm for }\ |x|\geqslant \sqrt{t}.
\end{cases}$$
An easy computation shows that the function $F$ has its maximum at $$|x|=C(b ) t^{ \frac{\sigma - \ell}{2}\frac{p-1}{\sigma (p-1)-2}}$$ and it is equal to $$\max_{x\in {{\mathbb R}}^n }F(|x|,t)=C(b) t^{\frac{\ell-\sigma }{\sigma (p-1)-2}}$$ for some constant $C(b)\geqslant 0$. Hence, we get .
To obtain , we use the result from Theorem \[limit e -tH th\]. It follows from that for every ${{\varepsilon }}>0$ there exists $T>0$ such that $$\big| {{\text{\rm{e}}}}^{-tH}w_0 (x) \big| <{{\varepsilon }}{{\varphi}}_\sigma (x,t) t^{-\frac{\sigma }{2}}$$ for all $x \in {{\mathbb R}}^n \setminus \{0\}$ and $t>T$. Hence, by , we have $$v_\infty (x)-{{\text{\rm{e}}}}^{-tH}\big(v_\infty (x)- u_0(x) \big)\geqslant v_\infty (x)-C {{\varepsilon }}{{\varphi}}_\sigma (x,t)t^{-\frac{\sigma }{2}}.$$ Now, once more using the explicit form of the weights ${{\varphi}}_\sigma $, we consider the function $$G(|x|,t)=v_\infty (|x|)-C{{\varepsilon }}{{\varphi}}_\sigma (x,t) t^{-\frac{\sigma }{2}}
= \begin{cases}
L|x|^{-\frac{2}{p-1}}-{{\varepsilon }}|x|^{-\sigma } & {\rm for }\ |x| \leqslant \sqrt{t},\\
L|x|^{-\frac{2}{p-1}}-{{\varepsilon }}t^{-\frac{\sigma }{2}} & {\rm for }\ |x|\geqslant \sqrt{t}.
\end{cases}$$ An elementary computations give us that the function $G$ attains its maximum at $$|x|=c {{\varepsilon }}^{ -\frac{p-1}{\sigma (p-1)-2}}$$ and $$\max_{x\in {{\mathbb R}}^n }G(|x|,t)=C{{\varepsilon }}^{-\frac{2}{\sigma (p-1)-2}}$$ for some constant $C\geqslant 0$. Since $\sigma>\nicefrac{2}{(p-1)}$, we see that the maximum of the function $G$ diverges to infinity if ${{\varepsilon }}$ tends to zero. This completes the proof of .
According to it is enough to estimate the $L^2-$ norm of the expression ${{\text{\rm{e}}}}^{-tH}w_0$ for every $w_0$ satisfying two conditions: $w_0\in L^1 ({{\mathbb R}}^n )$ and $| \cdot |^{-\sigma }w_0 \in L^1 ({{\mathbb R}}^n )$. Applying , with $q=2$, $r=1$ and using the definition of the functions ${{\varphi}}_\sigma (x,t)$, we may write $$\begin{split}
\|{{\text{\rm{e}}}}^{-tH} w_0\|_2 &\leqslant Ct^{-\frac{n}{4}}\|w_0\|_{1, {{\varphi}}_\sigma (t)} =Ct^{-\frac{n-2\sigma }{4}}\int_{|x|\leqslant \sqrt{t}}w_0(x)|x|^{-\sigma }{{\ \rm d}}x\\
&+Ct^{-\frac{n}{4}}\int_{|x|\geqslant \sqrt{t}}w_0(x){{\ \rm d}}x \leqslant Ct^{-\frac{n-2\sigma }{4}}\| w_0 |\cdot |^{-\sigma }\|_1 + Ct^{-\frac{n}{4}}\| w_0\|_1.
\end{split}$$ This establishes formula .
Again, by , we only need to show that $$\lim_{t\rightarrow \infty } \|{{\text{\rm{e}}}}^{-tH}w_0\|_2 =0$$ for each $w_0 \in L^2 ({{\mathbb R}}^n)$. Hence, for every ${{\varepsilon }}>0$ we choose $\psi \in C_c^\infty ( {{\mathbb R}}^n )$ such that $\|w_0 -\psi\|_2<{{\varepsilon }}$. Using first the triangle inequality and next , with $q=2$ and $r=2$, we obtain $$\begin{split}
\| {{\text{\rm{e}}}}^{-tH}w_0 \|_2 &\leqslant \|{{\text{\rm{e}}}}^{-tH}(w_0 -\psi )\|_2 + \| {{\text{\rm{e}}}}^{-tH} \psi \|_2 \\
&\leqslant C{{\varepsilon }}+ \| {{\text{\rm{e}}}}^{-tH} \psi \|_2.
\end{split}$$ Since the second term on the right-hand side converges to zero as $t\rightarrow \infty $ by the first part of Theorem \[stab.2\], we get $$\limsup_{t\rightarrow \infty }\| {{\text{\rm{e}}}}^{-tH}w_0\|_2 \leqslant C{{\varepsilon }}.$$ This completes the proof of Theorem \[stab.2\] (ii), because ${{\varepsilon }}>0$ can be arbitrarily small.
We linearize equation around the positive steady state $\psi_k$ substituting $v=\psi_k -u$ to get $$\label{eq. v}
v_t=\Delta v+p\psi_k^{p-1} v-\big( (\psi_k-v)^p - \psi_k^p + p\psi_k^{p-1}v \big).$$ Once more, using the convexity of the function $f(s)=s^p$, let us notice that the expression $(\psi_k -v )^p-\psi_k^p + p\psi_k^{p-1}v$ is nonnegative. Furthermore, $\psi_k <v_\infty $ as long as $p>p_{JL}$ and $n\geqslant 11$. Hence, applying first the comparison principle to the approximate problem $$\begin{split}
&v_t=\Delta v+p \min \{ N^{p-1}, v_\infty ^{p-1} \} v,\\
&v(x,0)=v_0(x)
\end{split}$$ with the nonnegative initial datum and next, passing to the limit when $N$ tends to infinity, we get $$0\leqslant \psi_k(x)-u(x,t)\leqslant {{\text{\rm{e}}}}^{-tH}\big( \psi_k(x) - u_0(x)\big) .$$ Now, and are the straightforward consequences of the reasoning used in the proof of Theorem \[stab.2\].
Acknowledgment {#acknowledgment .unnumbered}
==============
The author wishes to express her gratitude to the anonymous referee for several helpful comments and to Jacek Zienkiewicz for many stimulating conversations. This work is a part of the author's PhD dissertation written under the supervision of Grzegorz Karch.
[GG]{}
W. X. Chen, C. Li, [*Qualitative properties of solutions to some nonlinear elliptic equations in ${{{\mathbb R}}^n}$*]{}, Duke Math. J. [**71**]{} (1993), 427–439.
M. Fila, M. Winkler, E. Yanagida, [*Grow–up rate of solutions for a supercritical semilinear diffusion equation*]{}, J. Diff. Equations [**205**]{} (2004), 365–389.
M. Fila, J. R. King, M. Winkler, E. Yanagida, [*Optimal lower bound of the grow–up rate for a supercritical parabolic equation*]{}, J. Diff. Equations [**228**]{} (2006), 339–356.
M. Fila, J. R. King, M. Winkler, E. Yanagida,[*Grow–up rate of solutions of a semilinear parabolic equation with a critical exponent*]{}, Adv. Diff. Eq. [**12**]{} (2007), 1–26.
M. Fila, M. Winkler,[*Rate of convergence to a singular steady state of a supercritical parabolic equation*]{}, J. Evol. Eq. [**8**]{} (2008), 673–692.
M. Fila, M. Winkler, E. Yanagida, [*Slow convergence to zero for a parabolic equation with a supercritical nonlinearity*]{}, Math. Ann. [**340**]{} (2008), 477–496.
V.A. Galaktionov, J.L. Vazquez, [*Continuation of blow–up solutions of nonlinear heat equations in several space dimensions*]{}, Comm. Pure Appl. Math. [**50**]{} (1997) 1–67.
B. Gidas, J. Spruck, [*Global and local behavior of positive solutions of nonlinear elliptic equations*]{}, Comm. Pure Appl. Math. [**34**]{} (1981), 525–598.
C. Gui, W.-M. Ni, X. Wang, [*Further study on a nonlinear heat equation*]{}, J. Differ. Equations [**169**]{} (2001), 588–613.
D. D. Joseph, T.S. Lundgren, [*Quasilinear Dirichlet problems driven by positive sources*]{}, Arch. Rational Mech. Anal. [**49**]{} (1973), 241–269.
V. Liskevich, Z. Sobol, [*Estimates of integral kernels for semigroups associated with second order elliptic operators with singular coefficients*]{}, Potential Anal. [**18**]{} (2003), 359–390.
L. Moschini, A. Tesei, [*Parabolic Harnack inequality for the heat equation with inverse–square potential*]{}, Forum Math. [**19**]{} (2007), 407–427.
P. D. Milman, Yu. A. Semenov, [*Global heat kernel bounds via desingularizing weights*]{}, Journal of Func. Analysis. [**212**]{} (2004), 373–398.
P. Poláčik, E. Yanagida, [*On bounded and unbounded global solutions of a supercritical semilinear heat equation*]{}, Math. Ann. [**327**]{} (2003), 745–771.
P. Poláčik, E. Yanagida, [*Nonstabilizing solutions and grow–up set for a supercritical semilinear diffusion equation*]{}, Differential and Integral Equations, [**17**]{} (2004), 535–548.
P. Quittner, [*The decay of global solutions of a semilinear heat equation,*]{} Discrete Contin. Dyn. Syst. [**21**]{} (2008), 307–318.
P. Quittner, P. Souplet, [*Superlinear Parabolic Problems*]{}, Birkhäuser Advanced Texts (2007).
---
abstract: 'The majority of galactic baryons reside outside of the galactic disk in the diffuse gas known as the circumgalactic medium (CGM). While state-of-the-art simulations excel at reproducing galactic disk properties, many struggle to drive strong galactic winds or to match the observed ionization structure of the CGM using only thermal supernova feedback. To remedy this, recent studies have invoked non-thermal cosmic ray (CR) stellar feedback prescriptions. However, numerical schemes of CR transport are still poorly constrained. We explore how the choice of CR transport affects the multiphase structure of the simulated CGM. We implement anisotropic CR physics in the astrophysical simulation code [Enzo]{}, and simulate a suite of isolated disk galaxies with varying prescriptions for CR transport: isotropic diffusion, anisotropic diffusion, and streaming. We find that all three transport mechanisms result in strong, metal-rich outflows but differ in the temperature and ionization structure of their CGM. Isotropic diffusion results in a spatially uniform, warm CGM that underpredicts the column densities of low ions. Anisotropic diffusion develops a reservoir of cool gas that extends further from the galactic center, but disperses rapidly with distance. CR streaming projects cool gas out to radii of 200 kpc, supporting a truly multiphase medium. In addition, we find that streaming is less sensitive to changes in constant parameter values like the CR injection fraction, transport velocity, and resolution than diffusion. We conclude that CR streaming is a more robust implementation of CR transport and motivate the need for detailed parameter studies of CR transport.'
author:
- 'Iryna S. Butsky'
- 'Thomas R. Quinn'
bibliography:
- 'ButskyReferences.bib'
title: The Role of Cosmic Ray Transport in Shaping the Simulated Circumgalactic Medium
---
Introduction
============
The majority of baryons in galactic halos, including the majority of metals, reside in the circumgalactic medium (CGM) of galaxies [@Werk:2013; @Werk:2014]. Loosely defined, the CGM refers to the diffuse, multiphase gas that extends to the virial radius of galaxies. The CGM is shaped by the interplay between outflows from the star-forming disk and inflows from the pristine intergalactic medium (IGM) and provides constraints to theories of galaxy formation and evolution.
Early theoretical works predicted the existence of the CGM as a by-product of the interactions between cooling and accretion during galaxy formation [@Binney:1977; @Rees:1977; @Silk:1977]. Due to its low density, the CGM is extremely difficult to observe directly in emission. Instead, recent observations, such as those taken with The Cosmic Origins Spectrograph (COS) [@Green:2012] on board the Hubble Space Telescope (HST), have studied the CGM through absorption lines in quasar spectra that intersect the halos of galaxies along the line-of-sight. Using the quasar absorption method, different groups have greatly advanced our knowledge of the CGM both in high-redshift (e.g. @Steidel:2010 [@Rudie:2012; @Turner:2014]), and low-redshift (e.g. @Chen:2010 [@Gauthier:2010; @Kacprzak:2010; @Bordoloi:2011; @Prochaska:2011; @Tumlinson:2011; @Nielsen:2013; @Stocke:2013; @Werk:2013; @Zhang:2016]) galaxies. Today, we understand the CGM to have an intricate and complex temperature, density, and kinematic structure that is shaped by galactic outflows.
Simulations play an integral role in understanding the physical processes that govern galactic outflows by allowing astronomers to perform experiments testing the validity of different feedback prescriptions. The success of a simulation has traditionally been marked by its ability to reproduce observed galactic disk properties, such as the morphology, Tully-Fisher relation, and star formation rate density [@Schaye:2010; @Dave:2011; @Puchwein:2013; @Stinson:2013; @Christensen:2014]. Requiring that simulations also match the observed structure of the CGM will place strong additional constraints to stellar feedback models.
One such constraint is that stellar feedback must drive galactic outflows that carry a substantial amount of metals along with the gas they expel. Although metals are produced within galactic disks, galaxies retain only $\sim 20-25\%$ of these metals in their stars and ISM [@Peeples:2014]. Data from the Sloan Digital Sky Survey suggest that metals have been lost to outflows [@Tremonti:2004]. Since galaxies have very low metallicities at early cosmic times ($z > 3$), the metals ejected by supernovae serve as excellent tracers of outflowing material and can be used to make predictions for future observations.
These outflows must not only enrich the CGM but also reproduce its multiphase ionization structure. For example, the CGM of galaxies at low redshift contains a substantial amount of metal-enriched, cool gas at $10^4-10^5$K [@Werk:2013; @Werk:2014]. However, cooling times of $\simeq 10^5$K gas are very short compared to galactic timescales, and it is unclear how this material survives in such abundance. The data seem to imply an additional unknown source of non-thermal pressure that supports the warm gas against condensation.
Recent studies have investigated various types of thermal wind-launching mechanisms, including radiation pressure from massive stars [@Kim:2011; @Murray:2011; @Hopkins:2012; @Sharma:2012; @Wise:2012], thermal supernova feedback [@Abadi:2003; @Joung:2009; @Hummels:2012; @Creasey:2013], kinetic supernova feedback [@Hopkins:2012; @Springel:2003; @Agertz:2013], supernova superbubble models, [@Keller:2015; @Keller:2016], and subgrid models tuned to generate strong outflows [e.g. @Springel:2003; @Stinson:2006; @Oppenheimer:2006; @Governato:2012]). However, many of these feedback prescriptions still struggle to produce sufficiently strong galactic outflows. Those that do succeed in expelling gas from the galactic disk underpredict the observed column densities of H I, O VI, and low ions in the CGM. [@Hummels:2013; @Marasco:2015; @Fielding:2017; @Gutcke:2017]. This discrepancy is amplified in massive galaxies for which thermal supernova feedback also struggles to quench star formation [@Pontzen:2013]. In such galaxies, feedback from active galactic nuclei (AGN) is often invoked to regulate the baryon cycling process [e.g. @Suresh:2015; @Tremmel:2017; @Oppenheimer:2017]. Although AGN help recreate some observable properties for massive galaxies, they cannot account for discrepancies in galaxies (such as the Milky Way) that host a small supermassive black hole (SMBH) at their core. It is likely that the missing key is a non-thermal component.
Cosmic rays (CRs) are charged particles that have been accelerated to relativistic speeds in shocks (e.g. SNe @Ackermann:2013, structure formation shocks @Pfrommer:2008 radio-loud AGN @McNamara:2007) and are observed to be roughly in equilibrium with the thermal and magnetic pressures in our galaxy [@Boulares:1990]. Due to past computational constraints, CRs have only recently been included in 3-D hydrodynamical simulations of galactic structure. These simulations have shown that CR feedback drives strong, mass-loaded outflows and suppresses star formation [e.g. @Miniati:2001; @Ensslin:2007; @Jubelgas:2008; @Socrates:2008; @Uhlig:2012; @Vazza:2012; @Booth:2013; @Salem:2014a; @Girichidis:2016; @Simpson:2016; @Samui:2017]. Furthermore, CRs provide pressure support to the thermal gas, which may explain the presence of the observed structures in the CGM that appear to be out of thermal equilibrium.
[@Salem:2016] were the first to show that stellar feedback models that included CR energy were better at matching COS-Halos data for low-ion column densities than those with purely thermal feedback. However, these results depended on the choice of a constant diffusion coefficient, which is only loosely constrained by observations. Furthermore, their simulations neglected magnetic fields, which are crucial for accurate modeling of CR transport. Traditionally, implementations of CR transport have been separated into two approximations: diffusion and streaming. Although both approaches have been successful at driving galactic outflows, the strength and mass-loading factor of CR driven winds depends on the invoked transport mechanism [@Ruszkowski:2017; @Wiener:2017].
In this work, we investigate the role of different CR transport prescriptions in shaping the multiphase structure of the CGM. This paper is structured as follows. In §\[sec:crenzo\], we describe the implementation of CR physics in [*ENZO*]{}. In §\[sec:methods\], we describe the simulation suite and the relevant initial conditions and physical modules used in our isolated disk setup. We present our results in §\[sec:results\], focusing on the generated outflows, temperature structure, and column densities of the different galaxy models. We outline the qualitative differences between CR diffusion and streaming and discuss future prospects in §\[sec:discussion\]. Finally, we provide a summary of our work in §\[sec:summary\].
[^1]
| Run ID | Min. grid size (pc) | $\mathrm{f}_{c}$ | Transport Mode | $\kappa_{\varepsilon}$ \[$\times 10^{28}$ cm$^2$/s\] | $f_s$ | $H_c$ |
|:-------|:--------------------|:-----------------|:---------------|:-----------------------------------------------------|:------|:------|
| **[ncr]{}** | **160** | **-** | **no CRs** | **-** | **-** | **no** |
| | 320 | - | no CRs | - | - | no |
| **[adv]{}** | **160** | **0.1** | **advection only** | **0** | **-** | **no** |
| | 320 | 0.01 | advection only | 0 | - | no |
| | 320 | 0.1 | advection only | 0 | - | no |
| | 320 | 0.3 | advection only | 0 | - | no |
| **[isod]{}** | **160** | **0.1** | **isotropic diffusion** | **3** | **-** | **no** |
| | 320 | 0.01 | isotropic diffusion | 3 | - | no |
| | 320 | 0.1 | isotropic diffusion | 3 | - | no |
| | 320 | 0.3 | isotropic diffusion | 3 | - | no |
| | 320 | 0.1 | isotropic diffusion | 1 | - | no |
| **[anisd]{}** | **160** | **0.1** | **anisotropic diffusion** | **3** | **-** | **no** |
| | 320 | 0.01 | anisotropic diffusion | 3 | - | no |
| | 320 | 0.1 | anisotropic diffusion | 3 | - | no |
| | 320 | 0.1 | anisotropic diffusion | 10 | - | no |
| | 320 | 0.3 | anisotropic diffusion | 3 | - | no |
| | 320 | 0.1 | anisotropic diffusion | 1 | - | no |
| **[anisdh]{}** | **160** | **0.1** | **anisotropic diffusion** | **3** | **-** | **yes** |
| | 320 | 0.1 | anisotropic diffusion | 3 | - | yes |
| **[stream]{}** | **160** | **0.1** | **streaming** | **-** | **4** | **yes** |
| | 160 | 0.1 | streaming | - | 1 | yes |
| | 320 | 0.01 | streaming | - | 4 | yes |
| | 320 | 0.1 | streaming | - | 1 | yes |
| | 320 | 0.1 | streaming | - | 2 | yes |
| | 320 | 0.1 | streaming | - | 4 | yes |
| | 320 | 0.3 | streaming | - | 4 | yes |
\[tab:parameters\]
Cosmic Rays in ENZO {#sec:crenzo}
===================
In the following section, we describe our implementation of the CR fluid into the different Riemann solvers in the adaptive mesh refinement (AMR) magnetohydrodynamics (MHD) simulation code [Enzo]{}. Our implementation builds on the work of [@Salem:2014a], which described the integration of a CR fluid in the [zeus]{} finite-difference solver [@Stone:1992]. Because the current implementation of the [zeus]{} solver in the public version of [Enzo]{} doesn’t support MHD, the primary advantage of our work is the ability to model the interaction of CRs with magnetic field lines. In §\[sec:crfluid\], we describe the new set of conservation equations. In §\[sec:crstreaming\] and §\[sec:crdiffusion\], we describe the algorithm for anisotropic CR streaming and diffusion. In §\[sec:gradient\] we discuss our approach for avoiding unphysical values of CR energy.
Cosmic Ray Fluid {#sec:crfluid}
----------------
As an Eulerian code, [Enzo]{} models gas as a fluid moving through grid cells that are fixed in space. At every timestep, [Enzo]{} advances the state of the fluid in the simulation by numerically approximating a solution to the equations below. These equations encompass the conservation of mass (Equation \[eqn:mass\]), conservation of momentum (Equation \[eqn:momentum\]), the induction equation (Equation \[eqn:induction\]), and the energy equation (Equation \[eqn:energy\]). In simplified terms, these equations assert that the change over time in the value of a given conserved quantity in one cell (the $\frac{\partial }{\partial t}$ term ) is equal to the flux of that conserved quantity through the boundaries of that cell (the $\nabla \cdot ()$ term). Source terms, which encompass both energy gains and losses (for example, an injection of CR energy after a supernova explosion) appear on the right-hand-side of the equality. Traditionally, the evolution of thermal gas is encompassed in Equations \[eqn:mass\] - \[eqn:energy\] (setting the cosmic ray pressure term, $P_c$ to zero). We model the evolution of CRs as an additional ultra-relativistic proton fluid with adiabatic index $\gamma_{c} = 4/3$ [@Jun:1994; @Drury:1986] in Equation \[eqn:crenergy\].
$$\label{eqn:mass}
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho {\bf v}) = 0$$
$$\label{eqn:momentum}
\frac{\partial(\rho {\bf v})}{\partial t} + \nabla
\cdot (\rho{\bf vv}^T + P_g + P_c) = -\rho \nabla {\bf \Phi}$$
$$\label{eqn:induction}
\frac{\partial \bf B}{\partial t} + \nabla\cdot({\bf Bv^{\mathrm{T}}
- vB^{\mathrm{T}}}) = \bf 0$$
$$\label{eqn:energy}
\frac{\partial \varepsilon_g}{\partial t} + \nabla\cdot ({\bf v} \varepsilon_g)
= - P_g\nabla\cdot{\bf v} + H_c +
\Gamma_g + \Lambda_g$$
$$\label{eqn:crenergy}
\frac{\partial \varepsilon_c}{\partial t} + \nabla\cdot {\bf F}_\mathrm{c}
= - P_\mathrm{c} \nabla\cdot{\bf v} - H_\mathrm{c} +
\Gamma_\mathrm{c} + \Lambda_\mathrm{c}.$$
$$\label{eqn:crtransport}
{\bf F}_\mathrm{c} = {\bf v} \varepsilon_{\mathrm{c}} +{\bf v}_\mathrm{s}(\varepsilon_{\mathrm{c}}+P_{\mathrm{c}})-\kappa_{\varepsilon}{\bf b}({\bf b\cdot\nabla}
\varepsilon_{\mathrm{c}})$$
Here, we define $\Phi$ to be the gravitational potential (where $ \bf \nabla ^2 \Phi = 4\pi G \rho_{tot} $), $\rho_{tot}$ to be the total density, and $\rho$, $\bf v$ to be the gas density and velocity. $\bf B$ is the magnetic field strength, and $\bf b =
{\bf B}/|{\bf B}|$ is the magnetic field direction. The superscript $\mathrm{T}$ denotes the vector transpose. The internal gas pressure, $P_g$ is related to the internal thermal energy density $\varepsilon_g = (\gamma_g - 1)P_g$, where $\gamma_g = 5/3$. Similarly, the CR pressure is related to the CR energy density, $\varepsilon_c = (\gamma_c - 1)P_c$, with $\gamma_c = 4/3$. From here on out, we use the subscript ’g’ to refer to properties of the thermal gas, and the subscript ’c’ to refer to properties of the CR fluid. We model the diffusion coefficient, $\kappa_{\varepsilon}$ as a constant, which is observationally constrained to be on the order of $\kappa_{\varepsilon} \simeq
10^{28}$ cm$^2$ s$^{-1}$ [@Ptuskin:2006; @Strong:1998; @Tabatabaei:2013]. Although we neglect CR transport perpendicular to the magnetic field, it could have observable effects [@Kumar:2014]. $\Gamma$ and $\Lambda$ are energy source and sink terms respectively. In our simulations, the source of CR energy is supernova events. We do not implement hadronic CR energy losses.
Equation \[eqn:crtransport\] encompasses the three modes of CR transport that we have implemented. The first term on the left represents advection, in which the CR fluid moves with the bulk velocity of the thermal gas. This term is solved explicitly in the Riemann solvers of [Enzo]{}. The next two terms describe CR streaming and diffusion respectively. These are approximations to CR motion relative to the gas and are therefore solved separately from the advection equation. The implementation of these transport methods is described in detail in sections \[sec:crstreaming\] and \[sec:crdiffusion\].
In the streaming approximation, CRs move relative to the gas with a velocity given by
$${\bf v}_{\mathrm{s}} = -\mathrm{sgn}({\bf b} \cdot {\bf \nabla}\varepsilon_{\mathrm{c}})\, f_s\, {\bf v_A}.$$
Here, $f_s$ is the constant streaming factor described in @Ruszkowski:2017. The Alfv[è]{}n velocity, ${\bf v_A} = {\bf B}/\sqrt{4\pi\rho}$, is the speed of the transverse waves propagating along the magnetic field lines in a plasma. The function $\mathrm{sgn}$ returns the sign of the enclosed expression. In this limit, CRs also transfer energy to the thermal gas through the heating term, $H_c$, where $$H_c = |{\bf v_A}\cdot \nabla P_{c} |.$$ Although this term appears only in the streaming approximation, we follow the example of @Wiener:2017 and include it in some simulations with CR diffusion to isolate the underlying differences in the different transport mechanisms (see Table \[tab:parameters\]).
Cosmic Ray Streaming {#sec:crstreaming}
--------------------
Both CR streaming and diffusion are approximations to CRs scattering off of Alfv[è]{}n waves. The difference lies in the source that is generating these waves. In the streaming approximation, CRs are assumed to drive the growth of Alfv[è]{}n waves through the streaming instability [@Kulsrud:1969]. This is often referred to as the “self-confinement” case [@Zweibel:2017].
We isolate the streaming behavior from the general CR transport equation (Eqn. \[eqn:crtransport\]) using $$\frac{\partial \varepsilon_c}{\partial t} - \nabla \cdot [{\bf v_{\mathrm{s}}}
(\varepsilon_{\mathrm{c}}+P_{\mathrm{c}})] = 0.$$ At each simulation timestep, we calculate CR streaming by updating the value of the CR energy density in each cell. The evolution of the CR energy density in cell $i$ is given by $$\label{eqn:crstreaming}
\varepsilon_{c, i}^{n+1} = \varepsilon_{c, i}^{n} -\Delta t \sum_j
-{\bf \mathrm{sgn}}({\bf b}_{ij} \cdot {\bf \nabla}\varepsilon^n_{\mathrm{c},ij}){\bf v}_{A,ij}
\cdot {\bf n}_{ij} (\Delta x_{ij})^{-1},$$ where $\varepsilon^n_{c,i}$ is the value of the CR energy density in cell $i$ before streaming is applied. $\Delta t$ is the timestep, and the terms ${\bf b}_{ij}$ and $\nabla \varepsilon_{c, ij}$ are the direction of the magnetic field and the gradient of CR energy density computed at cell face $j$. ${\bf n}_{ij}$ describes the plane parallel to face $j$, and $\Delta x_{ij}$ is the length of the cell’s axis that is perpendicular to face $j$. If the cells in the simulation are constructed as cubes, then $\Delta x$ can be treated as a constant and moved out of the summation term.
The streaming time step is set by the bulk motion of the gas and the Alfv[è]{}n velocity, so that $ t_{\mathrm{stream}} < \frac{\Delta x}{|{\bf v}_g| + f_s|{\bf v}_A|}$. We avoid instabilities near local extrema of CR energy density by employing the regularization described in @Sharma:2009 and @Ruszkowski:2017, where $${\bf F}_c = -(\varepsilon_c + P_c)\, {\bf v}_A \,\mathrm{tanh} (h_c\, \hat{\bf v} \cdot \nabla \varepsilon_c / \varepsilon_c).$$ We follow the recommendation of @Ruszkowski:2017 and set the free regularization parameter, $h_c$, to 10 kpc.
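To make the streaming update concrete, the following one-dimensional Python sketch applies a single explicit step using the regularized flux described above. It is only an illustrative stand-alone example (uniform grid, zero-flux outer boundaries, and function and variable names of our own choosing), not the [Enzo]{} routine; the time step passed in is assumed to respect the streaming constraint quoted above, and the streaming factor $f_s$ simply multiplies the Alfv[è]{}n speed as in the definition of ${\bf v}_{\mathrm{s}}$.

```python
import numpy as np

def streaming_step(e_c, rho, B, dx, dt, f_s=1.0, h_c=10.0):
    """One explicit 1-D CR streaming update (illustrative sketch only).

    Follows the continuous streaming flux F = v_s (e_c + P_c) with
    v_s = -sgn(d e_c/dx) f_s v_A, using the tanh regularization in place
    of sgn.  h_c is in the same length units as dx.
    """
    gamma_c = 4.0 / 3.0
    # face-centered averages (faces sit between neighbouring cells)
    e_face = 0.5 * (e_c[1:] + e_c[:-1])
    rho_face = 0.5 * (rho[1:] + rho[:-1])
    B_face = 0.5 * (B[1:] + B[:-1])
    v_A = np.abs(B_face) / np.sqrt(4.0 * np.pi * rho_face)   # Alfven speed
    grad = (e_c[1:] - e_c[:-1]) / dx                         # d(e_c)/dx at faces
    P_face = (gamma_c - 1.0) * e_face                        # P_c = e_c / 3
    # regularized streaming flux along the (1-D) field line
    flux = -(e_face + P_face) * f_s * v_A * np.tanh(h_c * grad / e_face)
    # conservative update; outer boundaries are treated as zero-flux
    e_new = e_c.copy()
    e_new[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return e_new
```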
Cosmic Ray Diffusion {#sec:crdiffusion}
--------------------
In the “extrinsic turbulence” case, Alfvén waves are excited by turbulent motions in the thermal gas. Since the gyroradius of CRs around magnetic field lines is significantly smaller than the best-resolved cells in our simulation, CR transport in this regime is modeled as diffusion parallel to the magnetic field lines. At each timestep, in addition to solving the conservation equations, [Enzo]{} computes CR diffusion according to $$\frac{\partial \varepsilon_c}{\partial t} - \nabla \cdot [\kappa_{\varepsilon}
{\bf b} ({\bf b} \cdot \nabla \varepsilon_c )] = 0.$$ With diffusion, the updated CR energy density in a grid cell $i$ follows the prescription $$\label{eqn:crdiffusion}
\varepsilon_{c, i}^{n+1} = \varepsilon_{c, i}^{n} + \Delta t
\sum_j \kappa_{\varepsilon} ({\bf b}_{ij} \cdot \nabla \varepsilon_{c, ij}^n)
{\bf b}_{ij} \cdot {\bf n}_{ij} (\Delta x_{ij})^{-1},$$ where $\Delta t$ is the diffusion timestep.
In the case of isotropic diffusion, Equation \[eqn:crdiffusion\] becomes
$$\label{eqn:isocrdiffusion}
\varepsilon_{c, i}^{n+1} = \varepsilon_{c, i}^{n} + \Delta t
\sum_j \kappa_{\varepsilon}\nabla \varepsilon_{c, ij}^n
\cdot {\bf n}_{ij} (\Delta x_{ij})^{-1}.$$
We employ an explicit diffusion scheme because we find that our simulation time step is not limited by the diffusion time step constraint of $t_{\mathrm{diff}} < \frac{1}{2N}\frac{\Delta x^2}{\kappa_{\varepsilon}}$, where $N$ is the dimensionality of our simulation. For a detailed discussion on numerically modeling semi-implicit anisotropic CR diffusion, see @Pakmor:2016b.
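As a quick, hedged illustration of this constraint (the 100 pc cell size below is purely hypothetical and is not the resolution of our simulations):

```python
KPC_CM = 3.086e21   # centimeters per kiloparsec

def max_explicit_diffusion_dt(dx_cm, kappa, ndim=3):
    """Largest stable timestep for explicit diffusion, dx^2 / (2 N kappa)."""
    return dx_cm**2 / (2.0 * ndim * kappa)

# for an illustrative 100 pc cell and the fiducial kappa = 3e28 cm^2/s:
dt_max = max_explicit_diffusion_dt(0.1 * KPC_CM, 3.0e28)   # ~5e11 s ~ 17 kyr
```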
Limiting the Gradient {#sec:gradient}
---------------------
For some geometrical configurations, using a simple estimate of the CR energy gradient when calculating anisotropic diffusion violates the second law of thermodynamics and leads to an unphysical flow of CR energy from cells with lower energy density to cells with higher energy density. In more extreme cases, this leads to some cells developing unphysical (negative) values of CR energy. To combat this issue, we employ a limited gradient as described in @VanLeer:1977.
The simple gradient only considers the flux through a given cell face, so that $$\nabla \varepsilon_{c} = \frac{\varepsilon_{c, i+1} - \varepsilon_{c, i}}{\Delta x_i}$$ where $i$ and $i+1$ are the indices of neighboring cells.
Where the simple gradient produces an unphysical flux, we estimate the limited CR energy density gradient on the interface [@Sharma:2007; @Pakmor:2016b] to be $$\nabla \varepsilon_{c} = \frac{4}{\sum_n (\nabla \varepsilon_{c}^n)^{-1}}.$$
This approximation takes into account the component of the gradient that is parallel to the cell face, $n$.
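A hedged Python sketch of this limiter is shown below. The harmonic mean follows the expression above; returning zero when the one-sided estimates disagree in sign is an assumption in the spirit of @VanLeer:1977 and @Sharma:2007, not necessarily [Enzo]{}’s exact behavior.

```python
import numpy as np

def limited_gradient(one_sided):
    """Limit the transverse CR energy gradient at a cell face.

    `one_sided` holds the four one-sided estimates of the gradient at the
    face.  If they disagree in sign -- the situation that produces an
    unphysical flux -- zero is returned; otherwise the harmonic mean
    4 / sum(1 / g_n) is used.
    """
    g = np.asarray(one_sided, dtype=float)
    if np.any(g == 0.0) or not (np.all(g > 0.0) or np.all(g < 0.0)):
        return 0.0
    return 4.0 / np.sum(1.0 / g)
```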
Numerical Methods {#sec:methods}
=================
We conduct our work using [Enzo]{}, an open source multi-physics MHD astrophysical simulation code that employs AMR to resolve areas of interest [@Collins:2010; @Bryan:2014]. At each time step, [Enzo]{} solves the Riemann conservation equations. We employ the local Lax-Friedrichs Riemann solver (LLF; @Kurganov:2000) to compute the flux at cell interfaces and spatial reconstruction is performed with the piecewise linear method (PLM; @VanLeer:1977). Time stepping is carried out with the total variation diminishing (TVD) 2nd order Runge-Kutta (RK) scheme [@Shu:1988]. To avoid creating unphysical magnetic monopoles, we employ a hyperbolic divergence cleaning approach first described by @Dedner:2002. Interested readers should see @Wang:2008 [@Wang:2009] for detailed discussions and extensive testing of the MHD formulation in [Enzo]{}.
Stellar Feedback {#sec:supernova}
----------------
Star formation in our simulations follows the prescription described in @Cen:1992 with minor modifications. Star particles are formed only in grids at the maximum level of refinement that meet predetermined density, mass, and minimum dynamical time thresholds: $\rho_{\mathrm{cell}} > \rho_{\mathrm{thres}}$, $M_{\mathrm{cell}} > M_{\mathrm{thres}}$, $t_{\mathrm{cool, cell}} < t_{\mathrm{dyn, min}}$. The exact values for $\rho_{\mathrm{thres}}, M_{\mathrm{thres}}, t_{\mathrm{dyn, min}}$ depend on the resolution of the simulation, because a highly refined cell may not contain enough mass to satisfy the formation criteria. In our simulations, we choose $\rho_{\mathrm{thres}} = 3.0 \times 10^{-26}$ g cm$^{-3}$, $M_{\mathrm{thres}} = 3.0 \times 10^5 M_{\odot}$, and $t_{\mathrm{dyn, min}} = 10$ Myr. Additionally, the gas in the parent cell must be collapsing (determined by a negative velocity divergence) and Jeans unstable. If all conditions are met, the parent cell produces a star cluster particle with 10% mass efficiency, so that the initial mass is given by $M^{\mathrm{init}}_{\mathrm{SC}} = 0.1 \rho_g \Delta x^3$.
Over the course of 120 Myr, 25% of that mass is ejected into the cell in which the star cluster particle resides, modeling the effects of Type II supernovae. We inject $10^{51}$ ergs of energy for every $42 M_{\odot}$, so that a star cluster particle of $M = 3.0\times 10^{5} M_{\odot}$ expels a total of $E_{tot} = 5.4 \times 10^{54}$ ergs. In our model, this total energy is composed of thermal, magnetic, and CR energy, so that $E_{th} = f_{th}E_{tot}$, $E_{B} = f_{B}E_{tot}$, and $E_{cr} = f_{cr}E_{tot}$, where $f_{th} + f_{B} + f_{cr} = 1$. We adopt conservative estimates of $f_{B} = 0.01$ and $f_{cr} = 0.1$ [@Wefel:1987; @Ellison:2010] and assume 2% of the ejecta to be metals.
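The Python sketch below summarizes this bookkeeping. The thresholds, the 10% efficiency, and the energy partition are the values quoted above; the function names and argument forms are illustrative and do not correspond to [Enzo]{}’s actual star formation routines.

```python
RHO_THRES = 3.0e-26    # g cm^-3
M_THRES   = 3.0e5      # Msun
T_DYN_MIN = 10.0       # Myr

def forms_star_cluster(rho, m_cell, t_cool, div_v, jeans_unstable):
    """True if a maximally refined cell satisfies all formation criteria."""
    return (rho > RHO_THRES and m_cell > M_THRES and t_cool < T_DYN_MIN
            and div_v < 0.0 and jeans_unstable)

def cluster_mass(rho_gas, dx):
    """Initial star cluster mass: 10% of the gas mass in the parent cell."""
    return 0.1 * rho_gas * dx**3        # units set by rho_gas and dx

def feedback_energies(m_cluster_msun, f_B=0.01, f_cr=0.1):
    """Split the total SN energy (erg) into thermal, magnetic, and CR parts."""
    e_tot = 1.0e51 * m_cluster_msun / 42.0   # 10^51 erg per 42 Msun
    f_th = 1.0 - f_B - f_cr
    return f_th * e_tot, f_B * e_tot, f_cr * e_tot
```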
Chemistry and Cooling {#chemistry}
---------------------
In our simulations, we explicitly track all ion species of hydrogen. All other species are tracked together in a metallicity variable. We calculate cooling using the GRACKLE chemistry library [@Bryan:2014; @Smith:2017] that is integrated with [Enzo]{}. It uses pre-computed tables of metal cooling rates as functions of gas density and temperature, generated with CLOUDY [@Ferland:2013]. In addition, we consider uniform photoelectric heating of $8.5\times10^{-26}$ ergs s$^{-1}$ cm$^{-3}$ without self-shielding [@Tasker:2008]. In our idealized isolated disk setup, we do not consider the ultraviolet background radiation from distant quasars and galaxies.
Synthetic Observations {#sec:trident}
----------------------
We use [*Trident*]{} [@Hummels:2017], a Python-based tool integrated with yt [@Turk:2012], to construct ion densities and generate synthetic spectra from our simulated datasets. For ions that are not explicitly tracked by the simulation, we determine their number densities, $n_{X}$, in post-processing with $$n_{X} = n_H Z \bigg(\frac{n_X}{n_H}\bigg)_{\odot},$$ where $n_H$ is the total hydrogen number density, $Z$ is the metallicity (tracked throughout the simulation), and $(n_X/n_H)_{\odot}$ is the solar number abundance of the element in question.
Trident computes relative ion abundances in a simulation cell by considering both photoionization from an extragalactic UV background [@Haardt:2012] and collisional ionization within the gas. The collisional ionization rate is determined by the cell’s temperature, density, and metallicity, using an extensive set of lookup tables precalculated by CLOUDY, which assume ionization abundances as predicted by collisional ionization equilibrium (CIE). CIE holds even when CR pressure dominates over thermal pressure because the underlying assumption of our model is that the CR fluid is collisionless. Any deviations from CIE (which are more likely in the dense regions of the galactic disk than in the CGM) would likely increase the ionization rate in low-temperature gas [@Oppenheimer:2013].
We use this functionality in [*Trident*]{} to plot column densities of different ions as a function of impact parameter in Figure \[fig:col\_density\]. We calculate the column densities by defining sight-lines through the simulation box. The ion number density is then calculated in each length element, $dl$, along this projected ray. The column density along the line-of-sight is then given by the summation $n_{\mathrm{col}} = \sum dl \cdot n$.
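A sketch of this procedure with yt and [*Trident*]{} is shown below. The dataset name and ray endpoints are placeholders, and the ion field names follow the usual yt/Trident convention (e.g., O VI corresponds to `O_p5_number_density`); both should be checked against the installed versions.

```python
import numpy as np
import yt
import trident

ds = yt.load("DD0150/DD0150")                     # placeholder Enzo output
trident.add_ion_fields(ds, ions=["H I", "O VI"])  # adds ion number densities

start = ds.arr([100.0, 0.0, 50.0], "kpc")         # placeholder sightline
end   = ds.arr([100.0, 400.0, 50.0], "kpc")
ray = ds.ray(start, end)

# path length dl of each cell segment: fractional lengths times the ray length
ray_length = np.sqrt(((end - start).to("cm") ** 2).sum())
dl = ray["dts"] * ray_length

N_HI  = (ray["gas", "H_p0_number_density"] * dl).sum()   # column densities
N_OVI = (ray["gas", "O_p5_number_density"] * dl).sum()
print(N_HI.to("cm**-2"), N_OVI.to("cm**-2"))
```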
![\[fig:massload\] The time-evolution of the integrated mass loading factor (the total ejected gas mass / total mass in stars), measured at 150 kpc from the galactic center. At late times, models with anisotropic CR transport continue driving gas out of the galaxy, even when star formation is declining. ](mass_loading.pdf){width="45.00000%"}
Initial Conditions {#sec:ICs}
------------------
We simulate a suite of idealized isolated disk galaxies with initial conditions described by the AGORA collaboration [@Kim:2014]. The fixed dark matter halo of mass $M_{200} = 1.074 \times 10^{12} M_{\odot}$ follows the Navarro-Frenk-White (NFW; @Navarro:1997) profile and is situated in a hot ($10^6$ K), stationary, uniform density box of (1.31 Mpc)$^3$. The concentration and spin parameters are respectively defined to be $c = 10,$ $\lambda = 0.04$. The gaseous disk has a total mass of $M_d = 4.297\times 10^{10} M_{\odot}$ and follows the analytic exponential profile
$\rho(r,z) = \rho_0 e^{-r/r_d}e^{-|z|/z_d}$
where $r_d = 3.432$ kpc, $z_d = 343.2$ pc, and $\rho_0 = M_d f_{\mathrm{gas}}/(4\pi r^2_d z_d)$. We define $f_{\mathrm{gas}} = 0.2$ to be the gas mass fraction of the disk. The rest of the disk mass (80%) is contained in $10^5$ stellar particles. The stellar bulge has a mass of $4.297 \times 10^9 M_{\odot}$ and follows the Hernquist density profile [@Hernquist:1990].
We include an initial toroidal magnetic field of strength $\mathrm{B_0} = 1 \mu$G in the disk and a toroidal field of $\mathrm{B_0} = 10^{-15}$ G in the halo. The strong initial field in the disk follows the example of [@Ruszkowski:2017] to achieve sufficiently fast CR streaming velocities at early simulation times. The strength of the initial halo field lies in the accepted theoretical range of primordial magnetic fields, $10^{-20} - 10^{-9}$ G [@Cheng:1994; @Durrer:2013]. To avoid numerical interpolation errors, galaxy models that include CR physics are initialized with an isotropic background CR energy density of 0.1 erg/cm$^3$. This background CR pressure is 15 orders of magnitude weaker than the initial gas pressure and does not alter the thermal gas in any significant way. Because we are interested in tracing the cycling of metals, we set the initial metallicities of the disk and halo to $0.3 Z_{\odot}$ and $10^{-3} Z_{\odot}$, respectively.
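For reference, a short Python sketch of this initial gas profile with the parameters quoted above (the constants and function name are ours, for illustration only):

```python
import numpy as np

MSUN = 1.989e33            # g
KPC  = 3.086e21            # cm

M_d   = 4.297e10 * MSUN    # total disk mass
f_gas = 0.2                # gas mass fraction of the disk
r_d   = 3.432 * KPC
z_d   = 0.3432 * KPC
rho_0 = M_d * f_gas / (4.0 * np.pi * r_d**2 * z_d)

def disk_density(r, z):
    """Initial gas density (g cm^-3) at cylindrical radius r and height z (cm)."""
    return rho_0 * np.exp(-r / r_d) * np.exp(-np.abs(z) / z_d)

# e.g. the midplane gas density at r = 8 kpc, roughly the solar circle
print(disk_density(8.0 * KPC, 0.0))
```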
Description of Simulation Suite {#sec:suite}
-------------------------------
Our simulation suite is designed to isolate the effect of different implementations of CR transport. All of the galaxy models share the initial conditions described above. The fiducial models differ from each other only in their CR transport prescription and are described below:
- Model [ncr]{} does not include CR physics and serves as our control model.
- Model [adv]{} only simulates CR advection with the bulk motion of the gas.
- Model [isod]{} assumes isotropic CR diffusion using the algorithm described in @Salem:2014a with a constant diffusion coefficient of $\kappa_{\varepsilon} = 3\times10^{28}$ cm$^2$ s$^{-1}$.
- Model [anisd]{} assumes anisotropic CR diffusion along magnetic field lines with a constant diffusion coefficient of $\kappa_{\varepsilon} = 3\times10^{28}$ cm$^2$ s$^{-1}$.
- Model [anisdh]{} builds on model [anisd]{} with an added CR heating term, $H_c$.
- Model [stream]{} assumes CR streaming with a streaming factor of $f_s = 4$ [@Ruszkowski:2017].
All of our fiducial models that include CR physics (in bold font in Table \[tab:parameters\]) inject 10% of their supernova feedback energy in the form of CRs. See Table \[tab:parameters\] for a summary of the different models.
Results {#sec:results}
=======
![\[fig:sfr\] The star formation rate as a function of time for galaxy models with different CR transport prescriptions. Model [adv]{} quenches after the first episode of star formation. The star formation in model [isod]{} consistently lies just below that of the control model [ncr]{} with no CRs. Model [anisd]{} briefly matches the star formation in the control around t = 2.5 Gyr, but is ultimately quenched around t = 10 Gyr. Model [stream]{} has a variable star formation history with a period of roughly 2 Gyr. ](sfr.pdf){width="45.00000%"}
Outflows {#sec:outflows}
--------
The CGM in the isolated galaxy models is enriched solely by the outflows that expel gas from the disk. For this reason, we first turn our attention to analyzing the CR-driven winds and their dependence on the CR transport mechanism.
In broad strokes, a CR-driven wind begins when CRs move down their gradient, out of the midplane of the galaxy. CR pressure support then lifts low-entropy gas out of the galactic potential well, triggering outflows. Figure \[fig:outflows\] displays the relationship between gas entropy (top row), the vertical component of the velocity of the galactic winds (middle row), and the ratio of CR to thermal pressures (bottom row) after 2 Gyr of evolution. From left to right, the columns show models with no CR physics ([ncr]{}), CR advection only ([adv]{}), isotropic CR diffusion ([isod]{}), anisotropic CR diffusion ([anisd]{}), and CR streaming ([stream]{}).
Although models [ncr]{} and [adv]{} experienced a brief period of weak gas expulsion, by t = 2 Gyr these galaxies have lost all signs of outflows and show no signs of strong inflows. This is reinforced by the gas entropy profiles, which resemble the initial conditions. The main difference between models [ncr]{} and [adv]{} at this time is near the midplane of the galaxy, where the CR pressure of model [adv]{} has created a thicker vertical disk profile. Confined to move solely through advection, the CRs in model [adv]{} have no efficient mechanism for escaping the disk.
Models with CR transport relative to the gas ([isod]{}, [anisd]{}, and [stream]{}) all drive strong outflows with velocities reaching $10^2$ km/s, consistent with previous works [e.g. @Salem:2016; @Pakmor:2016a; @Wiener:2017].
Model [isod]{} drives relatively thin and uniform conical outflows. The gas entropy profile is relatively unaltered, reaching values of roughly $5\times10^2\ \mathrm{cm^2\ keV}$ near the midplane of the disk. CR pressure dominates over thermal pressure, tracing the shape of the outflows out to a cylindrical radius of 10 kpc. Outside of the active outflow region, the CR pressure is below one tenth of the gas pressure.
The outflows generated with anisotropic CR diffusion in model [anisd]{} have a thinner radial profile and reach higher velocities compared to those driven by isotropic diffusion. For $z > 20$ kpc outside of the midplane, there are signs of infalling gas surrounding the outflow. The CR pressure dominates over thermal pressure for nearly all radii in the $100\times100$ kpc projection. The gas entropy reaches values of 50 cm$^2$ keV just outside the disk, roughly an order of magnitude lower than that in model [isod]{}.
Compared to both of the diffusion models, the outflows generated by CR streaming start higher above the midplane of the disk and have a wider horizontal extent, reaching radii of nearly 40 kpc at a vertical height of 50 kpc. Near the galactic center, there are several filaments of inflowing gas. Compared to model [anisd]{}, model [stream]{} has lower gas entropy immediately outside the disk and at larger radii, maintaining values around $5\times10^2$ cm$^2$ keV at radii of 50 kpc. Unlike either of the diffusion models, the distribution of the CR pressure ratio in model [stream]{} is patchy, ranging from 0.2 to 50 times that of the thermal pressure.
Figure \[fig:velz\] follows the time evolution of the density-weighted outflow velocity as a function of the height above the galactic disk, for models with different CR transport prescriptions. At early times, gas in our control model, [ncr]{}, is inflowing at all radii within 200 kpc of the galactic center. After 2 Gyr, weak ($\mathrm{v_z} = 10$ km/s) outflows develop at heights above 30 kpc. As we shall see in Figure \[fig:ion\_density\], these weak outflows fail to enrich the CGM with metals. Instead, we turn our attention to models [isod]{}, [anisd]{}, and [stream]{}, for which 10% of supernova feedback was injected as CR energy.
Model [isod]{} has the strongest outflows at t = 1 Gyr, with radially-averaged velocities surpassing 100 km/s. By t = 4 Gyr, these outflows weaken significantly, with average velocities hovering around 10 km/s. After 8 Gyr, notable inflows develop at radii above 30 kpc and persist for the rest of the simulation time.
This galactic wind is consistent with expectations of isotropic CR diffusion. The initial burst of supernova feedback in model [isod]{} creates a steep gradient of CR energy density. With isotropic diffusion, CRs drag gas out of the galaxy as they travel down their own gradient, uninhibited by the presence of magnetic fields. This process continues until the CR gradient flattens, slowing down the diffusion process. At later times, the star formation has decreased enough that the newly-injected CRs can no longer create a gradient steep enough to drive a strong wind. As the CRs in the CGM continue to diffuse away from the galaxy, they cannot provide enough pressure support to the gas they expelled, and that gas begins to collapse back onto the galaxy.
Model [anisd]{} retains consistent outflow velocities within 50 kpc of the disk throughout its evolution. Beyond 50 kpc, the winds are sensitive to the star formation history. For example, the double-valley feature at t = 8 Gyr traces the suppressed star formation at t = 7 Gyr (see Figure \[fig:sfr\]).
The steady nature of the galactic outflows in model [anisd]{} is fueled by anisotropic CR diffusion. In this approximation, the velocity with which CRs can escape the disk is regulated by the complex geometry of magnetic field lines. Therefore, the galaxy releases its built-up CR energy slowly over time. Although the galaxy is quenched after t = 10 Gyr, weak outflows in model [anisd]{} persist out to t = 13 Gyr.
Like in model [anisd]{}, the key to sustained outflows in model [stream]{} is its anisotropic CR transport. Unlike any other fiducial run, model [stream]{} shows signs of inflow near the disk while simultaneously hosting outflows at larger radii.
The integrated mass loading factor (IMLF), defined as the ratio of the total ejected gas mass to the total stellar mass, quantifies the expelled gas content. Figure \[fig:massload\] shows the evolution of the IMLF in models [isod, anisd, anisdh,]{} and [stream]{}. We exclude models [ncr]{} and [adv]{}, which did not drive outflows, from this analysis.
For the first 1.5 Gyr, the IMLF is nearly indistinguishable between models [isod, anisd]{} and [anisdh]{}. The low IMLF at early times in model [stream]{} is most likely due to the later onset of the galactic wind as well as the wind’s location above the galactic midplane.
At later times, the IMLF decreases in model [isod]{}, yet increases in the other three models. Model [anisd]{} reaches the highest IMLF due to the accumulated reservoir of CR energy near the galactic disk that continues to drive outflows even after star formation has ceased. Models [anisdh]{} and [stream]{} have weaker IMLFs than model [anisd]{} due to higher star formation rates and losses of CR energy through the heating term.
Star Formation and Morphology {#sec:morphology}
-----------------------------
The relationship between CR transport and a galaxy’s star formation rate (SFR) can dramatically influence a galaxy’s morphology. The outflows generated by different CR transport prescriptions remove gas from the disk which would have otherwise been available for star formation. In addition, the presence of strong CR pressure in the disk can prevent star formation by stabilizing the thermal gas against collapse. In this section, we contrast the morphologies of our fiducial galaxy models after evolving the simulations for 13 Gyr. Because a galaxy’s morphology is so intricately tied to its star formation history, we describe Figures \[fig:density\] and \[fig:sfr\] simultaneously.
Figure \[fig:density\] displays the density-weighted face-on and edge-on projections of the gas density in our fiducial galaxy models after 13 Gyr of evolution. In the left panel, model [ncr]{} serves as the benchmark example of a MW-type disk galaxy. This galaxy has a clear spiral structure, with the disk extending to a radius of roughly 25 kpc and a vertical height of roughly 3 kpc. The star formation rate of model [ncr]{} begins at one solar mass per year and slowly decreases over time, hovering around a few tenths of a solar mass per year at later times. With a modest star formation history and no galactic winds, model [ncr]{} retains much of its gas in its disk, with average density values around $3\times10^{-25}$ g/cm$^3$.
Although model [adv]{} has had an outflow history similar to that of [ncr]{}, its star formation history has dramatically altered its morphology. Model [adv]{} quenched only 200 Myr after its initial burst of star formation (see Figure \[fig:sfr\]). Confined to advection-only transport, the CRs in this galaxy are trapped inside the galactic disk. After the first episode of star formation induces CR feedback, CR pressure dominates over gas pressure (see the bottom-left panel of Figure \[fig:outflows\] and the discussion in §\[sec:outflows\]). This additional pressure stabilizes the gas against collapse, halting future star formation. Without galactic outflows to carry CRs out of the disk and without any new stars to create supernovae that could potentially drive outflows, this galaxy will remain quenched. Although model [adv]{} has a substantial reservoir of gas in its disk, CR pressure expands its vertical profile, keeping the gas at lower average densities.
The star formation history of model [isod]{} most closely follows that of model [ncr]{}. Episodes of star formation that expel CRs into the disk keep the SFR of model [isod]{} consistently below that of the control. However, because isotropic diffusion is efficient at removing CRs from the galactic disk, star formation is only weakly suppressed in this galaxy. The morphology of model [isod]{} is qualitatively similar to that of model [ncr]{} out to a cylindrical radius of 15 kpc. Having expelled much of its ISM through outflows, model [isod]{} has a shorter radial and vertical extent to its gas.
Anisotropic CR transport in model [anisd]{} retains CR pressure in the disk, which suppresses star formation at early times. As galactic outflows develop, the CR pressure is lifted out of the disk, allowing for star formation to resume. After roughly 2 Gyr of evolution the star formation in model [anisd]{} briefly matches that of the control model. The injected CR pressure following this star formation episode decreases future star formation, ultimately quenching the galaxy after 10 Gyr. CR heating in model [anisdh]{} relieves some of the CR pressure in the disk, leading to a higher star formation rate than that in model [anisd]{}. Although at the time of the snapshot in Figure \[fig:density\] model [anisd]{} has been quenched for 3 Gyr, this galaxy model retains some spiral structure and high gas densities within a 15 kpc radius of its center. The lingering CR pressure from past star formation creates an extended low-density gas profile around the disk.
Both models [anisd]{} and [stream]{} have extended, thick gaseous profiles that are most apparent in the edge-on view. Model [anisd]{} has a bimodal distribution of gas, with a core around $2\times10^{-25}$ g/cm$^3$ extending to 15 kpc, surrounded by a less dense gaseous halo extending to 25 kpc. Although some spiral structure is present, it is significantly less pronounced than in model [isod]{}. Model [stream]{} has relatively low gas density in the disk with a spiral arm structure that is thinner than that of models [ncr]{} or [isod]{}.
Because of the toroidal geometry of the initial magnetic field, the CRs in model [stream]{} suppress star formation for the first 500 Myr. Once the magnetic field develops a sufficiently strong vertical component, the CRs escape the disk, dragging thermal gas along with them. The outflows relieve the ISM of the CR pressure, allowing star formation to resume. Filaments of inflowing gas near the midplane of the disk (see Figure \[fig:outflows\]) supply the additional gas necessary for extended episodes of star formation. Compared to the other galaxy models, model [stream]{} has the lowest gas densities and the most extended vertical and radial disk profile.
The Simulated Circumgalactic Medium
-----------------------------------
We now turn our attention to the different ionization structures within the CGM of the simulated galaxy models. Our control model, [ncr]{}, has limited amounts of metals outside of the galactic disk. This is an unrealistic result and we will not dwell on its implications here. Our goal is to isolate the effects of CR transport on the CGM’s temperature and ionization structure, and so the following figures and discussions will focus on models [isod]{}, [anisd]{}, and [stream]{}. Where relevant, we discuss model [anisdh]{}, which includes the heating term, $H_c$, present in the CR streaming prescription. Although the heating term is an unphysical addition to CR diffusion, its presence helps isolate which aspect of the CR streaming or diffusion mechanism is responsible for observed differences in temperature and ionization states [@Wiener:2017].
Figure \[fig:ion\_density\] shows the density-weighted projections of gas metallicity and temperature and the unweighted projections of H I and O VI column densities after 13 Gyr of evolution. Each column shows a different mode of CR transport. Starting from the left, the columns show results for galaxy models [ncr]{}, [isod]{}, [anisd]{}, [anisdh]{}, and [stream]{}.
The outflows in models with CR diffusion or streaming have populated their CGM with metals from the disk out to radii surpassing 200 kpc. Model [isod]{} has a relatively uniform distribution of metallicity in its CGM, with a density-weighted average value around 0.2 Z$_{\odot}$. Similarly, its temperature, H I and O VI column densities are also spatially uniform. CR pressure support creates a cooler temperature profile (roughly $3\times10^{5}$ K) compared to the control. The cooler temperatures result in stronger column densities of H I and O VI compared to model [ncr]{}.
Anisotropic diffusion in model [anisd]{} produces a steep temperature and H I column density gradient. The gas is coolest near the disk, where CR pressure is strongest, and its temperature rises radially outwards. The H I column density follows the shape of the temperature profile, extending out to where the density-weighted temperature is $10^6$ K. The O VI column density is sensitive to metallicity, so the column density profile traces the radial extent of the outflows. Compared to the other fiducial runs, model [anisd]{} has the strongest column densities of H I and O VI. We point out that although this model has been quenched for 3 Gyr, the gradual release of CR pressure results in the accumulation of cold gas just outside the disk.
The metallicity distribution of model [anisdh]{} is qualitatively similar to that of model [anisd]{}. However, the added heating term dramatically decreases the impact of CR pressure, resulting in warmer temperatures and weaker column densities of H I and O VI than those in model [anisd]{}.
The outflows in model [stream]{} reach larger radii than those in model [anisd]{}. The temperature profile is dominated by clumps of cool and warm gas extending out to radii of 150 kpc. Model [stream]{} has a patchy distribution of both H I and O VI column densities, with both coexisting in relative abundance at radii above 100 kpc from the galactic center.
Temperature Distribution
------------------------
To better understand the CGM structure, we analyze the abundance of O VI as a function of temperature and density in Figure \[fig:dtm\_phase\]. Since we are primarily interested in the gas in the CGM, we exclude data points contained within a cylindrical radius of 25 kpc and a vertical height of 3 kpc of the galactic center.
The CGM of models [ncr]{} and [adv]{} has low masses of O VI and scarce amounts of low-density gas compared to the other models. Since low density values trace large radii in the CGM, the artificially high density floors are due to the lack of outflows. Suppressed star formation in model [adv]{} is responsible for the low metallicities in its disk that ultimately result in low masses of O VI.
Models with additional CR transport drive strong outflows that enrich the CGM with metals (see Figure \[fig:ion\_density\]). This enrichment creates higher column densities of O VI at large radii. However, the different prescriptions for CR transport result in varied phase structures within the CGM.
One such difference is the shape of the temperature-density phase diagram. Models with weak CR pressure support (models [isod]{} and [anisdh]{}) have less cool gas at low densities than models [anisd]{} and [stream]{}. The strong CR pressure in models [anisd]{} and [stream]{} supports a wide range of temperatures at each density. This feature is most pronounced at low densities in model [stream]{}.
CR transport also affects the abundance of O VI and the temperature of the gas that produces it. Both models [stream]{} and [anisd]{} predict a reservoir of O VI ionized with the help of CR pressure support in low-density gas. However, only model [stream]{} predicts an abundance of O VI photoionized with gas temperatures around $3\times 10^5$ K, consistent with predictions in @McQuinn:2017.
Figure \[fig:temperature\] shows the total enclosed mass as a function of spherical radius for six different galaxy models. The colors of the bars denote the fractional distribution of the gas temperature at different radial shells. We consider three temperature bins of gas: cold gas ($T < 10^5$ K) in dark purple, warm gas ($10^5$ K $< T < 10^6$ K) in medium purple, and hot gas ($T > 10^6$ K) in light purple. The colored gas fraction is not cumulative since the cold mass contribution from the disk would dominate the distribution in some cases.
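A sketch of how such a decomposition can be computed with yt is given below. The dataset name and radial bin edges are illustrative, the temperature cuts are those quoted above, and the field names follow yt's standard conventions (to be checked against the installed version).

```python
import numpy as np
import yt

ds = yt.load("DD0150/DD0150")                   # placeholder Enzo output
sp = ds.sphere("c", (200.0, "kpc"))             # CGM-scale sphere

r = sp["index", "radius"].to("kpc").value
T = sp["gas", "temperature"].to("K").value
m = sp["gas", "cell_mass"].to("Msun").value

r_edges = np.linspace(0.0, 200.0, 9)            # illustrative radial bins
cold = np.histogram(r, bins=r_edges, weights=m * (T < 1e5))[0]
warm = np.histogram(r, bins=r_edges, weights=m * ((T >= 1e5) & (T < 1e6)))[0]
hot  = np.histogram(r, bins=r_edges, weights=m * (T >= 1e6))[0]

total = cold + warm + hot
frac_cold = cold / np.maximum(total, 1e-30)     # per-shell cold fraction
```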
Having driven weak outflows, model [ncr]{} has trace amounts of cold or warm gas outside of the disk. In contrast, although model [adv]{} had similarly weak winds, the presence of CR pressure keeps the gas immediately around the disk below $10^6$ K. The influence of CR pressure drops abruptly beyond a radius of 50 kpc. Because the CR pressure in the disk stopped star formation almost immediately, model [adv]{} has more gas available both inside and outside of its disk.
Models in which CRs can move relative to the gas (via diffusion or streaming) have an altered temperature structure in their CGM. Model [isod]{} is dominated by warm gas out to large radii. Since isotropic diffusion depends solely on the direction and magnitude of the CR energy density gradient, the CR pressure distribution is nearly uniform throughout the halo. Over time, with more stellar feedback releasing CRs into the disk, the CR pressure accumulates in the halo, providing pressure support to the thermal gas.
Models with anisotropic CR transport exhibit a multiphase temperature structure. Models [anisd]{} and [anisdh]{} are both dominated by cold gas near the disk. In both models, the cold gas extends out to radii of 125 kpc and the warm gas extends to radii of 200 kpc.
Comparing this result with Figure \[fig:ion\_density\], we see the cool gas, traced by H I column densities, decrease radially away from the center. The added heating term in model [anisdh]{} converts some CR pressure, which is responsible for supporting the warm and cold gas, into heating the gas, consistent with results in [@Wiener:2017]. Even with the heating term, the distribution of warm gas in model [anisdh]{} is still similar to that of [anisd]{}.
Model [stream]{} shows signs of a true multiphase medium. This model has the lowest cumulative gas mass in its disk, likely due to its low gas densities and recent episodes of star formation (see Figure \[fig:sfr\]). Although warm gas dominates its CGM out to radii of 150 kpc, cold gas survives in abundance 200 kpc away from the galactic center. Using Figure \[fig:ion\_density\], we see that the temperature structure in model [stream]{} does indeed result in a patchy, multiphase medium.
Ionization Structure
--------------------
Figure \[fig:col\_density\] shows the column densities of H I, Si IV, C III, and O VI as a function of the spherical radius from the center of the galaxy. Each scatter point in the plot represents one column density measurement at that radius. In each panel, we include points from both the fiducial model and its low-resolution counterpart. The solid black lines show the average column densities for the fiducial run while the dashed black lines show the average column densities for the lower resolution run with the same physics. For details on how the column densities were constructed, see §\[sec:trident\].
To sufficiently sample our simulation space, we calculate the column densities along randomly oriented sightlines passing through the CGM. To generate a sightline, we first select a random point on a sphere of radius $r \in [10, 200]$ kpc centered on the galaxy. We then choose the sightline by selecting a random direction in the plane tangent to the sphere at that point.
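A minimal NumPy sketch of this sightline construction is shown below; the ray half-length and the random seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_sightline(r_min=10.0, r_max=200.0, half_length=300.0):
    """Return (start, end) of a random sightline through the CGM, in kpc."""
    r = rng.uniform(r_min, r_max)          # radius of the impact point
    p = rng.normal(size=3)                 # isotropic random direction
    p /= np.linalg.norm(p)
    point = r * p                          # random point on the sphere
    # build an orthonormal basis of the plane tangent to the sphere at `point`
    a = np.cross(p, [0.0, 0.0, 1.0])
    if np.linalg.norm(a) < 1e-8:           # p parallel to z: use another axis
        a = np.cross(p, [0.0, 1.0, 0.0])
    a /= np.linalg.norm(a)
    b = np.cross(p, a)
    phi = rng.uniform(0.0, 2.0 * np.pi)    # random angle in the tangent plane
    direction = np.cos(phi) * a + np.sin(phi) * b
    return point - half_length * direction, point + half_length * direction
```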
The control model, [ncr]{}, underpredicts the column densities of all four ions. This is both due to the lack of metals in its CGM and a deficit of cool gas.
Of the remaining models, model [isod]{} has the weakest column densities with a flat distribution across impact parameter. This picture is consistent with weak and spatially uniform CR pressure in the CGM. Model [anisd]{} has stronger column densities with a wider spread at large radii. This is likely due to the uneven distribution of metallicity (see Figure \[fig:ion\_density\]). Model [stream]{} predicts column densities that are similar in strength to model [anisd]{}. However, the column density measurements in model [stream]{} are more tightly clustered around the average value, which is qualitatively similar to observations.
For all models, lower resolution predicts stronger column densities, possibly due to under-resolving the complicated structure of magnetic field lines. The column densities in model [stream]{} are the least sensitive to changes in resolution.
The column densities presented here are a qualitative example of the differences in ionization structure between different models of CR transport. In reality, the exact values of ion column densities would be influenced by the metallicity and inflows from the IGM. Therefore, in order to better match observations, we would need to simulate galaxies in a cosmological context.
The Distribution of Cosmic Rays in the CGM
------------------------------------------
CR pressure in the CGM impacts the temperature and ionization structure of the gas. For this reason, we investigate the role of CR transport mechanisms in altering this CR pressure distribution.
In Figure \[fig:phase\_crbeta\] we explore the distribution of CR pressure as a function of spherical radius for models [isod]{}, [anisd]{}, [anisdh]{} and [stream]{}. The pixels in the phase plot are colored by the density of the gas. Like in Figure \[fig:dtm\_phase\], we cut out the disk from our data sample.
Model [isod]{} has a flat and narrow profile of CR pressure as a function of radius, with the ratio $\mathrm{P_{c}/P_{g}}$ ranging between 0.5 and 5. At larger radii, the spread in the CR pressure ratio widens considerably, revealing regions where CR pressure is sparse.
Model [anisd]{} has a much wider range of CR pressure ratios across all radii. Compared to isotropic diffusion, anisotropic diffusion creates CR pressure ratios that are nearly an order of magnitude higher across all radii. In this model, the CR pressure dominates over thermal pressure near the galactic disk and at high gas densities. The heating term in model [anisdh]{} lowers the CR pressure ratio at all radii. However, this model still retains the same qualitative shape of the CR pressure distribution that is present in model [anisd]{}.
Although model [stream]{} also has a wide spread of CR to thermal gas pressure ratios, the distribution of that spread is qualitatively different from that in model [anisd]{}. In this model, CR pressure in and near the disk is roughly 1.5 dex lower, spanning nearly 2 dex in strength. Unlike in model [anisd]{}, the CR pressure is low near the galactic disk and dominates over thermal pressure at large radii (and low gas densities).
Tuning the Knobs: The Impact of Choosing Parameters
---------------------------------------------------
Figure \[fig:CRBeta\] shows the density-averaged ratio of CR to thermal pressure as a function of impact parameter, measured after 3 Gyr of evolution. The solid black line represents the low-resolution runs of our fiducial models. All other lines differ from this run only in the parameter specified in the label. Blue and green lines represent galaxy models in which supernova feedback injected 1% and 30% of its energy as CRs, respectively. Purple lines represent slower CR transport velocities, with either a diffusion coefficient of $\kappa_{\varepsilon} = 10^{27}$ cm$^2$ s$^{-1}$ in models [isod]{} and [anisd]{} or a streaming factor of $f_s = 1.0$ in model [stream]{}. To better compare models with anisotropic diffusion to those with isotropic diffusion, we explore faster CR transport ($\kappa_{\varepsilon} = 10^{29}$ cm$^2$ s$^{-1}$) in the middle panel, depicted by the red line. Finally, the black dotted line is the high-resolution fiducial run. For a summary of the parameters described above, see Table \[tab:parameters\].
Varying the injected CR fraction does not correspond with a linear change in the ratio of CR to thermal pressure in the CGM. For example, increasing $f_{CR}$ to 0.3 only marginally increases CR pressure in model [isod]{}. Counter-intuitively, increasing $f_{CR}$ in models [anisd]{} and [stream]{} actually decreases the CR pressure ratio in the CGM by suppressing star formation in the disk and thus preventing the injection of additional CRs. On the flip side, decreasing the injected CR fraction by a factor of 10 from the fiducial value does decrease the CR pressure in model [isod]{} by roughly an order of magnitude at radii above 100 kpc. However, this decrease in $f_{CR}$ does not significantly alter the CR pressure ratio for models [anisd]{} and [stream]{}.
The CR pressure distribution in all three models depends strongly on their transport velocities. Decreasing the diffusion coefficient by a factor of 10 limits the reach of outflows to 100 kpc in model [isod]{} and 60 kpc in model [anisd]{}. Similarly, decreasing the streaming factor by a factor of four limits the reach of CR pressure to 40 kpc within 3 Gyr. In addition to limiting the radial extent of CR pressure, the decrease in transport velocity in model [isod]{} increases the CR pressure ratio by a factor of 10 at small radii. Increasing the transport velocity in model [anisd]{} decreases the CR pressure within 75 kpc of the galactic center, but has no discernible effects at larger radii.
Doubling the resolution decreases the CR pressure ratio by roughly an order of magnitude for models [isod]{} and [anisd]{}. In model [stream]{}, the CR pressure ratio of the higher resolution run only deviates from the fiducial run above 150 kpc.
Model [isod]{} is the most sensitive to changing parameters. In this approximation of CR transport, CRs can only move relative to the gas by diffusing down their own gradient. This motion is very sensitive to the injection fraction of CRs, which defines the strength of the CR gradient. Anisotropic diffusion is moderated by transport along magnetic field lines. The complicated geometry of the magnetic field lines slows anisotropic diffusion near the disk, making model [anisd]{} less sensitive to the CR injection fraction and star formation history than model [isod]{}. The most robust model is [stream]{}, in which the CR transport depends on the CR gradient, magnetic field strength, and Alfvén wave velocity. The CR pressure ratio in all three models decreases with slower CR transport and increased resolution. Breaking this degeneracy will require additional parameter studies.
Discussion {#sec:discussion}
==========
With our suite of isolated disk galaxy simulations, we have shown that the density, temperature, and ionization structures of the simulated CGM depend on the choice of CR transport mechanism. This discrepancy stems from the nature of the CR transport approximation and cannot be trivially remedied by altering constant parameters. In the following section, we will summarize the qualitative nature of each CR transport prescription and discuss potential improvements for future simulations.
The mere presence of CRs alongside thermal gas is not enough to drive galactic outflows. This point is demonstrated with model [adv]{}, in which CRs are confined to propagate only through advection with the bulk motion of the thermal gas. These CRs provide pressure support to the gas within the disk but have no efficient mechanism through which to escape into the CGM. This added pressure stabilizes the gas against collapsing to form stars and thus quenches the galaxy almost immediately after the first episode of star formation. Although some CRs ultimately do escape the disk, there is no significant CR influence in the CGM after 13 Gyr of evolution.
CR transport [*relative*]{} to the thermal gas is necessary to drive galactic winds or to reproduce the observed column densities of ions in the CGM. As CRs move out of the disk, they provide pressure support which lifts low-entropy gas out of the gravitational potential well (see Figure \[fig:outflows\]). This additional CR pressure lowers the temperature and increases the ionization state of low ions, such as C III and Si IV, at large radii. However, the CGM structure depends on the distribution of CR pressure, which varies substantially between different CR transport models.
In the isotropic diffusion approximation, CRs move down their energy gradient with a velocity that is determined by both the strength of the gradient and the constant diffusion coefficient, $\kappa_{\varepsilon}$. The resulting galactic outflows and CGM are therefore sensitive to recent star formation, which injects CR energy into the ISM. The newly-ejected CRs propagate away from the galactic disk until the CR energy gradient is sufficiently flattened. Therefore, at late times, additional CR pressure injected by ongoing star formation can no longer drive strong outflows. Ultimately, the CR pressure in the CGM is unable to support the gas that it expelled at earlier times, triggering inflows. Varying the value of the constant diffusion coefficient alters the time scales on which the CR energy gradient flattens and the radial extent of CR energy, but not the qualitative shape of the CR distribution.
Although isotropic diffusion is a crude approximation to the true interactions between CRs and magnetic fields, it is a computationally frugal choice as it circumvents the need for fully MHD galaxy models. Simulations with isotropic CR diffusion have been successful at driving strong outflows and increasing the column densities of low ions in the CGM [@Salem:2014a; @Wiener:2017]. However, in our simulations, model [isod]{} was the least effective at producing a multiphase medium and the most sensitive to the choice of constant parameter values.
Anisotropic diffusion improves upon isotropic diffusion by approximating CR transport as a random walk down the CR energy density gradient, along the magnetic field. This transport around magnetic field lines modulates the velocity of CR transport. Near the disk, where the magnetic field lines are the most tangled, some CRs become trapped in their motion around magnetic field loops and make slow radial progress away from the disk. Simultaneously, CRs near magnetic field lines pointing out of the disk escape to larger radii. Therefore, anisotropic diffusion is capable of driving large-scale outflows while keeping a substantial presence of CR pressure near the disk. In this model, CR pressures are strongest near the galactic center and decrease rapidly at large radii. This CR pressure suppresses star formation in the disk and supports an abundance of cool gas in the halo. The strength and radial extent of the CR pressure depends on the constant diffusion coefficient, magnetic field topology, and the star formation history.
Increasing the diffusion coefficient in models with anisotropic diffusion counteracts the effect of tangled magnetic field lines near the galactic center. However, the resulting models are still not directly comparable to models with isotropic diffusion with a lower diffusion coefficient. The difference lies in the variable timing of the CR propagation. In anisotropic diffusion models, the CR pressure distributes itself preferentially along magnetic field lines, so the nominally constant diffusion coefficient becomes a function of the magnetic field geometry. There is no single ratio of diffusion coefficients that results in the same distribution of CR energy in both anisotropic and isotropic diffusion models.
CR streaming is the first-order approximation to CR transport. Streaming CRs move along magnetic field lines with a velocity that depends on the shape, [*direction*]{}, and [*strength*]{} of the magnetic field. This mode of transport creates a CR distribution in the CGM that supports a truly multiphase medium, with cold gas clumps surviving alongside a warm and hot medium. We find that compared to diffusion, streaming is less sensitive to changes in star formation history or the choice of constant parameter values.
A key component of the streaming approximation is the additional heating term, through which CRs give up their momentum to heat thermal gas. This transfer prevents simulations from overestimating CR pressure in the disk and halo. To discern the effect of the heating term from the streaming behavior, we simulated a galaxy with both anisotropic CR diffusion and CR heating [@Wiener:2017]. Although the additional heating term increased temperatures and lowered the column densities of H I and O VI, the CR distribution in the CGM remained qualitatively the same. Therefore, we conclude that the key differences between the streaming and diffusion models lie in the transport approximations.
Limitations and Future Work
---------------------------
In this work, we explored the qualitative impact of the choice of CR transport on the simulated CGM. However, before simulations with CR physics can hold predictive power, several improvements must be made to constrain the details of this transport. We discuss these factors in greater detail below.
### Magnetic Fields
Magnetic fields are the media along which CRs propagate and are therefore crucially important for robust implementations of CR physics. The shape of the magnetic field dictates CR transport in the anisotropic diffusion and streaming approximations while the magnetic field strength sets the streaming velocity and the rate of heat transfer from CRs to the thermal gas. In addition, magnetic pressure is inversely proportional to the length scale of thermal instabilities in the CGM [@Ji:2017]. Therefore, simulating CR transport and recreating the multiphase CGM requires a realistic treatment of magnetic fields.
Simulating the co-evolution of CRs and magnetic fields is complicated at early simulation times. The CR streaming velocity depends on the magnetic field strength, yet primordial fields are believed to be no stronger than $10^{-9}$ G [@Cheng:1994; @Durrer:2013]. If newly-injected CRs cannot escape the galactic disk before the next round of star formation, the galaxy risks becoming immediately quenched.
One way to remedy this is to include magnetic supernova feedback, which can fuel the exponential magnetic field growth to observed values [@Butsky:2017]. However, in our attempts to simulate such a field, the small supernova injection sites were not sufficiently resolved to accurately capture CR transport. The overabundance of CR energy in the disk suppressed star formation, which in turn suppressed additional magnetic field injection. In these test simulations, the magnetic fields never reached observable strengths, and the streaming CRs never escaped the disk.
To circumvent this problem, we set the initial disk magnetic field to $B_{0,d} = 1 \mu$G, similar to that in @Ruszkowski:2017. This initial field is strong enough for CR streaming models to drive outflows that are comparable in their reach to outflows generated by the diffusion models with $\kappa_{\varepsilon} = 10^{28}$ cm$^2$ s$^{-1}$. The initial magnetic field in the CGM, which is the focus of this work, was set to $B_{0,h}=10^{-15}$ G. Therefore, the evolution of the magnetic field in the halo was driven by magnetized galactic winds and the turbulence they produced.
Improved implementation of the co-evolution of magnetic fields with streaming CRs will require detailed resolution and parameter studies which are outside of the scope of this work.
### Improved CR Physics
Our work makes several assumptions that are commonplace in current implementations of CR transport. One such approximation is that of a constant diffusion coefficient, $\kappa_{\varepsilon}$, which avoids calculating the momentum-weighted integral of the local CR energy density. The velocity of CR transport in the diffusion limit is therefore very sensitive to the choice of $\kappa_{\varepsilon}$ (see Figure \[fig:CRBeta\]). Because the value of $\kappa_{\varepsilon}$ is poorly constrained, simulations with the same model for CR transport can produce significantly varied results.
However, it is possible to improve upon the constant diffusion model without the computational expense of explicitly solving for the momentum-weighted value at every grid cell. For example, @Farber:2017 recently demonstrated that using a temperature-dependent, bimodal value for $\kappa_{\varepsilon}$ alters galactic wind properties and spatial distribution of CRs.
The underlying assumption of our CR transport models is that CRs must propagate either in the diffusion limit or the streaming limit. Realistically, both modes of transport must be taking place. In a recent advancement, @Jiang:2018 describe a new numerical scheme for CR transport that self-consistently handles both streaming and diffusion processes. This prescription is a promising new direction that may resolve the discrepancies between current implementations of CR transport presented in this work.
### Cosmological Context
The idealized experimental setup of isolated disk galaxies offers great control in isolating the effects of CR-driven galactic outflows on the structure of the CGM. However, this isolation comes at the expense of simulating a realistic CGM.
For example, the CGM in our simulations is populated almost entirely by CR-driven outflows. Neglecting inflows from the IGM leads to an unrealistically empty CGM, which is particularly apparent in our control model. The isolated disk galaxy model is an oversimplification of galactic evolution since most galaxies, including our own Milky Way, evolve in a group or cluster environment. Interactions with nearby galaxies and satellites can have a significant effect on the host galaxy’s gas supply, which in turn affects its ability to form stars and drive outflows. These interactions can also strip gas directly from the CGM.
We focus solely on the CGM of Milky Way-type galaxies, in which purely thermal feedback is successful at reproducing galactic disk properties. However, the differences between CR transport mechanisms presented above may change with galaxy mass. For example, @Jacob:2017 recently showed that the strength and mass-loading factor of galactic outflows driven by isotropic CR diffusion depend on the host galaxy’s mass. This mass dependency is likely to be present for anisotropic diffusion and streaming transport prescriptions. However, the nature of that relationship may change for different transport mechanisms.
Future parameter studies using cosmological simulations will be necessary to develop a robust prescription of CR transport. We judge our CR transport models by the temperature and ionization structure of the simulated CGM. However, CR collisions in the CGM are expected to account for up to 10% of the diffuse, isotropic gamma ray background [@Feldmann:2013]. Therefore, observations of the diffuse gamma ray emission, such as those taken with the Fermi-LAT telescope, can be used to constrain CR transport models [@Ackermann:2012].
Summary {#sec:summary}
=======
Simulations including CR feedback are more effective at driving galactic outflows and reproducing the observed ionization structure in the CGM than models with purely thermal feedback. However, models for simulating CR feedback and transport are poorly constrained.
In this work, we demonstrate that galactic outflows and CGM structure are sensitive to the invoked CR transport mechanism. We achieve this by simulating a suite of isolated MW-type disk galaxies with three commonly-used prescriptions for CR transport relative to thermal gas: isotropic diffusion, anisotropic diffusion, and streaming. For completeness, we also include advection-only models, in which CRs are constrained to move with the bulk motion of the gas, and control models without any CR physics. The results are summarized below.
- Models with CR transport relative to the thermal gas (streaming or diffusion) drive strong outflows. These outflows are generated by CR pressure support, which lifts thermal gas higher out of the gravitational potential well. Models with no CR feedback and those with CR advection do not launch strong galactic winds.
- Models with isotropic diffusion launch the fastest winds that last around 5 Gyr. Models with anisotropic CR diffusion and streaming drive steady winds that persist after 13 Gyr of evolution. Models with anisotropic diffusion continue to drive outflows even after the galaxy is quenched. Models with CR streaming support simultaneous inflows near the disk and outflows at larger radii.
- All models with CR feedback suppress star formation by supporting thermal gas against collapse. The degree to which star formation is suppressed depends on the CR pressure within the disk. Models with pure CR advection quenched after the first episode of star formation. The star formation in models with isotropic diffusion closely followed that of the control. Models with anisotropic diffusion suppressed star formation more efficiently, ultimately quenching after 10 Gyr. Models with CR streaming had a cyclical star formation history, supported by inflows from the CGM.
- CR pressure in the CGM supports gas with cooler temperatures. However, that temperature structure is sensitive to the CR transport model. The CGM of models with isotropic diffusion is primarily composed of warm gas that is spatially uniform. Models with anisotropic diffusion produce large quantities of cool ($T < 10^{5}$ K) gas out to radii of 100 kpc that remain even after star formation is quenched. This is an interesting example of a quenched galaxy with a reservoir of cool gas in its CGM [e.g., @Gauthier:2010; @Thom:2012]. Models with CR streaming produce a patchy, multiphase gas distribution with cool gas existing alongside warm and hot gas at large radii.
- CR pressure creates a multiphase medium, allowing gas of a given temperature to span several orders of magnitude in density. At late times, models with isotropic diffusion show a relatively uniform CGM temperature, whereas anisotropic diffusion and streaming models retain varied temperature profiles. In anisotropic diffusion models, the influence of CRs is strongest near the galactic center and decreases with spherical radius. In CR streaming models, the multiphase temperature structure is patchy, with clumps of cool gas existing 200 kpc from the galactic center.
- Models with anisotropic diffusion and streaming generate higher column densities of H I, Si IV, C III, and O VI than models with isotropic diffusion. The column densities generated by models with CR streaming have less variation in predicted column densities as a function of impact parameter and are less sensitive to changes in resolution.
- The differences between our galaxy models stem from the varied distribution of CRs in the CGM. Compared to isotropic diffusion, anisotropic diffusion and CR streaming are more effective at retaining CR pressure near the galactic disk. Since the transport in isotropic diffusion depends only on the gradient of CR energy density, its CR pressure support in the CGM decreases at late simulation times.
- The distribution of CR pressure in the CGM is sensitive to runtime parameters such as the amount of CRs injected in supernova feedback, the velocity of CR transport, and the resolution. We find that models with isotropic CR diffusion are the most sensitive to changes in these parameters. This is because the velocity of CR transport in this approximation depends only on the CR energy gradient and the choice of constant diffusion coefficient. The CR pressure distribution in models with anisotropic diffusion is less sensitive in comparison. We conclude that models with streaming, in which CR transport depends on the shape, direction, and strength of the magnetic field lines, are the most robust.
We have demonstrated that CR feedback can drive strong galactic outflows and provide the necessary pressure support to reproduce the multiphase temperature and ionization structure of the CGM. However, because the state of the simulated CGM depends strongly on the invoked CR transport method, it is necessary to first develop a robust numerical CR transport model before simulations with CR feedback can hold predictive power.
Acknowledgments
===============
The authors would like to thank the anonymous referee for their insightful suggestions. They also thank Cameron Hummels, Julianne Dalcanton, Juliette Becker, Jessica Werk, Matt McQuinn, Zeljko Ivezić, Victoria Meadows, and Scott Anderson for their valuable comments on this manuscript. I.B. also thanks Daniel Sotolongo for many helpful conversations. I.B. was supported by the National Science Foundation (NSF) Blue Waters Graduate Fellowship. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (Grants No. OCI-0725070 and No. ACI-1238993) and the State of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications.
Testing the Newly Implemented CR Physics
========================================
In §\[sec:crenzo\], we described the implementation of CR physics into the Riemann solvers in Enzo. Here, we demonstrate the performance of our implemented CR advection, anisotropic diffusion, and streaming. For tests of the isotropic diffusion module, refer to [@Salem:2014a].
Modified Sod Shock-tube
-----------------------
The Sod shock-tube, first described in @Sod:1978, is used to test the behavior of gas in the presence of a strong shock in numerical simulations. Because the original Sod shock-tube does not include CRs, we use a modified version first described in @Pfrommer:2006. The initial conditions are described in Table \[table:sod\] below.
In Figure \[fig:shocktube\_compare\] we test our implementation of the non-diffusive ($\kappa_{\varepsilon} = 0.0$) two-fluid CR model and compare it to the existing implementation in the [Zeus]{} hydro scheme in [Enzo]{} [@Salem:2014a]. From left to right, the panels show the density, velocity, and pressures of this shock-tube after t = 0.31 code units of evolution. Where applicable, the analytic solution is depicted as a black dashed line. Compared to @Salem:2014a, our implementation has a sharper velocity profile at $x \simeq 250$ cm, but slightly under-predicts the density value around $x = 400$ cm. Overall, the two implementations of CR advection agree well with each other and with the analytic solution.
$\rho$ $P_{g}$ $\varepsilon_{c}$ $v$
------- -------- ------------------- ------------------- -----
Left 1.0 $2/3 \times 10^5$ $4.0\times10^5$ 0.0
Right 0.2 267.2 801.6 0.0
: Initial conditions for the modified Sod Shock-tube[]{data-label="table:sod"}
[^2]
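As a minimal illustration of how the two-fluid states in Table \[table:sod\] translate into pressures, the sketch below assumes the usual relativistic adiabatic index $\gamma_c = 4/3$ for the CR fluid, so that $P_c = (\gamma_c - 1)\,\varepsilon_c$; the variable names and the printout are illustrative only, not part of the Enzo implementation.

```python
# Sketch: left/right states of the modified Sod shock-tube (Table [table:sod]),
# assuming P_c = (gamma_c - 1) * eps_c with gamma_c = 4/3 for the CR fluid.
GAMMA_CR = 4.0 / 3.0    # assumed CR adiabatic index

states = {
    "left":  {"rho": 1.0, "P_gas": 2.0 / 3.0 * 1e5, "eps_cr": 4.0e5, "v": 0.0},
    "right": {"rho": 0.2, "P_gas": 267.2,           "eps_cr": 801.6, "v": 0.0},
}

for side, s in states.items():
    P_cr = (GAMMA_CR - 1.0) * s["eps_cr"]      # CR pressure from its energy density
    P_tot = s["P_gas"] + P_cr                  # total pressure felt by the gas
    print(f"{side:5s}  P_gas={s['P_gas']:.4g}  P_cr={P_cr:.4g}  P_tot={P_tot:.4g}")
```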
$\rho$ $P_{g}$ $\varepsilon_{c}$ $\mathrm{v_x}$ $\mathrm{v_y}$ $B_x$ $B_y$
------- -------- --------- ------------------- ---------------- ---------------- ------- -------
Left 1.0 0.6 1.2 0.0 0.0 1.0 1.0
Right 0.125 0.06 0.12 0.0 0.0 1.0 -1.0
: Modified Brio-Wu Shock-tube[]{data-label="tab:bw_shock"}
[^3]
![\[fig:shocktube\_compare\] The density, velocity, and pressure (thermal, CR, and total pressures) as a function of position in a 1D simulation of the modified Sod shock-tube. CR pressure complements thermal pressure while the total pressure remains unchanged in the rarefaction wave.](shocktube_compare.pdf){width="\textwidth"}
![\[fig:bw\_shock\] Various parameters of the modified Brio-Wu MHD shock-tube as a function of one dimensional position. [**Top row:**]{} density, CR, thermal, and total pressure, and thermal gas entropy. [**Middle row:**]{} x-component of the velocity, y-component of the velocity and magnetic field, ratio of CR density to thermal density. [**Bottom row:**]{} the density-weighted x and y components of the velocity, and the total energy. Thermal gas is influenced by the pressure of both CRs and magnetic field lines. ](BW_shocktube.pdf){width="\textwidth"}
Brio Wu Shock-tube
------------------
The Brio-Wu shock-tube [@Brio:1988] tests the advection properties of gas interactions with magnetic fields in the presence of an extreme shock. In the modified Brio-Wu shock-tube, we include CR pressure and evolve the system for 0.2 code units. The initial conditions of the gas are described in Table \[tab:bw\_shock\].
Anisotropic Diffusion
---------------------
In the anisotropic diffusion approximation, CRs propagate down their gradient, along magnetic field lines. In Figure \[fig:pakmor\], we test our implementation with the anisotropic ring problem initially described by @Parrish:2005 [@Sharma:2007] and explored in detail in @Pakmor:2016b. The experiment uses a uniform density box with either no gas or a gas that is fixed in space (such that there is no CR advection) on a domain of $[-1,1]^2$.
Following the setup in @Pakmor:2016b, we set initial conditions for the CR energy density to be $$\epsilon_{c}(x,y) = \begin{cases}
12\quad \mathrm{if}\enspace 0.5 < r < 0.7\enspace \mathrm{and}\enspace |\phi| < \frac{\pi}{12} \\
10 \quad \mathrm{else},\\
\end{cases}$$ where the radial coordinate $r = \sqrt{x^2 + y^2}$, $\phi = \mathrm{atan}2(y,x)$, and the diffusion coefficient is set as $\kappa_{c}= 0.01$.
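A minimal sketch of these initial conditions on a uniform 2D grid is given below; it uses plain NumPy rather than Enzo's internal data structures, and the grid size is illustrative.

```python
import numpy as np

# Sketch: wedge-shaped initial CR energy density for the anisotropic ring test,
# on a uniform grid covering the domain [-1, 1]^2.
n = 400                                   # illustrative resolution
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

r = np.sqrt(X**2 + Y**2)
phi = np.arctan2(Y, X)

eps_c = np.full((n, n), 10.0)             # background CR energy density
wedge = (r > 0.5) & (r < 0.7) & (np.abs(phi) < np.pi / 12.0)
eps_c[wedge] = 12.0                       # over-dense wedge in the annulus

kappa_c = 0.01                            # diffusion coefficient used in the test
```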
In Figure \[fig:pakmor\], we compare the performance of our anisotropic diffusion implementation against the analytic solution given by $$\epsilon_{c}(x,y) = \begin{cases}
10 + \mathrm{erfc}\big[\big(\phi + \frac{\pi}{12}\big)\frac{r}{D}\big] - \mathrm{erfc}\big[\big(\phi - \frac{\pi}{12}\big)\frac{r}{D}\big] \quad \mathrm{if}\enspace 0.5 < r < 0.7 \\
10 \quad \mathrm{else},
\end{cases}$$ where $D = \sqrt{4\kappa_{\epsilon}t}$.
After evolving this simulation for 10 code time units, we see the initial wedge of uniform CR energy density moving down its gradient around the magnetic field lines. The lower resolution runs have lower peak values of CR energy density and show signs of CR transport perpendicular to the magnetic field direction. Both of these properties improve with increased resolution. The best-resolved simulation (800 x 800, fixed grid) matches the analytic solution well.
At a later time (t = 100 code units), the CR energy density has reached the opposite side of the circle traced by the toroidal magnetic field lines. The energy density approaches a steady state and is nearly evenly distributed around the annulus. In low-resolution runs, the final CR energy density is lower on average, because the conserved CR energy is spread over a thicker ring.
![\[fig:pakmor\] A 2D test of anisotropic diffusion of CR energy density along circular magnetic field lines at an early time (t = 10; top row) and a late time (t = 100; bottom row). The columns are ordered from left to right by increasing resolution, ending with the analytic solution. With anisotropic diffusion, CRs are confined to propagate solely along magnetic field lines. Increasing resolution decreases the amount of diffusion perpendicular to the magnetic field. ](diffusion_compare.pdf){width="\textwidth"}
Streaming
---------
![\[fig:streaming\] A comparison of the time evolution of a 1D Gaussian profile with CR streaming (green) and diffusion (blue). A magnetic field of strength 0.25 code units lies in the x direction. The analytic solution for the CR diffusion profile is overplotted as a black dotted line. Although no analytic solution exists for the CR streaming case, our results qualitatively match previous studies. ](streaming.pdf){width="50.00000%"}
We test the behavior of our CR streaming implementation with a 1D simulation of an initial Gaussian profile of CR energy density (see Figure \[fig:streaming\]). The initial CR energy density profile is set by $$\varepsilon = \varepsilon_0 e^{-x^2/2D},$$ where $x$ is the spatial coordinate. In our example, we choose the constants $\varepsilon_0 = 100$ and $D = 0.05$. We include a magnetic field in the $\hat{x}$ direction with a strength of 0.25 in code units. We isolate the effects of CR streaming and diffusion by fixing the gas so that no advection is taking place.
For CR diffusion, the analytic solution for the evolution of the CR energy density over time is given by $$\varepsilon = \varepsilon_0 \sqrt{\frac{D}{D + 2\kappa_{\varepsilon}t}}
\mathrm{exp}\bigg(\frac{-x^2}{2(D + 2\kappa_{\varepsilon}t)}\bigg).$$
The CR diffusion implementation follows the analytic solution well. Although there is no analytic solution for the CR streaming case, our results are qualitatively comparable to previous studies (e.g., [@Uhlig:2012; @Wiener:2017]).
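For reference, the sketch below evaluates the initial Gaussian profile and the analytic diffusion solution quoted above at a chosen time; it is a stand-alone NumPy check (the diffusion coefficient used here is illustrative), not part of the Enzo modules.

```python
import numpy as np

# Sketch: initial Gaussian CR profile and its analytic evolution under pure diffusion.
eps0, D = 100.0, 0.05                     # amplitude and width used in the text
kappa = 0.01                              # illustrative diffusion coefficient

x = np.linspace(-2.0, 2.0, 1001)
eps_initial = eps0 * np.exp(-x**2 / (2.0 * D))

def eps_diffusion(x, t):
    """Analytic solution for the diffused Gaussian at time t."""
    width = D + 2.0 * kappa * t
    return eps0 * np.sqrt(D / width) * np.exp(-x**2 / (2.0 * width))

eps_late = eps_diffusion(x, t=5.0)        # profile after 5 code time units
print(eps_initial.max(), eps_late.max())  # the peak decreases with time
```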
[^1]: A summary of the simulation initial conditions. The run ID and the corresponding boldface entries describe the fiducial runs discussed in depth throughout this paper. The fiducial runs differ from each other only by the invoked CR transport mechanism. We deviate from the fiducial runs by changing the resolution, the fraction of supernova energy injected as CRs ($\mathrm{f_c}$), the diffusion coefficient ($\kappa_{\varepsilon}$), the streaming factor ($f_s$), and including the CR heating term ($H_c$). Figure \[fig:CRBeta\] compares the effect of these variables on the CR pressure distribution in the CGM.
[^2]: The initial density, thermal pressure, CR energy density, and velocity for the modified Sod shocktube. The left and right regions of the shocktube are defined as $(0 < x < 250 \mathrm{\ cm})$ and $(250 < x < 500 \mathrm{\ cm})$ respectively.
[^3]: The initial density, thermal pressure, CR energy density, velocity and magnetic field strengths for the modified Brio-Wu shocktube. The left and right regions of the shocktube are defined by $(-0.5 < x < 0)$ and $(0 < x < 0.5)$ respectively.
---
abstract: 'Training abstractive summarization models typically requires large amounts of data, which can be a limitation for many domains. In this paper we explore using domain transfer and data synthesis to improve the performance of recent abstractive summarization methods when applied to small corpora of student reflections. First, we explored whether tuning a state-of-the-art model trained on newspaper data could boost performance on student reflection data. Evaluations demonstrated that summaries produced by the tuned model achieved higher ROUGE scores compared to a model trained on just student reflection data or just newspaper data. The tuned model also achieved higher scores compared to extractive summarization baselines, and additionally was judged to produce more coherent and readable summaries in human evaluations. Second, we explored whether synthesizing summaries of student data could additionally boost performance. We proposed a template-based model to synthesize new data, which, when incorporated into training, further increased ROUGE scores. Finally, we showed that combining data synthesis with domain transfer achieved higher ROUGE scores compared to only using one of the two approaches.'
author:
- |
Ahmed Magooda^1^, Diane Litman^1^\
^1^ Computer Science Department, University of Pittsburgh\
Pittsburgh, PA, USA\
[email protected], [email protected]
bibliography:
- 'aaai-bib.bib'
title: Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis
---
Introduction
============
Recently, with the emergence of neural seq2seq models, abstractive summarization methods have seen great performance strides [@see2017get; @gehrmann2018bottom; @paulus2017deep]. However, complex neural summarization models with thousands of parameters usually require a large amount of training data. In fact, much of the neural summarization work has been trained and tested in news domains where numerous large datasets exist. For example, the CNN/DailyMail (CNN/DM) [@hermann2015teaching; @nallapati2016abstractive] and New York Times (NYT) datasets are on the order of 300k and 700k documents, respectively. In contrast, in other domains such as student reflections, summarization datasets are only on the order of tens or hundreds of documents (e.g., [@luo2015summarizing]). We hypothesize that training complex neural abstractive summarization models in such domains will not yield well-performing models, and we will indeed later show that this is the case for student reflections.
To improve performance in low resource domains, we explore three directions. First, we explore domain transfer for abstractive summarization. While domain transfer is not new, compared to prior summarization studies [@hua2017pilot; @keneshloo2019deep], our training (news) and tuning (student reflection) domains are quite dissimilar, and the in-domain data is small. Second, we propose a template-based synthesis method to create synthesized summaries, then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline. Lastly, we combine both directions. Evaluations of a neural abstractive summarization method across four student reflection corpora show the utility of all three approaches.
Related Work
============
**Abstractive Summarization**. Abstractive summarization aims to generate coherent summaries with high readability, and has seen increasing interest and improved performance due to the emergence of seq2seq models [@sutskever2014sequence] and attention mechanisms [@bahdanau2014neural]. For example, [@see2017get], [@paulus2017deep], and [@gehrmann2018bottom], in addition to using an encoder-decoder model with attention, used pointer networks to address the out-of-vocabulary issue, while [@see2017get] also used a coverage mechanism to reduce word repetition. In addition, [@paulus2017deep] and [@P18-1063] used reinforcement learning in an end-to-end setting.
To our knowledge, training such neural abstractive summarization models in low resource domains using domain transfer has not been thoroughly explored on domains different than news. For example, [@nallapati2016abstractive] reported the results of training on CNN/DM data while evaluating on DUC data without any tuning. Note that these two datasets are both in the news domain, and both consist of well written, structured documents. The domain transfer experiments of [@gehrmann2018bottom] similarly used two different news summarization datasets (CNN/DM and NYT). Our work differs in several ways from these two prior domain transfer efforts. First, our experiments involve two entirely different domains: news and student reflections. Unlike news, student reflection documents lack global structure, are repetitive, and contain many sentence fragments and grammatical mistakes. Second, the prior approaches either trained a part of the model using NYT data while retaining the other part of the model trained only on CNN/DM data [@gehrmann2018bottom], or didn’t perform any tuning at all [@nallapati2016abstractive]. In contrast, we do the training in two consecutive phases, pretraining and fine tuning. Finally, [@gehrmann2018bottom] reported that while training with domain transfer outperformed training only on out-of-domain data, it was not able to beat training only on in-domain data. This is likely because their in and out-of-domain data sizes are comparable, unlike in our case of scarce in-domain data.
In a different approach to abstractive summarization, [@cao2018retrieve] developed a soft template based neural method consisting of an end-to-end deep model for template retrieval, reranking and summary rewriting. While we also develop a template based model, our work differs in both model structure and purpose.\
**Data Synthesis**. Data synthesis for text summarization is underexplored, with most prior work focusing on machine translation and text normalization. [@zhang2015character] proposed data augmentation through word replacement, using WordNet [@miller1998wordnet] and vector space similarity. We will use a WordNet replacement method as a baseline synthesis method in the experiments described below. In contrast, [@fadaee2017data] synthesized/augmented data through back-translation and word replacement using language models. [@parida2019abstract] is another recent work that was done in parallel and is very close to ours. However, beyond the differences between the two models, we think it might be infeasible to back-generate student reflections from a human summary, especially an abstractive one.
Reflection Summarization Dataset
================================
Student reflections are comments provided by students in response to a set of instructor prompts. The prompts are directed towards gathering students’ feedback on course material. Student reflections are collected directly following each of a set of classroom lectures over a semester. In this paper, the set of reflections for each prompt in each lecture is considered a [*student reflection document*]{}. The objective of our work is to provide a comprehensive and meaningful abstractive summary of each student reflection document. Our dataset consists of documents and summaries from four course instantiations: ENGR[^1] (Introduction to Materials Science and Engineering), Stat2015 and Stat2016[^2] (Statistics for Industrial Engineers, taught in 2015 and 2016, respectively), and CS[^3] (Data Structures in Computer Science). All reflections were collected in response to two pedagogically-motivated prompts [@menekse2011effectiveness]: “Point of Interest (POI): Describe what you found most interesting in today’s class” and “Muddiest Point (MP): Describe what was confusing or needed more detail.”
**Prompt**
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Point of Interest (POI): Describe what you found most interesting in today’s class.
**Student Reflection Document**
Learning about bags was very interesting.
Bags as a data type and how flexible they are.
etc...
**Reference Summary**
Students were interested in ADT Bag, and also its array implementation. Many recognized that it should be resizable, and that the underlying array organization should support that. Others saw that order does not matter in bags. Some thought methods that the bag provides were interesting.
: \[tab:summaries\_example\] Sample data from the CS course.
**CS** **ENGR** **Stat2015** **Stat2016**
------------- -------- ---------- -------------- --------------
Lectures 23 26 22 23
Prompts 2 2 2 2
Reflections 26 66 41 44
Summaries 3 1 2 2
Documents 138 52 88 92
: \[dataset\_summary\] Dataset summary (n=370 documents).
For each reflection document, at least one human (either a TA or domain expert) created summaries. Table \[tab:summaries\_example\] shows an example reference summary produced by one annotator for the CS course. Table \[dataset\_summary\] summarizes the dataset in terms of number of lectures, number of prompts per lecture, average number of reflections per prompt, and number of abstractive reference summaries for each set of reflections.
Explored Approaches for Limited Resources
==========================================
To overcome the size issue of the student reflection dataset, we first explore the effect of incorporating [**domain transfer**]{} into a recent abstractive summarization model: pointer networks with coverage mechanism (PG-net)[@see2017get][^4]. To experiment with domain transfer, the model was pretrained using the CNN/DM dataset, then fine tuned using the student reflection dataset (see the Experiments section). A second approach we explore to overcome the lack of reflection data is [**data synthesis**]{}. We first propose a template model for synthesizing new data, then investigate the performance impact of using this data when training the summarization model. The proposed model makes use of the nature of datasets such as ours, where the reference summaries tend to be close in structure: humans try to find the major points that students raise, then present the points in a way that marks their relative importance (recall the CS example in Table \[tab:summaries\_example\]). Our third explored approach is to [**combine domain transfer with data synthesis**]{}.
Proposed Template-Based Synthesis Model
=======================================
Our motivation for using templates for data synthesis is that seq2seq synthesis models (as discussed in related work) tend to generate irrelevant and repeated words [@koehn2017six], while templates can produce more coherent and concise output. Also, templates can be extracted either manually or automatically, typically by training few or even no parameters; external information in the form of keywords or snippets can then be populated into the templates with the help of more sophisticated models. Accordingly, templates are very appealing for domains with limited resources such as ours.
[**Model Structure.**]{} The model consists of 4 modules:\
*1. Template extraction*: To convert human summaries into templates, we remove keywords in the summary to leave only non-keywords. We use Rapid Automatic Keyword Extraction (RAKE) [@rose2010automatic] to identify keywords.\
*2. Template clustering*: Upon converting human summaries into templates, we cluster them into $N$ clusters with the goal of using any template from the same cluster interchangeably. A template is first converted into embeddings using a pretrained BERT model[^5] [@devlin2018bert], where template embedding is constructed by average pooling word embeddings. Templates are then clustered using k-medoid.\
*3. Summary rewriting*: An encoder-attention-decoder with pointer network is trained to perform the rewriting task. The model is trained to inject keywords into a template and perform rewriting into a coherent paragraph. The produced rewrites are considered as candidate summaries.\
*4. Summary selection*: After producing candidate summaries, we need to pick the best ones. We argue that the best candidates are those that are coherent and also convey the same meaning as the original human summary. We thus use a hybrid metric to score candidates, where the metric is a weighted sum of two scores and is calculated using Equations 1, 2, and 3. Eq.1 measures coherency using a language model (LM), Eq.2 measures how close a candidate is to a human summary using ROUGE scores, while Eq.3 picks the highest scored $N$ candidates as the final synthetic set.
$$\small
LM_S = \frac{\sum_{w\in CS}\log(P(w))}{len(CS)}$$
$$\small
R_S = \frac{1}{3}\sum_{i\in [1, 2, l]}R_{i}(CS,HS)$$
$$\small
Score = \frac{\alpha\, LM_S + \beta\, R_S}{\alpha + \beta}$$
CS and HS denote a candidate summary and a human summary, respectively. $P(w)$ is the probability of word $w$ under a language model. $\alpha, \beta$ are weighting parameters. In this work we use $\alpha=\beta=1$ for all experiments. $R_{i}(CS,HS)$ is the ROUGE-$i$ score between CS and HS for $i=1$, 2, and $l$.
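A minimal sketch of this scoring step is given below; it assumes a language model exposing per-word probabilities and precomputed ROUGE-1/2/L scores, both of which are stand-ins for the actual components used in the model.

```python
import math

def candidate_score(cand_tokens, word_prob, rouge_1, rouge_2, rouge_l,
                    alpha=1.0, beta=1.0):
    """Hybrid score from Eqs. 1-3: LM fluency plus ROUGE overlap with the human summary.

    word_prob: callable returning P(w) under a language model (assumed interface).
    rouge_*:   precomputed ROUGE scores between candidate and human summary.
    """
    # Eq. 1: average log-probability of the candidate under the LM
    lm_score = sum(math.log(word_prob(w)) for w in cand_tokens) / len(cand_tokens)
    # Eq. 2: average of ROUGE-1, ROUGE-2 and ROUGE-L
    r_score = (rouge_1 + rouge_2 + rouge_l) / 3.0
    # Eq. 3: weighted combination (alpha = beta = 1 in this work)
    return (alpha * lm_score + beta * r_score) / (alpha + beta)
```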
[**Model Training.**]{} Before using the synthesis model, some of the constructing modules (rewriting module, scoring LM) need training. To train the rewriting model, we use another dataset consisting of a set of samples, where each sample can be a text snippet (sentence, paragraph, etc.). For each sample, keywords are extracted using RAKE, then removed. The keywords plus the sample with no keywords are then passed to the rewriting model. The training objective of this model is to reconstruct the original sample, which can be seen as trying to inject extracted keywords back into a template.

[**Model Usage.**]{} To use the synthesis model to generate new samples, the set of human summaries is fed to the model, passing through the sub-modules in the following order (a sketch of this pipeline is given after the list):\
1. Human summaries first pass through the template extraction module, converting each summary $s_i$ into template $t_i$ and the corresponding keywords $kw_i$.\
2. Templates are then passed to the clustering module, producing a set of clusters. Each cluster $C$ contains a number of similar templates.\
3. For each template $t_i$ and corresponding keywords $kw_i$ from step 1, find the cluster $C_i$ that contains the template $t_i$, then pass the set of templates within that clusters $\{t_j\} \forall{j},$ if $t_j \in C_i$ alongside the keywords $kw_i$ to the summary rewriting module. This will produce a set of candidate summaries.\
4. The summary selection module scores and selects the highest $N$ candidates as the synthetic summaries.
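The four steps above can be strung together as in the sketch below; every helper (keyword extractor, clusterer, rewriter, scorer) is a stand-in for the corresponding module and is assumed rather than taken from the original implementation.

```python
def synthesize_summaries(human_summaries, extract, cluster, rewrite, score, top_n=3):
    """Sketch of the synthesis pipeline: extract templates, cluster, rewrite, select.

    extract(summary)          -> (template, keywords)
    cluster(templates)        -> dict mapping each template to the templates in its cluster
    rewrite(template, kw)     -> candidate summary string
    score(candidate, summary) -> hybrid score from Eqs. 1-3
    """
    extracted = [extract(s) for s in human_summaries]
    clusters = cluster([t for t, _ in extracted])
    synthetic = []
    for summary, (template, keywords) in zip(human_summaries, extracted):
        candidates = [rewrite(t, keywords) for t in clusters[template]]
        candidates.sort(key=lambda c: score(c, summary), reverse=True)
        synthetic.extend(candidates[:top_n])   # keep the N best-scored candidates
    return synthetic
```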
Experiments {#sec:experiments}
===========
Our experimental designs address the following hypotheses:\
**Hypothesis 1 (H1)** : Training complex abstractive models with limited in-domain or large quantities of out-of-domain data won’t be enough to outperform extractive baselines.\
**Hypothesis 2 (H2)** : Domain transfer helps abstractive models even if in-domain and out-of-domain data are very different and the amount of in-domain data is very small.\
**Hypothesis 3 (H3)** : Enriching abstractive training data with synthetic data helps overcome in-domain data scarcity.\
**Hypothesis 4 (H4)** : The proposed template-based synthesis model outperforms a simple word replacement model.\
**Hypothesis 5 (H5)** : Combining domain transfer with data synthesis outperforms using each approach on its own.\
**Hypothesis 6 (H6)** : The synthesis model can be extended to perform reflection summarization directly.\
\
**Extractive Baselines (for testing H1)**. While [@see2017get] used Lead-3 as an extractive baseline, in our data sentence order does not matter as reflections are independent. We thus use a baseline that is similar in concept: randomly selecting N reflections. Since the baseline is random, we report the average result over 100 runs. Following [@luo2015summarizing], we compare results to MEAD [@radev2004centroid] and to [@luo2015summarizing]’s extractive phrase-based model. Since these models extracted 5 phrases as the extractive summary, we use N=5 for our three extractive baselines. Additionally, we compare to running only the extractive part of Fast-RL.\
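A sketch of this random-selection baseline is given below; the `rouge` callable stands in for whatever ROUGE implementation is used and is an assumption, not part of the original setup.

```python
import random

def random_select_baseline(reflections, reference, rouge, n=5, runs=100, seed=0):
    """Average ROUGE of summaries built from n randomly chosen reflections.

    rouge(candidate, reference) is assumed to return a dict of ROUGE scores.
    """
    rng = random.Random(seed)
    totals = {}
    for _ in range(runs):
        picked = rng.sample(reflections, min(n, len(reflections)))
        scores = rouge(" ".join(picked), reference)
        for name, value in scores.items():
            totals[name] = totals.get(name, 0.0) + value
    return {name: value / runs for name, value in totals.items()}
```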
**Domain Transfer (for testing H2, H5)**.
  **Model**             **Summary**
  --------------------- ---------------------------------------------------------------------------------------------------
  CNN/DM                Internal vs. external version of iteration iterarors i was a bit preoccupied today but seeing merge sort. How typically iterating through a linked list can be very inefficient the implementation of iterators iterators and their effectiveness how iterators can be used.
  Student Reflections   Most students found the data of data along with its mean and effectiveness interesting, as well as topics related to sse, their, and different. Students also found different a good topic.
  Tuned                 Most of students were interested in iterators, the concept of iterators, and quick sort and merge sort. They also found analyzing linked lists in regards to runtime to be interesting.

  : \[tab:Summary\_example\] Example summaries generated by the three variants of PG-net for the same CS reflection document.
To observe the impact of using out-of-domain (news) data for pretraining to compensate for low resource in-domain (reflection) data, we train 3 variants of PG-net: a model trained on CNN/DM; a model trained on reflections; and a model trained on CNN/DM then tuned using reflections. Table \[tab:Summary\_example\] shows example summaries generated by the three variants of PG-net for a CS document. For all experiments where reflections are used for training/tuning, we train using a leave-one-course-out approach (i.e., in each fold, three courses are used for training and the remaining course for testing). If the experiment involves tuning, a combined dictionary of CNN/DM and reflections is used to avoid domain mismatch. To tune model parameters (the best number of training steps, the learning rate, etc.), a randomly selected 50% of the training data is used for validation. We choose the parameters that maximize ROUGE scores over this validation set.
To implement PG-net we use OpenNMT [@2017opennmt] with the original set of parameters. The out-of-domain model is trained for 100k steps using the CNN/DM dataset. Following base model training, we tune the model by training it using student reflections. The tuning is done by lowering the LR from 0.15 to 0.1 and training the model for additional 500 steps. The in-domain model is trained only using reflections. We use the same model architecture as above and train the model for 20k steps using adagrad and LR of 0.15.\
**Synthesis Baseline (for testing H3, H4)**. Following [@zhang2015character], we developed a data synthesis baseline using word replacement via WordNet. The baseline iterates over all words in a summary. If word $X$ has $N$ synonyms in WordNet, the model creates $N$ new versions of the summary and corresponding reflections by replacing the word $X$ with each of the $N$ synonyms.\
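The sketch below illustrates the kind of WordNet replacement used by the baseline, here with NLTK's WordNet interface; it is a simplified version that replaces one word at a time in a single text rather than paired summaries and reflections.

```python
from nltk.corpus import wordnet as wn   # requires the NLTK WordNet corpus to be downloaded

def wordnet_variants(text):
    """Yield new versions of `text`, each with one word swapped for a WordNet synonym."""
    words = text.split()
    for i, word in enumerate(words):
        synonyms = {
            lemma.name().replace("_", " ")
            for synset in wn.synsets(word)
            for lemma in synset.lemmas()
        }
        synonyms.discard(word)
        for syn in sorted(synonyms):
            yield " ".join(words[:i] + [syn] + words[i + 1:])
```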
**Template Synthesis Model (for testing H4, H5)**. To synthesize summaries, we use the same leave one course out approach. For each course, we use the data from the other three courses to train the rewriting module and tune the scoring language model. We can also use the summaries from CNN/DM data as additional samples to further train the rewriting module. We then start synthesizing data using that training data as input. First templates are constructed. The templates are then clustered into 8 clusters. We decided to use 8 to avoid clustering templates from POI with MP, as the templates from both prompts would contain very different supporting words. We also wanted to avoid a high level of dissimilarity within each cluster, and allow some diversity. Following the clustering, the rewriting model produces candidate summaries for each human summary. The rewriting model is another PG-net with the same exact parameters. After producing the candidate summaries, a language model is used to score them. The language model is a single layer LSTM language model trained on 36K sentences from Wikipedia and fine tuned using student reflections. In this work we decided to pick only the highest 3 scored candidate summaries as synthetic data, to avoid adding ill-formed summaries to the training data. Since we are adding $N$ synthetic summaries for each set of reflections, that means we are essentially duplicating the size of our original reflection training data by $N$, which is 3 in our case.[^6] Table \[tab:Summary\_example\] shows a human summary, the keywords extracted, then the output of injecting keywords in a different template using rewriting.\
**Template-based Summarization (for testing H6)**. While the proposed template-based model was intended for data synthesis, with minor modification it can be adapted for summarization itself. Because the modifications introduce few parameters, the model is suitable for small datasets. Recall that for data synthesis, the input to the template method is a summary. Since for summarization the input instead is a set of reflections, we perform keyword extraction over the set of reflections. We then add an extra logistic regression classifier that uses the set of reflections as input and predicts a cluster of templates constructed from other courses. Using the keywords and the predicted cluster of templates, we use the same rewriting model to produce candidate summaries. The last step in the pipeline is scoring. In data synthesis, a reference summary is used for scoring; however, in summarization we don’t have such a reference. To score the candidate summaries, the model only uses the language model and produces the candidate with the highest score.
Results
=======
**ROUGE Evaluation Results**.
  **Model**                                   **R-1**   **R-2**   **R-L**   **R-1**   **R-2**   **R-L**   **Row**
  ------------------------------------------- --------- --------- --------- --------- --------- --------- ---------
  **CS**                                                                    **ENGR**
  [@luo2015summarizing]                       27.65     6.66      22.76                                   1
  Mead 5                                                                    29.35     7.91      23.12     2
  Random Select 5                             26.74     5.89      20.55     26.14     5.35      20.57     3
  Fast-RL (Extractive)                        28.95     6.62      22.16     26.6      5.09      21.06     4
  CNN/DM                                      29.83     7.10      18.28     29.30     6.95      17.63     5
  Student Reflection                          25.90     4.62      17.49     26.14     6.05      20.94     6
  Student Reflection + WordNet Synthetic      27.15     3.13      17.8      28.11     6.11      21.29     7
  Student Reflection + Template Synthetic     26.93     3.49      19.38     29.54     6.96      21.30     8
  Tuned                                       *37.31*   *10.20*   *24.16*   *38.47*   *13.88*   *27.79*   9
  Tuned + WordNet Synthetic                   34.13     7.13      21.96     32.61     7.51      21.72     10
  Tuned + Template Synthetic                                                                              11
  Template-based Summarization                *34.8*    *9.3*     23.4      *36.5*    *11.2*    24.1      12
  **Stat2015**                                                              **Stat2016**
  [@luo2015summarizing]                                                                                   13
  Mead 5                                      26.06     8.84      21.28     32.31     12.30     26.27     14
  Random Select 5                             23.50     5.88      19.46     23.77     7.63      20.11     15
  Fast-RL (Extractive)                        27.49     7.73      22.05     24.59     8.16      20.66     16
  CNN/DM                                      27.22     7.62      17.80     30.99     10.01     20.29     17
  Student Reflection                          *29.29*   5.66      20.31     32.10     5.92      22.28     18
  Student Reflection + WordNet Synthetic      26.11     5.26      20.41     31.92     6.14      22.36     19
  Student Reflection + Template Synthetic     *29.65*   5.42      20.54     32.43     5.96      21.53     20
  Tuned                                       *38.78*             *26.19*             12.17     *28.25*   21
  Tuned + WordNet Synthetic                   34.65     9.88      24.31     36.58     9.78      24.08     22
  Tuned + Template Synthetic                            *12.23*             *40.95*                       23
  Template-based Summarization                *35.6*    *11.1*    24.8      *38.3*    11.9      25.6      24

  : \[tab:models\_results\] ROUGE-1 (R-1), ROUGE-2 (R-2), and ROUGE-L (R-L) F1 scores for the extractive baselines, the PG-net variants, and template-based summarization. The first three score columns refer to CS (top block) and Stat2015 (bottom block); the last three refer to ENGR (top block) and Stat2016 (bottom block).
Table \[tab:models\_results\] presents summarization performance results for the 4 extractive baselines, for the original and proposed variants of PG-net, and finally for template-summarization. Following [@see2017get], performance is evaluated using ROUGE (1, 2, and $L$) [@lin2004rouge] on F1. The motivation for using domain transfer and data synthesis is our hypothesis **(H1)**. Table \[tab:models\_results\] supports this hypothesis. All ROUGE scores for PG-net that outperform all extractive baselines (in italics) involve tuning and/or use of synthesised data, except for one R-1 (row 18).
As for our second hypothesis **(H2)**, table \[tab:models\_results\] shows that it is a valid one. For PG-net, comparing the CNN/DM out-of-domain and Student Reflection in-domain results in rows (5 and 6) and (17 and 18) with their corresponding tuned results in rows 9 and 21, we see that fine tuning improves R-1, R-2, and R-$L$ for all courses (rows 5, 6, 9 and 17, 18, 21). Qualitatively, the examples presented in Table \[tab:Summary\_example\] clearly show that tuning yields a more coherent and relevant summary. Over all courses, the tuned version of PG-net consistently outperforms the best baseline result for each metric (rows 9 vs. 1, 2, 3, 4 and 21 vs. 13, 14, 15, 16) except for R-2 in Stat2016.
To validate our next set of hypotheses **(H3, H4, H5)**, we use the synthesized data in two settings: either using it for training (rows 7, 8 and 19, 20) or tuning (rows 10, 11 and 22, 23). Table \[tab:models\_results\] supports [**H4**]{} by showing that the proposed synthesis model outperforms the WordNet baseline in training (rows 7, 8 and 19, 20) except Stat2016, and tuning (10, 11 and 22, 23) over all courses. It also shows that while adding synthetic data from the baseline is not always helpful, adding synthetic data from the template model helps to improve both the training and the tuning process. In both CS and ENGR courses, tuning with synthetic data enhances all ROUGE scores compared to tuning with only the original data (rows 9 and 11). As for Stat2015, R-1 and R-$L$ improved, while R-2 decreased. For Stat2016, R-2 and R-$L$ improved, and R-1 decreased (rows 21 and 23). Training with both student reflection data and synthetic data compared to training with only student reflection data yields similar improvements, supporting **H3** (rows 6, 8 and 18, 20). While the increase in ROUGE scores is small, our results show that enriching training data with synthetic data can benefit both the training and tuning of other models. In general, the best results are obtained when using data synthesis for both training and tuning (rows 11 and 23), supporting **H5**.
Finally, while the goal of our template model was to synthesize data, using it for summarization is surprisingly competitive, supporting **H6**. We believe that training the model with little data is doable due to the small number of parameters (logistic regression classifier only). While rows 12 and 24 are never the best results, they are close to the best involving tuning. This encourages us to enhance our template model and explore templates not so tailored to our data.\
**Human Evaluation Results**. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don’t read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance.
Conclusions and Future Work
===========================
We explored improving the performance of neural abstractive summarizers when applied to the low resource domain of student reflections using three approaches: domain transfer, data synthesis and the combination of both. For domain transfer, a state-of-the-art abstractive summarization model was pretrained using out-of-domain data (CNN/DM), then tuned using in-domain data (student reflections). The process of tuning improved ROUGE scores on the student reflection data, and at the same time produced more readable summaries. To incorporate synthetic data, we proposed a new template based synthesis model to synthesize new summaries. We showed that enriching the training data with this synthesized data can further increase the benefits of using domain transfer / tuning to increase ROUGE scores. We additionally showed that the proposed synthesis model outperformed a word replacement synthesis baseline. Future plans include trying domain adaptation, enhancing the synthesis process by using other models, further exploring template-based methods, and extending the analysis of the synthesis model to cover other types of data like reviews and opinions.
Acknowledgments
===============
The research reported here was supported, in whole or in part, by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A180477 to the University of Pittsburgh. The opinions expressed are those of the authors and do not represent the views of the Institute or the U.S. Department of Education.
[^1]: http://www.coursemirror.com/download/dataset
[^2]: http://www.coursemirror.com/download/dataset2
[^3]: This data was collected and summarized by us, following the procedures published for the downloadable data.
[^4]: We also performed experiments using another recent model, fast abstractive summarization with reinforcement learning (Fast-RL)[@P18-1063]. Fast-RL showed similar behavior to PG-net with lower performance. Thus, due to page limit, we only report PG-net experiments
[^5]: https://github.com/google-research/bert\#pre-trained-models
[^6]: We plan to explore the effect of varying $N$ in the future.
---
abstract: 'We study the reduced single-particle density matrix (RSPDM), the momentum distribution, natural orbitals and their occupancies, of dark “soliton” (DS) states in a Tonks-Girardeau gas. DS states are specially tailored excited many-body eigenstates, which have a dark solitonic notch in their single-particle density. The momentum distribution of DS states has a characteristic shape with two sharp spikes. We find that the two spikes arise due to the high degree of correlation observed within the RSPDM between the mirror points ($x$ and $-x$) with respect to the dark notch at $x=0$; the correlations oscillate rather than decay as the points $x$ and $-x$ are being separated.'
author:
- 'H. Buljan,$^{1}$ K. Lelas,$^{1}$ R. Pezer,$^{2}$ and M. Jablan$^{1}$'
title: 'The single-particle density matrix and the momentum distribution of dark “solitons” in a Tonks-Girardeau gas'
---
Introduction
============
Exactly solvable models have the possibility of providing important insight into the quantum many-body physics beyond various approximation schemes. Two such models, the Tonks-Girardeau [@Girardeau1960] and the Lieb-Liniger model [@Lieb1963], which describe interacting Bose gases in one dimension (1D), have drawn considerable attention in recent years with the development of experimental techniques for tightly confining atoms in effectively 1D atomic waveguides [@OneD; @Kinoshita2004; @Paredes2004; @Kinoshita2006]. The Lieb-Liniger model (LL) describes a system of bosons interacting via two-body $\delta$-function interactions [@Lieb1963]. The Tonks-Girardeau (TG) model corresponds to infinitely repulsive (“impenetrable core”) bosons in 1D [@Girardeau1960; @LL-TG]; this model is exactly solvable via Fermi-Bose mapping, which relates the TG gas to a system of noninteracting spinless fermions in 1D [@Girardeau1960]. A study of atomic scattering for atoms confined transversally in an atomic waveguide has led to a suggestion for the experimental observation of a TG gas [@Olshanii]; such atomic systems enter the TG regime at low temperatures, low linear densities, and strong effective interactions [@Olshanii; @Petrov; @Dunjko]. The experimental realization of the TG model was reported in two experiments from 2004 [@Kinoshita2004; @Paredes2004]. Moreover, nonequilibrium dynamics of the 1D interacting Bose gases including the TG regime has been recently experimentally studied within the context of relaxation to an equilibrium [@Kinoshita2004]. Within this paper we analyze the reduced single-particle density matrix (RSPDM) and related observables of certain specially tailored excited eigenstates of the TG gas, which are also referred to as dark “soliton” (DS) states [@Girardeau2000; @Busch2003; @Buljan2006].
Dark solitons are fundamental nonlinear excitations. Within the context of interacting Bose gases, they were mainly studied in the regime of weak repulsive interactions [@ExpDark; @Dum1998; @Busch2000; @Muryshev2002] where mean-field theories \[e.g., the Gross-Pitaevskii theory, which employs the nonlinear Schrödinger equation (NLSE)\] are applicable. In the regime of strong repulsive interactions in quasi-1D geometry, dark solitons were also studied by using NLSE with a quintic nonlinear term [@Kolomeisky2000; @Frantzeskakis2004; @Ogren2005]. In Ref. [@Girardeau2000], Girardeau and Wright have studied the concept of dark solitons within the exactly solvable TG model; they found specially tailored excited many-body eigenstates of the TG gas on the ring (DS states), with a dark notch in their single-particle density, which is similar to the dark notch of nonlinear dark solitons. The dynamics of such excitations in a TG gas was studied by Busch and Huyet [@Busch2003] in a harmonic trap. Recently, a scheme based on parity selective filtering (“evaporation”) of a many-body wave function was suggested [@Buljan2006] as a candidate for the experimental observation of DS states. However, to the best of our knowledge, the momentum distribution, the RSPDM, natural orbitals (NOs) and their occupancies, have not been studied yet for DS states. These quantities are important for the better understanding of DS states, but may also be necessary ingredients for their experimental detection, which provides motivation for this study.
The calculation of correlation functions (such as the RSPDM) for 1D Bose gases [@Lenard1964; @Creamer1981; @Girardeau2001; @Minguzzi2002; @Cazallilla2002; @Olshanii2003; @Papenbrock2003; @Forrester2003; @Gangardt2003; @Astrakharchik2003; @Rigol2004; @Gangardt2004; @Berman2004; @Rigol2005; @Minguzzi2005; @Brand2005; @Forrester2006; @Gangardt2006; @Rigol2006; @Pezer2007; @Deuretzbacher2007; @Caux2007; @Lin2007] from the many-body wave functions [@Girardeau1960; @Lieb1963; @Girardeau2000; @Muga1998; @Sakmann2005; @Batchelor2005] yields important physical information (such as the momentum distribution) on the state of the system. Within the TG model, the RSPDM and the momentum distribution have been studied in the continuous [@Lenard1964; @Girardeau2001; @Minguzzi2002; @Papenbrock2003; @Forrester2003; @Gangardt2004; @Minguzzi2005; @Pezer2007; @Lin2007], and discrete (lattice) case [@Rigol2004; @Rigol2005; @Gangardt2006; @Rigol2006; @Cazalilla2004], both for the static [@Lenard1964; @Girardeau2001; @Minguzzi2002; @Papenbrock2003; @Forrester2003; @Rigol2004; @Gangardt2004; @Gangardt2006; @Lin2007] and time-dependent problems [@Rigol2005; @Minguzzi2005; @Rigol2006; @Pezer2007]. In the stationary case, most studies consider the ground state properties of the TG gas. The momentum distribution for the ground state of the TG gas on the ring has a spike at $k=0$, $n_B(k)\propto |k|^{-1/2}$ [@Lenard1964]. In both the harmonic confinement [@Minguzzi2002; @Olshanii2003] and on the ring [@Olshanii2003], the TG ground state momentum distribution decays as a power law $n_B(k)\propto k^{-4}$; in Ref. [@Olshanii2003] it has been pointed out that $k^{-4}$-decay is also valid for the LL gas (for any strength of the interaction). These ground states of the TG gas are not Bose condensed [@Lenard1964; @Forrester2003], which is evident from the fact that the occupancy of the leading natural orbital scales as $\sqrt{N}$ for large $N$ [@Forrester2003; @Papenbrock2003]. In the box-confinement, the momentum distribution of a TG gas has been studied by generalizing the Haldane’s harmonic-fluid approach [@Cazallilla2002]. Besides for the ground states, the momentum distribution has been analyzed in time-dependent problems including irregular dynamics on the ring [@Berman2004], dynamics in the harmonic potential with time dependent frequency [@Minguzzi2005], and in a periodic potential in the context of many-body Bragg reflections [@Pezer2007]. A number of interesting results for time-dependent problems have been recently obtained within the discrete lattice model including fermionization of the momentum distribution during 1D free expansion [@Rigol2005], and relaxation to a steady state carrying memory of initial conditions [@Rigol2006].
The correlation functions for TG and LL models were studied by using various analytical and numerical methods [@Lenard1964; @Creamer1981; @Girardeau2001; @Minguzzi2002; @Cazallilla2002; @Olshanii2003; @Papenbrock2003; @Forrester2003; @Gangardt2003; @Astrakharchik2003; @Rigol2004; @Gangardt2004; @Berman2004; @Rigol2005; @Minguzzi2005; @Brand2005; @Forrester2006; @Gangardt2006; @Rigol2006; @Pezer2007; @Deuretzbacher2007; @Caux2007; @Lin2007]. The formula that was derived and employed in Ref. [@Pezer2007] allows efficient and exact numerical calculation of the RSPDM for the TG gas in versatile states (ground state, excited eigenstates, time-evolving states [@Pezer2007]), and for a fairly large number of particles. We find it suitable for this study of DS states.
Here we numerically calculate the RSPDM correlations, natural orbitals and their occupancies, and the momentum distribution of DS states. We find that these excited eigenstates of a TG gas have characteristic shape of the momentum distribution with two sharp spikes. The two sharp spikes arise due to the high degree of correlation observed within the RSPDM between the mirror points, $x$ and $-x$, with respect to the dark notch at $x=0$; interestingly, the correlations oscillate rather than decay as the points $x$ and $-x$ are being separated.
The model
=========
We study a system of $N$ identical Bose particles in 1D space, which experience an external potential $V(x)$. The bosons interact with impenetrable pointlike interactions [@Girardeau1960], which is most conveniently represented as a subsidiary condition on the many-body wave function [@Girardeau1960]:
$$\psi_B(x_1,x_2,\ldots,x_N,t)=0\ \mbox{if}\ x_i=x_j$$
for any $i\neq j$. Besides this condition, $\psi_B$ obeys the Schrödinger equation
$$i \frac{\partial \psi_B}{\partial t}=
\sum_{j=1}^{N} \left[ -\frac{\partial^2 }{\partial x_j^2}
+ V(x_j) \right] \psi_B;$$
here we use dimensionless units as in Ref. [@Buljan2006], i.e., $x=X/X_0$, $t=T/T_0$, and $V(x)=U(X)/E_0$, where $X$ and $T$ are space and time variables in physical units, $X_0$ is an arbitrary spatial length-scale (e.g., $X_0=1\ \mu$m), which sets the time-scale $T_0=2mX_0^2/\hbar$, and energy-scale $E_0=\hbar^2/(2mX_0^2)$; $m$ denotes particle mass, and $U(X)$ is the potential in physical units. The wave functions are normalized as $\int dx_1\ldots dx_N |\psi_B(x_1,x_2,\ldots,x_N,t)|^2=1$.
The solution of this system may be written in compact form via the famous Fermi-Bose mapping, which relates the TG bosonic wave function $\psi_B$ to an antisymmetric many-body wave function $\psi_F$ describing a system of noninteracting spinless fermions in 1D [@Girardeau1960]:
$$\psi_B =
A(x_1,\ldots,x_N) \psi_F(x_1,x_2,\ldots,x_N,t).
\label{mapFB}$$
Here
$$A=\Pi_{1\leq i < j\leq N} \mbox{sgn}(x_i-x_j)
\label{unitA}$$
is a “unit antisymmetric function” [@Girardeau1960], which ensures that $\psi_B$ has proper bosonic symmetry under the exchange of two bosons. The fermionic wave function $\psi_F$ is compactly written in a form of the Slater determinant,
$$\psi_F(x_1,\ldots,x_N,t)=
\frac{1}{\sqrt{N!}} \det_{m,j=1}^{N} [\psi_m(x_j,t)],
\label{psiF}$$
where $\psi_m(x,t)$ denote $N$ orthonormal single-particle (SP) wave functions obeying a set of uncoupled SP Schrödinger equations
$$i\frac{\partial \psi_m}{\partial t}=
\left [ - \frac{\partial^2 }{\partial x^2}+
V(x) \right ] \psi_m(x,t), \ m=1,\ldots,N.
\label{master}$$
Equations (\[mapFB\])-(\[master\]) prescribe construction of the many-body wave function describing the TG gas in an external potential $V(x)$, both in the static [@Girardeau1960] and time-dependent case [@Girardeau2000]. The eigenstates of the TG system are
$$\psi_B(x_1,\ldots,x_N)=A(x_1,\ldots,x_N)
\frac{1}{\sqrt{N!}} \det_{m,j=1}^{N} [\phi_m(x_j)],
\label{psiBeig}$$
where $\phi_m(x)$ are single-particle eigenstates for the potential $V(x)$. In the rest of the paper we will discuss the eigenstates of the TG system and their observables; hence, we drop the time-variable from subsequent notation.
The many-body wave function $\psi_B$ fully describes the state of the system. However, its form does not transparently yield physical information related to many important observables (e.g., the momentum distribution). The expectation values of one-body observables are readily obtained from the RSPDM, defined as
$$\begin{aligned}
\rho_{B}(x,y) & = & N \int \!\! dx_2\ldots dx_N \, \psi_B(x,x_2,\ldots,x_N)^*
\nonumber \\
&& \times \psi_B(y,x_2,\ldots,x_N).\end{aligned}$$
The observables of great interest are the SP $x$-density $\rho_{B}(x,x)=\sum_{m=1}^{N}|\phi_m(x)|^2$, and the momentum distribution [@Lenard1964]:
$$n_B(k) = \frac{1}{2\pi}\int \!\! dx dy \, e^{i k(x-y)}\rho_{B}(x,y).
\label{MDformula}$$
The SP density $\rho_{B}(x,x)$ is identical for the TG gas and the noninteracting Fermi gas [@Girardeau1960]; however, the momentum distributions of the two systems differ considerably [@Lenard1964].
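Once $\rho_B(x,y)$ is known on a grid, Eq. (\[MDformula\]) reduces to a double sum; a minimal sketch of this quadrature is given below (the grid and the choice of momenta are illustrative).

```python
import numpy as np

def momentum_distribution(rho_B, x, k_values):
    """Evaluate n_B(k) = (1/2pi) * integral dx dy exp[ik(x-y)] rho_B(x,y) on a grid.

    rho_B:    (n, n) array, the RSPDM sampled at the grid points x
    x:        1D array of n equally spaced grid points
    k_values: momenta at which to evaluate the distribution
    """
    dx = x[1] - x[0]
    n_B = []
    for k in k_values:
        phase = np.exp(1j * k * x)                          # e^{ikx} on the grid
        # double integral as a quadratic form: e^{ikx} rho(x,y) e^{-iky}
        val = phase @ rho_B @ np.conj(phase) * dx * dx / (2.0 * np.pi)
        n_B.append(val.real)
    return np.array(n_B)
```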
A concept that is very useful for the understanding of the many-body systems is that of natural orbitals (NOs). The NOs $\Phi_i(x)$ are eigenfunctions of the RSPDM,
$$\int \!\! dx\, \rho_{B}(x,y) \, \Phi_i (x) =
\lambda_i \, \Phi_i (y), \quad i=1,2,\ldots,$$
where $\lambda_i$ are the corresponding eigenvalues; the RSPDM is diagonal in the basis of NOs,
$$\rho_{B}(x,y) = \sum_{i=1}^{\infty}
\lambda_i \Phi_i^* (x) \Phi_i (y).$$
The NOs can be interpreted as effective SP states occupied by the bosons, where $\lambda_i$ represents the occupancy of the corresponding NO [@Girardeau2001]. The sum of the Fourier power spectra of the NOs is the momentum distribution:
$$\begin{aligned}
n_B(k) = \sum_{i=1}^{\infty}
\lambda_i \tilde\Phi_i^* (k) \tilde\Phi_i (k),
\label{BMDNOs}\end{aligned}$$
where $\tilde\Phi_i (k)$ is the Fourier transform of $\Phi_i (x)$.
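Numerically, the NOs and their occupancies follow from diagonalizing the RSPDM sampled on an equally spaced grid; a brief sketch is given below, where the factor $dx$ plays the role of the quadrature weight.

```python
import numpy as np

def natural_orbitals(rho_B, x):
    """Natural orbitals and occupancies from a discretized RSPDM.

    Diagonalizes rho_B(x,y)*dx, which approximates the integral eigenproblem
    on an equally spaced grid; returns occupancies (descending) and orbitals.
    """
    dx = x[1] - x[0]
    occupancies, orbitals = np.linalg.eigh(rho_B * dx)
    order = np.argsort(occupancies)[::-1]        # largest occupancy first
    # normalize orbitals so that integral |Phi_i|^2 dx = 1
    orbitals = orbitals[:, order] / np.sqrt(dx)
    return occupancies[order], orbitals
```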
The RSPDM of the noninteracting fermionic system on the Fermi side of the mapping is
$$\rho_{F}(x,y)=\sum_{m=1}^{N}\phi_m^*(x)\phi_m(y);$$
evidently, the SP eigenstates $\phi_m(x_j)$ are NOs of the fermionic system, with occupancy unity [@Girardeau2001]. The fermionic momentum distribution is
$$\begin{aligned}
n_F(k) = \sum_{m=1}^{N} \tilde\phi_m^* (k) \tilde\phi_m (k),
\label{FMDNOs}\end{aligned}$$
where $\tilde\phi_m (k)$ is the Fourier transform of $\phi_m (x)$.
The calculation of the TG momentum distribution is preceded by a calculation of $\rho_B(x,y)$, which we conduct according to the method described in Ref. [@Pezer2007]. If the RSPDM is expressed in terms of the SP eigenstates $\phi_m$ as
$$\rho_{B}(x,y)=\sum_{i,j=1}^{N}
\phi^{*}_{i}(x)A_{ij}(x,y)\phi_{j}(y),
\label{expansion}$$
it can be shown that the $N\times N$ matrix ${\mathbf A}(x,y)=\{ A_{ij}(x,y) \}$ has the form
$${\mathbf A}(x,y)= ({\mathbf P}^{-1})^{T} \det {\mathbf P},
\label{formulA}$$
where the entries of the matrix ${\mathbf P}$ are $P_{ij}(x,y)=\delta_{ij}-2\int_{x}^{y}dx' \phi_{i}^{*}(x')\phi_{j}(x')$ ($x<y$ without loss of generality) [@Pezer2007]. Formulas (\[expansion\]) and (\[formulA\]) enable fast numerical calculation of the RSPDM (and related quantities) for dark “soliton” states.
DS states on the ring
=====================
Within this section we analyze the RSPDM, the momentum distribution, NOs and their occupancies for excited eigenstates of a TG gas on the ring of length $L$; in other words, the external potential is zero, $x$-space is $x \in [-L/2,L/2]$, and periodic boundary conditions are imposed. The many-body eigenstates of the TG gas are constructed from the SP eigenstates of the system via Eq. (\[psiBeig\]). The SP eigenstates for the ring geometry are plane waves $\sqrt{1/L}e^{ik_m x}$, with SP energy $k_m^2$; here $k_m=2\pi m/L$, and $m$ is an integer [@OddN]. Evidently, the eigenstates $\sqrt{1/L}e^{ik_m x}$ and $\sqrt{1/L}e^{-ik_m x}$ are degenerate. This degeneracy in the SP eigenstates induces \[via Eq. (\[psiBeig\])\] degeneracy of the TG many-body excited eigenstates. One particular subspace of degenerate eigenstates (DEs) is spanned by
$$\phi_m(x) =\frac{1}{\sqrt{L}}[a_{m}^{-}e^{-ik_m x}+a_{m}^{+}e^{ik_m x}]$$
where $|a_{m}^{-}|^2+|a_{m}^{+}|^2=1$, and $m=1,\ldots,N$; the corresponding many-body eigenstates are
$$\begin{aligned}
\psi_{DE} & = &
A(x_1,\ldots,x_N)
L^{-\frac{N}{2}}
\times
\nonumber \\
&&\det_{j,m=1}^{N}[a_{m}^{-}e^{-ik_m x_j}+a_{m}^{+}e^{ik_m x_j}].
\label{DE}\end{aligned}$$
Intuition suggests that, although these states are degenerate, some of the corresponding observables, such as the SP density in $x$-space, the momentum distribution, spatial coherence or entropy, could be quite different from one eigenstate to another depending on their internal symmetry, which is designated by the choice of the coefficients $a_{m}^{-}$ and $a_{m}^{+}$.
In Ref. [@Girardeau2000], Girardeau and Wright have pointed out that if one constructs excited many-body eigenstates of the TG gas on the ring as
$$\begin{aligned}
\psi_{DS} & = &
A(x_1,\ldots,x_N)
\left ( \frac{2}{L} \right)^{\frac{N}{2}} \times
\nonumber \\
&&\det_{j,m=1}^{N}[\sin k_m x_j],
\label{psiDS}\end{aligned}$$
that is, if one chooses the coefficients as $a_{m}^{-}=i/\sqrt{2}$ and $a_{m}^{+}=-i/\sqrt{2}$, the SP density of these many-body eigenstates [@Girardeau2000],
$$\rho_{DS}(x,x)=\frac{N+1}{L}-
\frac{\sin(\frac{(N+1)2\pi x}{L})\cos(\frac{N2\pi x}{L})}
{L\sin(\frac{2\pi x}{L})},
\label{rhoDSxx}$$
has a structure closely resembling that of dark solitons [@Girardeau2000] (hence the notation $\psi_{DS}$ for the many-body wave function, and analogously for related observables below). The structure of these excited eigenstates is somewhat artificial, because on the fermionic side of the mapping these states correspond to noninteracting fermions placed solely in the [*odd*]{} SP eigenstates $\sin k_m x$. Nevertheless, such states can be excited by filtering of the many-body wave function [@Buljan2006].
Let us utilize the procedure outlined in Sec. II to calculate the RSPDM and related one-body observables for DS states \[Eq. (\[psiDS\])\]. It is straightforward to calculate the entries of the matrix ${\mathbf P}={\mathbf 1}-{\mathbf Q}$ \[see Eq. (\[formulA\])\], where
$$\begin{aligned}
Q_{ij} & = & \frac{\sin (2(i+j)\pi x/L)}{(i+j)\pi}-\frac{\sin(2(i-j)\pi x/L)}{(i-j)\pi}
\nonumber \\
&& -\frac{\sin(2(i+j)\pi y/L)}{(i+j)\pi}+\frac{\sin(2(i-j)\pi y/L)}{(i-j)\pi},\ i\neq j;
\nonumber \\
Q_{ii} & = & -2\frac{x-y}{L}
+\frac{ \sin (\frac{4 i \pi x}{L})}{2 i \pi }
-\frac{ \sin (\frac{4 i \pi y}{L})}{2 i \pi };\end{aligned}$$
for $i,j=1,\ldots,N$. As for the inverse of the matrix ${\mathbf P}$, and consequently the RSPDM, we were able to find its analytical form up to $N=7$ by using [*Mathematica*]{}; for larger $N$ we resorted to numerical calculations. It is straightforward to see that the RSPDMs of two DS states, for two different ring lengths $L_1$ and $L_2$, are connected by a simple scaling,
$$L_1 \rho_{DS,L_1}(x L_1,y L_1)
=
L_2 \rho_{DS,L_2}(x L_2,y L_2),$$
where $x,y\in [-\frac{1}{2},\frac{1}{2}]$; thus, it is sufficient to calculate it for just one value of $L$. In what follows, without losing generality, we choose $N=L$.
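For concreteness, a short sketch (with an arbitrary particle number and arbitrary sample points, not the code used for the figures) builds ${\mathbf P}={\mathbf 1}-{\mathbf Q}$ from the entries quoted above and checks the scaling relation numerically:

```python
import numpy as np

# Sketch of the closed-form P = 1 - Q construction for the ring DS state,
# followed by a check of the scaling relation (illustrative values of N, x, y).
def rho_DS(x, y, N, L):
    if x > y:
        x, y = y, x                                   # the Q entries assume x < y
    i = np.arange(1, N + 1)[:, None]
    j = np.arange(1, N + 1)[None, :]
    s = 2 * np.pi / L
    with np.errstate(divide="ignore", invalid="ignore"):
        Q = (np.sin((i + j) * s * x) / ((i + j) * np.pi)
             - np.sin((i - j) * s * x) / ((i - j) * np.pi)
             - np.sin((i + j) * s * y) / ((i + j) * np.pi)
             + np.sin((i - j) * s * y) / ((i - j) * np.pi))
    d = np.arange(1, N + 1)
    Q[np.diag_indices(N)] = (-2 * (x - y) / L
                             + np.sin(4 * d * np.pi * x / L) / (2 * d * np.pi)
                             - np.sin(4 * d * np.pi * y / L) / (2 * d * np.pi))
    P = np.eye(N) - Q
    A = np.linalg.inv(P).T * np.linalg.det(P)
    phi_x = np.sqrt(2 / L) * np.sin(2 * np.pi * d * x / L)
    phi_y = np.sqrt(2 / L) * np.sin(2 * np.pi * d * y / L)
    return phi_x @ A @ phi_y

# L1*rho_{DS,L1}(x*L1, y*L1) and L2*rho_{DS,L2}(x*L2, y*L2) should coincide.
N, xf, yf = 11, -0.17, 0.23              # x, y in units of the ring length (assumed)
for L in (11.0, 30.0):
    print(L, L * rho_DS(xf * L, yf * L, N, L))
```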
Figure \[rhodark\] displays contour plots of $\rho_{DS}(x,y)$ for $N=5,11,17$ and $25$. We clearly see a characteristic pattern for each value of $N$: the RSPDMs are largest close to the diagonal, with oscillations following the $x$-space density from Eq. (\[rhoDSxx\]). In addition, there are strong correlations along the line $x=-y$, indicating coherence between mirror points $x$ and $-x$ around the DS center (at $x=0$).
Figure \[MDdark\](a) displays the momentum distribution $n_{DS}(k)$ of DS states for $N=11,17$, and $25$. All momentum distributions for the ring geometry are normalized as $\sum_{k_m} n_B(k_m)=N$ (the SP momentum values $k_m$ are discrete in the ring geometry). The momentum distributions have a characteristic shape with a smooth hump close to the origin ($k=0$), and two sharp spikes located at $\pm k_{peak}=\pm\sum_{m=1}^{N}k_m/N =\pm \pi (N+1) /L$; the spikes indicate that there is a high probability of finding a boson in the momentum modes $\pm k_{peak}$. Note that due to our choice $N=L$ the peaks for different values of $N$ approximately coincide at $\pm \pi (1+1/N)\approx \pm \pi$.
The sharp spikes at $k_{peak}=\pm \pi (N+1) /L$ are intimately related to the strong correlation between the mirror points $x$ and $-x$. This is illustrated in Fig. \[rhoxminx\] which shows the cross-diagonal section of the RSPDM $\rho_{DS}(x,-x)$ and the function $\cos(2k_{peak}x)$ for $N=25$. There is evident correlation between $\rho_{DS}(x,-x)$ and $\cos(2k_{peak}x)$. Because of the symmetry $\rho_{DS}(x,y)=\rho_{DS}(y,x)$, the Fourier transform (FT) with respect to $\exp[ik(x-y)]$ reduces to FT with respect to $\cos k(x-y)$, which is $\cos 2 k x$ at $y=-x$; hence, from Fig. \[rhoxminx\] it immediately follows that the cross-diagonal behavior of $\rho_{DS}(x,-x)$ induces the peaks in the momentum distribution of DS states. We would like to point out that the correlations $\rho_{DS}(x,-x)$ do not decay, but oscillate, as the separation between points $x$ and $-x$ is increased.
In order to gain more insight into the origin of the two sharp spikes in the momentum distribution and the related coherence between mirror points $x$ and $-x$, it is illustrative to calculate the RSPDM and the momentum distribution for eigenstates that are degenerate with DS states (i.e., that have the same energy), but which are less restrictive with respect to the symmetry of the coefficients $a_{m}^{-}$ and $a_{m}^{+}$. If the coefficients are chosen as $a_{m}^{-}=i \exp(i\theta_m) /\sqrt{2}$ and $a_{m}^{+}=-i \exp(-i\theta_m) /\sqrt{2}$, one obtains a whole class of eigenstates degenerate with dark solitons, which have the form
$$\begin{aligned}
\psi_{DE} & = &
A(x_1,\ldots,x_N)
\left ( \frac{2}{L} \right)^{\frac{N}{2}} \times
\nonumber \\
&&\det_{j,m=1}^{N}[\sin (k_m x_j+\theta_m)],
\label{psiDE}\end{aligned}$$
where $\theta_m$, $m=1,\ldots,N$ are $N$ phases (for $\theta_m=0$, $\psi_{DS}=\psi_{DE}$).
Figure \[rhorandom\] displays contour plots of RSPDMs $\rho_{DE}(x,y)$, which correspond to typical states $\psi_{DE}$ obtained from Eq. (\[psiDE\]) by randomly choosing $N$ phases $\theta_m$ (with respect to the uniform probability density in $[-\pi,\pi]$). We see that the SP density of such a state, $\rho_{DE}(x,x)$, is not zero at $x=0$, which follows from the fact that $\sin (k_m x+\theta_m)$ is not an odd function for $\theta_m\neq 0$, while $\rho_{DE}(x,x)=\sum_{m=1}^{N}|\sin (k_m x+\theta_m)|^2$. Furthermore, we observe that the structure of the RSPDM along the $x=-y$ line is absent, that is, there is no coherence between the mirror points $x$ and $-x$. A closely related observation is that the momentum distributions of such states do not have the pair of sharp spikes present in $n_{DS}(k)$; this is illustrated in Fig. \[MDdark\](b), which shows typical momentum distributions $n_{DE}(k)$ for $N=11,17$ and $25$.
Besides the RSPDM and the momentum distribution, excited many-body eigenstates of interest can be characterized by the corresponding natural orbitals (NOs) and their occupancies. Figure \[NOocc\] shows the occupancies of the NOs of the state $\psi_{DS}$, and of a typical state $\psi_{DE}$, for $N=25$. We observe that the occupancies are fairly low (less than one) for all NOs, but there is a sharp drop in the occupancies after the $25$th NO. We have observed such behavior for other values of $N$ as well. In contrast, the occupancies of the NOs corresponding to a typical state $\psi_{DE}$ do not exhibit a sharp drop after the $N$th orbital, but decrease rather smoothly.
Figure \[NOs-xandk\] illustrates the spatial structure and the Fourier power spectra of the NOs corresponding to the DS state for $N=25$. The spatial structure of the calculated NOs is either symmetric or antisymmetric. This is connected to the symmetry $\rho_{DS}(x,y)=\rho_{DS}(-y,-x)$; it follows from this symmetry that if some NO is non-degenerate, it is either symmetric or antisymmetric, while if two NOs are degenerate (their occupancies are identical), they can be superimposed to yield one symmetric and one antisymmetric NO. Our numerical study shows that the low-order (leading) NOs are localized in space, but broad in $k$-space; Fig. \[NOs-xandk\](a) depicts the $x$-space structure, and Fig. \[NOs-xandk\](b) shows the $k$-space structure of the first and the third NO. We see that these low-order NOs do not contribute to the sharp peaks observed in the momentum distribution of DS states. Further inspection of the NOs reveals that the NOs just on the upper side of the sharp drop in $\lambda_j$ (Fig. \[NOocc\]) are in fact responsible for the sharp peaks: Figs. \[NOs-xandk\](c) and (d) display the $x$-space and $k$-space structure, respectively, of the $24$th and the $25$th NO ($N=25$). The total momentum distribution (red squares, dotted line in Fig. \[NOs-xandk\](d)) can be written as $\sum_{i=1}^{\infty} \lambda_i \tilde\Phi_i^* (k) \tilde\Phi_i (k)$; the contribution to this sum stemming from the $24$th and the $25$th NO is shown in Fig. \[NOs-xandk\](d) with a black solid line. Evidently, for this DS state with $N=25$, the $24$th and the $25$th NO give rise to the peaks in the momentum distribution.
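A minimal numerical sketch of how such NO occupancies, and the contribution of individual NOs to $n(k)$, can be obtained is given below; the coarse grid, the small particle number, and the direct diagonalization of the discretized RSPDM are illustrative choices rather than the procedure used for the figures.

```python
import numpy as np

# Illustrative sketch: discretize rho_DS(x, y), diagonalize it to get natural
# orbitals and occupancies, and isolate the contribution of the (N-1)th and
# Nth NO to the momentum distribution.
L = N = 11
M = 180                                             # coarse spatial grid (assumed)
x = np.linspace(-L / 2, L / 2, M, endpoint=False)
dx = x[1] - x[0]
orbs = np.array([np.sqrt(2 / L) * np.sin(2 * np.pi * m * x / L)
                 for m in range(1, N + 1)])

def rho_element(ix, iy):
    if ix > iy:
        ix, iy = iy, ix
    seg = orbs[:, ix:iy + 1]
    P = np.eye(N) - 2.0 * (seg @ seg.T) * dx
    A = np.linalg.inv(P).T * np.linalg.det(P)
    return orbs[:, ix] @ A @ orbs[:, iy]

rho = np.array([[rho_element(a, b) for b in range(M)] for a in range(M)])

# NOs and occupancies: rho(x, y) = sum_i lam_i Phi_i^*(x) Phi_i(y)
lam, vecs = np.linalg.eigh(rho * dx)                 # kernel of the integral operator
order = np.argsort(lam)[::-1]
lam, vecs = lam[order], vecs[:, order] / np.sqrt(dx)   # int |Phi_i|^2 dx = 1
print("leading occupancies:", np.round(lam[:4], 3))
print("occupancy drop around the Nth NO:", round(lam[N - 1], 3), "->", round(lam[N], 3))

# Contribution of the (N-1)th and Nth NO to n(k_m)
m_vals = np.arange(-30, 31)
k_vals = 2 * np.pi * m_vals / L
coeffs = np.exp(-1j * np.outer(k_vals, x)) @ vecs * dx / np.sqrt(L)   # c_i(k_m)
n_pair = (np.abs(coeffs[:, N - 2:N]) ** 2) @ lam[N - 2:N]
print("momentum where this pair contributes most:", round(k_vals[np.argmax(n_pair)], 3))
```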
It is interesting to note that when all phases are chosen to be identical but nonzero, e.g., $\theta_m=\pi/2$, all of the fermionic NOs are $\propto \cos (k_m x)$, and we again observe a higher degree of correlation between mirror points in the RSPDM and peaks in the momentum distribution (not shown).
All of the observations above indicate a somewhat smaller degree of order in the degenerate eigenstates $\psi_{DE}$ than in dark solitons $\psi_{DS}$, which follows from the random (disordered) choice of the phases $\theta_m$. This is further underpinned by Table \[TabEnt\], which shows the entropy $S=-\sum_i p_i \log p_i$, where $p_i=\lambda_i/N$, for the dark “soliton” states $\psi_{DS}$ and for typical $\psi_{DE}$ states. The entropy of the states $\psi_{DE}$ is systematically larger than that of the states $\psi_{DS}$.
$N$ $S[\psi_{DS}]$ $S[\psi_{DE}]$
----- ---------------- ----------------
11 2.90 3.21
17 3.39 3.66
25 3.83 4.05
: The entropy $S$ of dark “soliton” states, and typical $\psi_{DE}$ states for different values of the number of particles $N$.[]{data-label="TabEnt"}
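The entropies listed in Table \[TabEnt\] follow from the NO occupancies through a short computation of the kind sketched below (the helper function and its toy argument are purely illustrative):

```python
import numpy as np

# Sketch: S = -sum_i p_i log p_i with p_i = lam_i / N, given NO occupancies lam
# (e.g., the array obtained in the previous sketch); sum_i lam_i = N is used.
def no_entropy(lam):
    p = np.asarray(lam, dtype=float)
    p = p[p > 0] / p.sum()              # p_i = lam_i / N, dropping numerical zeros
    return float(-(p * np.log(p)).sum())

print(no_entropy(np.ones(5)))           # maximally mixed over 5 orbitals: log(5) ~ 1.609
```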
From our observations it follows that the many-body state $\psi_{DS}$ contains a distinct component, which can be interpreted as a standing wave populating the momentum modes at $\pm k_{peak}$. In the effective single-particle picture, this component gives rise to the population of the natural orbitals close to (and including) the $N$th NO. However, it should be pointed out that this component is fairly small, i.e., it yields only a small occupation of these effective SP states.
In a similar fashion to the excited $\psi_{DS}$ state, the ground state of the TG gas on the ring yields distinct population of the zero-momentum mode [@Lenard1964]; in this case, however, the zero-momentum mode is the leading natural orbital, and its population is fairly large (it scales as $\sqrt{N}$ [@Forrester2003]). Even though the TG states are not Bose condensed, they can sharply populate a single momentum mode because bosons do not obey the Pauli principle and consequently more than one boson can occupy a single momentum state (which is not the case for noninteracting fermions).
It is interesting to note that on the Fermi side of the mapping, the momentum distribution of noninteracting fermions $n_F(k)$ is uniform up to the Fermi edge (excluding the zero momentum mode at $k_0=0$), and does not depend on the randomly chosen phases $\theta_m$:
$$n_{F,DS}(k_m) = n_{F,DE}(k_m) =
\left\{
\begin{array}{ll}
\frac{1}{2} & \textrm{if $1\leq |m| \leq N$}\\
0 & \textrm{otherwise}
\end{array}
\right.
\label{nF}$$
Namely, the SP eigenstates $\sin(k_m x+\theta_m)$ are NOs of the fermionic system. The Fourier power spectrum of each SP state $\sin(k_m x+\theta_m)$ \[which determines the fermionic momentum distribution via Eq. (\[FMDNOs\])\] does not depend on the phase $\theta_m$. Each fermionic NO $\sin(k_m x+\theta_m)$ can be written as a superposition of two plane waves, $\sin(k_m x+\theta_m)=(e^{ik_m x+i\theta_m}-e^{-ik_m x-i\theta_m})/2i$. Evidently, the mean value of the momenta pointing in the positive (negative) direction is $\pi (N+1) /L$ \[$-\pi (N+1) /L$, respectively\], that is, it is identical to $k_{peak}$. When the fermionic states are mapped to the TG states, a wave-function component which distinctively populates the momentum modes at $\pm k_{peak}$ can appear. This occurs when the phases $\theta_m$ act coherently; a random choice of the phases $\theta_m$ destroys the two spikes connected with this component.
Before closing this section we note that in all our numerical calculations the phases of the states $\psi_{DE}$ were chosen at random (with respect to the uniform probability density in $[-\pi,\pi]$). A random choice of the phases yields a typical state $\psi_{DE}$ in the sense that one-body observables, such as the momentum distribution, of typical states approximately coincide. To verify this assumption, Fig. \[figEns\] displays momentum distributions for 10 eigenstates $\psi_{DE}$ ($N=11$ particles), obtained from 10 randomly chosen configurations $\{ \theta_m\ |\ m=1,\ldots,N \}$ of the phases. The momentum distribution varies only slightly from case to case, with one exception that exhibits two (relatively small) dark-solitonic spikes. Exceptions from the typical behavior will be harder to see for larger values of $N$, because in this case the parameter space spanned by the $N$ phases $\theta_m$ is larger, and it is harder for the phases to become correlated by chance, which could yield characteristic solitonic spikes in the momentum distribution. Hence, we can conclude that our observations regarding the class of states $\psi_{DE}$ from Eq. (\[psiDE\]) hold for practically all of these states in the sense stated above.
DS states in a parity invariant well-shaped potential
=====================================================
The concept of dark “solitons” can be extended to various types of parity-invariant potentials (e.g., see [@Buljan2006]). DS states are found in harmonic confinement [@Busch2003], periodic lattices [@Buljan2006], well-shaped potentials [@Buljan2006], and so forth. In Ref. [@Buljan2006] it was shown that by parity-invariant filtering of the many-body wave function one could in principle excite the TG gas into a DS state. Let us compare the RSPDM and the momentum distribution of DS states on the ring and in a parity-invariant potential $V_c(x)=V_c^0 \{ 2 + \sum_{i=1,2} (-)^{i+1}\tanh x_w(x+(-)^i x_c) \}$ ($V_c^0=15$, $x_w=8$, and $x_c=25$). In such a potential, DS states are constructed by populating the first $N$ [*odd*]{} SP eigenstates on the Fermi side of the map. Figure \[box\](a) displays the RSPDM, while Fig. \[box\](b) displays the momentum distribution of such an excited eigenstate for $N=10$. We clearly observe that the structure of the RSPDM and the momentum distribution is similar to that of DS states on the ring; the RSPDM has off-diagonal mirror-point correlations, while the momentum distribution has two sharp spikes. Furthermore, Fig. \[box\](c) displays the occupancies of the NOs, which clearly exhibit a large and sudden drop after the $N$th NO. Figure \[box\](d) shows the contribution from the $(N-1)$th and the $N$th NO to the momentum distribution, $\sum_{i=9}^{10} \lambda_i \tilde\Phi_i^* (k) \tilde\Phi_i (k)$; clearly these NOs are responsible for the peaks in the momentum distribution.
The observations presented in Fig. \[box\] suggest that the behavior of the one-body observables of DS states, such as the two sharp spikes in the momentum distribution and the correlation between the mirror points in the RSPDM, can be found in various types of parity-invariant potentials.
Connection to incoherent light
==============================
We would like to point out that the behavior of incoherent light in linear [@Turunen] and nonlinear [@Moti; @SpatSolitons; @Cohen2005; @Picozzi] optical systems has many similarities to the behavior of interacting (partially condensed or non-condensed) Bose gases [@Buljan2006; @Picozzi; @Naraschewski1999; @Buljan2005]. The dynamics of incoherent light in nonlinear systems has attracted considerable interest in the past decade, since the first experiments on incoherent solitons [@Moti] in noninstantaneous nonlinear media were conducted, and a number of important results have been obtained since then (for a review see, e.g., Ref. [@SpatSolitons]). Among the recent results one finds, e.g., the experimental observation of incoherent solitons in nonlinear photonic lattices [@Cohen2005], and the thermalization of incoherent nonlinear waves [@Picozzi]. We believe that many of the phenomena observed with incoherent light in optics [@Moti; @SpatSolitons; @Cohen2005; @Picozzi] can find their counterparts in the context of Bose gases.
In Ref. [@Buljan2006] it has been pointed out that there is a mathematical relation between the propagation of partially spatially incoherent light in [*linear*]{} 1D photonic structures and the quantum dynamics of a TG gas. More specifically, the correlation functions describing incoherent nondiffracting beams in optics [@Turunen] can be mapped [@Buljan2006] to DS states. However, it should be emphasized that the spatial power spectrum of these incoherent beams corresponds to the momentum distribution of noninteracting fermions, i.e., it profoundly differs from the momentum distribution of DS states in a TG gas discussed here.
Summary
=======
We have employed a recently obtained formula [@Pezer2007] to numerically calculate the RSPDM correlations, the natural orbitals and their occupancies, and the momentum distribution of dark “solitons” in a TG gas. We have found that these excited eigenstates of a TG gas have a characteristic momentum distribution with two distinct sharp spikes; while most of the paper is devoted to the ring geometry, where the spikes are located at $k_{peak}=\pm\pi (N+1)/L$ ($N$ is the number of particles and $L$ is the length of the ring), we have shown results which suggest that such behavior is generic for DS states in parity-invariant potentials. It has been shown that the spikes in the momentum distribution are closely connected to the cross-diagonal, oscillatory long-range correlations between mirror points ($x$ and $-x$) in the RSPDM. This behavior of DS states follows from the fact that they are specially tailored; in the ring geometry, it has been shown that the two spikes and the special form of spatial coherence are lost for most eigenstates that are degenerate with DS states.
Acknowledgments
===============
This work is supported by the Croatian Ministry of Science (grant no. 119-0000000-1015).
[99]{}
M. Girardeau, J. Math. Phys. [**1**]{}, 516 (1960).
E. Lieb and W. Liniger, Phys. Rev. [**130**]{}, 1605 (1963); E. Lieb, Phys. Rev. [**130**]{}, 1616 (1963).
F. Schreck, L. Khaykovich, K.L. Corwin, G. Ferrari, T. Bourdel, J. Cubizolles, and C. Salomon, Phys. Rev. Lett. [**87**]{}, 080403 (2001); A. Görlitz, J.M. Vogels, A.E. Leanhardt, C. Raman, T.L. Gustavson, J.R. Abo-Shaeer, A.P. Chikkatur, S. Gupta, S. Inouye, T. Rosenband, and W. Ketterle, [*ibid.*]{} [**87**]{}, 130402 (2001); M. Greiner, I. Bloch, O. Mandel, T.W. Hansch, and T. Esslinger, [*ibid.*]{} [**87**]{}, 160405 (2001); H. Moritz, T. Stöferle, M. Kohl, and T. Esslinger, [*ibid.*]{} [**91**]{}, 250402 (2003); B. Laburthe-Tolra, K.M. O’Hara, J.H. Huckans, W.D. Phillips, S.L. Rolston, and J.V. Porto, [*ibid.*]{} [**92**]{}, 190401 (2004); T. Stöferle, H. Moritz, C. Schori, M. Kohl, and T. Esslinger, [*ibid.*]{} [**92**]{}, 130403 (2004).
T. Kinoshita, T. Wenger, and D.S. Weiss, Science [**305**]{}, 1125 (2004).
B. Paredes, A. Widera, V. Murg, O. Mandel, S. Fölling, I. Cirac, G. V. Shlyapnikov, T. W. Hänsch, and I. Bloch, Nature (London) [**429**]{}, 277 (2004).
T. Kinoshita, T. Wenger, and D.S. Weiss, Nature (London) [**440**]{}, 900 (2006).
In the limit of infinitely strong $\delta$-function interactions, the Lieb-Liniger gas becomes a gas of impenetrable bosons in 1D, i.e., the TG gas.
M. Olshanii, Phys. Rev. Lett. [**81**]{}, 938 (1998).
D.S. Petrov, G.V. Schlyapnikov, and J.T.M. Valraven, Phys. Rev. Lett. [**85**]{}, 3745 (2000).
V. Dunjko, V. Lorent, and M. Olshanii, Phys. Rev. Lett. [**86**]{}, 5413 (2001).
M.D. Girardeau and E.M. Wright, Phys. Rev. Lett. [**84**]{}, 5691 (2000).
T. Busch and G. Huyet, J. Phys. B [**36**]{} 2553 (2003).
H. Buljan, O. Manela, R. Pezer, A. Vardi, and M. Segev, Phys. Rev. A [**74**]{}, 043610 (2006).
S. Burger, K. Bongs, S. Dettmer, W. Ertmer, K. Sengstock, A. Sanpera, G.V. Shlyapnikov, and M. Lewenstein, Phys. Rev. Lett. [**83**]{}, 5198 (1999); J. Denschlag, [*et al.*]{}, Science [**287**]{}, 97 (2000).
R. Dum, J. I. Cirac, M. Lewenstein, and P. Zoller, Phys. Rev. Lett. [**80**]{}, 2972 (1998).
Th. Busch, and J.R. Anglin, Phys. Rev. Lett. [**84**]{}, 2298 (2000).
A.E. Muryshev, G.V. Shlyapnikov, W. Ertmer, K. Sengstock, and M. Lewenstein, Phys. Rev. Lett. [**89**]{}, 110401 (2002).
E.B. Kolomeisky, T.J. Newman, J.P. Straley, and Xiaoya Qi, Phys. Rev. Lett. [**85**]{} 1146, (2000).
D.J. Frantzeskakis, N.P. Proukakis, and P.G. Kevrekidis Phys. Rev. A [**70**]{}, 015601 (2004).
M. Ögren, G.M. Kavoulakis, and A.D. Jackson, Phys. Rev. A [**72**]{}, 021603(R) (2005).
A. Lenard, J. Math. Phys. [**5**]{}, 930 (1964); T.D. Schultz, J. Math. Phys. [**4**]{}, 666 (1963); H.G. Vaidya and C.A. Tracy, Phys. Rev. Lett. [**42**]{}, 3 (1979).
D.B. Creamer, H.B. Thacker, and D. Wilkinson, Phys. Rev. D [**23**]{}, 3081 (1981); M. Jimbo and T. Miwa, Phys. Rev. D [**24**]{}, 3169 (1981).
M.D. Girardeau, E.M. Wright, and J.M. Triscari, Phys. Rev. A [**63**]{}, 033601 (2001); G.J. Lapeyre, M.D. Girardeau, and E.M. Wright, Phys. Rev. A [**66**]{}, 023606 (2002).
A. Minguzzi, P. Vignolo, M.P. Tossi, Phys. Lett. A [**294**]{}, 222 (2002).
M.A. Cazalilla, Europhys. Lett. [**59**]{}, 793 (2002).
M. Olshanii and V. Dunjko, Phys. Rev. Lett. [**91**]{}, 090401 (2003).
T. Papenbrock, Phys. Rev. A [**67**]{}, 041601(R) (2003).
P.J. Forrester, N.E. Frankel, T.M. Garoni, N.S. Witte, Phys. Rev. A [**67**]{}, 043607 (2003).
D.M. Gangardt and G.V. Shlyapnikov. Phys. Rev. Lett. [**90**]{}, 010401 (2003).
G.E. Astrakharchik and S. Giorgini, Phys. Rev. A [**68**]{}, 031602(R) (2003).
M. Rigol and A. Muramatsu, Phys. Rev. A [**70**]{}, 031603(R) (2004)
D.M. Gangardt, J. Phys. A [**37**]{}, 9335 (2004).
G.P. Berman, F. Borgonovi, F.M. Izrailev, and A. Smerzi, Phys. Rev. Lett. [**92**]{}, 030404 (2004).
M. Rigol and A. Muramatsu, Phys. Rev. Lett. [**94**]{}, 240403 (2005).
A. Minguzzi and D.M. Gangardt, Phys. Rev. Lett. [**94**]{}, 240404 (2005).
J. Brand and A.Yu. Cherny, Phys. Rev. A [**72**]{}, 033619 (2005).
P.J. Forrester, N.E. Frankel, and M.I. Makin, Phys. Rev. A [**74**]{}, 043614 (2006).
D.M. Gangardt and G.V. Shlyapnikov, New J. of Phys. [**8**]{}, 167 (2006).
M. Rigol, V. Dunjko, V. Yurovsky, and M. Olshanii Phys. Rev. Lett. [**98**]{}, 050405 (2007).
R. Pezer and H. Buljan, Phys. Rev. Lett. [**98**]{}, 240403 (2007).
F. Deuretzbacher, K. Bongs, K. Sengstock, and D Pfannkuche, Phys. Rev. A [**75**]{}, 013614 (2007).
J.-S. Caux, P. Calabrese, and N. A. Slavnov, J. Stat. Mech. (2007) P01008.
Y. Lin and B. Wu, Phys. Rev. A [**75**]{}, 023613 (2007).
J.G. Muga and R.F. Snider, Phys. Rev. A [**57**]{}, 3317 (1998).
K. Sakmann, A.I. Streltsov, O.E. Alon, and L.S. Cederbaum, Phys. Rev. A [**72**]{}, 033613 (2005).
M.T. Batchelor, X.-W. Guan, N. Oelkers, and C. Lee, J. Phys. A [**38**]{}, 7787 (2005).
The behavior of the [*discrete*]{} HCB-lattice model is not equivalent to the TG model in a [*continuous*]{} potential, e.g., see M.A. Cazalilla, Phys. Rev. A [**70**]{}, 041604(R) (2004).
For simplicity, here we focus only on an odd number of particles on the ring. If $N$ is even, then $k_m=2\pi m/L + \pi/L$, and $m$ is integer (see footnote 6. in Ref. [@Lieb1963]).
J. Turunen, A. Vasara, and A.T. Friberg, J. Opt. Soc. Am. A [**8**]{} 282 (1991); A.V. Shchegrov and E. Wolf, Opt. Lett. [**25**]{} (2000).
M. Mitchell, Z. Chen, M. Shih, and M. Segev, Phys. Rev. Lett. [**77**]{}, 490 (1996); M. Mitchell and M. Segev, Nature (London) [**387**]{}, 880 (1997); Z. Chen, M. Mitchell, M. Segev, T. Coskun, and D.N. Christodoulides, Science [**280**]{}, 889 (1998).
M. Segev and D.N. Christodoulides, [*Incoherent Solitons*]{} in [*Spatial Solitons*]{}, S. Trillo and W. Torruellas eds. (Springer, Berlin, 2001) pp. 87-125.
O. Cohen, G. Bartal, H. Buljan, J.W. Fleischer, T. Carmon, M. Segev, and D.N. Christodoulides, Nature (London) [**433**]{}, 500 (2005).
S. Pitois, S. Lagrange, H.R. Jauslin, and A. Picozzi, Phys. Rev. Lett. [**97**]{} 033902 (2006); for a recent review, see A. Picozzi, Opt. Express [**15**]{} 9063 (2007), and references therein.
M. Naraschewski and R.J. Glauber, Phys. Rev. A [**59**]{}, 4595 (1999).
H. Buljan, M. Segev, and A. Vardi, Phys. Rev. Lett. [**95**]{}, 180401 (2005).
---
abstract: |
  In this paper, we propose a method for targetless and automatic Camera-LiDAR calibration. Our approach is an extension of the hand-eye calibration framework to 2D-3D calibration. By using a sensor fusion odometry method, the scaled camera motions are calculated with high accuracy. In addition, we clarify the sensor motions that are best suited to this calibration method.
  The proposed method requires only the three-dimensional point cloud and the camera image, and does not need other information such as LiDAR reflectance or an initial extrinsic parameter. In the experiments, we demonstrate our method using several sensor configurations in indoor and outdoor scenes to verify its effectiveness. The accuracy of our method exceeds that of other comparable state-of-the-art methods.
author:
- 'Ryoichi Ishikawa$^{1}$, Takeshi Oishi$^{1}$ and Katsushi Ikeuchi$^{2}$ [^1] [^2]'
title: ' **LiDAR and Camera Calibration using Motion Estimated by Sensor Fusion Odometry** '
---
Introduction
============
Sensor fusion has been widely studied in the field of robotics and computer vision. Compared to a single sensor system, higher level tasks can be performed by a fusion system combining multiple sensors. This type of system can be directly applied to three-dimensional environmental scanning. For example, by combining cameras with LiDAR, it is possible to perform color mapping on range images (Fig. \[fig:intro\]) or estimate accurate sensor motion for mobile sensing systems [@Inso_laserline; @Zheng_balloon; @ishikawa20163d; @zhang2017real].
In a 2D-3D sensor fusion system composed of a camera and LiDAR, an extrinsic calibration of the sensors is required. There are methods that obtain the extrinsic parameter by using target cues or by manually associating 2D points on the image with 3D points in the point cloud. However, manually establishing correspondences for accurate calibration is laborious because it requires many matches, and even when many correspondences are created in this way, the accuracy of the calibration is still insufficient. Automated methods such as [@zhang2004extrinsic; @fremont2008extrinsic] use targets that can be detected in both 2D images and 3D point clouds. However, since it is necessary to prepare targets detectable by both the camera and the LiDAR, such approaches are impractical and undesirable for on-site calibration. Recently, automatic 2D-3D calibration methods that do not require targets have been proposed. However, since the information obtained from each sensor is multi-modal, the calibration result depends on the modality gap between the sensors.
![Top: Motion-based 2D-3D calibration. In our method, the LiDAR motion is estimated by the ICP algorithm, and the camera motion is initially estimated using feature-point matching and then estimated with scale by the sensor fusion system. Bottom: Colored scan by HDL-64E. The texture is taken by Ladybug 3, and our calibration result is used for the texturing.[]{data-label="fig:intro"}](introimage_calib.pdf){width=".9\linewidth"}
In this paper, we propose an automatic and targetless calibration method between a fixed camera and LiDAR. As shown in Fig. \[fig:intro\], the proposed method is based on hand-eye calibration. In our method, the motions of the sensors are estimated separately and the calibration is performed using the estimated motions. Each sensor motion is calculated in the same modality, and the extrinsic parameter is derived numerically from the sensor motions. In conventional motion-based 2D-3D calibration, the motion of the camera is obtained from the 2D images only [@taylor2016motion]. However, using only camera images, the motion can be estimated only up to scale. The precision of the extrinsic parameter is greatly affected by motion errors in hand-eye calibration, and although the scale itself can be calculated simultaneously with the extrinsic parameter from multiple motions, hand-eye calibration with scaleless motions deteriorates the accuracy of the calibration.
On the other hand, in sensor fusion odometry using the LiDAR and the camera, the motion of the camera can be accurately estimated with scale if the extrinsic parameter between the sensors is known [@Inso_laserline; @Zheng_balloon; @ishikawa20163d; @zhang2017real]. In our method, we adopt this idea of camera motion estimation using sensor fusion odometry. First, an initial extrinsic parameter is obtained from the scaleless camera motions and the scaled LiDAR motions. Next, the camera motions are recalculated with scale using the initial extrinsic parameter and the point cloud from the LiDAR, and the extrinsic parameter is then calculated again using these motions. The recalculation of the camera motions and of the extrinsic parameter is repeated until the estimation converges.
Our proposed method requires that the camera and the LiDAR have overlapping measurement ranges and that the LiDAR's measurement range is two-dimensional, so that consecutive scans can be aligned for the LiDAR motion estimation. The contributions of this paper are as follows.
- As far as we know, this method is the first approach that incorporates camera motion estimation through sensor fusion into 2D-3D calibration.
- We study the optimal sensor motions with which this calibration method works effectively.
- The inputs are only the RGB images from a camera and the three-dimensional point clouds from a LiDAR; no other information, such as LiDAR reflectance or an initial value of the extrinsic parameter, is needed. The estimated extrinsic parameter is more accurate than that of other methods even with a small number of motions.
Related Works {#sec:related}
=============
Our work is related to the targetless and automatic 2D-3D calibration and hand-eye calibration.
Target-less multi-modal calibration
-----------------------------------
Targetless and automatic 2D-3D calibration methods generally use information common to both the image and the point cloud. For example, portions that appear as discontinuous 3D shapes are highly likely to appear as edges in an RGB image, and methods that align these three-dimensional discontinuities with 2D edges have been proposed [@levinson2013automatic; @cui2016line]. Meanwhile, a multi-modal alignment method using Mutual Information (MI) was proposed by Viola et al. [@viola], and it has been developed mainly in the field of medical imaging. 2D-3D calibration methods using MI to evaluate the commonality between LiDAR and camera data have also been proposed in recent years. The indicators evaluated through MI include reflectance versus gray-scale intensity [@pandey2015automatic], surface normal versus gray-scale intensity [@taylor2012mutual], and multiple indicators including LiDAR discontinuities and image edge strength [@irie2016target]. Taylor and Nieto also proposed a gradient-based metric in [@taylor2014multi].
While these methods align the 3D point cloud to the 2D image using 3D-to-2D projection, texturing methods have also been proposed that take images from multiple viewpoints to form a stereo pair, reconstruct the 3D structure from the images, and align it to the 3D point cloud. In [@Banno_cviu], a method of computing the extrinsic parameter between the LiDAR and the camera for texturing a dense 3D scan is proposed; the extrinsic calibration is done by aligning the dense three-dimensional range data and the sparse three-dimensional data reconstructed from two-dimensional stereo images.
Hand-eye Calibration
--------------------
The method of changing the position and orientation of the sensors and performing the calibration using the motions observed by each sensor is known as hand-eye calibration. Let ${\bf A}$ and ${\bf B}$ be the changes in position and orientation observed by two rigidly attached sensors, and let ${\bf X}$ be the unknown relative position and orientation between the sensors. Then the expression ${\bf AX}={\bf XB}$ holds (see the top of Fig. \[fig:intro\]). By using this expression to solve for ${\bf X}$, the extrinsic parameter between the sensors can be obtained [@shiu1989calibration; @fassi2005hand]. Furthermore, to deal with the influence of sensor noise, Kalman-filter-based calibration methods that simultaneously estimate the sensor biases have been proposed [@hol2010modeling; @kelly2011visual].
In [@heng2013camodocal], a method is proposed to calibrate the transformations between four cameras mounted on a car using visual odometry. In [@taylor2016motion], Taylor and Nieto propose a method that obtains the motion of each sensor for 2D-3D calibration and estimates the extrinsic parameter and the time offset between the sensor motions. In their method, a highly accurate result is obtained by additionally combining a multi-modal alignment method. However, since the method uses scaleless position transitions estimated from the camera images, it is difficult to obtain accurate camera motions; in particular, it is difficult to accurately obtain the translation parameters from a small number of motions.
Methodology
===========
{width=".9\linewidth"}
Figure \[fig:overview\] shows the overview of our method, which is roughly divided into two steps. In the initialization phase, we estimate the extrinsic parameter from the LiDAR motions obtained by ICP alignment and the camera motions obtained by feature-point matching. We then alternately iterate between estimating the camera motions through sensor fusion odometry and re-estimating the extrinsic parameter.
Initial calibration parameter estimation
----------------------------------------
### Sensor motion estimation
First, we explain the motion estimation of each sensor for the initial extrinsic parameter estimation.
[**LiDAR**]{}
For the estimation of the LiDAR motion, we use a high-speed registration method based on an ICP algorithm which searches for corresponding points along the gaze directions [@alignment]. We create meshes on the point clouds in advance by projecting the point sequence onto two dimensions and applying a Voronoi splitting. When aligning scans, the threshold on the distance between corresponding points is initially set large, and outlier correspondences are eliminated while the threshold is gradually decreased.
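The registration method of [@alignment] is not reproduced here; as a rough stand-in only, the sketch below uses an off-the-shelf point-to-plane ICP from the Open3D library with a gradually tightened correspondence-distance threshold (the library calls, search radius, and threshold schedule are assumptions, not the authors' implementation).

```python
import numpy as np
import open3d as o3d  # third-party library; API names as in recent Open3D releases

# Rough stand-in for the LiDAR motion estimation: point-to-plane ICP between
# consecutive scans, with the correspondence-distance threshold decreased
# over successive passes (mimicking the coarse-to-fine outlier rejection).
def estimate_lidar_motion(src_points, dst_points, thresholds=(1.0, 0.5, 0.2, 0.05)):
    """src_points, dst_points: (M, 3) arrays; returns the 4x4 motion aligning src to dst."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_points))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(dst_points))
    for pcd in (src, dst):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
    T = np.eye(4)
    for thr in thresholds:                            # gradually tighten the threshold
        result = o3d.pipelines.registration.registration_icp(
            src, dst, thr, T,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        T = result.transformation
    return T
```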
[**Camera**]{}
For the initial motion estimation of the camera, standard feature-point matching is used. First, we extract feature points from two images using the AKAZE algorithm [@alcantarilla2011fast], compute descriptors, and establish matches. From the matches, we calculate the initial relative position and pose between the camera frames using the five-point algorithm [@Nister_2006] and RANSAC [@fischler1981random]. After obtaining the initial relative position and orientation, optimization is performed by minimizing the projection error using an angular error metric with respect to the epipolar plane [@pagani2011structure].
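A hedged sketch of this initial, scaleless motion estimate using OpenCV is given below; it assumes a pinhole camera with intrinsic matrix `K` for simplicity, and it omits the omnidirectional model and the angular-error refinement of [@pagani2011structure].

```python
import cv2
import numpy as np

# Sketch of the initial (up-to-scale) camera motion: AKAZE features, brute-force
# matching, then the five-point algorithm inside RANSAC and a cheirality check.
def initial_camera_motion(img1, img2, K):
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(img1, None)
    kp2, des2 = akaze.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float64([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float64([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R_a, t_a, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R_a, t_a          # rotation and unit-norm (scaleless) translation
```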
### Initial extrinsic parameter calibration from sensor motion
In order to obtain the relative position and orientation between the two sensors from the initially estimated motions, we use a method that extends normal hand-eye calibration to include the estimation of the camera motion’s scale. Let the $i$th position and pose changes measured by the camera and the LiDAR be the $4 \times 4$ matrices ${\bf A}^i$ and ${\bf B}^i$, respectively, and let the extrinsic parameter between the two sensors be the $4 \times 4$ matrix ${\bf X}$. Then ${\bf A}^i{\bf X}={\bf X}{\bf B}^i$ holds, and the following two equations are obtained by decomposing it [@shiu1989calibration], $$\begin{aligned}
\label{eq:axis}
{\bf R}_a^{i}{\bf R}&=&{\bf R} {\bf R}_b^{i}\\
\label{eq:trans}
{\bf R}_{a}^{i}{\bf t}+{\bf t}_a^{i}&=&{\bf R}{\bf t}_b^{i}+{\bf t},\end{aligned}$$ where ${\bf R}_{a}^{i}$ and ${\bf R}_{b}^{i}$ are the $3 \times 3$ rotation matrices of ${\bf A}^{i}$ and ${\bf B}^{i}$, and ${\bf t}_a^{i}$ and ${\bf t}_b^{i}$ are the $3 \times 1$ translation vectors of ${\bf A}^{i}$ and ${\bf B}^{i}$. Let ${\bf k}_a^{i}$ and ${\bf k}_b^{i}$ be the rotation axes of the rotation matrices ${\bf R}_{a}^{i}$ and ${\bf R}_{b}^{i}$. When Eq. \[eq:axis\] holds, the following equation also holds, $$\begin{aligned}
\label{eq:axis2}
{\bf k}_a^{i}={\bf R} {\bf k}_b^{i}.\end{aligned}$$ Since the absolute scale of the translational movement between camera frames cannot be calculated, Eq. \[eq:trans\] is rewritten as follows using the scale factor $s^{i}$, $$\begin{aligned}
\label{eq:trans2}
{\bf R}_{a}^{i}{\bf t}+s^{i} {\bf t}_a^{i}={\bf R}{\bf t}_b^{i}+{\bf t}.\end{aligned}$$
${\bf R}$ is solved linearly by using SVD on the series of ${\bf k}_a^{i}, {\bf k}_b^{i}$. To solve for the rotation, at least two position and pose transitions are required, and rotations about different axes must be included in the series of transitions. In the nonlinear optimization, ${\bf R}$ is refined by minimizing the following cost function derived from Eq. \[eq:axis\], $$\begin{aligned}
\label{eq:rotopt}
{\bf R}={\mathop{\rm arg~min}\limits}_{R}\sum_{i}\left|{\bf R}_a^{i}{\bf R}-{\bf R} {\bf R}_b^{i}\right|\end{aligned}$$ After optimizing ${\bf R}$, ${\bf t}$ and $s^{i}$ are obtained by constructing simultaneous equations and solving them linearly using the least-squares method.
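The scale-extended hand-eye step (Eqs. \[eq:axis2\] and \[eq:trans2\]) can be summarized by the sketch below; the axis extraction and the stacked least-squares system are straightforward implementation choices, not the authors' code.

```python
import numpy as np

# Sketch of the scale-extended hand-eye step: R from the rotation axes via SVD,
# then t and the per-motion scales s^i from a stacked linear system.
def rotation_axis(R):
    """Rotation axis of R (sign consistent with a rotation angle in (0, pi))."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def hand_eye_with_scale(motions_a, motions_b):
    """motions_a/b: lists of (R, t) pairs. Returns extrinsic R, t and scales s^i."""
    # 1) Rotation: find R with k_a^i = R k_b^i for all i (Kabsch/SVD solution).
    M = sum(np.outer(rotation_axis(Ra), rotation_axis(Rb))
            for (Ra, _), (Rb, _) in zip(motions_a, motions_b))
    U, _, Vt = np.linalg.svd(M)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    # 2) Translation and scales: (R_a^i - I) t + s^i t_a^i = R t_b^i for every i.
    n = len(motions_a)
    A = np.zeros((3 * n, 3 + n))
    b = np.zeros(3 * n)
    for i, ((Ra, ta), (Rb, tb)) in enumerate(zip(motions_a, motions_b)):
        A[3 * i:3 * i + 3, 0:3] = Ra - np.eye(3)
        A[3 * i:3 * i + 3, 3 + i] = np.ravel(ta)
        b[3 * i:3 * i + 3] = np.ravel(R @ tb)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return R, sol[:3], sol[3:]
```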
Iteration of Camera motion estimation and Calibration from sensor motion
------------------------------------------------------------------------
The initial extrinsic parameter can be obtained using the estimated LiDAR motions and the initial camera motions. However, the scale of the camera motion cannot be obtained from the camera images alone. The rotation component of the extrinsic parameter is independent of the scale information and can be accurately estimated in the initial parameter estimation phase. On the other hand, the translation of the extrinsic parameter is calculated from the difference between the movement amounts of the camera and the LiDAR when the sensors rotate, as indicated in Eq. \[eq:trans\]. Since the precision of the translational component of the extrinsic parameter is therefore deeply related to the accuracy of the camera motion estimation, it is difficult to estimate the extrinsic parameter accurately from scaleless camera motions.
On the other hand, motion estimation using the sensor fusion system can recover the scaled motion with high accuracy [@Inso_laserline; @Zheng_balloon]. Once the extrinsic parameter is estimated, we can estimate the camera motion ${\bf A}^{i}$ with scale by using the given extrinsic parameter ${\bf X}$ and the range data scanned by the LiDAR. After the motion estimation, the extrinsic parameter ${\bf X}$ is re-estimated using the series of ${\bf A}^{i}$ and ${\bf B}^{i}$. Since the scale factors no longer need to be estimated simultaneously, the translation component of ${\bf X}$ can be estimated more accurately than in the initial estimation. Estimating the camera motions ${\bf A}^{i}$ again using the re-estimated ${\bf X}$ and the range data further increases the accuracy of ${\bf A}^{i}$. The extrinsic parameter is thus optimized by alternately repeating the estimation of the camera motions and the estimation of the extrinsic parameter until convergence.
### Camera motion estimation with range data
![Schematic diagram of how to obtain 2D-3D correspondences[]{data-label="fig:proj2d3d"}](algorithm.pdf){width=".8\linewidth"}
Figure \[fig:proj2d3d\] shows the schematic diagram of constructing 2D-3D correspondences. The inputs are the point cloud in world coordinates, two camera images taken at the positions of Camera 1 and Camera 2, and the extrinsic parameter that localizes Camera 1 in the world coordinates. First, a point ${\bf p}$ in the point cloud is projected onto the image of Camera 1 using the extrinsic parameter and the projection function $Proj({\bf p}_c)$, which projects a 3D point ${\bf p}_c$ in camera coordinates onto the camera image, by the following equation, $$\begin{aligned}
{\bf v}=Proj({\bf Rp}+{\bf t}),\end{aligned}$$ where ${\bf v}$ is the vector heading from the center of Camera 1 to the corresponding pixel. Then we track the pixel onto which ${\bf p}$ is projected from camera image 1 to image 2 using the KLT tracker [@KLT_1981]. Let ${\bf v}'$ be the vector heading from the center of Camera 2 to the tracked pixel. The 2D-3D correspondence between the point ${\bf p}$ and the vector ${\bf v}'$ is thus constructed.
After constructing the 2D-3D correspondences, it is possible to optimize the relative position and orientation of Camera 1 and Camera 2 by minimizing the projection error. Let $({\bf v}'_j, {\bf p}_j)$ be the $j$th 2D-3D correspondence in the $i$th motion; the position and pose transition between the cameras, ${\bf R}_a^i, {\bf t}_a^i$, can then be optimized by minimizing the following angle-metric cost function, $$\begin{aligned}
{\bf R}_a^i, {\bf t}_a^i={\mathop{\rm arg~min}\limits}_{R_a,t_a} \sum_{j}\left|v'_j\times Proj({\bf R_a}({\bf R}{\bf p}_j+{\bf t})+{\bf t_a})\right|\end{aligned}$$ For the initial values of ${\bf R}_a$ and ${\bf t}_a$, the generalized perspective-three-point algorithm [@kneip2011novel] is used in the first iteration; from the second iteration on, the estimation result of the previous iteration is used.
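A compact sketch of this refinement using SciPy is given below. It treats $Proj(\cdot)$ as normalization to a unit bearing vector (appropriate for a central, e.g. omnidirectional, camera model), assumes the generalized P3P initialization is already available, and parameterizes the rotation by a rotation vector; these are implementation assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Sketch of the scaled camera-motion refinement: minimize the residuals
# |v'_j x Proj(R_a (R p_j + t) + t_a)| over (R_a, t_a), with points p_j in
# world coordinates and (R, t) the current extrinsic parameter.
def refine_camera_motion(p_world, v2_bearings, R, t, R_a0, t_a0):
    """p_world: (J, 3); v2_bearings: (J, 3) unit vectors v'_j; returns (R_a, t_a)."""
    p_cam1 = p_world @ R.T + t                           # R p_j + t

    def residuals(params):
        R_a = Rotation.from_rotvec(params[:3]).as_matrix()
        t_a = params[3:]
        q = p_cam1 @ R_a.T + t_a                         # R_a (R p_j + t) + t_a
        q = q / np.linalg.norm(q, axis=1, keepdims=True) # Proj(.) as a bearing vector
        return np.cross(v2_bearings, q).ravel()

    x0 = np.concatenate([Rotation.from_matrix(R_a0).as_rotvec(), np.ravel(t_a0)])
    sol = least_squares(residuals, x0)
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```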
### Parameter Calibration
Once the position and pose transitions of the camera are recalculated, the extrinsic parameter is optimized again using the motions of the camera and the LiDAR. In each iteration, ${\bf R}$ and ${\bf t}$ are solved first linearly and then non-linearly. In the non-linear optimization, ${\bf R}$ is optimized by Eq. \[eq:rotopt\] and ${\bf t}$ is optimized by the following, $$\begin{aligned}
{\bf t}={\mathop{\rm arg~min}\limits}_{t}\sum_{i}{\left|({\bf R}_a^i{\bf t}+{\bf t}_a^i)-({\bf R}{\bf t}_b^i+{\bf t})\right|}\end{aligned}$$
Optimal motion for 2D-3D calibration
====================================
We now consider the motions suitable for the calibration, taking into account the influence that the errors of each estimate have on the other. During the alternating estimation of the extrinsic parameter and the camera motions, it is inevitable that each quantity is estimated with some error. Since the motion estimation and the extrinsic parameter estimation also depend on the measured environment and the number of motions, it is difficult to obtain precise convergence conditions. However, it is possible to identify the motions for which the estimation is likely to converge.
Camera motion estimation
------------------------
![Schematic diagram when there is an error in the localized position of the camera 1[]{data-label="fig:camLocalize"}](cameraLocalize_.pdf){width="\linewidth"}
First, we consider the influence of the extrinsic parameter error on the localization of the camera and the conditions under which the estimation is successful. Suppose that the given extrinsic parameter contains an error, as shown in Fig. \[fig:camLocalize\]. Let “Estimated Camera 1” be the estimated position of Camera 1 with respect to the actual camera positions (Camera 1 and 2 in Fig. \[fig:camLocalize\]). The rotation error between Camera 1 and Estimated Camera 1 is considered to be negligibly small.
Next, consider the operation of creating a 2D-3D correspondence. In the projection step, a point ${\bf p}'$ of the point cloud is projected onto Estimated Camera 1. However, a discrepancy arises between the projected pixel and the 3D point due to the error of the extrinsic parameter. The pixel onto which the point ${\bf p}'$ is projected is then tracked in the Camera 2 image. Let ${\bf v}$ be the vector heading from Estimated Camera 1 to the point ${\bf p}'$; the point that actually corresponds to the vector ${\bf v}$ is ${\bf p}$. Ignoring the error of the pixel tracking from Camera 1 to Camera 2, the direction vector from Camera 2 to the pixel corresponding to ${\bf v}$ is ${\bf v}'$. In the computation, the three-dimensional point ${\bf p}'$ and ${\bf v}'$ are treated as corresponding to each other.
Assuming that there is no rotation error in estimating the position and orientation of Camera 2, the projection error when the estimated position is $ {\bf a}_{est} $ is $$\begin{aligned}
\label{eq:projerr}
e({\bf a}_{est})= \left|{\bf v}' \times \frac{\alpha {\bf v}-{\bf a}_{est}}{|\alpha {\bf v}-{\bf a}_{est}|} \right|. \end{aligned}$$ From Fig. \[fig:camLocalize\], let ${\bf a}$ be the vector directing from Camera 1 to Camera 2, ${\bf v}'$ is expressed as following, $$\begin{aligned}
\label{eq:vd}
{\bf v}'=\frac{\beta {\bf v}-{\bf a}}{|\beta {\bf v}-{\bf a}|}.\end{aligned}$$ Substitute Eq. \[eq:vd\] for Eq. \[eq:projerr\], $$\begin{aligned}
\label{eq:projerr2}
e({\bf a}_{est})= \left| \frac{\beta {\bf v}-{\bf a}}{|\beta {\bf v}-{\bf a}|} \times \frac{\alpha {\bf v}-{\bf a}_{est}}{|\alpha {\bf v}-{\bf a}_{est}|} \right|.\end{aligned}$$ When optimizing ${\bf a}_{est}$, we compute the projection error for ${\bf v}$ in all directions, and ${\bf a}_{est}$ approaches the point where the sum of the projection errors is minimized.
Now, in order to estimate the extrinsic parameter accurately, the camera motion should be estimated as the real camera actually moves; that is, ideally ${\bf a}_{est} \to {\bf a}$. For ${\bf a}_{est}$ to approach ${\bf a}$, the projection error $e({\bf a})$, evaluated when Estimated Camera 2 is located at displacement ${\bf a}$ from Estimated Camera 1, must be small. $e({\bf a})$ is represented by the following equation: $$\begin{aligned}
\label{eq:projerr3}
e({\bf a})&=&\left| \frac{(\alpha-\beta) ({\bf a}\times{\bf v})}{|\beta {\bf v}-{\bf a}||\alpha {\bf v}-{\bf a}|} \right|.\end{aligned}$$
From Eq. \[eq:projerr3\], the following can be said (a small numerical check is sketched after the list).
- The smaller ${\bf a}$ is, the smaller the projection error, even if the extrinsic parameter contains an error. That is, the smaller the moving distance of the camera, the more accurate the estimation becomes. In other words, when ${\bf a}$ is small, the probability distribution of Estimated Camera 2 is sharply peaked around the ideal location; therefore, in the subsequent extrinsic parameter estimation, the probability distribution of Estimated Camera 1 also appears sharply around the true value.
- The smaller the value of $(\alpha - \beta)$, the smaller the projection error. The difference $(\alpha - \beta)$ becomes large, for example, when the environment contains large depth discontinuities or when the incident angle from the camera is shallow. It is therefore preferable that the calibration environment be, for example, surrounded by smooth walls.
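The following small numerical check of Eq. \[eq:projerr3\] (with assumed values for ${\bf v}$, ${\bf a}$, $\alpha$, and $\beta$) illustrates the first point: the residual error at the true displacement shrinks as the camera translation is made smaller.

```python
import numpy as np

# Numerical illustration of Eq. (projerr3): e(a) decreases as the camera
# translation a shrinks (and as the depth mismatch alpha - beta shrinks).
def e_at_true_a(v, a, alpha, beta):
    num = (alpha - beta) * np.cross(a, v)
    return np.linalg.norm(num) / (np.linalg.norm(beta * v - a)
                                  * np.linalg.norm(alpha * v - a))

v = np.array([0.0, 0.0, 1.0])                 # bearing of the projected point (assumed)
alpha, beta = 5.0, 4.8                        # depths along v differing due to the error
for scale in (1.0, 0.3, 0.1):                 # shrink the camera translation a
    a = scale * np.array([0.2, 0.0, 0.1])
    print(scale, e_at_true_a(v, a, alpha, beta))
```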
Extrinsic parameter estimation
------------------------------
Next, we consider the influence of the errors in the estimated motions on the extrinsic parameter calibration. Ignoring the rotation error for simplicity, consider the case where there are errors in the translation of the camera motion and in the translation of the extrinsic parameter in Eq. \[eq:trans\]. Let ${\bf e}_a$ and ${\bf e}$ be the errors in ${\bf t}_a$ and ${\bf t}$, $$\begin{aligned}
\label{eq:trans_err}
{\bf R}_{a}({\bf t}+{\bf e})+{\bf t}_a+{\bf e}_a={\bf R}{\bf t}_b+{\bf t}+{\bf e}.\end{aligned}$$ Taking the difference between Eq. \[eq:trans\_err\] and Eq. \[eq:trans\], $$\begin{aligned}
\label{eq:diff}
{\bf e}_a=({\bf I}-{\bf R}_{a}){\bf e}.\end{aligned}$$ Considering Eq. \[eq:diff\] for a single motion, when the rotation amount of ${\bf R}_{a}$ is small, the translation errors satisfy $ |{\bf e}_a|<| {\bf e} | $. This indicates that the error of the camera motion propagates to the extrinsic parameter in an amplified manner.
If the amount of error propagated when estimating the extrinsic parameter from the camera motions does not exceed the amount of error reduction achieved in the camera motion estimation using the range data, the accuracy of the relative position and orientation is improved by the proposed method. Therefore, to reduce the error propagation in the relative position and pose estimation, increasing the rotation amount of the camera motion is effective. It is also effective to sample as many motions as possible for a robust extrinsic parameter estimation. Regarding the amount of rotation of the camera motion, if the appearance of the image changes significantly, the accuracy of the motion estimation may suffer, so this also needs to be taken into account. Although the proposed method can be applied to a perspective camera, an omnidirectional camera has an advantage because it can keep a common field of view even when the camera rotates significantly.
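A short numerical illustration of Eq. \[eq:diff\] (with an assumed error vector and rotation axis) shows that the ratio $|{\bf e}_a|/|{\bf e}|$ grows with the rotation amount of ${\bf R}_a$, i.e., why large rotations limit the amplification of camera-motion error into the extrinsic translation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Numerical illustration of Eq. (diff): |e_a| / |e| = |(I - R_a) e| / |e|
# grows with the rotation angle of R_a (values below are assumptions).
e = np.array([0.05, 0.02, 0.0])               # assumed extrinsic translation error
for deg in (5, 30, 90, 150):
    R_a = Rotation.from_euler("z", deg, degrees=True).as_matrix()
    ratio = np.linalg.norm((np.eye(3) - R_a) @ e) / np.linalg.norm(e)
    print(deg, round(ratio, 3))
```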
Experimental results
====================
![Indoor and outdoor calibration scene taken by Ladybug 3[]{data-label="fig:scene"}](scene.pdf){width="\linewidth"}
![Sensor configuration. (a)Imager 5010C and Ladybug 3, (b)HDL-64E and Ladybug 3[]{data-label="fig:sensorconfig"}](sensorconfig.pdf){width="\linewidth"}
In the experiments, we conduct calibrations in the indoor and outdoor environments shown in Fig. \[fig:scene\] using a panoramic LiDAR, a multi-beam LiDAR, and an omnidirectional camera. One of the compared methods is the image-based calibration using camera motions with no scale, which is used as the basis in [@taylor2016motion]; in Fig. \[fig:overview\], this is the initialization output, and it is hereinafter referred to as “[*Scaleless*]{}”. In addition to [*Scaleless*]{}, we compare against calibration by [*Manual*]{} correspondence acquisition and calibration by alignment using [*MI*]{} [@pandey2015automatic].
Evaluation with colored range data
----------------------------------
First, the results using the evaluation datasets are shown. To measure the datasets, two range sensors are used: the Focus S 150 by FARO Inc.[^3] and the Imager 5010C by Zoller+Fröhlich Inc.[^4] Three-dimensional panoramic point clouds are scanned with both range sensors. From the data measured by the Focus S 150, a colored point cloud is obtained using its photo-texture function. In the evaluation, the inputs are a pseudo-panorama image rendered from the colored point cloud scanned by the Focus S 150 and a point cloud scanned by the Imager 5010C. The ground truth is computed through registration of the two point clouds scanned by the two sensors.
In the indoor scene dataset, motions are obtained by rotating the sensors 5 times in the vertical direction and 5 times in the horizontal direction.
![Transition graph of the rotation error when changing the number of motions. Blue line: [*Scaleless*]{}; red line: ours.[]{data-label="fig:graphr"}](roterr_mot.pdf){width=".8\linewidth"}
![Transition graph of the translation error when changing the number of motions. Blue line: [*Scaleless*]{}; red line: ours. The bottom panel shows the result of our method only.[]{data-label="fig:grapht"}](evalgraph2.pdf){width=".8\linewidth"}
The results of calibration while varying the number of motions in the indoor scene are shown in Fig. \[fig:graphr\] and Fig. \[fig:grapht\]. The horizontal axis of the graphs indicates the number of horizontal and vertical motions used for calibration; for example, a value of one indicates that the calibration is performed using two motions in total, one horizontal and one vertical. The evaluation is performed by conducting the calibration 10 times for each number of motions, sampling the motions randomly. Figure \[fig:graphr\] and Fig. \[fig:grapht\] plot the average and standard deviation of the rotation and translation errors of the extrinsic parameter. The blue line indicates the error of [*Scaleless*]{} and the red line indicates the error of the extrinsic parameter estimated by our method. In terms of rotation, there is no great improvement in accuracy. For the translation error, however, the accuracy improves dramatically using the proposed method, as shown in Fig. \[fig:grapht\]. It is also shown that the error gradually decreases as the number of motions increases.
The results compared with the other methods ([*Manual*]{}, [*MI*]{}) are shown in Fig. \[fig:indoorcomp\]. In the registration by maximizing MI, only one scan is used, and the initial translation parameter was shifted from the ground truth by a fixed distance ($0.1 m$) in a random direction. In the [*Manual*]{} calibration, 30 correspondences are acquired, spread as much as possible over all directions.
![Error from the ground truth of the calibration result by each method in indoor scene[]{data-label="fig:indoorcomp"}](indoorcomp.pdf){width="\linewidth"}
From Fig. \[fig:indoorcomp\], the rotational error is less than 1 degree for every method, while for the translational error ours achieves the smallest error of all the compared methods.
![Error from the ground truth of the calibration result by each method in outdoor scene[]{data-label="fig:outdoorcomp"}](outdoorcomp.pdf){width="\linewidth"}
Evaluation results using the outdoor environment dataset are also shown in Fig. \[fig:outdoorcomp\]. Motions are obtained by rotating the sensors 3 times in the vertical direction and 3 times in the horizontal direction. For the rotation, our method obtains the best result; however, the errors of all methods are less than 0.5 degrees and no significant difference is seen. On the other hand, ours gives the best estimation result for the translation. Considering that the accuracy is less than 1 cm with the same number of motions in the indoor environment, we can say that the indoor scene is better suited for calibration.
Ladybug 3 and Imager 5010C
--------------------------
![Checker-pattern overlays in which panorama images taken by Ladybug 3 and panorama-rendered reflectance images scanned by Imager 5010C are arranged alternately. When the extrinsic parameter is correct, the two images are consistent with each other. We set the stitching distance of the panoramic image to $4 m$ in the indoor scene and $7 m$ in the outdoor scene.[]{data-label="fig:zflb"}](lbzfresult.pdf){width="\linewidth"}
Next, we show the results of calibration using the omnidirectional camera Ladybug 3 by FLIR Inc.[^5] and the Imager 5010C. The sensor configuration is shown in Fig. \[fig:sensorconfig\] (a).
For the evaluation, as shown in Fig. \[fig:zflb\], we overlay the image of Ladybug 3 and the reflectance image obtained by panorama rendering from the estimated camera center in the point cloud. The images are displayed alternately in a checker pattern, and we then check the consistency between the two images. From Fig. \[fig:zflb\], [*Scaleless*]{} before optimization does not yield consistency between the RGB image and the reflectance image, but the results of the proposed method do.
Ladybug 3 and Velodyne HDL-64E
------------------------------
We show the result of the extrinsic calibration of the HDL-64E by Velodyne Inc.,[^6] which is a multi-beam LiDAR, and Ladybug 3 in the indoor scene. The sensor configuration is shown in Fig. \[fig:sensorconfig\] (b). In the data measurement, the rover carrying the sensors is operated to generate rotational motions in the vertical and horizontal directions. The rotations in the vertical direction are generated by raising and lowering only the front wheels over a step of about 4 cm. When acquiring data, we stopped the rover and measured in a stop-and-scan manner. To obtain the range data and the camera image scanned at the same position, we visually checked the timing at which the LiDAR and the camera were stationary.
The ground truth of the extrinsic parameter between HDL-64E and Ladybug 3 is obtained indirectly by using a point cloud measured by Imager 5010C in the same environment and computing the relative position and orientation of each sensor with respect to Imager 5010C. For HDL-64E and Imager 5010C, the position and orientation are obtained by aligning the range data scanned by the two sensors. For Ladybug 3 and Imager 5010C, the extrinsic parameter is obtained by manually specifying corresponding points between the panorama image and the three-dimensional reflectance image and computing the extrinsic parameter from these correspondences. For [*Scaleless*]{} and the proposed method, eight horizontal rotation motions and eight vertical rotation motions are randomly sampled each time, and the average error over 10 trials is recorded. In the [*Manual*]{} calibration, only one scan is used, taking corresponding points between the panoramic image of Ladybug 3 and the 3D reflectance image of HDL-64E. For [*MI*]{}, the MI is calculated over 16 image-point cloud scan pairs acquired at the different positions.
![Error from the ground truth of the calibration result by each method using HDL-64E and Ladybug 3. In [*MI*]{}, [*Scaleless*]{}, and ours, calibration performed with 16 scans[]{data-label="fig:velocomp"}](velocomp.pdf){width="\linewidth"}
From Fig. \[fig:velocomp\], the [*Manual*]{} calibration fails to obtain accurate results because the narrow scan range and the sparse point cloud of HDL-64E make correspondence construction difficult. Also, for [*MI*]{}, since HDL-64E has low resolution and its reflectance information is not clear, the optimization cannot be completed on this dataset. On the other hand, for the motion-based methods, the rotation errors are less than 1 degree in both [*Scaleless*]{} and ours. However, in [*Scaleless*]{}, the translation deviates significantly from the ground truth, whereas ours obtains highly accurate translation results.
The proposed method thus also works with motions acquired by operating a rover. To recover the full 6-DoF extrinsic parameter with a hand-eye-calibration-based method, rotations about two or more axes are necessary. For the vertical rotation, however, part of the platform on which the sensors are mounted must be raised. Although this operation is more difficult than a horizontal rotation, this experiment demonstrates that the proposed method works well with the vertical rotational motions obtained by a reasonable mobile-platform operation such as driving up and down a small step.
Conclusion
==========
In this paper, we presented a targetless, automatic 2D-3D calibration method based on hand-eye calibration, using sensor-fusion odometry for camera motion estimation. The proposed method performs best with camera motions that have small translation and large rotation. It is also preferable to carry out the measurements for calibration in a place surrounded by flat surfaces as much as possible.
Hand-eye calibration requires rotating the sensors about multiple axes, and in many cases vertical rotation is more difficult to produce than horizontal rotation. Although the proposed method is also subject to this requirement, vertical rotations obtained through reasonable mobile-platform operation are sufficient for carrying out the calibration. The method is therefore highly practical, and it is possible to calibrate dynamically during scanning by choosing appropriate motions.
Acknowledgment {#acknowledgment .unnumbered}
==============
This work was partially supported by the social corporate program (Base Technologies for Future Robots) sponsored by NIDEC corporation and also supported by JSPS Research Fellow Grant No.16J09277.
[^1]: $^{1}$ Ryoichi Ishikawa and Takeshi Oishi are with Institute of Industrial Science, The University of Tokyo, Japan [{ishikawa, oishi}@cvl.iis.u-tokyo.ac.jp]{}
[^2]: $^{2}$Katsushi Ikeuchi is with Microsoft, USA, [[email protected]]{}
[^3]: https://www.faro.com
[^4]: http://www.zf-laser.com
[^5]: https://www.ptgrey.com/
[^6]: http://velodynelidar.com/
---
author:
- 'C. J. A. P. Martins, M. C. Ferreira, M. D. Julião, A. C. O. Leite, A. M. R. V. L. Monteiro, P. O. J. Pedrosa and P. E. Vielzeuf'
title: ' Fundamental Cosmology in the E-ELT Era '
---
Introduction
============
In the middle of the XIX century Urbain Le Verrier and others mathematically discovered two new planets by insisting that the observed orbits of Uranus and Mercury agreed with the predictions of Newtonian physics. The first of these—which we now call Neptune—was soon observed by Johann Galle and Heinrich d’Arrest. However, the second (dubbed Vulcan) was never found. We now know that the discrepancies in Mercury’s orbit were a consequence of the fact that Newtonian physics can’t adequately describe Mercury’s orbit, and accounting for them was the first success of Einstein’s General Relativity.
Over the past several decades, cosmologists have mathematically discovered two new components of the universe—which we have called dark matter and dark energy—but so far these have not been directly detected. Whether they will prove to be Neptunes or Vulcans remains to be seen but even their mathematical discovery highlights the fact that the standard $\Lambda$CDM paradigm, despite its phenomenological success, is at least incomplete.
Something similar applies to particle physics, where to some extent it is our confidence in the standard model that leads us to the expectation that there must be new physics beyond it. Neutrino masses, dark matter and the size of the baryon asymmetry of the universe all require new physics, and, significantly, all have obvious astrophysical and cosmological implications. Recent years have indeed made it clear that further progress in fundamental particle physics will increasingly depend on progress in cosmology.
After a quest of several decades, the recent LHC evidence for a Higgs-like particle [@atlas; @cms] finally provides strong evidence in favour of the notion that fundamental scalar fields are part of Nature’s building blocks. A pressing follow-up question is whether the associated field has a cosmological role, or indeed if there is some cosmological counterpart.
It goes without saying that fundamental scalar fields already play a key role in most paradigms of modern cosmology. Among others, they are routinely invoked to describe the period of exponential expansion of the early universe (inflation), cosmological phase transitions and their relics (cosmic defects), the dynamical dark energy which may be powering the current acceleration phase, and the possible spacetime variation of nature’s fundamental couplings.
Even more important than each of these paradigms is the fact that they don’t occur alone: whenever a scalar field plays one of the above roles, it will also leave imprints in other contexts that one can look for. For example, in realistic models of inflation, the inflationary phase ends with a phase transition at which cosmic defects will form (and the energy scales of both will therefore be unavoidably related). More importantly, in the context of this workshop, in realistic models of dark energy, where the dark energy is due to a dynamical scalar field, this field will couple to the rest of the model and lead to potentially observable variations of nature’s fundamental couplings; we will return to this point later in this contribution. Although this complementary point is often overlooked, it will be crucial for future consistency tests.
Varying fundamental couplings
=============================
Nature is characterised by a set of physical laws and fundamental dimensionless couplings, which historically we have assumed to be spacetime-invariant. For the former this is a cornerstone of the scientific method (it’s hard to imagine how one could do science at all if it were not the case), but for the latter it is only a simplifying assumption without further justification. These couplings determine the properties of atoms, cells, planets and the universe as a whole, so it’s remarkable how little we know about them. We have no ’theory of constants’ that describes their role in physical theories or even which of them are really fundamental. If they vary, all the physics we know is incomplete.
Fundamental couplings are indeed expected to vary in many extensions of the current standard model. In particular, this will be the case in theories with additional spacetime dimensions, such as string theory. Interestingly, the first generation of string theorists had the hope that the theory would ultimately predict a unique set of laws and couplings for low-energy physics. However, following the discovery of the evidence for the acceleration of the universe this claim has been pragmatically replaced by an ’anything goes’ approach, sometimes combined with anthropic arguments. Regardless of the merit of such approaches, experimental and observational tests of the stability of these couplings may be their best route towards a testable prediction.
It goes without saying that a detection of varying fundamental couplings will be revolutionary: it will immediately prove that the Einstein Equivalence Principle is violated (and therefore that gravity can’t be purely geometry) and that there is a fifth force of nature. But even improved null results are important and useful. The simple way to understand this is to realise that the natural scale for cosmological evolution of one of these couplings (driven by a fundamental scalar field) would be Hubble time. We would therefore expect a drift rate of the order of $10^{-10}$yr${}^{-1}$. However, current local bounds, coming from atomic clock comparison experiments, are 6 orders of magnitude stronger [@rs208].
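As a rough order-of-magnitude check (the round numbers here are only indicative), a coupling that evolves on the Hubble time scale would drift at a rate $$\frac{\dot\alpha}{\alpha}\sim H_0\approx \frac{1}{1.4\times10^{10}\,{\rm yr}}\approx 7\times10^{-11}\,{\rm yr}^{-1},$$ whereas the atomic clock comparisons referred to above constrain the present-day drift at roughly the $10^{-16}$ to $10^{-17}\,{\rm yr}^{-1}$ level, which is the six orders of magnitude quoted above.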
Recent astrophysical evidence suggests a parts-per-million spatial variation of the fine-structure constant $\alpha$ at low redshifts [@webb]; although no known model can explain such a result without considerable fine-tuning, it should also be said that there is no identified systematic effect that can explain it. One possible cause for concern (with these and other results) is that almost all of the existing data has been taken with other purposes in mind, whereas this kind of measurement needs customised analysis pipelines and wavelength calibration procedures beyond those supplied by standard pipelines. This is, of course, one of the reasons for the ongoing ESO UVES Large Programme, whose first results are discussed in the contributions of P. Molaro and H. Rahmani in these proceedings.
In the short term the PEPSI spectrograph at the LBT can also play a role here, and in the longer term a new generation of high-resolution, ultra-stable spectrographs like ESPRESSO (for the VLT) and ELT-HIRES, which have these tests as a key science driver, will significantly improve the precision of these measurements and should be able to resolve the current controversy. A key technical improvement will be that ultimately one must do the wavelength calibration with laser frequency combs.
In theories where a dynamical scalar field yields varying $\alpha$, the other gauge and Yukawa couplings are also expected to vary. In particular, in Grand Unified Theories the variation of $\alpha$ is related to that of the energy scale of Quantum Chromodynamics, whence the nucleon masses necessarily vary when measured in units that are independent of QCD (such as the electron mass). It follows that we should expect a varying proton-to-electron mass ratio, $\mu=m_p/m_e$. Obviously, the specific relation between $\alpha(z)$ and $\mu(z)$ will be highly model-dependent, but this very fact makes this a unique discriminating tool between competing models.
It follows from this that it’s highly desirable to identify systems where various constants can be simultaneously measured, or systems where a constant can be measured in several independent ways. Systems where combinations of constants can be measured are also interesting, and may lead to consistency tests [@FJMM1; @FJMM2]. These points are illustrated in the contributions of M. Julião and A.M. Monteiro in these proceedings.
In passing, let us also briefly comment on other probes of varying constants. The CMB is in principle a very clean one, but in most simple models a parts-per-million variation of $\alpha$ at redshifts of a few leads to variations at redshift $z\sim1089$ that are below the sensitivity of Planck. However, these studies do have a feature of interest, namely that they lead to constraints on the coupling between the putative scalar field and electromagnetism, independently (and on a completely different scale) from what is done in local tests, as illustrated in @erminia; another example is provided in M. Martinelli’s contribution to these proceedings. Compact objects such as solar-type stars and neutron stars have also been leading to interesting constraints [@jpv; @angeles].
Dynamical dark energy and varying couplings
===========================================
Observations suggest that the universe is dominated by an energy component whose gravitational behaviour is quite similar to that of a cosmological constant. Its value is so small that a dynamical scalar field is arguably a more likely explanation. Such a field must be slow-rolling (which is mandatory for $p<0$) and be dominating the dynamics around the present day. It follows that if the field couples to the rest of the model (which it will naturally do, unless some symmetry is postulated to suppress the couplings) it will lead to potentially observable long-range forces and time dependencies of the constants of nature.
In models where the degree of freedom responsible for the varying constants also provides the dark energy, the redshift dependence of the couplings is parametrically determined, and any available measurements (be they detections or null results) can be used to set constraints on combinations of the scalar field coupling and the dark energy equation of state. See the contributions of R. Thompson and P. Vielzeuf in these proceedings for illustrations of this point. One can show that ELT-HIRES will either find variations or rule out, at more than 10 sigma, the simplest classes of these models (containing a single linearly coupled dynamical scalar field).
However, this is not all. Standard observables such as supernovae are of limited use as dark energy probes, both because they probe relatively low redshifts and because to ultimately obtain the required cosmological parameters one effectively needs to take second derivatives of noisy data. A clear detection of varying $w(z)$ is crucial, given that we know that $w\sim-1$ today. Since the field is slow-rolling when dynamically important (close to the present day), a convincing detection of a varying $w(z)$ will be tough at low redshift, and we must probe the deep matter era regime, where the dynamics of the hypothetical scalar field is fastest.
Varying fundamental couplings are ideal for probing scalar field dynamics beyond the domination regime [@amendola]: such measurements can presently be made up to redshift $z\sim4$, and future facilities such as the E-ELT may be able to significantly extend this redshift range. Importantly, even null measurements of varying couplings can lead to interesting constraints on dark energy scenarios. ALMA, ESPRESSO and ELT-HIRES can realise the prospect of a detailed characterisation of dark energy properties all the way up to $z\sim4$, and possibly beyond. In the case of ELT-HIRES, a reconstruction using quasar absorption lines is expected to be more accurate than using supernova data (its key advantage being the huge redshift lever arm); see P. Pedrosa’s contribution to these proceedings, as well as @amendola, for further details.
Dark energy reconstruction using varying fundamental constants does in principle require a mild assumption on the field coupling, but there are in-built consistency checks, so that inconsistent assumptions can be identified and corrected. Explicit examples of incorrect assumptions that lead to observational inconsistencies can be found in @moi1 and P. Vielzeuf’s contribution to these proceedings.
It’s important to keep in mind that the E-ELT will also contribute to the above task by further means. First and foremost there is the detection of the redshift drift signal. This is a key driver for ELT-HIRES, and possibly, at a fundamental level, ultimately the most important E-ELT deliverable. Indeed, as shown in @moi1, having the ability to measure the stability of fundamental couplings and the redshift drift with a single instrument is a crucial strategic advantage. (Nevertheless, it should also be said that other facilities such as PEPSI at the LBT, the SKA and ALMA may also be able to measure the redshift drift.) Additionally, the ELT-IFU (in combination with JWST) should find Type Ia supernovas up to a redshift $z\sim5$. An assessment of the impact of these future datasets on fundamental cosmology is currently in progress. Interesting synergies are also expected to exist between these ground-based spectroscopic methods and Euclid, which need to be further explored.
Consistency tests
=================
Whichever way one finds direct evidence for new physics, it will only be trusted once it is seen through multiple independent probes. This was manifest in the case of the discovery of the recent acceleration of the universe, where the supernova results were only accepted by the wider community once they were confirmed through CMB, large-scale structure and other data. It is clear that history will repeat itself in the case of varying fundamental couplings and/or dynamical dark energy. It is therefore crucial to develop consistency tests, in other words, astrophysical observables whose behaviour will also be non-standard as a consequence of either or both of the above.
The temperature-redshift relation, $$T(z) = T_0(1+z)$$ is a robust prediction of standard cosmology; it assumes adiabatic expansion and photon number conservation, but it is violated in many scenarios, including string theory inspired ones. At a phenomenological level one can parametrise deviations to this law by adding an extra parameter, say $$T(z) = T_0(1+z)^{1-\beta}$$
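As a quick numerical illustration (the redshift is chosen purely for definiteness), at $z=2.34$ the standard law predicts $T\approx 2.725\times3.34\approx 9.1\,$K, while a deviation at the level $|\beta|=0.02$ shifts this prediction by roughly $|\beta|\ln(1+z)\approx 2.4\%$, i.e. about $0.2\,$K, which sets the scale of the effect that SZ cluster and absorption-line measurements must be able to detect or exclude.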
Our recent work [@Avgoustidis] has shown that forthcoming data from Planck, ESPRESSO and ELT-HIRES will lead to much stronger constraints: Planck on its own can be as constraining as the existing (percent-level) bounds, ESPRESSO can improve on the current constraint by a factor of about three, and ELT-HIRES will improve on the current bound by one order of magnitude. We emphasise that the estimates of all these gains rely on quite conservative assumptions on the number of sources (SZ clusters and absorption systems, respectively) where these measurements can be made. If the number of such sources increases, future constraints can be correspondingly stronger.
The distance duality relation, $$d_L = (1+z)^2d_A$$ is an equally robust prediction of standard cosmology; it assumes a metric theory of gravity and photon number conservation, but is violated if there’s photon dimming, absorption or conversion. At a similarly phenomenological level one can parametrise deviations to this law by adding an extra parameter, say $$d_L = (1+z)^{2+\epsilon}d_A$$ with current constraints also being at the percent level, and improvements are similarly expected from Euclid, the E-ELT and JWST.
In fact, in many models where photon number is not conserved the temperature-redshift relation and the distance duality relation are not independent. With the above parametrisations it’s easy to show [@Avgoustidis] that $$\beta=-\frac{2}{3}\epsilon$$ but one can in fact further show that a direct relation exists for any such model, provided the dependence is in redshift only (models where there are frequency- dependent effects are more complex). This link allowed us to use distance duality measurements to improve current constraints on $\beta$, leading to $$\beta= 0.004\pm0.016$$ which is a $40\%$ improvement on the previous constraint. With the next generation of space and ground-based experiments, these constraints can be further improved (as discussed above) by more than one order of magnitude.
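Concretely (a back-of-the-envelope propagation with indicative numbers only), since the relation above is linear, a distance duality constraint with $\sigma_\epsilon\simeq0.02$-$0.025$ translates into $\sigma_\beta=\frac{2}{3}\sigma_\epsilon\simeq0.015$-$0.017$, which is indeed the scale of the combined bound on $\beta$ quoted above.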
In models where the degree of freedom responsible for the varying constants does not provide (all of) the dark energy, the link to dark energy discussed in the previous section no longer holds. However, as shown in @moi1, such wrong assumptions can be identified through (in)consistency tests. For example, it has been shown in @moi4 that in Bekenstein-type models one has $$\frac{T(z)}{T_0}=(1+z)\left[\frac{\alpha(z)}{\alpha_0}\right]^{1/4}\sim(1+z)\left(1+\frac{1}{4}\frac{\Delta\alpha}{\alpha}\right)$$ $$d_L(z)\sim d_A(z)(1+z)^2\left(1+\frac{3}{8}\frac{\Delta\alpha}{\alpha}\right)$$
Interestingly, these relations also hold for disformal couplings (but not for chameleon-type models, where the powers of $\alpha$ are inverted). These effects are relevant for the analysis of Planck data: a parts-per-million $\alpha$ dipole leads, in this class of models, to a micro-Kelvin level dipole in the CMB temperature, in addition to the usual milli-Kelvin one due to our motion.
Note that even if this degree of freedom does not dominate at low redshifts, it can still bias cosmological parameter estimations. For example, in varying-$\alpha$ models the peak luminosity of Type Ia supernovas will depend on redshift. This scenario is currently being studied in more detail; see M. Martinelli’s contribution in these proceedings for some preliminary results.
Now, if photon number non-conservation changes $T(z)$, the distance duality relation, etc, this may lead to additional biases, for example for Euclid. In @moi4 we have quantified how these models weaken Euclid constraints on cosmological parameters, specifically those characterising the dark energy equation of state. Our results show that Euclid can, even on its own, constrain dark energy while allowing for photon number non-conservation. Naturally, stronger constraints can be obtained in combination with other probes. Interestingly, the ideal way to break a degeneracy involving the scalar-photon coupling is to use $T(z)$ measurements, which can be obtained with ALMA, ESPRESSO and ELT-HIRES (which, incidentally, may nicely complement each other in terms of redshift coverage). It may already be possible to obtain some useful constraints from Planck clusters, and these will be significantly improved with a future PRISM mission.
Last but not least, the role of redshift drift measurements as a consistency test cannot be over-emphasised [@moi1]. Standard dark energy probes are geometric and/or probe localised density perturbations, while the redshift drift provides a unique measurement of the global dynamics [@sandage; @loeb; @liske]. It does not map out our (present-day) past light-cone, but directly measures evolution by comparing past light cones at different times. Therefore it provides an ideal probe of dark sector in deep matter era, complementing supernovas and constants. In fact, as recently shown in @moi3, its importance as a probe of cosmology does not stem purely from its intrinsic sensitivity, but also from the fact that it is sensitive to cosmological parameters that are otherwise hard to probe (in other words, it can break some key degeneracies). One illustrative example [@moi3] is that the CMB is only sensitive to the combination $\Omega_m h^2$, while the redshift drift is sensitive to each of them.
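To give a feeling for the size of this signal, the following minimal sketch (ours, with illustrative parameter values that are not tied to any of the forecasts discussed here) evaluates the Sandage-Loeb drift $\dot z=(1+z)H_0-H(z)$ for a flat $\Lambda$CDM background:

```python
import numpy as np

def drift_velocity_cm_s(z, omega_m=0.3, H0_km_s_Mpc=70.0, delta_t_yr=20.0):
    """Spectroscopic velocity drift dv = c*dz/(1+z) accumulated over delta_t_yr, in cm/s."""
    c_cm_s = 2.998e10                     # speed of light
    H0 = H0_km_s_Mpc / 3.086e19           # Hubble constant in 1/s
    delta_t = delta_t_yr * 3.156e7        # observing baseline in seconds
    E = np.sqrt(omega_m * (1.0 + z)**3 + (1.0 - omega_m))   # H(z)/H0, flat LCDM
    # dz/dt_obs = (1+z)*H0 - H(z)  =>  dv = c*H0*delta_t*(1 - E/(1+z))
    return c_cm_s * H0 * delta_t * (1.0 - E / (1.0 + z))

print(drift_velocity_cm_s(4.0))           # about -10 cm/s over 20 years at z = 4
```

The expected signal in the deep matter era is thus only of order a few cm/s per decade, which makes clear why the stability and calibration requirements on ELT-HIRES (and on the laser frequency combs mentioned earlier) are so stringent.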
Conclusions
===========
We have highlighted the key role that will be played by forthcoming high-resolution ultra-stable spectrographs in fundamental cosmology, by enabling a new generation of precision consistency tests. The most exciting and revolutionary among these is clearly the redshift drift, which is a key driver for ELT-HIRES, but may also be within the reach of other facilities, like PEPSI (at the LBT), SKA or even ALMA (although no sufficiently detailed studies exist for these at present).
Finally, let us point out that the ELT will enable further relevant tests, including tests of strong gravity around the galactic black hole (through ELT-CAM), and astrophysical tests of the Equivalence Principle, which were not discussed in this contribution. Interesting synergies with other facilities, particularly ALMA and Euclid, remain to be fully explored.
We acknowledge the financial support of grant PTDC/FIS/111725/2009 from FCT (Portugal). CJM is also supported by an FCT Research Professorship, contract reference IF/00064/2012. We are also grateful to the staff at the Sexten Center for Astrophysics (Gabriella and Gabriella) for the warm hospitality during the meeting.
Aad, G. et al. (ATLAS Collaboration) 2012, Phys. Lett. B716, 1
Amendola, L. et al. 2012, Phys. Rev. D86, 063515
Avgoustidis, A., Luzzi, G., Martins, C.J.A.P. & Monteiro, A.M.R.V.L. 2012, JCAP 1202, 013
Avgoustidis, A. et al. 2013, arXiv:1305.7031
Calabrese, E. et al. 2011, Phys. Rev. D84, 023518
Chatrchyan, S. et al. (CMS Collaboration) 2012, Phys. Lett. B716, 30
Ferreira, M.C., Julião, M.D., Martins, C.J.A.P. & Monteiro, A.M.R.V.L. 2012, Phys. Rev. D86, 125025
Ferreira, M.C., Julião, M.D., Martins, C.J.A.P. & Monteiro, A.M.R.V.L. 2013, Phys. Lett. B724, 1
Liske, J. et al. 2008, Mon. Not. Roy. Astron. Soc. 386, 1192
Loeb, A. 1998, Astrophys. J. 499, 111
Martinelli, M., Pandolfi, S., Martins, C.J.A.P. & Vielzeuf, P.E. 2012, Phys. Rev. D86, 123001
Pérez-García, A. & Martins, C.J.A.P. 2012, Phys. Lett. B718, 241
Rosenband, T. et al. 2008, Science 319, 1808
Sandage, A. 1962, Astrophys. J. 281, L77
Vieira, J.P.P., Martins, C.J.A.P. & Monteiro, M.J.P.F.G. 2012, Phys. Rev. D86, 043003
Vielzeuf, P.E. & Martins, C.J.A.P. 2012, Phys. Rev. D85, 087301
Webb, J.K. et al. 2011, Phys. Rev. Lett. 107, 191101
---
abstract: 'Let $E_k$ be the set of positive integers having exactly $k$ prime factors. We show that almost all intervals $[x,x+\log^{1+\varepsilon} x]$ contain $E_3$ numbers, and almost all intervals $[x,x+\log^{3.51} x]$ contain $E_2$ numbers. By this we mean that there are only $o(X)$ integers $1\leq x\leq X$ for which the mentioned intervals do not contain such numbers. The result for $E_3$ numbers is optimal up to the $\varepsilon$ in the exponent. The theorem on $E_2$ numbers improves a result of Harman, which had the exponent $7+\varepsilon$ in place of $3.51$. We also consider general $E_k$ numbers, and find them on intervals whose lengths approach $\log x$ as $k\to \infty$.'
author:
- '<span style="font-variant:small-caps;">Joni Teräväinen</span>'
bibliography:
- 'myreferences.bib'
title: '<span style="font-variant:small-caps;">Almost primes in almost all short intervals</span>'
---
Introduction
============
When studying $E_k$ numbers (products of exactly $k$ primes), it is natural to ask, how short intervals include such numbers almost always. Since Wolke’s work [@wolke], the essential question has been minimizing the number $c$ such that almost all intervals $[x,x+\log^{c}x]$ contain an $E_k$ number, meaning that all but $o(X)$ such intervals with integer $x\in [1,X]$ contain such a number. Wolke showed in 1979 that the value $c=5\cdot 10^6$ is admissible for $E_2$ numbers. This was improved to $c=7+\varepsilon$ for $E_2$ numbers by Harman [@harman-almostprimes] in 1982. Wolke’s and Harman’s methods are based on reducing the problem to estimates for sums over the zeros of the Riemann zeta function, and on the fact that the density hypothesis is known to hold in a non-trivial strip (namely Jutila’s [@jutila-density] region $\sigma\geq \frac{11}{14}$ in Harman’s argument[^1] ). To the author’s knowledge, Harman’s exponent for $E_2$ numbers was the best one known also for $E_k$ numbers with $k\geq 3$.\
If one considers $P_k$ numbers, which are products of no more than $k$ primes, one can obtain improvements. Mikawa [@mikawa] showed in 1989 that for any function $\psi(x)$ tending to infinity, the interval $[x,x+\psi(x)\log^{5}x]$ contains a $P_2$ number almost always. Furthermore, Friedlander and Iwaniec [@friedlander Chapters 6 and 11] proved that for any such function $\psi(x)$ the interval $[x,x+\psi(x)\log x]$ contains a $P_4$ number almost always. They also hint how to prove the same result for $P_3$ numbers. There is however a crucial difference between $E_k$ and $P_k$ numbers, since the $E_k$ numbers are subject to the famous parity problem, and hence cannot be dealt with using only classical combinatorial sieves, which are the basis of the arguments on $P_k$ numbers. Therefore, the $E_k$ numbers are also a much closer analog of primes than the $P_k$ numbers.\
One would naturally expect almost all intervals $[x,x+\psi(x)\log x]$ to have also prime numbers in them, and this would follow from the heuristic that the proportion of $x$ for which $[x,x+\lambda \log x]$ contains exactly $m$ primes for fixed $m$ and $\lambda>0$ should be given by the Poisson distribution $\frac{\lambda^m}{m!}e^{-\lambda}$. Such results are however far beyond the current knowledge, as the shortest intervals, almost all of which are known to contain primes, are $[x,x+x^{\frac{1}{20}+\varepsilon}]$ by a result of Jia [@jia]. However, the results of Goldston-Pintz-Yıldırım [@goldston1],[@goldston2] on short gaps between primes tell that for any $\lambda>0$ there is a positive proportion of integers $x\leq X$ for which $[x,x+ \lambda\log x]$ contains a prime, but it is not known whether this proportion approaches $1$ as $\lambda$ increases. A recent result of Freiberg [@freiberg], in turn, gives exactly $m$ primes on an interval $[x,x+\lambda \log x]$ for at least $X^{1-o(1)}$ integers $x\leq X$. Concerning conditional results, Gallagher [@gallagher] showed that the Poisson distribution of primes in short intervals would follow from a certain uniform form of the Hardy-Littlewood prime $k$-tuple conjecture. Under the Riemann hypothesis, it was shown by Selberg [@selberg] in 1943 that almost all intervals $[x,x+\psi(x)\log^2 x]$ contain primes. For $E_2$ numbers, under the density hypothesis, Harman’s argument from [@harman-almostprimes] would give the exponent $c=3+\varepsilon$.\
In this paper, we establish the exponent $c=1+\varepsilon$ for $E_3$ numbers and the exponent $c=3.51$ for $E_2$ numbers. Our results for $E_2,E_3$ and $E_k$ numbers are stated as follows.
\[t1\] Almost all intervals $[x,x+(\log \log x)^{6+\varepsilon}\log x]$ contain a product of exactly three distinct primes.
\[t2\] For any integer $k\geq 4$, there exists $C_k>0$ such that almost all intervals $[x,x+(\log_{k-1} x)^{C_k}\log x]$ contain a product of exactly $k$ distinct primes. Here $\log_{\ell}$ is the $\ell$th iterated logarithm.
\[t3\]\[3\] Almost all intervals $[x,x+\log^{3.51} x]$ with $x\leq X$ contain a product of exactly two distinct primes.
Theorems \[t1\] and \[t2\] are direct consequences of the following theorem.
\[t4\] Let $X$ be large enough, $k\geq 3$ a fixed integer, and $\varepsilon>0$ small enough but fixed. Define the numbers $P_1,...,P_{k-1}$ by setting $P_{k-1}=(\log X)^{\varepsilon^{-2}},P_{k-2}=(\log \log X)^{6+10\sqrt{\varepsilon}}$ and $P_j=(\log P_{j+1})^{\varepsilon^{-1}}$ for $1\leq j\leq k-3$. For $P_1\log X\leq h \leq X$, we have $$\begin{aligned}
\label{eq15}
\left|\frac{1}{h}\sum_{\substack{x\leq p_1\dotsm p_k\leq x+h\\P_i\leq p_i\leq P_i^{1+\varepsilon},\,i\leq k-1}}1-\frac{1}{X}\sum_{\substack{X\leq p_1\dotsm p_k\leq 2X\\P_i\leq p_i\leq P_i^{1+\varepsilon},\,i\leq k-1}}1\right|\ll\frac{1}{(\log X)(\log_k X)}\end{aligned}$$ for almost all $x\leq X$.
In the theorem above, the average over the dyadic interval is $\gg \frac{1}{\log X}$ by the prime number theorem, so Theorems \[t1\] and \[t2\] indeed follow from Theorem \[t4\]. Similarly, Theorem \[t3\] is a direct consequence of the following.
\[t5\] Let $X$ be large enough, $P_1=\log^a X$ with $a=2.51$, $\varepsilon>0$ fixed, and $P_1\log X\leq h\leq X$. We have $$\begin{aligned}
\label{eq25}
\frac{1}{h}\sum_{\substack{x\leq p_1p_2\leq x+h\\ P_1\leq p_1<P_1^{1+\varepsilon}}}1\gg \frac{1}{X}\sum_{\substack{X\leq p_1p_2\leq 2X\\ P_1\leq p_1\leq P_1^{1+\varepsilon}}}1\end{aligned}$$ for almost all $x\leq X$.
Since $h\geq P_1\log X$, we have the dependence $c=a+1$ between the exponent $a$ in Theorem \[t5\] and the smallest exponent $c$ for which we can show that the interval $[x,x+\log^c x]$ contains an $E_2$ number almost always.
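For orientation, a brief sketch (not part of the statements) of why the dyadic averages appearing in (\[eq15\]) and (\[eq25\]) are of size $\asymp \frac{1}{\log X}$, with implied constants depending on $k$ and $\varepsilon$: by the prime number theorem and Mertens’ theorem, $$\begin{aligned}
\frac{1}{X}\sum_{\substack{X\leq p_1\dotsm p_k\leq 2X\\P_i\leq p_i\leq P_i^{1+\varepsilon},\,i\leq k-1}}1\asymp \sum_{\substack{P_i\leq p_i\leq P_i^{1+\varepsilon}\\ i\leq k-1}}\frac{1}{p_1\dotsm p_{k-1}\log X}=\frac{1}{\log X}\prod_{i=1}^{k-1}\sum_{P_i\leq p\leq P_i^{1+\varepsilon}}\frac{1}{p}\asymp \frac{\varepsilon^{k-1}}{\log X},\end{aligned}$$ since each of the inner sums is $\log(1+\varepsilon)+o(1)$.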
Note that Theorems \[t4\] and \[t5\] tell us that there are $\gg \frac{h}{\log X}$ $E_k$ numbers in almost all intervals $[x,x+h]$, where $h$ and $k$ are as in one of the theorems. However, we are not quite able to find $E_k$ numbers on intervals $[x,x+\psi(x)\log x]$ with $\psi$ tending to infinity arbitrarily slowly, unlike in the result of Friedlander and Iwaniec on $P_k$ numbers. In addition, our bound for the number of exceptional values is at best $\ll \frac{x}{\log^{\varepsilon}x}$ and often weaker, while the methods used in [@harman-sieves], [@jia] and [@watt-primes] for primes in almost all short intervals have a tendency to give the bound $\ll \frac{x}{\log^{A}x}$ for any $A>0$, when they work. The limit of our method for $E_2$ numbers is the exponent $3+\varepsilon$, as will be seen later, so proving for example unconditionally the analog of Selberg’s result for $E_2$ numbers would require some further ideas.
To prove our results, we adapt the ideas of the paper [@matomaki] of Matomäki and Radziwiłł on multiplicative functions in short intervals to considering almost primes in short intervals. In that paper, a groundbreaking result is that for any multiplicative function, with values in $[-1,1]$, its average over $[x,x+h]$ is almost always asymptotically equal to its dyadic average over $[x,2x]$, with $h=h(x)\leq x$ any function tending to infinity. The error terms obtained there for general multiplicative functions are not quite good enough for our purposes. Nevertheless, using similar techniques, and replacing the multiplicative function with the indicator function of the numbers $p_1\dotsm p_k$, with $p_i$ primes from carefully chosen intervals, allows us to find $E_k$ numbers on intervals $[x,x+h]$, with $\frac{h}{\log x}$ growing very slowly. In this setting, we can apply various mean, large and pointwise value results for Dirichlet polynomials, some of which work specifically with primes or the zeta function, but not with general multiplicative functions (such as Watt’s theorem on the twisted moment of the Riemann zeta function, a large values theorem from [@matomaki] for Dirichlet polynomials supported on primes, and Vinogradov’s zero-free region). In many places in the argument, we cannot afford to lose even factors of $\log^{\varepsilon}x$, so we need to factorize Dirichlet polynomials in a manner that is nearly lossless, and use an improved form of the mean value theorem for Dirichlet polynomials. To deal with some of the arising Dirichlet polynomials, we also need some sieve methods, similar to those that have been successfully applied to finding primes in short intervals for example in [@harman-sieves], [@jia] and [@watt-primes]. In the case of $E_2$ numbers, in addition to these methods, we benefit from the theory of exponent pairs and Jutila’s large values theorem.\
The structure of the proofs of Theorems \[t4\] and \[t5\] is the following. We will first present the lemmas necessary for proving Theorem \[t4\], and hence Theorems \[t1\] and \[t2\]. Besides employing these lemmas to prove Theorem \[t4\], we notice that they are already sufficient for finding products of exactly two primes in almost all intervals $[x,x+\log^{5+\varepsilon} x]$, which is as good as Mikawa’s result for $P_2$ numbers up to $\varepsilon$ in the exponent (one could also get $c$ slightly below $5$ using exponent pairs, which are just one of the additional ideas required for Theorem \[t5\]). The rest of the paper is then concerned with reducing the exponent $5+\varepsilon$ to $3.51$ for products of two primes, and this requires some further ingredients, as well as all the lemmas that were needed for products of three or more primes.
Acknowledgements
----------------
The author is grateful to his supervisor Kaisa Matomäki for various useful comments and discussions. The author thanks the referee for careful reading of the paper and for useful comments. While working on this project, the author was supported by the Vilho, Yrjö and Kalle Vaisälä foundation of the Finnish Academy of Science and Letters.
Notation {#subsec:notation}
--------
The symbols $p,q,p_i$ and $q_i$ are reserved for primes, and $d,k,\ell, m$ and $n$ are always positive integers. We often use the same capital letter for a Dirichlet polynomial and its length. We call *zeta sums* partial sums of $\zeta(s)$ or $\zeta'(s)$ of the form $\sum_{n\sim N}n^{-s}$ or $\sum_{n\sim N}(\log n)n^{-s}$.\
The function $\nu(\cdot)$ counts the number of distinct prime divisors of a number, $\mu(\cdot)$ is the Möbius function, $\Lambda(\cdot)$ is the von Mangoldt function, and $d_r(m)$ is the number of solutions to $a_1\dotsm a_r=m$ in positive integers. The function $\omega(\cdot)$ is Buchstab’s function (see Harman’s book [@harman-sieves Chapter 1]), defined as $\omega(u)=\frac{1}{u}$ for $1\leq u\leq 2$ and via the differential equation $\frac{d}{du}(u\omega(u))=\omega(u-1)$ for $u>2$, imposing the requirement that $\omega$ be continuous on $[1,\infty)$. We make the convention that $\omega(u)=0$ for $u<1$. In addition, let $\mathcal{P}(z)=\prod_{p<z}p$, and let $S(A,\mathbb{P},z)$ count the numbers in $A$ coprime to $\mathcal{P}(z)$.\
The quantity $\varepsilon>0$ is always small enough but fixed. The symbols $C_1,C_2,...$ denote unspecified, positive, absolute constants. By writing $n\sim X$ in a summation, we mean $X\leq n<2X$. The expression $1_S$ is the indicator function of the set $S$, so that $1_S(n)=1$ if $n\in S$ and $1_S(n)=0$ otherwise. We use the usual Landau and Vinogradov asymptotic notation $o(\cdot), O(\cdot)$ and $\ll, \gg$. The notation $X\asymp Y$ is shorthand for $X\ll Y\ll X$.
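As an aside, Buchstab’s function defined above is straightforward to tabulate numerically; the following minimal sketch (ours, purely illustrative and not used in the argument) integrates the delay differential equation with a forward Euler step:

```python
import numpy as np

# Minimal sketch: omega(u) = 1/u on [1,2], and (u*omega(u))' = omega(u-1) for u > 2.
h = 1e-4                                      # grid spacing
u = np.arange(1.0, 5.0 + h, h)
omega = np.where(u < 2.0 + h / 2, 1.0 / u, 0.0)
f = u * omega                                 # f(u) = u * omega(u)
lag = int(round(1.0 / h))                     # grid offset corresponding to u - 1
for i in range(lag + 1, len(u)):              # start just above u = 2
    f[i] = f[i - 1] + h * omega[i - lag]      # Euler step for f'(u) = omega(u - 1)
    omega[i] = f[i] / u[i]

print(omega[2 * lag])                         # omega(3) ~ 0.5644 = (1 + log 2)/3
```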
Preliminary lemmas
==================
Reduction to mean values of Dirichlet polynomials {#subsec:reduction}
-------------------------------------------------
We present several lemmas that are required for proving both Theorems \[t4\] and \[t5\]. Later on, we give some additional lemmas that are needed only for proving Theorem \[t5\].\
The plan of the proofs of Theorems \[t4\] and \[t5\], and hence of Theorems \[t1\], \[t2\] and \[t3\], is to transform the problem of comparing almost primes in short and long intervals to finding cancellation in the mean square of the corresponding Dirichlet polynomial. The polynomial can be factorized after it is divided into short intervals, and different methods can be applied to different factors. This approach is utilized in many earlier works on primes and almost primes in short intervals; see e.g. [@harman-sieves], [@matomaki]. We then apply several mean, large and pointwise value theorems, which are presented in Subsection \[subsec:Dirichlet bounds\], to find the desired cancellation in the Dirichlet polynomial.\
The following Parseval-type lemma allows us to reduce the problem of finding almost primes in short intervals to finding cancellation in a Dirichlet polynomial.
\[1\] Let $$\begin{aligned}
S_h(x)=\sum_{x\leq n\leq x+h}a_n,\end{aligned}$$ where $a_n$ are complex numbers, and let $2\leq h_1\leq h_2\leq \frac{X}{T_0^3}$ with $T_0\geq 1$. Also let $F(s)=\sum_{n\sim X}\frac{a_n}{n^s}$. Then $$\begin{aligned}
\label{eq17}
\frac{1}{X}\int_{X}^{2X}\left|\frac{1}{h_1}S_{h_1}(x)-\frac{1}{h_2}S_{h_2}(x)\right|^2 dx&\ll \frac{1}{T_0}+\int_{T_0}^{\frac{X}{h_1}}|F(1+it)|^2 dt\nonumber\\
&+\max_{T\geq \frac{X}{h_1}}\frac{X}{Th_1}\int_{T}^{2T}|F(1+it)|^2 dt.\end{aligned}$$
This is Lemma 14 in the paper [@matomaki] (except that we do not specify the value of $T_0$). A related bound can be found for example in [@harman-sieves Chapter 9].
We choose $T_0=X^{0.01}$, and $h_2=\frac{X}{T_0^3}$ in Lemma \[1\], with $\frac{1}{h}S_h(x)$ the short average appearing in (\[eq15\]) or (\[eq25\]). Now, defining $$\begin{aligned}
F(s)=\sum_{\substack{p_1\dotsm p_k\sim X\\P_i\leq p_i\leq P_i^{1+\varepsilon},i\leq k-1}}(p_1\dotsm p_k)^{-s},\end{aligned}$$ where $P_i$ are as in Theorem \[t4\] or \[t5\], proving Theorems \[t4\] and \[t5\] is reduced to showing that $$\begin{aligned}
\label{eq24}
\int_{T_0}^{T}|F(1+it)|^2 dt=o\left(\left(\frac{Th}{X}+1\right)\cdot \frac{1}{(\log^2 X)(\log_ {k} X)^{2}}\right),\end{aligned}$$ for $T_0=X^{0.01}$ and $h\geq P_1\log X$. Indeed, substituting this to Lemma \[1\] shows that $$\begin{aligned}
\frac{1}{X}\int_{X}^{2X}\left|\frac{1}{h}S_{h}(x)-\frac{1}{h_2}S_{h_2}(x)\right|^2dx=o\left(\frac{1}{(\log^2 X)(\log _k X)^{2}}\right),\end{aligned}$$ where $h_2=\frac{X}{T_0^3}$. It actually suffices to prove (\[eq24\]) for $T\leq X$, since otherwise the mean value theorem (Lemma \[12\]) gives a good enough bound for the last term in (\[eq17\]).\
Note that for $T\leq X$ the trivial bound for the integral in (\[eq24\]), coming from the mean value theorem, is $\ll (\log X)^{-1}$. Thus our task is to save slightly more than one additional logarithm in this integral (for $T\leq \frac{X}{h}$, at least).\
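For concreteness, this trivial bound follows from Lemma \[12\] and the prime number theorem: since there are $\ll \frac{X}{\log X}$ integers $n\sim X$ of the form $p_1\dotsm p_k$ with the $p_i$ restricted as above, we have (as a quick sketch) $$\begin{aligned}
\int_{T_0}^{T}|F(1+it)|^2 dt\ll (T+X)\sum_{\substack{p_1\dotsm p_k\sim X\\P_i\leq p_i\leq P_i^{1+\varepsilon},\,i\leq k-1}}\frac{1}{(p_1\dotsm p_k)^2}\ll \frac{T+X}{X\log X}\ll \frac{1}{\log X}\end{aligned}$$ for $T\leq X$, so the content of (\[eq24\]) is precisely the extra saving described above.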
Once the required estimates for Dirichlet polynomials have been established, we can apply the prime number theorem in short intervals with Vinogradov’s error term (see [@iwaniec-kowalski Chapter 10]) to see that $$\begin{aligned}
\frac{1}{h_2}S_{h_2}(x)-\frac{1}{X}S_X(X)\ll \exp(-(\log
X)^{\frac{3}{5}-\varepsilon}),\end{aligned}$$ for $h_2=x^{0.97},x\sim X$, and hence deduce Theorems \[t4\] and \[t5\] (and consequently \[t1\], \[t2\] and \[t3\]). For example, we compute $$\begin{aligned}
\frac{1}{h_2}\sum_{\substack{x\leq p_1p_2p_3\leq x+h_2\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\P_2\leq p_2\leq P_2^{1+\varepsilon}}}1&=\frac{1}{h_2}\sum_{\substack{P_1\leq p_1\leq P_1^{1+\varepsilon}\\P_2\leq p_2\leq P_2^{1+\varepsilon}}}\left(\pi\left(\frac{x+h_2}{p_1p_2}\right)-\pi\left(\frac{x}{p_1p_2}\right)\right)\\
&=\frac{1}{h_2}\sum_{\substack{P_1\leq p_1\leq P_1^{1+\varepsilon}\\P_2\leq p_2\leq P_2^{1+\varepsilon}}}\frac{h_2}{p_1p_2\log \frac{x}{p_1p_2}}\\
&\quad+O\left(\exp(-(\log x)^{\frac{3}{5}-\frac{\varepsilon}{2}})\right)\\
&=\sum_{P_1\leq p_1\leq P_1^{1+\varepsilon}\atop P_2\leq p_2\leq P_2^{1+\varepsilon}}\frac{1}{p_1p_2\log \frac{X}{p_1p_2}}+O(\exp(-(\log X)^{\frac{3}{5}-\varepsilon})),\end{aligned}$$ and the same asymptotics hold for the dyadic sum. Sometimes we end up comparing the sums $\frac{1}{h_2}S_{h_2}(x)$ and $\frac{1}{x}S_2(x)$ with $a_n$ not quite equal to the coefficients of $F(s)$, but equal to the indicator function of the numbers $p_1p_2n$ with $p_1$ and $p_2$ from the intervals $[P_1,P_1^{1+\varepsilon}]$ and $[P_2,P_2^{1+\varepsilon}],$ respectively, and $n$ having no prime factors smaller than $p_2$. There may also be a simple cross-conditions on $p_1$ and $p_2$, but comparing the sums still causes no difficulty.\
Thus, in the rest of the paper we can concentrate on bounding Dirichlet polynomials. Although there is a close analogy in the formulations of Theorems \[t4\] and \[t5\], estimating the polynomial arising from the latter is more difficult, and will require several additional ideas.
Factorizations for Dirichlet polynomials
----------------------------------------
In bounding Dirichlet polynomials, factorizations play an important role. We encounter situations where the only cross-condition on the variables in the polynomial is that their product belongs to a certain range, so the variables can be separated by dividing them into short ranges and estimating the mean values of the resulting polynomials. The factorization is provided by the following lemma, which also takes into account the improved mean value theorem (Lemma \[2\]).
\[6\]Let $\mathcal{S}\subset [-T,T]$ be measurable and $$\begin{aligned}
F(s)=\sum_{\substack{mn\sim X\\M\leq m\leq M'}}\frac{a_mb_n}{(mn)^s}\end{aligned}$$ for some $M'>M\geq 2$ and for some complex numbers $a_m,b_n$. Let $H\geq 1$ be such that $H\log M$ and $H\log M'$ are integers. Denote $$\begin{aligned}
A_{v,H}(s)=\sum_{e^{\frac{v}{H}}\leq m<e^{\frac{v+1}{H}}}\frac{a_m}{m^s},\quad B_{v,H}(s)=\sum_{n\sim Xe^{-\frac{v}{H}}}\frac{b_n}{n^s}.\end{aligned}$$ Then $$\begin{aligned}
\int_{\mathcal{S}}|F(1+it)|^2dt&\ll |I|^2\int_{\mathcal{S}}|A_{v_0,H}(1+it)B_{v_0,H}(1+it)|^2dt\\
&+T\sum_{\substack{n\in [Xe^{-\frac{1}{H}},Xe^{\frac{1}{H}}]\,\, \text{or}\\ n\in [2X,2Xe^{\frac{1}{H}}]}}|c_n|^2+T\sum_{1\leq h\leq \frac{2X}{T}}\sum_{\substack{m-n=h\atop m,n\in [Xe^{-\frac{1}{H}},Xe^{\frac{1}{H}}]\,\, \text{or}\\ m,n\in [2X,2Xe^{\frac{1}{H}}]\\}}|c_m||c_n|,\end{aligned}$$ with $$\begin{aligned}
c_n=\frac{1}{n}\sum_{n=k\ell\atop M\leq k\leq M'}|a_kb_{\ell}|,\end{aligned}$$ $I=[H\log M,H\log M')$ and $v_0\in I$ a suitable integer.
In applications we have $M'\geq 2M$, so the conditions that $H\log M$ and $H\log M'$ be integers can be ignored, since we can always afford to vary $H$ and $M'$ by the necessary amount.
When proving Theorem \[t4\], we cannot afford to lose any powers of logarithm in some factorizations, and indeed the second term in the lemma crucially has the factor $T$ instead of the factor $X$ occurring in the mean value theorem, and in the first term we will lose a factor of size $\ll H^2\log^2 \frac{M'}{M}$, which in practice is minuscule.
This resembles Lemma 12 in the paper [@matomaki] by Matomäki and Radziwiłł (where, in addition to factorization in short intervals, a Ramaré-type identity is used). We split $F(s)$ into short intervals, obtaining $$\begin{aligned}
F(s)=\sum_{v\in I\cap \mathbb{Z}}\sum_{e^{\frac{v}{H}}\leq m<e^{\frac{v+1}{H}}}\frac{a_m}{m^s}\sum_{Xe^{-\frac{v+1}{H}}\leq n<2Xe^{-\frac{v}{H}}\atop mn\sim X}\frac{b_n}{n^s}.\end{aligned}$$ Observe that $Xe^{-\frac{v+1}{H}}\leq n<Xe^{-\frac{v}{H}}$ can hold above only for $mn\in [Xe^{-\frac{1}{H}},Xe^{\frac{1}{H}}]$. Furthermore, we always have $mn\in [Xe^{-\frac{1}{H}},2Xe^{\frac{1}{H}}]$. This allows us to write $$\begin{aligned}
\label{eq43}
F(s)=\sum_{v\in I\cap \mathbb{Z}}A_{v,H}(s)B_{v,H}(s)+\sum_{k\in [Xe^{-\frac{1}{H}},Xe^{\frac{1}{H}}]\,or\atop k\in [2X,2Xe^{\frac{1}{H}}]}\frac{d_{k}}{k^s}\end{aligned}$$ with $$\begin{aligned}
|d_{k}|\leq \sum_{k=mn}|a_mb_{n}|.\end{aligned}$$ Now the claim of the lemma follows by taking mean squares on both sides of (\[eq43\]) on the line $\Re(s)=1$, applying the improved mean value theorem (Lemma \[2\]), and taking the maximum in the sum over $I$.
Bounds for Dirichlet polynomials {#subsec:Dirichlet bounds}
--------------------------------
We need several mean, large and pointwise value results on Dirichlet polynomials. The following lemma is one of the basic tools.
[(Mean value theorem for Dirichlet polynomials)]{}\[12\] Let $N\geq 1$ and $F(s)=\sum_{n\sim N}\frac{a_n}{n^s}$, where $a_n$ are any complex numbers. Then $$\begin{aligned}
\int_{-T}^{T}|F(it)|^2 dt\ll (N+T)\sum_{n\sim N}|a_n|^2.\end{aligned}$$
See for example Iwaniec and Kowalski’s book [@iwaniec-kowalski Chapter 9].
If the coefficients $a_n$ are supported on the primes or almost primes and are of size $\asymp \frac{1}{n}$, the sum $\sum_{n\sim N}|a_n|^2$ is essentially $\frac{1}{N\log N}$. However, in some places in the proofs of Theorems \[t1\], \[t2\] and \[t3\], it is vital to save one more logarithm in such a situation. This is enabled by an improved mean value theorem.
[(Improved mean value theorem)]{}\[2\] Let $N$ and $F(s)$ be as above. We have $$\begin{aligned}
\label{eq5}
\int_{-T}^{T}|F(it)|^2dt\ll T\sum_{n\sim N}|a_n|^2+T\sum_{1\leq h\leq \frac{N}{T}}\sum_{m-n=h\atop m,n\sim N}|a_m||a_n|.\end{aligned}$$
The number of solutions to $m-n=h$, with $m$ and $n$ primes and $m,n\sim N$, is $\ll \frac{N}{\log^2 N}\cdot \frac{h}{\varphi(h)}$ (with $\varphi$ Euler’s totient function), which follows easily from Brun’s sieve, for example. If $T\leq \frac{N}{h}$, $h\geq \log N$ and $a_n$ is supported on the primes, the first sum in (\[eq5\]) turns out not to be problematic, so we indeed save essentially one additional logarithm with this lemma. We remark that if we have polynomials of length $N\leq T$, Lemma \[2\] reduces to the basic mean value theorem.
This follows from Lemma 7.1 in [@iwaniec-kowalski Chapter 7], taking $Y=10T$ there.
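To make the preceding remark concrete (a sketch only), take $a_n=\frac{1}{n}$ when $n\sim N$ is prime and $a_n=0$ otherwise. The pair count quoted above, together with the elementary bound $\sum_{m\leq M}\frac{m}{\varphi(m)}\ll M$, gives $$\begin{aligned}
T\sum_{1\leq h\leq \frac{N}{T}}\sum_{m-n=h\atop m,n\sim N}|a_m||a_n|\ll \frac{T}{N^2}\sum_{1\leq h\leq \frac{N}{T}}\frac{N}{\log^2 N}\cdot \frac{h}{\varphi(h)}\ll \frac{T}{N\log^2 N}\cdot \frac{N}{T}=\frac{1}{\log^2 N},\end{aligned}$$ while the diagonal term is $T\sum_{n\sim N}|a_n|^2\ll \frac{T}{N\log N}\ll \frac{1}{\log^2 N}$ as soon as $T\leq \frac{N}{\log N}$; this is the extra logarithm compared to Lemma \[12\].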
We also put into use a discrete mean value theorem, which is particularly useful when we take the mean square over a rather small set of points.
[(Halász-Montgomery inequality)]{}\[18\] Let $N$ and $F(s)$ be as before. Let $\mathcal{T}\subset [-T,T]$ be [*well-spaced*]{}, meaning that $t,u\in \mathcal{T}$ and $t\neq u$ imply $|t-u|\geq 1.$ Then $$\begin{aligned}
\sum_{t\in \mathcal{T}}|F(it)|^2\ll (N+|\mathcal{T}|T^{\frac{1}{2}})(\log T)\sum_{n\sim N}|a_n|^2.\end{aligned}$$
For a proof, see Iwaniec and Kowalski’s book [@iwaniec-kowalski Chapter 9].
In addition to mean value theorems, we need some large values theorems. We come across some very short Dirichlet polynomials, say of length $\ll T^{o(1)}$, and we make use of the fact that the coefficients of these polynomials are supported on the primes.
\[7\] Let $P\geq 1,\,V>0$ and $$\begin{aligned}
F(s)=\sum_{p\sim P}\frac{a_p}{p^s}\end{aligned}$$ with $|a_p|\leq 1$. Let $\mathcal{T}\subset [-T,T]$ be a well-spaced set of points such that $|F(1+it)|\geq V$ for each $t\in \mathcal{T}$. Then we have $$\begin{aligned}
|\mathcal{T}|\ll T^{2\frac{\log V^{-1}}{\log P}}V^{-2}\exp\left((1+o(1))\frac{\log T}{\log P}\log \log T\right).\end{aligned}$$
We may also apply this lemma to polynomials not supported on primes, provided that $P\gg X^{\varepsilon}$ for some $\varepsilon>0$. In this case, the lemma is essentially the mean value theorem applied to a suitable moment of the polynomial.
This is Lemma 8 in the paper [@matomaki]. There a factor of $2$ occurs instead of $1+o(1)$ in the last exponential, but the exact same proof works with the factor $1+o(1)$.
For proving Theorem \[t5\], we also need a large values theorem designed for long polynomials. The reason for presenting it along with the lemmas for Theorem \[t4\] is that combining it with the other lemmas already gives the exponent $c=5+\varepsilon$ for $E_2$ numbers. The large values result is a theorem of Jutila that improves on the better known Huxley’s large values theorem.
\[13\] (Jutila’s large values theorem). Let $F(s)=\sum_{n\sim N}\frac{a_n}{n^s}$ with $|a_n|\leq d_r(n)$ for some fixed $r$. Let $\mathcal{T}\subset [-T,T]$ be a well-spaced set such that $|F(1+it)|\geq V$ for $t\in \mathcal{T}$, and let $k$ be any positive integer. We have $$\begin{aligned}
|\mathcal{T}|\ll \left(V^{-2}+\frac{T}{N^{2}}V^{-6+\frac{2}{k}}+V^{-8k}\frac{T}{N^{2k}}\right)(NT)^{o(1)}.\end{aligned}$$
The proof can be found in Jutila’s paper [@jutila-density]. We apply formula (1.4) there to $F(s)^{\ell}$, and have $G=\sum_{n\sim N}\frac{|a_n|^2}{n^2}\ll (NT)^{o(1)}N^{-1}$ in the notation of that paper.
In some cases in the proof of Theorem \[t4\], there will be polynomials supported on primes or almost primes for which the best we can do is apply a pointwise bound. These bounds follow in the end from Vinogradov’s zero-free region.
\[17\] Let $$\begin{aligned}
P(s)=\sum_{n_1\dotsm n_k\sim N}g_1(n_1)\dotsm g_k(n_k)(n_1\dotsm n_k)^{-s},\end{aligned}$$ where $k\geq 1$ is a fixed integer and each $g_i$ is either the Möbius function, the characteristic function of the primes, the identity function, or the logarithm function. We have $$\begin{aligned}
|P(1+it)|\ll \exp\left(-(\log N)^{\frac{1}{10}}\right)\end{aligned}$$ when $\exp((\log N)^{\frac{1}{3}})\leq |t|\leq N^{A\log \log N}$ for any fixed $A>0$.
For $k=1$, the claim follows directly from Perron’s formula and Vinogradov’s zero-free region, so let $k\geq 2$. We may assume that $n_1,...,n_k$ belong to some dyadic intervals $I_1,...,I_k$ such that $I_k=[a,b]$ with $a\gg N^{\frac{1}{k}},b\ll N$. Now $$\begin{aligned}
&\sum_{n_1\in I_1,...,n_{k-1}\in I_{k-1}}g(n_1)\dotsm g(n_{k-1})(n_1\dotsm n_{k-1})^{-1-it}\sum_{n_k\in I_k\atop n_k\sim \frac{N}{n_1\dotsm n_{k-1}}}g(n_k) n_k^{-1-it}\\
&\ll (\log N)^{O(1)}\sum_{n_1\in I_1,...,n_{k-1}\in I_{k-1}}(n_1\dotsm n_{k-1})^{-1}\cdot \exp\left(-\frac{\log N^{\frac{1}{k}}}{(\log t)^{\frac{2}{3}+\varepsilon}}\right)\\
&\ll\exp\left(-(\log N)^{\frac{1}{10}}\right),\end{aligned}$$ as wanted.
Moments of Dirichlet polynomials
--------------------------------
We need Watt’s result on the twisted fourth moment of zeta sums (see Subsection \[subsec:notation\] for the definition of zeta sums). This bound comes into play when we estimate the mean square of a product of Dirichlet polynomials where one of the polynomials is a long zeta sum.
\[16\] (Watt). Let $T\geq T_0\geq T^{\varepsilon}, T^{1+o(1)}\gg M,N\geq 1$. Define the Dirichlet polynomials $N(s)=\sum_{n\sim N}n^{-s}$ or $N(s)=\sum_{n\sim N}(\log n)n^{-s}$ and $M(s)=\sum_{m\sim M}\frac{a_m}{m^s}$ with $a_m$ any complex numbers. We have $$\begin{aligned}
\int_{T_0}^{T}|N(1+it)|^4 |M(1+it)|^2dt\ll \left(\frac{T}{MN^2}(1+M^2T^{-\frac{1}{2}})+\frac{1}{T_0^3}\right)T^{o(1)}\max_{m\sim M}|a_m|^2.\end{aligned}$$
An easy partial summation argument shows that we may assume $N(s)=\sum_{n\sim N}n^{-s}$. The lemma will be reduced to Watt’s original twisted moment result [@watt-thm], where $N(s)$ is replaced with $\zeta(s)$. It is well-known that $|N(1+it)|\ll \frac{1}{t}$ for $N\geq t\geq 1$ (see [@iwaniec-kowalski Chapter 8]), so $$\begin{aligned}
\int_{T_0}^{N}|N(1+it)|^4|M(1+it)|^2dt&\ll \max_{m\sim M}|a_m|^2\int_{T_0}^{T}\frac{1}{t^4}dt\cdot T^{o(1)}\\
&\ll \frac{T^{o(1)}}{T_0^3}\max_{m\sim M}|a_m|^2.\end{aligned}$$ Now it suffices to consider the integrals over dyadic intervals $[U,2U]$ with $N\leq U\leq T.$ These are bounded as in Lemma 2 of [@baker-harman-pintz] (using Watt’s result and simple considerations), since translating the results there from the line $\Re(s)=\frac{1}{2}$ to the line $\Re(s)=1$ is an easy matter (and the bound in [@baker-harman-pintz] should be multiplied by $\max_{m\sim M}|a_m|^2$, as we do not assume $|a_m|\leq 1$).
Sieve estimates {#subsec:sieve}
---------------
There are occasions in the proofs of Theorems \[t4\] and \[t5\] where our Dirichlet polynomials are too long, and we need a device for splitting them into shorter ones. This is enabled by Heath-Brown’s identity and the decomposition resulting from it, which tells that either our Dirichlet polynomial can be replaced with a product of many polynomials, which is desirable, or it can be replaced with products of zeta sums, in which case we can make use of Watt’s theorem.
A Dirichlet polynomial $M(s)=\sum_{m\sim M}\frac{a_m}{m^s}$ with $|a_m|\ll d_r(m)$ for fixed $r$ is called [*prime-factored*]{} if, for each $A>0$, we have $|M(1+it)|\ll_A (\log M)^{-A}$ for $\exp((\log M)^{\frac{1}{3}})\leq t\leq M^{A\log \log M}$.
(Heath-Brown’s decomposition) \[11\] Let an integer $k\geq 1$ and a real number $\delta>0$ be fixed, and let $T\geq 2$. Define $P(s)=\sum_{P\leq p<P'}p^{-s}$ with $P\gg T^{\delta},P'\in \left[P+\frac{P}{\log T},2P\right]$. There exist Dirichlet polynomials $G_1(s),...,G_{L}(s)$ and a constant $C>0$ such that $$\begin{aligned}
|P(1+it)|\ll (\log^C X)(|G_1(1+it)|+\dotsm +|G_{L}(1+it)|)\quad \text{for all}\quad t\in [-T,T],\end{aligned}$$ with $L\leq \log^C X$, each $G_j(s)$ being of the form $$\begin{aligned}
G_j(s)=\prod_{i\leq J_j}M_i(s),\quad J_j\leq 2k,\end{aligned}$$ with $M_i(s)$ prime-factored Dirichlet polynomials (which depend on $j$), whose lengths satisfy $M_1\dotsm M_J=X^{1+o(1)},M_i\gg \exp\left(\frac{\log P}{\log \log P}\right)$. Additionally, each $M_i(s)$ with $M_i>X^{\frac{1}{k}}$ is a zeta sum.
For a similar bound, see Harman’s book [@harman-sieves Chapter 7]. It suffices to prove an analogous result for the polynomial $\sum_{P\leq n<P'}\Lambda(n)n^{-s}$ and use summation by parts. We take $f(n)=n^{-1-it}1_{[P,P']}(n)$ in the general Heath-Brown identity [@heath-brown-vaughan] for $\sum_{n\leq N}f(n)\Lambda(n)$, splitting each resulting variable into dyadic intervals, and separating the variables with Perron’s formula. The summation condition in Heath-Brown’s identity guarantees that of the arising polynomials only the zeta sums can have length $>X^{\frac{1}{k}}$. If there are any polynomials of length $\ll \exp\left(\frac{\log P}{\log \log P}\right)$, these can simply be estimated trivially. The fact that the remaining polynomials of length $\gg \exp\left(\frac{\log P}{\log \log P}\right)$ are prime-factored follows from the fact that they have as their coefficients one of the sequences $(1), (\log n)$ and $(\mu(n))$, so that Lemma \[17\] gives a pointwise saving of $\ll_A (\log P)^{-A}$.
There is one more lemma that we need on the coefficients of Dirichlet polynomials arising from almost primes. We need to bound the following quantities that are related to the quantities occurring in the improved mean value theorem for Dirichlet polynomials.
For any sequence $(a_n)$ of complex numbers, set $X_1=\exp(\frac{\log X}{(\log \log X)^4})$ and $$\begin{aligned}
S_1(X,(a_n))&=\max_{\frac{X}{X_1}\leq Y\leq 4X\atop 1\leq H\leq \log^{10} X}H\sum_{Y\leq n\leq Y+\frac{Y}{H}}\frac{|a_n|^2}{n},\\
S_2(X,(a_n))&=\max_{\frac{X}{X_1}\leq Y\leq 4X\atop 1\leq H\leq \log^{10}X}H\sum_{1\leq h\leq \frac{X}{T}}\sum_{Y\leq n\leq Y+\frac{Y}{H}}\frac{|a_n||a_{n+h}|}{n}.\end{aligned}$$
We get bounds of size essentially $\frac{1}{\log X}$ and $\frac{X}{T\log^2 X}$ for $S_1(X,(a_n))$ and $S_2(X,(a_n))$, respectively, under the assumptions of the next lemma.
\[15\] Let $Z_r\geq \dotsm \geq Z_1\geq 1$ for a fixed $r$ with $Z_r\geq \exp(\frac{\log X}{(\log \log X)^3})$, $Z_r\leq z\leq 4X$, and $$\begin{aligned}
\mathcal{Q}=\left\{n\leq 4X:n=p_1\dotsm p_rm,\,\,p_i\in [Z_i,Z_i^2],\,\,(m,\mathcal{P}(z))=1\right\}.\end{aligned}$$ Let $|a_n|\leq 1_{\mathcal{Q}}(n)$, and let $S_1(X,(a_n))$ and $S_2(X,(a_n))$ be as defined above. Then $$\begin{aligned}
S_1(X,(a_n))\ll \frac{1}{\log z}\quad \text{and}\quad S_2(X,(a_n))\ll \frac{1}{\log^2 z}\cdot \frac{X}{T}.\end{aligned}$$
Notice that we could also take as the set $\mathcal{Q}$ the set $$\begin{aligned}
\mathcal{Q}'=\left\{n\leq 4X:n=p_1\dotsm p_rm,\,\,p_i\in [Z_i,Z_i^2],\,\,(m,\mathcal{P}(p_r))=1\right\}\end{aligned}$$ or the set $$\begin{aligned}
\mathcal{Q}''=\left\{n\leq 4X:n=p_1\dotsm p_r,\,\,p_i\in [Z_i,Z_i^2]\right\}.\end{aligned}$$ Indeed, the sizes of $\mathcal{Q}'$ and $\mathcal{Q}''$ can be bounded by sizes of sets of the form given in the lemma (with the parameter $z=Z_r$ or $z=X^{\frac{1}{r-1}}$). This observation will be used subsequently.
Let $S(A,\mathbb{P},z)$ count the numbers in $A$ having no prime factors below $z$, and let $\Pi$ be the product of all primes in $\bigcup_{i=1}^{r}[Z_i,Z_i^2]\cap[1,z]$. Brun’s sieve yields $$\begin{aligned}
S_1(X,(a_n))&\ll \max_{\frac{X}{X_1}\leq Y\leq 4X\atop 1\leq H\leq \log^{10}X} \frac{H}{Y}\cdot \left|\left[Y,Y+\frac{Y}{H}\right]\cap \mathcal{Q}\right|\\
&\ll \max_{\frac{X}{X_1}\leq Y\leq 4X\atop 1\leq H\leq \log^{10}X} \frac{H}{Y}\cdot \left|\left\{n\in \left[Y,Y+\frac{Y}{H}\right]:\,\left(n,\frac{\mathcal{P}(z)}{\Pi}\right)=1\right\}\right|\\
&\ll \max_{\frac{X}{X_1}\leq Y\leq 4X\atop 1\leq H\leq \log^{10}X}\frac{H}{Y}\cdot\left(\frac{Y}{H\log z}+z^{\frac{1}{2}}\right)\\
&\ll \frac{1}{\log z},\end{aligned}$$ since $z^{\frac{1}{2}}\leq (4X)^{\frac{1}{2}}\ll \frac{Y}{H\log^2 z}$.\
Furthermore, Brun’s sieve also yields $$\begin{aligned}
S_2(X,(a_n))&\ll \max_{\frac{X}{X_1}\leq Y\leq 4X\atop 1\leq H\leq \log^{10}X}\frac{H}{Y}\sum_{1\leq h\leq \frac{X}{T}} \left|\left\{n\in \left[Y,Y+\frac{Y}{H}\right]:\left(n(n+h),\frac{\mathcal{P}(z)}{\Pi}\right)=1\right\}\right|\\
&\ll \max_{\frac{X}{X_1}\leq Y\leq 4X\atop 1\leq H\leq \log^{10}X}\frac{H}{Y}\cdot \sum_{1\leq h\leq \frac{X}{T}}\frac{h}{\varphi(h)}\left(\frac{Y}{H\log^2 z}+z^{\frac{1}{2}}\right)\\
&\ll \frac{1}{\log^2 z}\cdot\frac{X}{T},\end{aligned}$$ by the elementary bound $\sum_{m\leq M}\frac{m}{\varphi(m)}\ll M$. This proves the statement.
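The elementary bound $\sum_{m\leq M}\frac{m}{\varphi(m)}\ll M$ used in the last step is classical; as a purely illustrative aside, the following short Python sketch computes $\frac{1}{M}\sum_{m\leq M}\frac{m}{\varphi(m)}$ for a few values of $M$ (the ratio is known to tend to $\zeta(2)\zeta(3)/\zeta(6)\approx 1.94$).

```python
def euler_phi_table(M):
    """Euler's totient phi(m) for all m <= M, via a standard sieve."""
    phi = list(range(M + 1))
    for p in range(2, M + 1):
        if phi[p] == p:                      # p is prime
            for m in range(p, M + 1, p):
                phi[m] -= phi[m] // p
    return phi

for M in (10**3, 10**4, 10**5, 10**6):
    phi = euler_phi_table(M)
    print(M, round(sum(m / phi[m] for m in range(1, M + 1)) / M, 4))
```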
Mean squares of Dirichlet polynomials
=====================================
With all the necessary lemmas available, we are ready to present the propositions that quickly lead to Theorem \[t4\] and are also necessary in proving Theorem \[t5\].
\[p1\] Let $X\geq 1, T\geq T_0=X^{0.01},0\leq \alpha_1\leq 1$ and $1\leq P\ll X^{o(1)}$, where $P$ is a function of $X$. Define $$\begin{aligned}
K(s)=\sum_{n\sim \frac{X}{P}}\frac{a_n}{n^s}\quad\text{and}\quad P(s)=\sum_{p\sim P}\frac{b_p}{p^s},\end{aligned}$$ where $a_n$ and $b_p$ are arbitrary complex numbers. Denoting $$\begin{aligned}
\mathcal{T}_1=\{t\in [T_0,T]:|P(1+it)|\leq P^{-\alpha_1}\}\end{aligned}$$ we have $$\begin{aligned}
\int_{\mathcal{T}_1}|K(1+it)P(1+it)|^2dt\ll \frac{T}{X}\cdot P^{1-2\alpha_1}\left(S_1\left(\frac{X}{P},(a_n)\right)+S_2\left(\frac{X}{P},(a_n)\right)\right).\end{aligned}$$
The improved mean value theorem (Lemma \[2\]) and definition of $\mathcal{T}_1$ give $$\begin{aligned}
\int_{\mathcal{T}_1}|K(1+it)P(1+it)|^2dt&\ll P^{-2\alpha_1}\int_{\mathcal{T}_1}|K(1+it)|^2dt\\
&\ll P^{-2\alpha_1}\left(T\sum_{k\sim \frac{X}{P}}\frac{|a_k|^2}{k^2}+T\sum_{1\leq h\leq \frac{X}{PT}}\sum_{k,k'\sim \frac{X}{P}\atop k-k'=h}\frac{|a_k||a_{k'}|}{kk'}\right)\\
&\ll P^{-2\alpha_1}\left(\frac{TP}{X}S_1\left(\frac{X}{P},(a_n)\right)+\frac{TP}{X}S_2\left(\frac{X}{P},(a_n)\right)\right)\\
&= \frac{T}{X}\cdot P^{1-2\alpha_1}\left( S_1\left(\frac{X}{P},(a_n)\right)+S_2\left(\frac{X}{P},(a_n)\right)\right),\end{aligned}$$ which was the claim.
\[p2\] Let $X\geq 1, T\geq T_0=X^{0.01}$ and $1\leq P\ll X^{o(1)}$. Also let $0\leq \alpha_1,\alpha_2\leq 1$ and let the Dirichlet polynomials $K(s)$ and $M(s)$ with $K=\frac{X}{M}\gg X^{\varepsilon}$ be $$\begin{aligned}
K(s)=\sum_{n\sim K}\frac{a_n}{n^s}\quad\text{and}\quad M(s)=\sum_{m\sim M} \frac{c_m}{m^s},\end{aligned}$$ where $|c_m|\leq d_r(m)$ for fixed $r$, and $|a_n|= 1_{S}(n)$ for some set $S$ whose elements have at most $r$ prime factors from $[P,2P]$ and have no prime factors in $[1,X^{0.01}]\setminus \bigcup_{i=1}^{r}[Z_i,Z_i^2]$ for some $Z_i\geq 1$. Write $$\begin{aligned}
P(s)&=\sum_{p\sim P}\frac{b_p}{p^s}\quad\text{with}\quad |b_p|\leq 1\quad \text{and}\\
\mathcal{T}&=\{t\in [T_0,T]:\,\,|P(1+it)|\geq P^{-\alpha_1}\,\, \text{and}\,\, |M(1+it)|\leq M^{-\alpha_2}\}.\end{aligned}$$ We have $$\begin{aligned}
\int_{\mathcal{T}}|K(1+it)M(1+it)|^2dt\ll M^{-2\alpha_2}P^{(2+10\varepsilon)\alpha_1\ell}\cdot (\ell!)^{1+o(1)}\cdot \left(\frac{T}{X}\cdot \frac{1}{\log X}+\frac{1}{\log^2 X}\right),\end{aligned}$$ where $\ell=\lceil\frac{\log \frac{X}{K}}{\log P}\rceil$.
For products of three primes, our variables are picked so that the bound given by this proposition saves $X^{\varepsilon}$ over the trivial bound. However, for products of $k\geq 4$ primes, our savings are much more modest, and the factor $\frac{T}{X}\cdot \frac{1}{\log X}+\frac{1}{\log^2 X}$ becomes necessary.
This result is inspired by Lemma 13 in [@matomaki]. Using the fact that $|M(1+it)|^2\leq M^{-2\alpha_2}(P^{\alpha_1}|P(1+it)|)^{2\ell}$ for $t\in \mathcal{T}$ and splitting polynomials into shorter ones, we have $$\begin{aligned}
\label{eq34}
\int_{\mathcal{T}}|K(1+it)M(1+it)|^2dt&\ll M^{-2\alpha_2}P^{2\alpha_1\ell}\int_{\mathcal{T}}|K(1+it)P(1+it)^{\ell}|^2dt\nonumber\\
&\ll M^{-2\alpha_2}P^{2\alpha_1\ell}\ell^2\int_{ \mathcal{T}}|A(1+it)|^2dt,\end{aligned}$$ where $$\begin{aligned}
A(s)=\sum_{n\sim Y}\frac{A_n}{n^s}\end{aligned}$$ for some $KP^{\ell}\leq Y\leq 2K(2P)^{\ell}$ (so $X\leq Y\leq 2^{\ell}PX$), the coefficients $A_n$ satisfying $$\begin{aligned}
|A_n|\leq\sum_{\substack{n=p_1\dotsm p_{\ell}m\\p_i\sim P\\m\sim K}}|a_m|.\end{aligned}$$ By the improved mean value theorem (Lemma \[2\]), we see that $\eqref{eq34}$ is bounded by $$\begin{aligned}
\ll M^{-2\alpha_2}P^{2\alpha_1\ell}\ell^2\left(T\sum_{n\sim Y}\left|\frac{A_n}{n}\right|^2+T\sum_{1\leq h\leq \frac{Y}{T}}\sum_{m-n=h}\frac{|A_m||A_n|}{mn}\right)\end{aligned}$$ Note that $A_n\neq 0$ implies that $n$ has at most $\ell+r$ prime factors from $[P,2P]$ and that $n$ is coprime to $$\begin{aligned}
\Pi=\prod_{\substack{p\leq X^{0.01}\\p\not \in \bigcup_{i=1}^r[Z_i,Z_i^2]\cup [P,2P]}}p.\end{aligned}$$ Consequently, $|A_n|\leq (\ell+r)!$, and so $$\begin{aligned}
\sum_{n\sim Y}\left|\frac{A_n}{n}\right|^2&\leq \frac{1}{Y}\cdot (\ell+r)!\sum_{n\sim Y}\frac{|A_n|}{n}\\&\ll \frac{1}{Y}(\ell!)^{1+o(1)}\sum_{m\sim K}\frac{|a_m|}{m}\sum_{p_1,...,p_\ell\sim P}\frac{1}{p_1\dotsm p_{\ell}}\\
&\ll (\ell!)^{1+o(1)}\cdot \frac{1}{Y}\sum_{m\sim K\atop (m,\Pi)=1}\frac{|a_m|}{m}\\
&\ll (\ell!)^{1+o(1)}\cdot \frac{1}{X\log X},\end{aligned}$$ where the last step comes from Brun’s sieve and the facts that $Y\geq X$ and $K\gg X^{\varepsilon}$.\
To deal with the second sum arising from the improved mean value theorem, notice that by Brun’s sieve the number of $n\leq y$ with $(n(kn+h),\Pi)=1$ is $\ll \frac{y}{\log^2 y}\frac{hk}{\varphi(hk)}$ with an absolute implied constant. Since $\varphi(ab)\geq \varphi(a)\varphi(b)$ and $\frac{k}{\varphi(k)}\leq 2^{\ell}$ when $k$ has $\ell$ prime factors, we have $$\begin{aligned}
&\sum_{1\leq h\leq \frac{Y}{T}}\sum_{n\sim Y}\frac{|A_n||A_{n+h}|}{n(n+h)}\\
&\leq \frac{1}{Y^2}\cdot (\ell+r)!\sum_{1\leq h\leq \frac{Y}{T}}\sum_{p_1,...,p_{\ell}\sim P}\sum_{\substack{(m,\Pi)=1\\(p_1\dotsm p_{\ell}m+h,\Pi)=1\\m\leq \frac{2Y}{p_1\dotsm p_{\ell}}}}1\\
&\ll \frac{1}{Y^2}\cdot (\ell!)^{1+o(1)}\sum_{1\leq h\leq \frac{Y}{T}}\sum_{p_1,...,p_{\ell}\sim P}\frac{Y}{p_1\dotsm p_{\ell}\log^2 \frac{Y}{p_1\dotsm p_{\ell}}}\frac{p_1\dotsm p_{\ell}h}{\varphi(p_1\dotsm p_{\ell}h)}\\
&\ll \frac{1}{Y\log^2 Y}(\ell!)^{1+o(1)}\sum_{1\leq h\leq \frac{Y}{T}}\frac{h}{\varphi(h)}\sum_{p_1,...,p_{\ell}\sim P}\frac{1}{p_1\dotsm p_{\ell}}\\
&\ll \frac{1}{T}(\ell!)^{1+o(1)}\frac{1}{\log^2 X},\end{aligned}$$ as desired.
\[p3\] Let $X^{1+o(1)}\geq T\geq T_0=X^{0.01}$ and $0\leq \alpha_1\leq 1$. Furthermore, let $$\begin{aligned}
P(s)=\sum_{p\sim P}\frac{a_p}{p^s},\quad \text{and}\quad M(s)=\sum_{M\leq q\leq M'}\frac{1}{q^s},\end{aligned}$$ with $|a_p|\leq 1$, $M'\in [M+\frac{M}{\log P},2M]$, $\log X\leq P\ll X^{o(1)}$ and $PM=X^{1+o(1)}$, and let $$\begin{aligned}
\mathcal{U}=\{t\in [T_0,T]:|P(1+it)|\geq P^{-\alpha_1}\}.\end{aligned}$$ Then, for $\ell=\lfloor \varepsilon\frac{\log X}{\log P}\rfloor$, $$\begin{aligned}
&\int_{\mathcal{U}}|P(1+it)M(1+it)|^2dt\\
&\ll (P^{2\alpha_1-1}\log^2 X)^{(1+o(1))\ell}X^{o(1)}+(\log X)^{-100}\left(1+\frac{|\mathcal{U}'|T^{\frac{1}{2}}}{X^{\frac{2}{3}-o(1)}}\right)\end{aligned}$$ for some well-spaced set $\mathcal{U}'\subset \mathcal{U}$.
Heath-Brown’s decomposition (Lemma \[11\]) with $k=3$ allows us to write, for some $C>0$, $$\begin{aligned}
|M(1+it)|\ll (\log^{C} X)(|G_1(1+it)|+\dotsm +|G_L(1+it)|)\end{aligned}$$ with $L\leq \log^C X$. Here each $G_j(s)$ is either of the form $$\begin{aligned}
G_j(s)=M_1(s)M_2(s)M_3(s),\,\, M_1M_2M_3=X^{1+o(1)},\,\, M_1\geq M_2\geq M_3,\,\, M_3\geq \exp\left(\frac{\log X}{2\log \log X}\right)\end{aligned}$$ with $M_i(s)$ prime-factored polynomials, or of the form $$\begin{aligned}
G_j(s)=N_1(s)N_2(s),\,\, N_1N_2=X^{1+o(1)},\,\, N_1\geq N_2\end{aligned}$$ with $N_i(s)$ zeta sums (it is possible that $N_2(s)$ is the constant polynomial $1^{-s}$). It suffices to bound the contributions of the zeta sums and the prime-factored polynomials separately.\
We look at the zeta sums first. We split the integration domain into dyadic intervals $[T_1,2T_1]$ with $T_0\leq T_1\leq T$. Keeping in mind that $N_1\geq X^{\frac{1}{2}-o(1)}$, $P^{\ell}=X^{\varepsilon+o(1)},$ and $|P(1+it)P^{\alpha_1}|^{2\ell}\geq 1$ for $t\in \mathcal{U}$, Cauchy-Schwarz and Watt’s theorem (Lemma \[16\]) yield $$\begin{aligned}
&\int_{\mathcal{U}\cap [T_1,2T_1]}|P(1+it)N_1(1+it)N_2(1+it)|^2dt\\
&\ll P^{2\alpha_1\ell}\int_{\mathcal{U}\cap[T_1,2T_1]}|N_1(1+it)N_2(1+it)P(1+it)^{\ell}|^2dt\\
&\ll P^{2\alpha_1\ell}\left(\int_{T_1}^{2T_1}|N_1(1+it)|^4|P(1+it)|^{4\ell}dt\right)^{\frac{1}{2}}\cdot\left(\int_{T_1}^{2T_1}|N_2(1+it)|^4 dt\right)^{\frac{1}{2}}\\
&\ll P^{2\alpha_1 \ell}X^{o(1)}\left(\left(\frac{T_1+T_1^{\frac{1}{2}}P^{4\ell}}{N_1^2P^{2\ell}}+\frac{1}{T_1^3}\right)(2\ell)!^2\right)^{\frac{1}{2}}\cdot \left(\frac{T_1+N_2^2}{N_2^2}\right)^{\frac{1}{2}}\\
&\ll P^{(2\alpha_1-1) \ell}X^{o(1)}\cdot (\ell!)^{2+o(1)}+\frac{P^{2\alpha_1\ell}X^{o(1)}(\ell!)^{2+o(1)}}{T_0}\\
&\ll (P^{2\alpha_1-1}\log^2 X)^{(1+o(1))\ell}X^{o(1)}+X^{-\varepsilon}.\end{aligned}$$ Combining the contributions of the dyadic intervals simply multiplies this bound by $\log X$.\
To bound the contribution of the prime-factored polynomials, we first observe that $$\begin{aligned}
\int_{\mathcal{U}}|P(1+it)M(1+it)|^2 dt\ll \sum_{t\in \mathcal{U}'}|P(1+it)M(1+it)|^2\end{aligned}$$ for some well-spaced $\mathcal{U}'\subset \mathcal{U}$. We make use of the Halász-Montgomery inequality (Lemma \[18\]), and of the prime-factored property applied to the polynomial $M_3$ with length $M_3\in \left[\exp\left(\frac{\log X}{2\log \log X}\right), X^{\frac{1}{3}+o(1)}\right]$, finding that $$\begin{aligned}
&\sum_{t\in\mathcal{U}'}|P(1+it)M_1(1+it)M_2(1+it)M_3(1+it)|^2\\
&\ll (\log X)^{-100-D}\sum_{t\in \mathcal{U}'}|P(1+it)M_1(1+it)M_2(1+it)|^2\\
&\ll (\log X)^{-100-2C}\left(1+\frac{T^{\frac{1}{2}}|\mathcal{U}'|}{X^{\frac{2}{3}-o(1)}}\right),\end{aligned}$$ where $D$ is so large that $D-2C-1$ exceeds the power of logarithm arising from the mean square of the coefficients of the divisor-bounded polynomial $P(s)M_1(s)M_2(s)$. Now the statement is proved.
Proof of Theorem 4
==================
The following proposition yields Theorem \[t4\] (and hence Theorems \[t1\] and \[t2\]) immediately, in view of the remarks of Subsection \[subsec:reduction\].
\[p4\] Let $k\geq 3$ be a fixed integer, $\varepsilon>0$ be small enough and $T_0=X^{0.01}$, as before. Define $$\begin{aligned}
F(s)=\sum_{\substack{p_1\dotsm p_k\sim X\\P_i\leq p_i\leq P_i^{1+\varepsilon}\\
i\leq k-1}}(p_1\dotsm p_k)^{-s},\end{aligned}$$ where $P_i$ are as in Theorem \[t4\]. Then, for $T\geq T_0$, we have $$\begin{aligned}
\int_{T_0}^{T}|F(1+it)|^2dt\ll \left(\frac{TP_1\log X}{X}+1\right)\cdot \frac{1}{(\log^2 X)(\log_k X)^{3}}.\end{aligned}$$
We make use of the ideas introduced in the paper [@matomaki] by Matomäki and Radziwiłł. Trivially, we may assume $T\leq X^{1+o(1)}$. Let $H=(\log_k X)^{3}$, $$\begin{aligned}
Q_{v,H}(s)=\sum_{e^{\frac{v}{H}}\leq p<e^{\frac{v+1}{H}}}p^{-s},\end{aligned}$$ and for each $j=1,...,k$, $$\begin{aligned}
F_{v,H,j}(s)=\sum_{\substack{p_1\dotsm p_{j-1}p_{j+1}\dotsm p_k\sim Xe^{-\frac{v}{H}}\\P_i\leq p_i\leq P_i^{1+\varepsilon},\,i\neq j,\,i\leq k-1}}(p_1\dotsm p_{j-1}p_{j+1}\dotsm p_k)^{-s}.\end{aligned}$$ Define $\alpha_1,...,\alpha_{k-1}$ by $\alpha_{j}=10j\varepsilon$ for $j\leq k-2$, and $\alpha_{k-1}=\frac{1}{12}-\varepsilon$, with $\varepsilon$ so small that $\alpha_{k-2}\leq \frac{\sqrt{\varepsilon}}{10}$. We split the domain of integration as $[T_0,T]=\mathcal{T}_1\cup \mathcal{T}_2\cup\dotsm \cup \mathcal{T}_{k-1} \cup \mathcal{T}$. We write $t\in \mathcal{T}_1$ if $$\begin{aligned}
|Q_{v,H}(1+it)|\leq e^{-\frac{\alpha_1 v}{H}}\end{aligned}$$ for all $v\in I_1=[H\log P_1,(1+\varepsilon)H\log P_1]$. We define recursively $t\in \mathcal{T}_j$ for $j=2,...,k-1$ if $t\not \in \bigcup_{j'\leq j-1} \mathcal{T}_{j'}$ but $$\begin{aligned}
|Q_{v,H}(1+it)|\leq e^{-\frac{\alpha_j v}{H}}\end{aligned}$$ for all $v\in I_j=[H\log P_j,(1+\varepsilon)H\log P_j]$. Finally, we write $$\begin{aligned}
\mathcal{T}=[T_0,T]\setminus \bigcup_{j=1}^{k-1}\mathcal{T}_j.\end{aligned}$$ Lemma \[6\], with the notation of Subsection \[subsec:sieve\], yields $$\begin{aligned}
\label{eq35}
\int_{\mathcal{S}}|F(1+it)|^2dt&\ll H^2(\log^2 P_j)\int_{\mathcal{S}}|Q_{v_j,H}(1+it)F_{v_j,H,j}(1+it)|^2dt\nonumber\\
&+\frac{T}{HX}(S_1(X,(c_n))+S_2(X,(c_n)))\end{aligned}$$ for some $v_j\in I_j$, and any $\mathcal{S}\subset [T_0,T]$. The coefficients $c_n$ in the definitions of $S_1$ and $S_2$ are naturally the convolution of the absolute values of the coefficients of the polynomials $Q_{v_j,H}(s)$ and $F_{v_j,H,j}(s).$ By Lemma \[15\] and the remark related to it, the last two terms above contribute $$\begin{aligned}
&\ll \frac{T}{X}\cdot \frac{1}{H\log X}+\frac{1}{H\log^2 X}\\
&\ll \left(\frac{TP_1\log X}{X}+1\right)\cdot \frac{1}{H\log^2 X}.\end{aligned}$$
We choose $\mathcal{S}=\mathcal{T}_1,...,\mathcal{T}_{k-1},\mathcal{T}$ in \eqref{eq35}. Summarizing, it suffices to estimate for each $j=1,...,k-1$ the quantity $$\begin{aligned}
B_j:=H^2(\log^2 P_j)\int_{\mathcal{T}_j}|Q_{v_j,H}(1+it)F_{v_j,H,j}(1+it)|^2dt,\end{aligned}$$ where $v_j\in [H\log P_j,(1+\varepsilon)H\log P_j]$ is chosen so that the integral is maximal, and additionally the quantity $$\begin{aligned}
B:=H^2(\log^2 X)\int_{\mathcal{T}}|Q_{v_k,H}(1+it)F_{v_k,H,k}(1+it)|^2 dt,\end{aligned}$$ where $v_k\in [H\log \frac{X}{(P_1\dotsm P_{k-1})^{1+\varepsilon}},H\log \frac{2X}{P_1\dotsm P_{k-1}}]$ is also picked so that the integral is maximized.\
The integral over $\mathcal{T}_1$ is bounded with the help of Proposition \[p1\]. We take $K(s)=F_{v_1,H,1}(s)$ and $P(s)=Q_{v_1,H}(s)$. Now Lemma \[15\] and Proposition \[p1\] result in $$\begin{aligned}
B_1&\ll H^2(\log^2 P_1) P_1^{1+\varepsilon-2\alpha_1}\frac{T}{X}\left(\frac{1}{\log X}+\frac{X}{P_1T}\cdot \frac{1}{\log^2 X}\right)\\
&\ll\left(\frac{TP_1\log X}{X}+1\right)\cdot \frac{P_1^{10\varepsilon-2\alpha_1}}{\log^2 X},\end{aligned}$$ and this is an admissible bound, since $\alpha_1=10\varepsilon$ and $P_1\gg(\log_{k} X)^{\varepsilon^{-1}}$.\
For the integral over $\mathcal{T}_j$ with $2\leq j\leq k-1$ we use Proposition \[p2\], with $K(s)=F_{v_j,H,j}(s), M(s)=Q_{v_j,H}(s)$ and $P(s)=Q_{v_{j-1},H}(s)$, and for $\ell=\lceil \frac{\log P_j}{\log P_{j-1}}\rceil$ deduce $$\begin{aligned}
\label{eq44}
B_{j}&\ll H^2(\log^2 P_{j})P_{j}^{-2\alpha_{j}}\cdot P_{j-1}^{(2+10\varepsilon) \alpha_{j-1} \ell}\nonumber\\
&\quad\cdot (\ell!)^{1+o(1)}\cdot \left(\frac{T}{X\log X}+\frac{1}{\log^2 X}\right)\nonumber\\
&\ll P_{j-1}^{10}P_{j}^{2(\alpha_{j-1}-\alpha_{j})+10\varepsilon+(1+\varepsilon)\frac{\log \log P_{j}}{\log P_{j-1}}}\left(\frac{TP_1\log X}{X}+1\right)\frac{1}{\log^2 X}.\end{aligned}$$ For $2\leq j\leq k-2$, we have $\frac{\log \log P_j}{\log P_{j-1}}\leq 2\varepsilon$ and $\alpha_j-\alpha_{j-1}=10\varepsilon$, so the definitions of $P_{j-1}$ and $P_j$ result in $$\begin{aligned}
B_j\ll \left(\frac{TP_1\log X}{X}+1\right)\frac{1}{\log^2 X}(\log_k X)^{-3},\end{aligned}$$ as wanted. For $j=k-1$, we have $\alpha_{k-2}\leq \frac{\sqrt{\varepsilon}}{10}$, $\alpha_{k-1}=\frac{1}{12}-\varepsilon$ and $P_{k-1}=(\log X)^{\varepsilon^{-2}}$, so taking $j=k-1$ in the above computation gives $$\begin{aligned}
B_{k-1}&\ll P_{k-1}^{-\frac{1}{6}+\frac{1}{4}\sqrt{\varepsilon}+\frac{1+\varepsilon}{6+10\sqrt{\varepsilon}}}\ll P_{k-1}^{-\varepsilon}\ll (\log X)^{-\varepsilon^{-1}},\end{aligned}$$ and therefore the case of $\mathcal{T}_{k-1}$ has been dealt with.\
Finally, the integral over $\mathcal{T}$ is estimated using Proposition \[p3\] with $P(s)=Q_{v_{k-1},H}(s)$ and $M(s)=Q_{v_k,H}(s)$. Denoting $\ell=\lfloor \varepsilon\frac{\log X}{\log P_{k-1}}\rfloor$ and separating by Perron’s formula the variable $p_{k-1}$ from the rest of the variables in $F_{v_k,H,k}(s)$ (and bounding the polynomial corresponding to the variables $p_1,...,p_{k-2}$ by $\ll 1$), we see that $$\begin{aligned}
B&\ll H^2(\log^4 X)\int_{\mathcal{T}}|Q_{v_{k-1},H}(1+it)Q_{v_k,H}(1+it)|^2dt\\
&\ll H^2(\log^4 X)(P_{k-1}^{-\frac{5}{6}+2\varepsilon}\log^2 X)^{(1+o(1))\ell}X^{o(1)}+(\log X)^{-95}\left(1+\frac{|\mathcal{T'}|T^{\frac{1}{2}}}{X^{\frac{2}{3}-o(1)}}\right)\end{aligned}$$ for some well-spaced set $\mathcal{T}'\subset \mathcal{T}$. Since $P_{k-1}=(\log X)^{\varepsilon^{-2}}$, the first term is $\ll X^{-\frac{\varepsilon}{3}}$. In addition, Lemma \[7\] allows us to bound the size of $\mathcal{T}'$ by $$\begin{aligned}
|\mathcal{T}'|\ll T^{2\alpha_{k-1}}P_{k-1}^2 X^{(\varepsilon^2+o(1))}\ll X^{\frac{1}{6}-\frac{\varepsilon}{2}}, \end{aligned}$$ because $\alpha_{k-1}=\frac{1}{12}-\varepsilon$. Therefore, the integral over $\mathcal{T}$ is $\ll (\log X)^{-95}$. In conclusion, we deduced the bound $$\begin{aligned}
B_1+\dotsm +B_{k-1}+B\ll \left(\frac{TP_1\log X}{X}+1\right)\cdot \frac{1}{H\log^2 X},\end{aligned}$$ which finishes the proof of this proposition and of Theorem \[t4\].
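As a numerical sanity check on the exponent obtained above for $B_{k-1}$, the following sketch (illustrative only; the $o(1)$ and implied constants are ignored) evaluates $f(\varepsilon)=-\frac{1}{6}+\frac{\sqrt{\varepsilon}}{4}+\frac{1+\varepsilon}{6+10\sqrt{\varepsilon}}$ and compares it with $-\varepsilon$, confirming that $f(\varepsilon)\leq -\varepsilon$ once $\varepsilon$ is small enough.

```python
from math import sqrt

def f(eps):
    # exponent of P_{k-1} in the bound for B_{k-1} above
    return -1/6 + sqrt(eps)/4 + (1 + eps)/(6 + 10*sqrt(eps))

for eps in (1e-3, 1e-4, 1e-5, 1e-6):
    print(f"eps={eps:.0e}  f(eps)={f(eps):+.2e}  -eps={-eps:+.2e}  f(eps)<=-eps: {f(eps) <= -eps}")
```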
A corollary on products of two primes
-------------------------------------
As a byproduct of the methods above, we arrive at the exponent $c=5+\varepsilon$ for products of two primes, which already replicates Mikawa’s exponent for $P_2$ numbers[^2]. As in the case of products of three or more primes, it suffices to prove $$\begin{aligned}
\int_{T_0}^{T}|F(1+it)|^2dt=o\left(\left(\frac{TP_1\log X}{X}+1\right)\cdot \frac{1}{(\log X)^{2+\varepsilon}}\right),\end{aligned}$$ where $$\begin{aligned}
F(s)=\sum_{\substack{p_1p_2\sim X\\P_1\leq p_1<P_1^{1+\varepsilon}}} (p_1p_2)^{-s},\end{aligned}$$ and $P_1=\log^a X$ with $a=4+\varepsilon.$ We may again suppose $T\leq X^{1+o(1)}$.\
We can redefine the set $\mathcal{T}_1$ in the proof of Proposition \[p4\] with the new values $P_1=\log^{a} X$, $H=(\log X)^{3\varepsilon}$, keeping the value $\alpha_1=10\varepsilon$, and we see again from Proposition \[p1\] that the mean square of $F(1+it)$ over $\mathcal{T}_1$ is suitably small. For applying Propositions \[p2\] and \[p3\], we need more polynomials than the two that correspond to the variables $p_1$ and $p_2$ in the definition of $F(s)$. Indeed, Heath-Brown’s decomposition (Lemma \[11\]) enables us to split the polynomial corresponding to $p_2$ into $(\log X)^{O(1)}$ sums of the form $|M_1(s)M_2(s)|+|N_1(s)N_2(s)|$, where $M_1(s)$ and $M_2(s)$ are prime-factored Dirichlet polynomials with $M_1M_2=X^{1+o(1)}$, $\exp\left(\frac{\log X}{2\log \log X}\right)\ll M_1 \ll X^{\frac{1}{3}+o(1)}$ and $N_1(s)$ and $N_2(s)$ zeta sums with $N_1N_2=X^{1+o(1)}$. The contribution of the zeta sums over the complement of $\mathcal{T}_1$ can be managed easily with Watt’s theorem, similarly as in the proof of Proposition \[p3\].\
To estimate the contribution of the prime-factored polynomials $M_i(s)$, we redefine the set $\mathcal{T}_2$ as $\{t\in [T_0,T]: |M_1(1+it)|\leq M_1^{-\alpha_2}\}\setminus \mathcal{T}_1$, and Proposition \[p2\] (with $P(s)$ corresponding to $p_1$ and $K(s)=M_1(s)M_2(s)$) produces a valid bound[^3] in the $\mathcal{T}_2$ case, as long as $a\geq \frac{1}{2(\alpha_2-\alpha_1)}+100\varepsilon$. We take $\alpha_2=\frac{1}{8}-\varepsilon$, which turns out to be the best choice here.\
Finally, when considering the integral over the complement of $\mathcal{T}_1\cup \mathcal{T}_2$, instead of Proposition \[p3\] we apply the simple inequality $$\begin{aligned}
\int_{\mathcal{T}}|M_1(1+it)M_2(1+it)|^2dt\ll (\log X)^{-100}\left(1+\frac{|\mathcal{T}'|T^{\frac{1}{2}}}{M_2}\right)\end{aligned}$$ for some well-spaced $\mathcal{T}'\subset \mathcal{T}$, with $\mathcal{T}\subset [T_0,T]$ arbitrary. This inequality follows just from the prime-factored property of $M_1(s)$ combined with the Halász-Montgomery inequality (Lemma \[18\]). Now, denoting $M_1=X^{\nu+o(1)}$, we need to have $|\mathcal{T}'|\ll X^{\frac{1}{2}-\nu-\varepsilon^2}$ whenever $$\begin{aligned}
\mathcal{T}'\subset \{t\in [T_0,T]: |M_1(1+it)|\geq M_1^{-\alpha_2}\}\end{aligned}$$ is well spaced. Jutila’s large values theorem (Lemma \[13\]) applied with $F(s)=M_1(s)^{\ell}$, $V=M_1^{-(\frac{1}{8}-\varepsilon)\ell}$ and $k=2$, $\ell\in \{2,3\}$ tells us that $$\begin{aligned}
|\mathcal{T'}|\ll \begin{cases}X^{\max\{\frac{\nu}{2},\,\,-\frac{11}{4}\nu+1,\,\,1-4\nu\}-2\varepsilon^{2}} \\
X^{\max\{\frac{3}{4}\nu,\,\,-\frac{33}{8}\nu+1,\,\,1-6\nu\}-2\varepsilon^{2}}.\end{cases}\end{aligned}$$ We know that $\nu\leq \frac{1}{3}+o(1)$, and for $\frac{2}{7}\leq \nu\leq \frac{1}{3}$ the first bound is $\ll X^{\frac{1}{2}-\nu-\varepsilon^2}$, while for $\frac{4}{25}\leq \nu\leq \frac{2}{7}$ the second bound is small enough.\
In the case $\nu\leq \frac{4}{25}$, we may simply appeal to Lemma \[7\] to bound $|\mathcal{T}'|$ (with $V=M_1^{-\alpha_2}$), and get $$\begin{aligned}
|\mathcal{T}'|\ll T^{2\alpha_2}X^{2\nu \alpha_2+o(1)}\ll X^{0.29+100\varepsilon}\ll X^{\frac{1}{2}-\nu-\varepsilon}\end{aligned}$$ for $\alpha_2=\frac{1}{8}-\varepsilon$. This proves that $\alpha_2=\frac{1}{8}-\varepsilon$ was permissible, leading to $a=\frac{1}{2\alpha_2}+C_1\varepsilon$, so the admissible exponent becomes $c=a+1\leq 5+2C_1\varepsilon$ (and $\varepsilon>0$ was arbitrary). The rest of the paper therefore deals with improving the value $c=5+\varepsilon$ to $c=3.51$, which will require several further ideas, along with the ones already introduced.
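Since each of the two Jutila bounds is a maximum of linear functions of $\nu$ (hence convex), while $\frac{1}{2}-\nu$ is linear, the verification above reduces to checking the inequalities at the endpoints of the relevant intervals. The following sketch does this with exact rational arithmetic (the $\varepsilon$-terms are dropped, so equality is attained at the optimized endpoints); it is only a bookkeeping aid, not part of the proof.

```python
from fractions import Fraction as F

def bound1(nu):   # Jutila's theorem with k = 2, ell = 2
    return max(nu / 2, 1 - F(11, 4) * nu, 1 - 4 * nu)

def bound2(nu):   # Jutila's theorem with k = 2, ell = 3
    return max(F(3, 4) * nu, 1 - F(33, 8) * nu, 1 - 6 * nu)

# first bound on [2/7, 1/3], second bound on [4/25, 2/7]
for lo, hi, b in [(F(2, 7), F(1, 3), bound1), (F(4, 25), F(2, 7), bound2)]:
    for nu in (lo, hi):
        assert b(nu) <= F(1, 2) - nu, (nu, b(nu))

# range nu <= 4/25, handled via Lemma [7] with alpha_2 = 1/8:
# the exponent 2*alpha_2*(1 + nu) = (1 + nu)/4 increases in nu while
# 1/2 - nu decreases, so checking nu = 4/25 suffices
assert (1 + F(4, 25)) / 4 <= F(1, 2) - F(4, 25)
print("all endpoint checks pass")
```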
Lemmas for Theorem 5
====================
Exponent pairs {#subsec:Exp}
--------------
In the proof of Theorem \[t5\], several zeta sums arise, and in some instances it is useful to have a small pointwise power saving in these sums. This is provided by the theory of exponent pairs. We could compute a long list of exponent pairs and choose the optimal estimate depending on the length of the zeta sum, but it turns out that using a single suitable exponent pair improves the exponent $c$ for $E_2$ numbers by approximately $0.02$, while using more of them would offer very little additional advantage and would complicate the calculations. Therefore, instead of formulating the general definition of exponent pairs (found in [@montgomery Chapter 3]), we write down the estimate coming from this specific pair.
Let $$\begin{aligned}
\sigma(\nu)=-\min\left\{\frac{1-\nu}{126}-\frac{\nu}{21},0\right\}.\end{aligned}$$ Then we have $$\begin{aligned}
\sum_{n\in I}n^{-1-it}\ll t^{-\sigma(\nu)+o(1)}\end{aligned}$$ for each $I=[N_1,N_2]$ with $t^{\nu}\leq N_1\leq N_2\ll t^{\nu+o(1)}$.
This follows immediately from the fact that $(\frac{1}{126},\frac{20}{21})$ is an exponent pair. For the proof of this, see Montgomery’s book [@montgomery Chapter 3].
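For concreteness, $(\frac{1}{126},\frac{20}{21})$ arises, for instance, from the classical exponent pair $(\frac{1}{2},\frac{1}{2})$ by five applications of the van der Corput $A$-process $A(\kappa,\lambda)=\big(\frac{\kappa}{2\kappa+2},\frac{\kappa+\lambda+1}{2\kappa+2}\big)$. The following sketch is a purely arithmetical check of this, together with an evaluation of $\sigma(\nu)$ at two sample points.

```python
from fractions import Fraction as F

def A(kappa, lam):
    """van der Corput A-process on exponent pairs."""
    return kappa / (2 * kappa + 2), (kappa + lam + 1) / (2 * kappa + 2)

pair = (F(1, 2), F(1, 2))        # classical exponent pair (1/2, 1/2)
for _ in range(5):
    pair = A(*pair)
print(pair)                      # (Fraction(1, 126), Fraction(20, 21))

def sigma(nu):
    """sigma(nu) as defined above; it is positive exactly when nu > 1/7."""
    return -min((1 - nu) / 126 - nu / 21, 0)

print(sigma(0.10), sigma(0.195))   # 0 and approximately 0.0029
```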
Lemmas on sieve weights
-----------------------
For finding products of two primes on short intervals, we need some lemmas concerning sieve weights. In the cases of the sums $\Sigma_1(h)$ and $\Sigma_2(h)$ in Subsection \[subsec:Sigma1\], there will be too few variables for finding cancellation in the mean square of the corresponding Dirichlet polynomials. However, by introducing sieve weights into these sums, we gain an additional variable which is summed over all integers in a certain range, and separating that variable gives a long zeta sum (because there are few variables), to which Watt’s theorem can be applied. Also in the case of these sums, we need to make use of an additional saving of a logarithm in the mean value theorem. However, here the coefficients are not supported on almost primes but are closely related to the Dirichlet convolution $\lambda* 1$, where $\lambda_d$ are the sieve weights. The sieve weights can be taken to be those of Brun’s pure sieve. Specifically, we take $$\begin{aligned}
\lambda_d^{+}=\begin{cases}\mu(d),\quad \nu(d)\leq R,d\mid \mathcal{P}(w)\\ 0\quad\quad\quad \text{otherwise}\end{cases}\quad\quad \lambda_d^{-}=\begin{cases}\mu(d),\quad \nu(d)\leq R+1,d\mid \mathcal{P}(w)\\ 0\quad\quad\quad \text{otherwise}\end{cases}\end{aligned}$$ where the notations are as in Subsection \[subsec:notation\], and $$\begin{aligned}
w=\exp\left(\frac{\log X}{(\log \log X)^3}\right)\quad \text{and}\quad R=2\left\lfloor (\log \log X)^{\frac{3}{2}}\right\rfloor.\end{aligned}$$ Since, apart from almost primes, the support of $\lambda*1$ contains only numbers having exceptionally many prime factors, we are able to save one logarithm factor in the mean values. This is done in the following lemma.
\[5\]Let $\lambda_d^{+}$ and $\lambda_d^{-}$ be the sieve weights of Brun’s pure sieve with the above notations. Let $k\geq 0$ be a fixed integer, $R_1,...,R_k\geq 1$ and $$\begin{aligned}
a_n=\sum_{p_1\dotsm p_k\mid n\atop R_i\leq p_i\leq R_i^{1+\varepsilon}}\left|\sum_{n=p_1\dotsm p_k dm}\lambda_d^{\pm}\right|\end{aligned}$$ where either the sign $+$ or $-$ is chosen throughout (for $k=0$, we define $p_1\dotsm p_k=1$). Then for $y\gg_A \frac{x}{\log ^{A}x}$ and $x\sim X$ we have $$\begin{aligned}
&\sum_{x\leq n\leq x+y}|a_n|^2\ll_{A} (\log \log X)^{O_k(1)}\frac{y}{\log X}\label{eq6}\\
&\sum_{1\leq h\leq \frac{x}{T}}\sum_{m-n=h\atop m,n\in [x,x+y]}|a_m||a_n|\ll_{A}(\log \log X)^{O_k(1)} \frac{X}{T}\cdot \frac{y^2}{\log^2 X}\label{eq45}.\end{aligned}$$
For the proof of this lemma, we need a couple of other lemmas.
\[4\] For $x\geq 2$ and positive integer $\ell$, let $$\begin{aligned}
\pi_{\ell}(x)=|\{n\in [1,x]:\nu(n)=\ell\}|.\end{aligned}$$ There exist absolute constants $K$ and $C$ such that $$\begin{aligned}
\pi_{\ell}(x)<\frac{Kx}{\log x}\frac{\left(\log \log x+C\right)^{\ell-1}}{(\ell-1)!}\end{aligned}$$ for all $\ell$ and $x\geq 2$.
This is an elementary result of Hardy and Ramanujan from [@hardy].
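As an illustration of the shape of this bound (assuming, as elsewhere in this paper, that $\nu(n)$ denotes the number of distinct prime factors of $n$), the following sketch computes $\pi_{\ell}(x)$ for $x=10^6$ and prints the normalized quantity $\pi_{\ell}(x)\cdot\frac{(\ell-1)!\,\log x}{x(\log\log x)^{\ell-1}}$, which indeed stays of moderate size; the constants $K$ and $C$ are not made explicit here.

```python
from math import log, factorial

def omega_table(x):
    """omega[n] = number of distinct prime factors of n, for n <= x."""
    omega = [0] * (x + 1)
    for p in range(2, x + 1):
        if omega[p] == 0:                    # p is prime
            for m in range(p, x + 1, p):
                omega[m] += 1
    return omega

x = 10**6
omega = omega_table(x)
for ell in range(1, 7):
    pi_ell = sum(1 for n in range(2, x + 1) if omega[n] == ell)
    ratio = pi_ell * factorial(ell - 1) * log(x) / (x * log(log(x)) ** (ell - 1))
    print(ell, pi_ell, round(ratio, 3))
```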
\[8\] Let $a\geq 1$ be fixed, and let $R=2\lfloor (\log \log X)^{\frac{3}{2}}\rfloor$ as before. Then for any $A>0$ $$\begin{aligned}
\sum_{n\sim X\atop \nu(n)\geq R}a^{\nu(n)}\ll_{a,A} \frac{X}{\log^{A} X}.\end{aligned}$$
The sum in question can be written as $$\begin{aligned}
\sum_{\ell\geq R}a^{\ell}|\{n\sim X:\nu(n)=\ell\}|,\end{aligned}$$ and by Lemma \[4\] this is $$\begin{aligned}
&\ll \frac {X}{\log X}\sum_{\ell\geq R}\left(\frac{ae(\log \log X+C)}{\ell-1}\right)^{\ell-1}\\
&\ll_{a} X\cdot 2^{-R}\ll_A \frac{X}{\log^A X}\end{aligned}$$ by the definition of $R$.
We can now proceed to proving Lemma \[5\].\
It suffices to consider the lower bound sieve weights. We assume $k\geq 1$, as the case $k=0$ is similar but a little simpler. Define $\theta_n=1*\lambda_n^{-}$. We have $$\begin{aligned}
\theta_n&=\sum_{\substack{d\mid n\\\nu(d)\leq R\\d\mid \mathcal{P}(w)}}\mu(d)\nonumber\\
&=\sum_{d\mid (n,\mathcal{P}(w))}\mu(d)+O\left(\sum_{d\mid n\atop \nu(d)>R}|\mu(d)|\right)\nonumber\\
&=1_{(n,\mathcal{P}(w))=1}+O(2^{\nu(n)}1_{\nu(n)>R}).\end{aligned}$$ Using this, we bound the sum \eqref{eq6}. Denoting by $\Pi$ the product of all the primes in $\bigcup_{i=1}^{k}[R_i,R_i^{1+\varepsilon}]\cap[1,w]$, we observe that $$\begin{aligned}
\label{eq37}
a_n&=\sum_{p_1\dotsm p_k\mid n\atop R_i\leq p_i\leq R_i^{1+\varepsilon}}|\theta_{\frac{n}{p_1\dotsm p_k}}|\leq \nu(n)^k(1_{\left(n,\frac{\mathcal{P}(w)}{\Pi}\right)=1}+2^{\nu(n)}1_{\nu(n)>R}).\end{aligned}$$ The contribution of the first term on the right-hand side of \eqref{eq37} to the sum \eqref{eq6} is $$\begin{aligned}
&\ll \sum_{x\leq n\leq x+y\atop \left(n,\frac{\mathcal{P}(w)}{\Pi}\right)=1} \nu(n)^{2k}\ll (\log \log X)^{O_k(1)}\sum_{x\leq n\leq x+y\atop \left(n,\frac{\mathcal{P}(w)}{\Pi}\right)=1}1\ll (\log \log X)^{O_k(1)}\frac{y}{\log X}\end{aligned}$$ by Brun’s sieve and the fact that $\nu(n)\ll (\log \log X)^3$ when $(n,\mathcal{P}(w))=1.$ On the other hand, the second term on the right-hand side of \eqref{eq37} contributes to \eqref{eq6} at most $$\begin{aligned}
&\ll \sum_{x\leq n\leq x+y\atop \nu(n)\geq R}\nu(n)^{2k}4^{\nu(n)}\ll_k \sum_{x\leq n\leq x+y\atop \nu(n)\geq R}5^{\nu(n)}\ll_{A,k}\frac{X}{\log^A X}\label{eq46}\end{aligned}$$ by Lemma \[8\]. This proves the first bound in Lemma \[5\].\
The second bound in Lemma \[5\] is proved analogously. The two terms in \eqref{eq37} can be combined in four ways into products of two terms (two of these are symmetric). One of the cases contributes to \eqref{eq45} at most $$\begin{aligned}
\ll \sum_{1\leq h\leq \frac{x}{T}}\sum_{m-n=h\atop m,n\in [x,x+y]}\nu(m)^k\nu(n)^k1_{\left(m,\frac{\mathcal{P}(w)}{\Pi}\right)=1}1_{\left(n,\frac{\mathcal{P}(w)}{\Pi}\right)=1}\ll (\log \log X)^{O_k(1)} \frac{X}{T}\cdot \frac{y^2}{\log^2 X}\end{aligned}$$ by Brun’s sieve. The two symmetric terms obtained by multiplying terms in \eqref{eq37} have an impact of $$\begin{aligned}
\ll \sum_{1\leq h\leq \frac{x}{T}}\sum_{m-n=h\atop m,n\in [x,x+y]}\nu(m)^k\nu(n)^k1_{\left(m,\frac{\mathcal{P}(w)}{\Pi}\right)=1}2^{\nu(n)}1_{\nu(n)>R},\end{aligned}$$ where the coefficients depending on $m$ can be bounded trivially, while the coefficients depending on $n$ save an arbitrary power of logarithm, as in formula \eqref{eq46}. Finally, the fourth term arising from the multiplication of the terms in \eqref{eq37} also saves an arbitrary power of logarithm by the same argument.
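The weights $\lambda_d^{\pm}$ of Brun's pure sieve satisfy the Bonferroni-type sandwich $\sum_{d\mid n}\lambda_d^{-}\leq 1_{(n,\mathcal{P}(w))=1}\leq \sum_{d\mid n}\lambda_d^{+}$, valid because $R$ is even; this is what makes them lower and upper bound sieve weights. As a purely illustrative aside, the following sketch checks the sandwich directly for small parameters (the actual choices of $w$ and $R$ in the paper are of course much larger).

```python
from math import comb

def primes_below(w):
    return [p for p in range(2, w) if all(p % q for q in range(2, p))]

def truncated_moebius_sum(n, ps, level):
    """Sum of mu(d) over d | n with d | P(w) and nu(d) <= level."""
    k = sum(1 for p in ps if n % p == 0)          # distinct primes p < w dividing n
    return sum((-1) ** j * comb(k, j) for j in range(0, min(level, k) + 1))

w, R = 30, 2                                      # small illustrative parameters; R even
ps = primes_below(w)
for n in range(1, 5000):
    upper = truncated_moebius_sum(n, ps, R)       # corresponds to lambda^+
    lower = truncated_moebius_sum(n, ps, R + 1)   # corresponds to lambda^-
    indicator = 1 if all(n % p for p in ps) else 0
    assert lower <= indicator <= upper, n
print("Brun sandwich verified for all n < 5000")
```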
Proof of Theorem 5
==================
Before proving Theorem \[t5\], we need some preparation. Define $$\begin{aligned}
S_h(x)=\sum_{x\leq p_1p\leq x+h\atop P_1\leq p_1\leq P_1^{1+\varepsilon}}1,\quad S_X=S_X(X),\end{aligned}$$ and set $$\begin{aligned}
w=\exp\left(\frac{\log X}{(\log \log X)^3}\right).\end{aligned}$$ We use Buchstab’s identity twice to decompose $$\begin{aligned}
S_h(x)&=\sum_{\substack{x\leq p_1n\leq x+h\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\(n,\mathcal{P}(w))=1\\n>1}}1-\sum_{\substack{x\leq p_1q_1n\leq x+h\\ P_1\leq p_1\leq P_1^{1+\varepsilon}\\w\leq q_1<\sqrt {x}\\(n,\mathcal{P}(q_1))=1\\n>1}}1\\
&=\sum_{\substack{x\leq p_1n\leq x+h\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\(n,\mathcal{P}(w))=1\\n>1}}1-\sum_{\substack{x\leq p_1q_1n\leq x+h\\ P_1\leq p_1\leq P_1^{1+\varepsilon}\\w\leq q_1<\sqrt {x}\\(n,\mathcal{P}(w))=1\\n>1}}1+\sum_{\substack{x\leq p_1q_1q_2n\leq x+h\\ P_1\leq p_1\leq P_1^{1+\varepsilon}\\w\leq q_2<q_1<\sqrt {x}\\(n,\mathcal{P}(q_2))=1\\n>1}}1.\end{aligned}$$ Call these sums $\Sigma_1(h),\Sigma_2(h)$ and $\Sigma_3(h)$, respectively, and call the corresponding dyadic sums $\Sigma_1(X),\Sigma_2(X)$ and $\Sigma_3(X)$, respectively. We will divide $\Sigma_3(h)$ into two parts $\Sigma_3'(h)$ and $\Sigma_3''(h)$ in such a way that $\Sigma_1(h), \Sigma_2(h)$ and $\Sigma_3'(h)$ can be evaluated asymptotically, while the error from $\Sigma_3''(h)$ is manageable. To be precise, we will prove that $$\begin{aligned}
\frac{1}{h}S_h(x)&=\frac{1}{h}(\Sigma_1(h)-\Sigma_2(h)+\Sigma_3'(h)+\Sigma_3''(h))\nonumber\\
&=\frac{1}{X}(\Sigma_1(X)-\Sigma_2(X)+\Sigma_3'(X))+\frac{1}{h}\Sigma_3''(h)+o\left(\frac{1}{\log X}\right)\label{eq38}\\
&=\frac{1}{X}S_X+\frac{1}{h}\Sigma_3''(h)-\frac{1}{X}\Sigma_3''(X)+o\left( \frac{1}{\log X}\right)\nonumber\\
&\geq \frac{1}{X}S_X-\frac{1}{X}\Sigma_3''(X)+o\left( \frac{1}{\log X}\right)\nonumber\\
&\geq \varepsilon\cdot \frac{1}{X}S_X\label{eq39}\end{aligned}$$ almost always, with the steps \eqref{eq38} and \eqref{eq39} being the nontrivial ones. This estimate will then immediately lead to Theorem \[t5\]. To prove these statements, we require some auxiliary results for the cases of $\Sigma_1(h),\Sigma_2(h)$ and $\Sigma_3(h)$.
Mean square bounds related to Theorem 5
---------------------------------------
We need three additional mean square bounds to deal with the sums $\Sigma_1(h),\Sigma_2(h)$ and $\Sigma_3(h)$. The first is a relative of Proposition \[p3\] and would already improve slightly the exponent $c=5+\varepsilon$ obtained from the proof of Theorem \[t4\]. It will not be applied directly in the proof of Theorem \[t5\], but instead as an ingredient in the proof of Proposition \[p6\].
\[p8\] Let $X^{1+o(1)}\geq T\geq T_0=X^{0.01}$, and $0\leq \alpha_1\leq 1$. Furthermore, let $$\begin{aligned}
P(s)=\sum_{P\leq p\leq P'}\frac{1}{p^s},\quad M(s)=\sum_{m\sim M}\frac{b_m}{m^s},\end{aligned}$$ with $P=X^{\nu+o(1)}$, $P'\in \left[P+\frac{P}{\log X},2P\right]$, $0< \nu\leq \frac{1}{2}$, $|b_m|\leq d_r(m)$ for fixed $r$, and $PM=X^{1+o(1)}$. Also let $$\begin{aligned}
\mathcal{U}=\{t\in [T_0,T]:|P(1+it)|\geq P^{-\alpha_1}\}.\end{aligned}$$ Then, $$\begin{aligned}
\int_{\mathcal{U}}|P(1+it)M(1+it)|^2dt\ll (\log X)^{-100}+X^{\frac{1}{2}-\min\{2\sigma(\nu),\frac{\nu}{2}\}+o(1)}\cdot \frac{|\mathcal{U}'|P}{X}\end{aligned}$$ for some well-spaced $\mathcal{U}'\subset \mathcal{U}$.
Note that Heath-Brown’s decomposition (Lemma \[11\]) gives $$\begin{aligned}
|P(1+it)| \ll (\log^C X)(|G_1(1+it)|+\dotsm+|G_L(1+it)|)\end{aligned}$$ with $L\leq \log^C X$ and each $G_j(s)$ either of the form $G_j(s)=N(s)$ with $N(s)$ a zeta sum of length $P^{1-o(1)}$, or $G_j(s)=M_1(s)M_2(s)$ with $M_1$ and $M_2$ prime-factored polynomials of length $M_1\geq M_2\geq \exp(\frac{\log X}{\log \log X}),M_1M_2=P^{1-o(1)}.$ To bound the contribution of the zeta sum, we divide the integral over $\mathcal{U}$ into integrals over dyadic intervals $[T_1,2T_1]$ with $T_1\in [T_0,T]$, and write $N=T_1^{\mu+o(1)}$ with $\mu\geq \nu$. If $\mu>1$, we know that $|N(1+it)|\ll \frac{\log t}{t}$ and $M(1+it)\ll (\log X)^{O(1)}$, so $$\begin{aligned}
\int_{\mathcal{U}\cap [T_1,2T_1]}|M(1+it)N(1+it)|^2dt\ll \frac{(\log X)^{O(1)}}{T_0}.\end{aligned}$$ If $\mu\leq 1$, we first pick a well-spaced $\mathcal{U}'\subset \mathcal{U}$ such that $$\begin{aligned}
\int_{\mathcal{U}}|M(1+it)N(1+it)|^2dt\ll \sum_{t\in \mathcal{U}'}|M(1+it)N(1+it)|^2.\end{aligned}$$ Now the Halász-Montgomery inequality and the fact that $N(s)$ is a zeta sum give $$\begin{aligned}
\sum_{t\in\mathcal{U}'\cap [T_1,2T_1]}|M(1+it)N(1+it)|^2&\ll T^{-2\sigma(\nu)+o(1)}\sum_{t\in\mathcal{U}'\cap [T_1,2T_1]}|M(1+it)|^2\\
&\ll T^{-2\sigma(\nu)+o(1)}\left(1+\frac{|\mathcal{U}'|T_1^{\frac{1}{2}+o(1)}}{\frac{X}{P}}\right).\end{aligned}$$
To deal with the contribution of the prime-factored polynomials $M_i(s)$, we may use the Halász-Montgomery inequality in a manner analogous to the above to obtain the estimate $$\begin{aligned}
&\int_{\mathcal{U}}|M(1+it)M_1(1+it)M_2(1+it)|^2dt\ll (\log X)^{-100}\left(1+\frac{|\mathcal{U}'|T^{\frac{1}{2}+o(1)}}{\frac{X}{P^{\frac{1}{2}}}}\right),\end{aligned}$$ since $MM_1\gg \frac{X^{1+o(1)}}{P^{\frac{1}{2}}}$. Taking the maximum of these two results produces the claimed bound.
Our second mean square bound is a type I estimate where we exploit a long zeta sum with the help of Watt’s theorem. In the cases of $\Sigma_1(h)$ and $\Sigma_2(h)$, this is necessary, and in the case of $\Sigma_3(h)$ it improves our exponent for Theorem \[t5\]. A closely related estimate can be found for example in [@harman-sieves Chapter 9].
\[p5\] Let $X^{1+o(1)}\gg T\geq T_0$, and let $M(s), N(s),P(s)$ be Dirichlet polynomials with coefficients bounded by $X^{o(1)}$ and supported on the intervals $[M,2M],[N,2N]$,$[P,2P]$, respectively. Denote $Q(s)=\sum_{m\sim Q}\frac{a_m}{m^s}$, and let $N(s)$ be a zeta sum. Suppose in addition that $$\begin{aligned}
MNP=X^{1+o(1)},\,\, PQ^2\leq X^{\frac{1}{4}},\,\,M^2P\ll X^{1+o(1)}.\end{aligned}$$ Then $$\begin{aligned}
\int_{T_0}^{T}|M(1+it)N(1+it)P(1+it)Q(1+it)|^2dt\ll X^{o(1)} \left(Q^{-1}+\frac{1}{T_0}\right)\max_{m\sim Q}|a_m|^2.\end{aligned}$$
In all our applications, the polynomial $Q(s)$ has length essentially $X^{\varepsilon}$, and it is used to win by $X^{\varepsilon^2}$, say, in our estimates.
We will reduce the proposition to Watt’s theorem (Lemma \[16\]). Divide the integration domain into dyadic intervals $[T_1,2T_1]$. By Cauchy-Schwarz, the mean value theorem and Watt’s theorem, we see that $$\begin{aligned}
&\int_{T_1}^{2T_1}|M(1+it)N(1+it)P(1+it)Q(1+it)|^2dt\\
&\ll \left(\int_{T_1}^{2T_1}|N(1+it)|^4|P(1+it)Q(1+it)^2|^2 dt\right)^{\frac{1}{2}}\\
&\quad\cdot\left(\int_{T_1}^{2T_1}|M(1+it)|^4 |P(1+it)|^2 dt\right)^{\frac{1}{2}}\\
&\ll \left(\left(\frac{T_1^{o(1)}(T_1+T_1^{\frac{1}{2}}P^2Q^4)}{N^2PQ^2}+\frac{T^{o(1)}}{T_1^3}\right)\max_{m\sim Q}|a_m|^4\right)^{\frac{1}{2}}\left(\frac{T_1+M^2P}{M^2P}\right)^{\frac{1}{2}}\\
&\ll\left(\left(\frac{T_1^{o(1)}(T_1+T_1^{\frac{1}{2}}P^2Q^4)}{N^2PQ^2}\right)\max_{m\sim Q}|a_m|^4\right)^{\frac{1}{2}}\left(\frac{T_1+M^2P}{M^2P}\right)^{\frac{1}{2}}+\frac{X^{o(1)}}{T_0}\max_{m\sim Q}|a_m|^2.\end{aligned}$$ Hence, we need $$\begin{aligned}
(X+X^{\frac{1}{2}}P^2Q^4)(X+M^2P)\ll(MNPX^{o(1)})^2,\end{aligned}$$ and this is guaranteed by our conditions.
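For completeness, the last step can be spelled out: using $T_1\ll X^{1+o(1)}$, $PQ^2\leq X^{\frac{1}{4}}$, $M^2P\ll X^{1+o(1)}$ and $MNP=X^{1+o(1)}$, we have $$\begin{aligned}
X\cdot X&=X^2,\qquad X\cdot M^2P\ll X^{2+o(1)},\\
X^{\frac{1}{2}}P^2Q^4\cdot X&=X^{\frac{3}{2}}(PQ^2)^2\leq X^{2},\qquad X^{\frac{1}{2}}P^2Q^4\cdot M^2P=X^{\frac{1}{2}}(PQ^2)^2M^2P\ll X^{2+o(1)},\end{aligned}$$ so that indeed $(X+X^{\frac{1}{2}}P^2Q^4)(X+M^2P)\ll X^{2+o(1)}=(MNPX^{o(1)})^2$.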
For the $\Sigma_3(h)$ case in Subsection \[subsec:sigma\], we also need the following mean square bound, which is somewhat analogous to Proposition \[p4\] and is based on Propositions \[p1\], \[p2\] and \[p8\], but it will become clear only later why it is crucial for proving Theorem \[t5\].
\[p6\]Let $0\leq \nu\leq \frac{1}{2},$ $0<\alpha_2\leq 1$, $a=\frac{1}{2\alpha_2}+C_2\varepsilon$, $P_1=\log^a X$, $X^{1+o(1)}\gg T\geq T_0=X^{0.01}$, and $w\leq P_2= X^{\nu+o(1)}$ with $w=\exp\left(\frac{\log X}{(\log \log X)^3}\right)$. Also let $$\begin{aligned}
G(s)=\sum_{\substack{p_1p_2p_3n\sim X\\P_i\leq p_i\leq P_i^{1+\varepsilon},\,i\leq 2\\p_2<p_3\\(n,\mathcal{P}(p_2))=1\\n>1}}a_n(p_1p_2p_3n)^{-s},\end{aligned}$$ where $|a_n|\ll (\log X)^{\varepsilon}$. Suppose that for every Dirichlet polynomial $M(s)=\sum_{m\sim M}\frac{b_m}{m^s}$ with $|b_m|\leq d_r(m)$ for fixed $r$ and $M=X^{\nu+o(1)}$ any well-spaced set $$\begin{aligned}
\mathcal{U}'\subset\{t\in [0,T]:|M(1+it)|\geq M^{-\alpha_2}\}\end{aligned}$$ satisfies $|\mathcal{U}'|\ll X^{\frac{1}{2}-\nu+\min\{2\sigma(\nu),\frac{\nu}{2}\}-\varepsilon}$. Then we have $$\begin{aligned}
\int_{T_0}^{T}|G(1+it)|^2 dt\ll \left(\frac{TP_1\log X}{X}+1\right)\frac{1}{\log^{2+\varepsilon} X}.\end{aligned}$$
Let $\alpha_1=100\varepsilon$ and define $H=\log^{10\varepsilon}X$. Let $$\begin{aligned}
Q_{v,H,1}(s)=\sum_{e^{\frac{v}{H}}\leq p_1<e^{\frac{v+1}{H}}}p_1^{-s},\quad Q_{v,H,2}(s)=\sum_{e^{\frac{v}{H}}\leq p_2<e^{\frac{v+1}{H}}}p_2^{-s}\end{aligned}$$ and $$\begin{aligned}
G_{v,H,1}(s)&=\sum_{\substack{p_2p_3p_4m\sim Xe^{-\frac{v}{H}}\\P_{2}\leq p_{2}\leq P_{2}^{1+\varepsilon}\\p_2<p_3,\,p_2\leq p_4\\(m,\mathcal{P}(p_4))=1}}a_{p_4m}(p_2p_3p_4m)^{-s},\\ G_{v,H,2}(s)&=\sum_{\substack{p_1p_3p_4m\sim Xe^{-\frac{v}{H}}\\P_{1}\leq p_1\leq P_1^{1+\varepsilon}\\(m,\mathcal{P}(p_4))=1}}a_{p_4m}(p_1p_3p_4m)^{-s}.\end{aligned}$$
For $j=1,2$, we have $$\begin{aligned}
\int_{\mathcal{S}}|G(1+it)|^2 dt&\ll \left(\frac{TP_1\log X}{X}+1\right)\frac{1}{\log^{2+\varepsilon} X}\\
&+H^2(\log^2 P_j)(\log^{10(j-1)}X)\int_{\mathcal{S}}|Q_{v_j,H,j}(1+it)G_{v_j,H,j}(1+it)|^2 dt\end{aligned}$$ for some $v_j\in [H\log P_j,(1+\varepsilon)H\log P_j]$ and any measurable $\mathcal{S}\subset [T_0,T]$. In the case $j=1$, this follows from Lemmas \[6\] and \[15\], while in the case $j=2$, we use Perron’s formula to separate the variables in $G(s)$. We partition $[T_0,T]$ as $\mathcal{T}_1\cup \mathcal{T}_2\cup \mathcal{T}$ with $$\begin{aligned}
\mathcal{T}_1&=\{t\in [T_0,T]:|Q_{v_1,H,1}(1+it)|\leq P_1^{-\alpha_1}\},\\
\mathcal{T}_2&=\{t\in [T_0,T]:|Q_{v_2,H,2}(1+it)|\leq P_2^{-\alpha_2}\}\setminus \mathcal{T}_1,\end{aligned}$$ and $\mathcal{T}=[T_0,T]\setminus (\mathcal{T}_1\cup \mathcal{T}_2)$.\
What remains to be done is estimating the integrals $$\begin{aligned}
B_j=H^2(\log^2 P_j)(\log^{10(j-1)}X)\int_{\mathcal{T}_j}|Q_{v_j,H,j}(1+it)G_{v_j,H,j}(1+it)|^2 dt\end{aligned}$$ for $j=1,2$, as well as $$\begin{aligned}
B=H^2(\log^{10} X)\int_{\mathcal{T}}|Q_{v_2,H,2}(1+it)G_{v_2,H,2}(1+it)|^2 dt.\end{aligned}$$ We have $B_1\ll \left(\frac{TP_1\log X}{X}+1\right)\frac{P_1^{10\varepsilon-\alpha_1}}{\log^2 X}$ by Proposition \[p1\] and Lemma \[15\], and this is small enough since $\alpha_1= 100\varepsilon$. We also have, by Proposition \[p2\] with $\ell=\lceil \frac{\log P_2}{\log P_1}\rceil$, $$\begin{aligned}
B_2&\ll H^2(\log^{20} X)P_2^{-2\alpha_2}P_1^{(2+10\varepsilon)\alpha_1\ell}\ell^{(1+o(1))\ell}\\
&\ll P_2^{2(\alpha_1-\alpha_2)+20\varepsilon+\frac{1+2\varepsilon}{a}}\\
&\ll P_2^{-\varepsilon}\ll (\log X)^{-100},\end{aligned}$$ as long as $a\geq \frac{1}{2(\alpha_2-\alpha_1)}+\frac{C_2}{2}\varepsilon$, say. Lastly, Proposition \[p8\] gives, for some well-spaced $\mathcal{U}'$ of the type mentioned in the proposition, $$\begin{aligned}
B\ll (\log X)^{-50}+X^{\frac{1}{2}-\min\{2\sigma(\nu),\frac{\nu}{2}\}+o(1)}\frac{|\mathcal{U}'|X^{\nu+o(1)}}{X}\ll (\log X)^{-50}\end{aligned}$$ by our assumption on $\mathcal{U}'$. Now the proof is complete.
Cases of $\Sigma_1(h)$ and $\Sigma_2(h)$ {#subsec:Sigma1}
----------------------------------------
Let $\lambda_d^{+}$ and $\lambda_d^{-}$ be the sieve weights of Brun’s pure sieve with $R=2\lfloor (\log \log X)^{\frac{3}{2}}\rfloor$ and sieving parameter $w=\exp(\frac{\log X}{(\log \log X)^3})$. We have $$\begin{aligned}
\sum_{x\leq p_1dn\leq x+h\atop P_1\leq p_1\leq P_1^{1+\varepsilon} }\lambda_d^{-}\leq \sum_{\substack{\frac{x}{p_1}\leq n\leq \frac{x+h}{p_1}\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\(n,\mathcal{P}(w))=1\\n>1}}1=\Sigma_1(h)\leq \sum_{x\leq p_1dn\leq x+h\atop P_1\leq p_1\leq P_1^{1+\varepsilon}}\lambda_d^{+}.\end{aligned}$$ We consider the lower bound; the upper bound can be considered similarly. Letting $X_1=\frac{X}{T_0^3}$ with $T_0=X^{0.01}$, we have $$\begin{aligned}
\frac{h}{X_1}\sum_{P_1\leq p_1\leq P_1^{1+\varepsilon}\atop d\mid \mathcal{P}(w)}\lambda_d^{-}\sum_{\frac{X}{p_1d}\leq n\leq \frac{X+X_1}{p_1 d}}1=h\sum_{P_1\leq p_1\leq P_1^{1+\varepsilon}\atop d\mid \mathcal{P}(w)}\frac{\lambda_d^{-}}{p_1d}+O\left(\frac{h}{X_1}w^{R}P_1^{1+\varepsilon}\right),\end{aligned}$$ so $$\begin{aligned}
\label{eq40}
\Sigma_1(h)&\geq \sum_{d\mid \mathcal{P}(w)\atop P_1\leq p_1\leq P_1^{1+\varepsilon}}\lambda_d^{-}\frac{h}{p_1d}+\left(\sum_{x\leq p_1dn\leq x+h\atop P_1\leq p_1\leq P_1^{1+\varepsilon}}\lambda_d^{-}-\frac{h}{X_1}\sum_{X\leq p_1dn\leq X+X_1\atop P_1\leq p_1\leq P_1^{1+\varepsilon}}\lambda_d^{-}\right)\\
&\quad+O\left(\frac{1}{\log^{100} X}\right).\nonumber\end{aligned}$$ By the fundamental lemma of the sieve (see e.g. [@friedlander Chapter 6]), we further deduce that $$\begin{aligned}
\sum_{d\mid \mathcal{P}(w)\atop P_1\leq p_1\leq P_1^{1+\varepsilon}}\lambda_d^{-}\frac{h}{p_1d}&=(1+O((\log X)^{-100}))\sum_{d\mid \mathcal{P}(w)\atop P_1\leq p_1\leq P_1^{1+\varepsilon}}\lambda_d^{+}\frac{h}{p_1d}\\
&\geq \frac{h}{X}\Sigma_1(X)+O\left(\frac{h}{\log^{100} X}\right).\end{aligned}$$ Therefore, we may concentrate on the expression in the parentheses in \eqref{eq40}, which is a difference between a short and a long average. By Lemma \[1\], it is $o\left(\frac{h}{\log X}\right)$ for $h\geq P_1\log X$ and for almost all $x\leq X$, provided that $$\begin{aligned}
\int_{T_0}^{T}|F(1+it)|^2dt=o\left(\left(\frac{TP_1\log X}{X}+1\right)\frac{1}{\log^2 X}\right)\end{aligned}$$ for all $T\geq T_0$, where $T_0=X^{0.01}$, and $$\begin{aligned}
F(s)=\sum_{p_1dn\sim X\atop P_1\leq p_1\leq P_1^{1+\varepsilon}}\lambda_d^{-}(p_1dn)^{-s}.\end{aligned}$$
Such an estimate is given by the following proposition, which is invoked again in the case of the sum $\Sigma_2(h)$.
\[p7\]Let $\varepsilon>0$, $P_1=\log^{a} X$ with $a\geq 2+C_3\varepsilon$ and $$\begin{aligned}
F(s)=\sum_{p_1dn\sim X\atop P_1\leq p_1\leq P_1^{1+\varepsilon}}\lambda_d^{\pm}(p_1dn)^{-s}\quad \text{or}\quad F(s)=\sum_{\substack{p_1pdn\sim X\\ P_1\leq p_1\leq P_1^{1+\varepsilon}\\M\leq p\leq M^{1+\varepsilon}}}\lambda_d^{\pm}(p_1pdn)^{-s}\end{aligned}$$ with $M\ll X^{\frac{1}{2}+o(1)}$ , $X^{1+o(1)}\gg T\geq T_0=X^{0.01}$ as before, and either $+$ or $-$ sign chosen throughout. Then, $$\begin{aligned}
\int_{T_0}^{T}|F(1+it)|^2 dt\ll \left(\frac{TP_1\log X}{X}+1\right)\frac{1}{\log^{2+\varepsilon} X}.\end{aligned}$$
Let $D$ be a large constant, and for positive integer $v$ and $H=\log^{10\varepsilon} X$ denote $$\begin{aligned}
P_{v,H}(s)=\sum_{e^{\frac{v}{H}}\leq p<e^{\frac{v+1}{H}}}p^{-s}\end{aligned}$$ and $$\begin{aligned}
F_{v,H}(s)=\sum_{dn\sim Xe^{-\frac{v}{H}}}\lambda_{d}^{\pm}(dn)^{-s}\quad \text{or}\quad F_{v,H}(s)=\sum_{pdn\sim Xe^{-\frac{v}{H}}\atop M\leq p\leq M^{1+\varepsilon}}\lambda_{d}^{\pm}(pdn)^{-s}.\end{aligned}$$ Lemma \[6\] gives $$\begin{aligned}
\label{eq11}
\int_{T_0}^{T}|F(1+it)|^2dt&\ll H^2(\log \log X)^2\int_{T_0}^{T}|P_{v_0,H}(1+it)F_{v_0,H}(1+it)|^2dt\nonumber\\
&+T\sum_{n\in [Xe^{-\frac{1}{H}},Xe^{\frac{1}{H}}]\, or\atop n\in [2X,2Xe^{\frac{1}{H}}]}|a_n|^2+T\sum_{1\leq h\leq \frac{X}{T}}\sum_{\substack{m-n=h\\ m,n\in [Xe^{-\frac{1}{H}},Xe^{\frac{1}{H}}]\,or\\ m,n\in [2X,2Xe^{\frac{1}{H}}]}}|a_m||a_n|,\end{aligned}$$ for some $v_0\in I_0$, where $I_0=[H\log P_1,H\log P_1^{1+\varepsilon}]$ and $$\begin{aligned}
\label{eq2}
a_m=\sum_{p_1\mid m\atop P_1\leq p_1\leq P_1^{1+\varepsilon}}\left|\sum_{m=p_1dn}\lambda_d^{\pm}\right|\quad \text{or}\quad a_m=\sum_{p_1\mid m\atop P_1\leq p_1\leq P_1^{1+\varepsilon}}\left|\sum_{m=p_1pdn\atop M\leq p\leq M^{1+\varepsilon}}\lambda_d^{\pm}\right|\end{aligned}$$
Lemma \[5\] shows that the last two terms in \eqref{eq11} contribute, for some constant $C>0$, $$\begin{aligned}
&\ll \frac{T}{X}\left(\frac{(\log \log X)^{C}}{H}\cdot \frac{1}{\log X}+\frac{(\log \log X)^C}{H}\cdot \frac{X}{T}\cdot \frac{1}{\log^2 X}\right)\nonumber\\
&\ll \left(\frac{TP_1\log X}{X}+1\right)\cdot\frac{1}{\log^{2+\varepsilon} X}\end{aligned}$$ by the definition of $H$. We are now left with estimating the integral in \eqref{eq11}. We consider it in two parts, namely the part over $\mathcal{T}_1$ and its complement, with $$\begin{aligned}
\mathcal{T}_1=\{t\in [T_0,T]:|P_{v_0,H}(1+it)|\leq P_1^{-100\varepsilon}\}.\end{aligned}$$
The case of $\mathcal{T}_1$ is dealt with using Proposition \[p1\] and Lemma \[5\], and it contributes $$\begin{aligned}
&\ll H^2(\log \log X)^2\frac{T}{X}P_1^{1-200\varepsilon}\left(S_1\left(\frac{X}{P_1},(a_n)\right)+S_2\left(\frac{X}{P_1},(a_n)\right)\right)\\
&\ll (\log \log X)^C\left(\frac{T}{X}\cdot \frac{1}{\log X}+\frac{1}{P_1}\cdot \frac{1}{\log^2 X}\right)\cdot P_1^{1-100\varepsilon}\\
&\ll \left(\frac{TP_1\log X}{X}+1\right)\cdot \frac{1}{\log^{2+\varepsilon} X},\end{aligned}$$ where the coefficients $a_n$ involved in the definition of $S_i(X,(a_n))$ are given by \eqref{eq2}.\
We turn to the integral over the complement of $\mathcal{T}_1$ and resort to the Watt-type Proposition \[p5\]. Let $\ell$ be a large positive integer such that $P_1^{\ell}= X^{\varepsilon+o(1)}$. Letting $N_a(s)=\sum_{n\sim Xe^{-a}}n^{-s}$ and $$\begin{aligned}
M_{v,H}(s)=\sum_{\substack{e^{\frac{v}{H}}\leq p_1d<e^{\frac{v+1}{H}}\\P_1\leq p_1\leq P_1^{1+\varepsilon}}}\lambda_{d}^{\pm}(p_1d)^{-s}\quad \text{or}\quad M_{v,H}(s)=\sum_{\substack{e^{\frac{v}{H}}\leq p_1pd<e^{\frac{v+1}{H}}\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\M\leq p\leq M^{1+\varepsilon}}}\lambda_{d}^{\pm}(p_1pd)^{-s},\end{aligned}$$ an application of Perron’s formula to separate variables, along with Lemma \[5\] and $|P_{v_0,H}(1+it)P_1^{100\varepsilon}|^{2\ell}\geq 1$, yields $$\begin{aligned}
\label{eq9}
&\int_{[T_0,T]\setminus \mathcal{T}_1}|F(1+it)|^2 dt\nonumber\\
&\ll H^2(\log^{10} X)P_1^{200\varepsilon \ell} \int_{T_0}^{T}|P_{v_0,H}(1+it)^{\ell}M_{v_1,H}(1+it)N_{\frac{v_1}{H}}(1+it)|^2 dt\\
&+\left(\frac{TP_1\log X}{X}+1\right)\cdot\frac{1}{\log^{2+\varepsilon} X}\nonumber\end{aligned}$$ for some $v_1\in I_1,$ where $I_1=[H\log M,H\log (M^{1+\varepsilon}w^R)].$ Now Proposition \[p5\] with $N(s)=N_{\frac{v_1}{H}}(s),$ $M(s)=M_{v_1,H}(s),$ $P(s)\equiv 1,Q(s)=P_{v_0,H}(s)^{\ell}$ and $\ell=\lfloor \frac{\varepsilon \log X}{\log P_1}\rfloor$ bounds with $$\begin{aligned}
\label{eq41}
X^{o(1)}P_1^{200\varepsilon \ell}\left(Q^{-1}+\frac{1}{T_0}\right)(\ell!)^2\ll (P_1^{-1}(\log^2 X))^{(1+o(1))\ell}+X^{-\varepsilon}\ll X^{-\varepsilon^2}\end{aligned}$$ for $a\geq 2+C_3\varepsilon$, since the condition $M^2P\ll X^{1+o(1)}$ certainly holds.
Note that Proposition \[p7\] immediately shows that $$\begin{aligned}
\frac{1}{h}\Sigma_1(h)-\frac{1}{X_1}\Sigma_1(X_1)\geq o\left(\frac{1}{\log X}\right)\end{aligned}$$ for almost all $x\leq X$, where $X_1=\frac{X}{T_0^3}$. Taking into account formula and repeating the above argument with lower bound sieve weights replaced with upper bound sieve weights, we see that the reverse inequality holds, so $\frac{1}{h}\Sigma_1(h)$ can be replaced with its dyadic counterpart $\frac{1}{X}\Sigma_1(X)$ almost always.\
Now we deal with $\Sigma_2(h)$. We use the same strategy, so that for example for the lower bound we start with $$\begin{aligned}
\Sigma_2(h) \geq \sum_{\substack{x\leq p_1pdn\leq x+h\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\w\leq p<\sqrt{x}}}\lambda_d^{-},\end{aligned}$$ an inequality that is valid even when the interval $\left[\frac{x}{p_1p},\frac{x+h}{p_1p}\right]$ contains no integers. This leads us to study the Dirichlet polynomial $$\begin{aligned}
F^{*}(s)=\sum_{\substack{p_1pdn\sim X\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\w\leq p<\sqrt{x}}}\lambda_d^{-}(p_1pdn)^{-s},\end{aligned}$$ where the variable $p$ can be divided into $\ll \log \log X$ intervals of the form $[M,M^{1+\varepsilon}]$ with $M\ll X^{\frac{1}{2}+o(1)}$ (the value of $\varepsilon$ may be varied so that the division becomes exact). For each of these Dirichlet polynomials where $p$ is restricted, Proposition \[p7\] gives a bound of $\left(\frac{TP_1\log X}{X}+1\right)(\log X)^{-2-\varepsilon}$ for their second moment. Now by the same argument as for $\Sigma_1(h)$, we infer that $\frac{1}{h}\Sigma_2(h)$ can also be replaced with its dyadic counterpart $\frac{1}{X}\Sigma_2(X)$ almost always.
Case of $\Sigma_3(h)$ {#subsec:sigma}
---------------------
We are left with the sum $\Sigma_3(h)$. This is the case that determines which value of $a$ we obtain (and hence the value of $c$, which is just $a+1$), since so far in all cases $a\geq 2+C_4\varepsilon$ has been a sufficient assumption. We will establish the value $a=2.51$.\
Let $\beta_1,\beta_2,\beta\in (\frac{1}{6},\frac{1}{2})$ be parameters which are given the values $$\begin{aligned}
\beta_1=0.1680,\quad \beta_2=0.1803,\quad \beta=0.1950\end{aligned}$$ to optimize various subsequent conditions. We split $\Sigma_3(h)$ into three parts $\Sigma_3^{(1)}(h),\Sigma_3^{(2)}(h)$ and $\Sigma_3^{(3)}(h)$, say, the first sum being a type II sum that can be evaluated asymptotically, the second being a type I sum (after Buchstab’s identity) that can mostly be evaluated asymptotically, and the third being a type II sum that can be transformed into Buchstab integrals whose value is suitably small. Explicitly, let $$\begin{aligned}
\Sigma_3^{(i)}(h)=\sum_{\substack{x\leq p_1q_1q_2n\leq x+h\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\(q_1,q_2)\in A_i\\(n,\mathcal{P}(q_2))=1\\n>1}}1,\quad i=1,2,3\end{aligned}$$ with $$\begin{aligned}
A_1=&\{(q_1,q_2):\,\,w\leq q_2<q_1,\,\, \text{one of}\,\, q_1,q_2\in [w,X^{\beta_1}]\cup [X^{\beta_2},X^{\beta}]\},\\
A_2=&\{(q_1,q_2):\,\,w\leq q_2<q_1,\,\, \text{either}\,\, q_1^2q_2^3\leq X\,\,\text{or}\,\, q_1q_2^4\leq X,\,\,q_1\leq X^{\frac{1}{4}-2\varepsilon}\}\setminus A_1\\
A_3=&\{(q_1,q_2):\,\,w\leq q_2<q_1\leq X^{\frac{1}{2}}\}\setminus (A_1\cup A_2).\end{aligned}$$
The underlying idea is that the small variable in $A_1$ enables efficient use of large values theorems, the conditions in $A_2$ make it possible to apply Watt’s theorem (after two applications of Buchstab’s identity), and the remaining set $A_3$ can be shown to make a sufficiently small contribution. We study the sums $\Sigma_3^{(i)}(h)$ separately, starting with $\Sigma_3^{(1)}(h)$.
### Type II sums
We consider the Type II sum $\Sigma_3^{(1)}(h)$. In order to prove that $\frac{1}{h}\Sigma_3^{(1)}(h)$ is asymptotically $\frac{1}{X}\Sigma_3^{(1)}(X)$ almost always, it suffices to prove that $\frac{1}{h}\Sigma_3^{(1)}(h)$ is asymptotically $\frac{1}{X_1}\Sigma_3^{(1)}(X_1)$ almost always with $X_1=\frac{X}{T_0^3}$, and then apply the prime number theorem in short intervals. For this latter asymptotic equivalence, it suffices to show that the Dirichlet polynomial $$\begin{aligned}
G(s)=\sum_{\substack{p_1q_1q_2n\sim X\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\Q_i\leq q_i\leq Q_i^{1+\varepsilon},\, i\leq 2\\q_2<q_1\\(n,\mathcal{P}(q_2))=1\\n>1}}(p_1q_1q_2n)^{-s}\end{aligned}$$ satisfies $$\begin{aligned}
\int_{T_0}^T |G(1+it)|^2 dt\ll \left(\frac{TP_1\log X}{X}+1\right)\frac{1}{\log^{2+\varepsilon} X}\end{aligned}$$ with $T\leq X^{1+o(1)}$, $T_0=X^{0.01}$, $P_1=\log^a X$ and $Q_1,Q_2\geq w$ otherwise arbitrary, but either $Q_1$ or $Q_2$ is of size $X^{\nu+o(1)}$ with $\nu \in [0,\beta_1]\cup [\beta_2,\beta]$. These cases are similar, so assume $Q_2=X^{\nu+o(1)}$ with $\nu$ as above.\
This is the setting of Proposition \[p6\]. Therefore, if for every polynomial of the form $$\begin{aligned}
M(s)=\sum_{m\sim M}\frac{b_m}{m^s},\end{aligned}$$ with $M=X^{\nu+o(1)}$ and $|b_m|\leq d_r(n)$ for fixed $r$, any well-spaced set $$\begin{aligned}
\mathcal{U}'\subset \{t\in [0,T]:\,\, |M(1+it)|\geq M^{-\alpha_2}\}\end{aligned}$$ satisfies $$\begin{aligned}
|\mathcal{U}'|\ll X^{\frac{1}{2}-\nu+\min\{2\sigma(\nu),\frac{\nu}{2}\}-\varepsilon},\end{aligned}$$ the sum $\Sigma_3^{(1)}(h)$ has the anticipated asymptotic for $a\geq \frac{1}{2\alpha_2}+C_5\varepsilon$. Of course, we fix $\alpha_2=\frac{1}{2\cdot 2.51}+C_6\varepsilon$.\
We are left with estimating $|\mathcal{U}'|$, and to this end we utilize Jutila’s large values theorem. Jutila’s large values theorem (Lemma \[13\]) applied to the $\ell$th moment of $M(s)$ can be reformulated to say that if $$\begin{aligned}
\mathcal{R}(\nu,\alpha_2,k,\ell)=\max\left\{2\nu \alpha_2 \ell,\left(6-\frac{2}{k}\right)\nu \alpha_2 \ell+1-2\nu \ell,\,\,1+8k\ell\nu \alpha_2-2k\ell \nu\right\}\end{aligned}$$ and $$\begin{aligned}
\overline{\mathcal{R}}(\nu,\alpha_2)=\min_{k,\ell\in \{1,2,...\}}\mathcal{R}(\nu,\alpha_2,k,\ell),\end{aligned}$$ then $|\mathcal{U}'|\ll X^{\overline{\mathcal{R}}(\nu,\alpha_2)+o(1)}$. It turns out that the case $k=3$ is always optimal for us, and it suffices to restrict to $4\leq \ell\leq 12$ (so our upper bound for $\overline{\mathcal{R}}(\nu,\alpha_2)$ is a minimum of $9$ piecewise linear functions). Now we check that, with our choices of $\beta_1, \beta_2, \beta$ and $\alpha_2$, $$\begin{aligned}
\overline{\mathcal{R}}(\nu,\alpha_2)\leq \frac{1}{2}-\nu+\min\left\{2\sigma(\nu),\frac{\nu}{2}\right\}-\varepsilon\end{aligned}$$ for $\nu \in [0.05,\beta_1]\cup [\beta_2,\beta]$. Verifying this is straightforward, because both sides are piecewise linear functions.[^4]\
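As a purely numerical illustration of this verification (with the $\varepsilon$- and $o(1)$-terms dropped, so that near the optimized endpoints the slack is essentially zero), the following sketch evaluates both sides on a grid over $[0.05,\beta_1]\cup[\beta_2,\beta]$ with $k=3$, $4\leq\ell\leq 12$ and $\alpha_2=\frac{1}{2\cdot 2.51}$, and reports the minimal slack.

```python
alpha2 = 1 / (2 * 2.51)
beta1, beta2, beta = 0.1680, 0.1803, 0.1950

def sigma(nu):
    return -min((1 - nu) / 126 - nu / 21, 0)

def R(nu, a2, k, ell):
    return max(2 * nu * a2 * ell,
               (6 - 2 / k) * nu * a2 * ell + 1 - 2 * nu * ell,
               1 + 8 * k * ell * nu * a2 - 2 * k * ell * nu)

def Rbar(nu, a2):
    return min(R(nu, a2, 3, ell) for ell in range(4, 13))

def rhs(nu):
    return 0.5 - nu + min(2 * sigma(nu), nu / 2)

def grid(a, b, n=2000):
    return [a + (b - a) * i / n for i in range(n + 1)]

nus = grid(0.05, beta1) + grid(beta2, beta)
print(min(rhs(nu) - Rbar(nu, alpha2) for nu in nus))   # positive, though tiny near the endpoints
```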
We must also prove the desired estimate for $|\mathcal{U}'|$ in the range $\nu\in [0,0.05).$ In this case, we do not appeal to Jutila’s large values theorem, but to Lemma \[7\] (along with its remark), which tells us that $$\begin{aligned}
|\mathcal{U}'|\ll T^{2\alpha_2}X^{2\alpha_2\nu+o(1)}\ll X^{0.42}<X^{\frac{1}{2}-\nu-\varepsilon}\end{aligned}$$ for the same value $\alpha_2=\frac{1}{2\cdot 2.51}+C_6\varepsilon$. This means that for $c=3.51$, $\frac{1}{h}\Sigma_3^{(1)}(h)$ can be replaced with its dyadic counterpart almost always.
### Type I sums
We turn to the sum $\Sigma_3^{(2)}(h)$. By applying Buchstab’s identity twice, we find that $$\begin{aligned}
\Sigma_3^{(2)}(h)&=\sum_{\substack{x\leq p_1q_1q_2n\leq x+h\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\(q_1,q_2)\in A_2\\(n,\mathcal{P}(w))=1\\n>1}}1-\sum_{\substack{x\leq p_1q_1q_2q_3n\leq x+h\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\(q_1,q_2)\in A_2\\w\leq q_3<q_2\\(n,\mathcal{P}(w))=1\\n>1}}1+\sum_{\substack{x\leq p_1q_1q_2q_3q_4n\leq x+h\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\(q_1,q_2)\in A_2\\w\leq q_4<q_3<q_2\\(n,\mathcal{P}(q_4))=1\\n>1}}1.\end{aligned}$$ Call these sums $\Sigma_3^{(2,1)}(h),\Sigma_3^{(2,2)}(h)$ and $\Sigma_3^{(2,3)}(h)$, respectively. We show that $\frac{1}{h}\Sigma_3^{(2,1)}(h)$ and $\frac{1}{h}\Sigma_3^{(2,2)}(h)$ can be replaced with their dyadic counterparts almost always. We confine to studying $\Sigma_3^{(2,2)}(h)$, as $\Sigma_3^{(2,1)}(h)$ is easier to handle.\
In $\Sigma_3^{(2,2)}(h)$ we may make the additional assumption that all the variables except $p_1$ are in the intervals $[X^{\beta_1},X^{\beta_2}]\cup[X^{\beta},X]$, since otherwise the sum can be dealt with in the same way as $\Sigma_3^{(1)}(h)$. We may also assume that $q_i\in [Q_i,Q_i^{1+\varepsilon}]$ for some $Q_i$. Defining $$\begin{aligned}
F(s)=\sum_{\substack{p_1q_1q_2q_3dn\sim X\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\Q_i\leq q_i\leq Q_i^{1+\varepsilon}\\(q_1,q_2)\in A_2}}\lambda_d^{\pm} (p_1q_1q_2q_3dn)^{-s},\end{aligned}$$ with $\lambda_d^{\pm}$ the same Brun’s sieve weights as before (the sign being the same throughout), and taking into account the prime number theorem in short intervals and Lemmas \[1\] and \[6\], it suffices to show that $$\begin{aligned}
\int_{T_0}^{T}|F(1+it)|^2 dt\ll \left(\frac{TP_1\log X}{X}+1\right)\frac{1}{\log^{2+\varepsilon} X}.\end{aligned}$$ This bound is achieved similarly as in Proposition \[p7\]. Indeed, if $\mathcal{T}_1$ is defined as in the proof of that proposition, the integral over $\mathcal{T}_1$ can be estimated in the same way as in that proposition. In the complementary case, we separate all the variables, and it remains to show that $$\begin{aligned}
&\int_{[T_0,T]\setminus \mathcal{T}_1} |P_1(1+it)Q_1(1+it)Q_2(1+it)Q_3(1+it)D(1+it)N(1+it)|^2 dt\\
&\ll (\log X)^{-100},\end{aligned}$$ where $N(s)$ is a zeta sum, $P_1(s)$ and $Q_i(s)$ are polynomials supported on primes, and $D(s)$ has the sieve weights $\lambda_d$ as its coefficients (actually, $D(s)$ can be neglected by simply estimating it pointwise). Moreover, the lengths $P_1,Q_i, D$ and $N$ are from the same intervals as $p_1,q_i,d$ and $n$, respectively (in particular, $d\leq \exp\left(\frac{\log X}{\log \log X}\right)$). We appeal to Proposition \[p5\] with $Q(s)=P_1(s)^{\ell}$, $P_1^{\ell}=X^{\varepsilon}$ and with $M(s)$ either $Q_1(s)Q_3(s)$ or $Q_2(s)Q_3(s)$. If $M(s)=Q_1(s)Q_3(s)$, the condition for Proposition $\ref{p5}$ is $Q_2\leq X^{\frac{1}{4}-2\varepsilon}$, $(Q_1Q_3)^2Q_2\leq X$. If in turn $M(s)=Q_2(s)Q_3(s)$, the condition for Proposition \[p5\] is $Q_1\leq X^{\frac{1}{4}-2\varepsilon}$, $Q_1(Q_2Q_3)^2\leq X$, and one of these conditions is always satisfied in our domain $A_2$, since $Q_3\leq Q_2$ and automatically $Q_2\leq X^{\frac{1}{5}}$. Now it follows from that for $a\geq 2+C_7\varepsilon$, $\Sigma_3^{(2,2)}(h)$ has the desired asymptotic, and $\Sigma_3^{(2,1)}(h)$ can be evaluated similarly.\
In the sum $\Sigma_3^{(2,3)}(h)$, we may again assume that all the variables lie in the intervals $[X^{\beta_1},X^{\beta_2}]\cup[X^{\beta},X]$, as otherwise we can use the type II sum argument. Let $\Sigma_3^{(2,4)}(h)$ be what remains of $\Sigma_3^{(2,3)}(h)$ after this reduction. The sum $\Sigma_3^{(2,4)}$ results in a Buchstab integral, and hence is postponed to Subsection \[subsubsec:buchstab\].
### Buchstab integrals {#subsubsec:buchstab}
We are left with the sums $\Sigma_3^{(3)}(h)$ and $\Sigma_3^{(2,4)}(h)$, for which no asymptotic was found. We want to show that $$\begin{aligned}
\frac{1}{X}\Sigma_3^{(3)}(X)+\frac{1}{X}\Sigma_3^{(2,4)}(X)\leq (1-\varepsilon)\frac{1}{X}S_X,\end{aligned}$$ which would complete the proof of Theorem \[t5\], taking into account the estimates and . The following lemma allows us to transform our sums into Buchstab integrals.
Let a positive integer $k$ and $\eta>0$ be fixed. Let $$\begin{aligned}
A\subset \{(u_1,...,u_k)\in \mathbb{R}^k:\,\, u_1,...,u_k\geq \eta,\,\, u_1+...+u_k\leq 1-\eta\}\end{aligned}$$ be any set such that $1_A$ is Riemann integrable. For a point $q=(q_1,...,q_k)\in \mathbb{R}^k$ and $X\geq 2$, define $\mathcal{L}(q)=(\frac{\log q_1}{\log X},...,\frac{\log q_k}{\log X})$. Then $$\begin{aligned}
&\sum_{\substack{p_1q_1\dotsm q_k n\sim X\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\\mathcal{L}(q_1,...,q_k)\in A\\(n,\mathcal{P}(q_k))=1}}1\\
&=(1+o(1))\log(1+\varepsilon)\frac{X}{\log X}\int_{(u_1,...,u_k)\in A}\omega\left(\frac{1-u_1-\dotsm -u_k}{u_k}\right)\frac{du}{u_1\dotsm u_{k-1}u_k^2},\end{aligned}$$ where $\omega(\cdot)$ is Buchstab’s function.
It suffices to prove the statement in the case that $A$ is a box, that is, a set of the form $I_1\times ...\times I_k$ with $I_i$ intervals. Indeed, if the statement holds for boxes, then it holds for finite unions of boxes. Moreover, since $1_A$ is Riemann integrable, for every $\delta>0$ there is a finite union $\mathcal{B}$ of boxes such that $A\setminus \mathcal{B}$ has measure at most $\delta$. The part of $A$ not contained in $\mathcal{B}$ contributes at most $\eta^{-k-1}\delta$ to the integral, and as $\delta\to 0$, this becomes arbitrarily small.\
Now let $A$ be a box. Using the connection between Buchstab’s function and the sieving function (see the Appendix of Harman’s book [@harman-sieves]), summing partially, and using the change of variables $u_i=\frac{\log v_i}{\log X}$, we see that $$\begin{aligned}
\sum_{\substack{p_1q_1\dotsm q_k n\sim X\\P_1\leq p_1\leq P_1^{1+\varepsilon}\\\mathcal{L}(q_1,...,q_k)\in A\\(n,\mathcal{P}(q_k))=1}}1&=\sum_{\substack{P_1\leq p_1\leq P_1^{1+\varepsilon}\\\mathcal{L}(q_1,...,q_k)\in A}}S\left(\left[\frac{X}{p_1q_1\dotsm q_k},\frac{2X}{p_1q_1\dotsm q_k}\right],\mathbb{P},q_k\right)\\
&=(1+o(1))\sum_{P_1\leq p_1\leq P_1^{1+\varepsilon}\atop \mathcal{L}(q_1,...,q_k)\in A}\frac{X}{p_1q_1\dotsm q_k\log q_k}\omega\left(\frac{\log \frac{X}{p_1q_1\dotsm q_k}}{\log q_k}\right)\\
&=(1+o(1))\sum_{P_1\leq p_1\leq P_1^{1+\varepsilon}}\frac{1}{p_1}\sum_{\mathcal{L}(q_1,...,q_k)\in A}\frac{X}{q_1...q_k\log q_k}\omega\left(\frac{\log \frac{X}{q_1...q_k}}{\log q_k}\right)\\
&=(b+o(1))\int\limits_{\mathcal{L}(v_1,...,v_k)\in A}\frac{X}{v_1\dotsm v_k\log v_1\dotsm \log^2 v_k}\omega\left(\frac{\log \frac{X}{v_1\dotsm v_k}}{\log v_k}\right)dv\\
&=(b+o(1))\frac{X}{\log X}\int\limits_{(u_1,...,u_k)\in A}\frac{1}{u_1\dotsm u_{k-1}u_k^2}\omega\left(\frac{1-u_1-\dotsm -u_k}{u_k}\right)du\end{aligned}$$ with $b=\log(1+\varepsilon)$, as wanted.
Let $$\begin{aligned}
A_3^{*}=&\{(u_1,u_2):\,\, u_2<u_1,\,\,u_1,u_2\in [\beta_1,\beta_2]\cup [\beta,\frac{1}{2}],\,\,2u_1+3u_2\geq 1,\\
&\,\,\max\{u_1+4u_2,4u_1-10\varepsilon\}\geq 1\},\\
A_2^{*}=&\{(u_1,u_2,u_3,u_4):\,\, \beta_1\leq u_4<u_3<u_2<u_1,\,\,u_1,u_2,u_3,u_4\not\in [\beta_2,\beta],\,\, (u_1,u_2)\in A_2\}\end{aligned}$$ be the sets corresponding to the summation conditions in $\Sigma_3^{(3)}(X)$ and $\Sigma_3^{(2,4)}(X)$, respectively. The lemma above directly implies that $$\begin{aligned}
\frac{1}{X}\Sigma_3^{(3)}(X)&=\frac{(1+o(1))\log(1+\varepsilon)}{\log X}J_1,\\
\frac{1}{X}\Sigma_3^{(2,4)}(X)&=\frac{(1+o(1))\log(1+\varepsilon)}{\log X}J_2,\\
\frac{1}{X}S_X&=\frac{(1+o(1))\log(1+\varepsilon)}{\log X}\end{aligned}$$ where $J_1$ and $J_2$ are given by $$\begin{aligned}
J_1&=\int\limits_{(u_1,u_2)\in A_3^{*}}\omega\left(\frac{1-u_1-u_2}{u_2}\right)\frac{du}{u_1u_2^2},\\
J_2&=\int\limits_{(u_1,u_2,u_3,u_4)\in A_2^{*}}\omega\left(\frac{1-u_1-u_2-u_3-u_4}{u_4}\right)\frac{du}{u_1u_2u_3u_4^2}.\end{aligned}$$ To compute $J_1$, we approximate Buchstab’s function by $$\begin{aligned}
\omega(u)\leq \begin{cases}0,\quad u<1\\\frac{1}{u},\quad 1\leq u\leq 2\\\frac{1+\log(u-1)}{u},\quad 2\leq u\leq 3\\\frac{1+\log 2}{3},\quad u>3\end{cases}\end{aligned}$$ For $u\leq 3$ this is an equality, and for $u>3$ the bound is very sharp (it differs from the limiting value $e^{-\gamma}$, where $\gamma$ is Euler’s constant, by less than $0.003$), but we only need the fact that it is an upper bound. We compute with Mathematica that $J_1<0.988$ (when $\varepsilon$ in the definition of $A_3^{*}$ is small enough).[^5] The integral $J_2$ only gives a minor contribution, and hence can be estimated crudely as $$\begin{aligned}
J_2&\leq \beta_1^{-5}\int\limits_{(u_1,u_2,u_3,u_4)\in A_2^{*}\atop u_1+u_2+u_3+2u_4\leq 1}du\\
&<\beta_1^{-5}\int\limits_{\substack{\beta_1<u_4<u_3<u_2<u_1\\u_1+u_2+u_3+2u_4\leq 1}}du<0.007\end{aligned}$$ with Mathematica (the last integral could actually be evaluated exactly). To sum up, we have $J_1+J_2<0.995<1-\varepsilon$, and this means, in view of , that with our parameter choices $\beta_1,\beta_2,\beta$, the sums $\Sigma_3^{(3)}(X)$ and $\Sigma_3^{(2,4)}(X)$ can be discarded. Now, from and we have $\frac{1}{h}S_h(x)\geq \varepsilon\cdot \frac{1}{X}S_X$, so Theorem \[t5\] is proved.[$\Box$]{}
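For a quick sanity check of the numerical value of $J_1$ without Mathematica, the following Python sketch evaluates the Buchstab integral by a plain midpoint rule, using the piecewise upper bound for $\omega$ given above. This is only an approximation, not the rigorous upper Riemann sum used in the actual verification; the values of $\beta_1,\beta_2,\beta$ are the approximate ones suggested by the footnotes (the exact values are fixed earlier in the paper), and $\varepsilon$ is set to $0$.

```python
import numpy as np

# Rough numerical check of J_1 (midpoint rule, not the rigorous upper Riemann sum).
# The parameter values below are the approximate ones quoted in the footnotes.
b1, b2, b = 15311 / 91112, 16315 / 90496, 15311 / 78512

def omega_upper(u):
    """Piecewise upper bound for Buchstab's function used in the text."""
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    m1, m2, m3 = (u >= 1) & (u <= 2), (u > 2) & (u <= 3), u > 3
    out[m1] = 1.0 / u[m1]
    out[m2] = (1.0 + np.log(u[m2] - 1.0)) / u[m2]
    out[m3] = (1.0 + np.log(2.0)) / 3.0
    return out

n = 1200
edges = np.linspace(b1, 0.5, n + 1)
mid = 0.5 * (edges[:-1] + edges[1:])
du = edges[1] - edges[0]
U1, U2 = np.meshgrid(mid, mid, indexing="ij")

allowed = lambda u: ((u >= b1) & (u <= b2)) | ((u >= b) & (u <= 0.5))
in_A = ((U2 < U1) & allowed(U1) & allowed(U2)
        & (2 * U1 + 3 * U2 >= 1) & (np.maximum(U1 + 4 * U2, 4 * U1) >= 1))
integrand = omega_upper((1 - U1 - U2) / U2) / (U1 * U2 ** 2)
print("J_1 approx:", np.sum(np.where(in_A, integrand, 0.0)) * du * du)
```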
We can now observe that $c=3+\varepsilon$ is the limit of this method. Indeed, we are forced to take $\alpha_2\leq \frac{1}{4}$ in the type II case, because nothing nontrivial is known about the large values of Dirichlet polynomials beyond this region, and consequently $a=\frac{1}{2\alpha_2}+\varepsilon\geq 2+\varepsilon$ and $c\geq 3+\varepsilon$.
<span style="font-variant:small-caps;">Department of Mathematics and Statistics, University of Turku, 20014 Turku, Finland</span>\
*Email address:*
[^1]: In fact, introducing into Harman’s argument the widest known density hypothesis region $\sigma\geq \frac{25}{32}$, due to Bourgain [@bourgain] from 2000, would give $c=6.86$.
[^2]: Adding to the argument a small refinement from Subsection \[subsec:Exp\], as well as Proposition \[p8\], which is rather similar to Proposition \[p3\], would already give $c$ somewhat smaller than $5$.
[^3]: This bound for $a$ arises by inserting $P_{j-1}=\log^a X$ and $P_j=X^{1+o(1)}$ into formula .
[^4]: These computations can be carried out by hand with a bit of patience. For example, the case $\ell=4$ in Jutila’s bound is good enough in the range $\nu \in [\frac{16315}{90496},\frac{15311}{78512}]$, and the bound for $\ell=5$ is good enough when $\nu \in [\frac{753}{5554},\frac{15311}{91112}]$. These intervals are $[\beta_2,\beta]$ and $[0.1356,\beta_1]$, up to rounding.
[^5]: The Mathematica code can be found at . There is also a Python code for computing the integral at , where the integration method is a rigorous computation of an upper Riemann sum.
---
date: 'Received ; in original form '
title: '`pizza`: an open-source pseudo-spectral code for spherical quasi-geostrophic convection'
---
\[firstpage\]
We present a new pseudo-spectral open-source code nicknamed `pizza`. It is dedicated to the study of rapidly-rotating Boussinesq convection under the 2-D spherical quasi-geostrophic approximation, a physical hypothesis that is appropriate to model the turbulent convection that develops in planetary interiors. The code uses a Fourier decomposition in the azimuthal direction and supports both a Chebyshev collocation method and a sparse Chebyshev integration formulation in the cylindrically-radial direction. It supports several temporal discretisation schemes encompassing multi-step time steppers as well as diagonally-implicit Runge-Kutta schemes. The code has been tested and validated by comparing weakly-nonlinear convection with the eigenmodes from a linear solver. The comparison of the two radial discretisation schemes has revealed the superiority of the Chebyshev integration method over the classical collocation approach both in terms of memory requirements and operation counts. The good parallelisation efficiency enables the computation of large problem sizes with $\mathcal{O}(10^4\times 10^4)$ grid points using several thousands of ranks. This allows the computation of numerical models in the turbulent regime of quasi-geostrophic convection characterised by large Reynolds $Re$ and yet small Rossby numbers $Ro$. A preliminary result obtained for a strongly supercritical numerical model with a small Ekman number of $10^{-9}$ and a Prandtl number of unity yields $Re\simeq 10^5$ and $Ro \simeq 10^{-4}$. `pizza` is hence an efficient tool to study spherical quasi-geostrophic convection in a parameter regime inaccessible to current global 3-D spherical shell models.
Numerical modelling – Planetary interiors – Core.
Introduction
============
Convection under rapid rotation is ubiquitous in astrophysical bodies. The liquid iron cores of terrestrial planets or the atmospheres of the gas giants are selected examples where turbulent convection is strongly influenced by rotational effects [e.g. @Aurnou15]. Such turbulent flows are characterised by very large Reynolds numbers $Re > 10^8$ and yet small Rossby numbers $Ro < 10^{-5}$, $Ro$ being defined as the ratio between the rotation period and the convective overturn time. This specific combination of $Re \gg 1$ and $Ro \ll 1$ corresponds to the so-called *turbulent quasi-geostrophic regime* of rotating convection [e.g. @Julien12a; @Stellmach14]. This implies that, in absence of a magnetic field, the pressure gradients balance the Coriolis force at leading order. As a consequence, the convective flow shows a pronounced invariance along the axis of rotation. At onset of rotating convection for instance, the flow pattern takes the form of quasi-geostrophic elongated columnar structures that have a typical size of $E^{1/3}$, where $E=\nu/\Omega d^2$ is the Ekman number with $\nu$ the kinematic viscosity, $\Omega$ the rotation frequency and $d$ the thickness of the convective layer [e.g. @Busse70; @Dormy04]. Convection in natural objects corresponds to extremely small Ekman numbers with for instance $E\simeq
10^{-15}$ in the Earth's core or $E\simeq 10^{-18}$ in the gas giants. The quasi-geostrophy of the convective flow is expected to hold as long as the dynamics is dominated by rotation, or in other words as long as the buoyancy force remains relatively small compared to the Coriolis force [@Gilman77; @Julien12a; @King13; @Cheng15; @Horn15; @Gastine16].
Many laboratory experiments of rotating convection in spherical geometry have been carried out, either under micro-gravity conditions [e.g. @Hart86; @Egbers03]; or on the ground using the centrifugal force as a surrogate of the radial distribution of buoyancy [e.g. @Busse74; @Sumita03; @Shew05]. Because of their limited size, those experiments could only reach $E \simeq 5\times
10^{-6}$, far from the geophysical/astrophysical regime. In complement to the laboratory experiments, rotating convection in spherical geometry can also be studied by means of three-dimensional global numerical simulations. Because of computational limitations, those numerical models are currently limited to $ E \gtrsim 10^{-7}$, $Re \lesssim 10^4$ and $Ro \gtrsim 10^{-3}$, hardly scratching into the turbulent quasi-geostrophic (hereafter QG) regime [@Gastine16; @Schaeffer17]. Reaching lower Ekman numbers is hence mandatory to further explore this regime with $Re\gg 1$ and $Ro \ll 1$.
A way to alleviate the computational constraints inherent in global 3-D computations is to consider a spherical QG approximation of the convective flow [e.g. @Busse86; @Cardin94; @Plaut02; @Aubert03; @Morin04; @Gillet06; @Calkins12; @Teed12; @Guervilly17; @More18]. The underlying assumption of the spherical QG approximation is that the leading-order cylindrically-radial and azimuthal velocity components are invariant along the axis of rotation $z$. Under this approximation, the variations of the axial vorticity along the rotation axis are also neglected and an averaging of the continuity equation along the rotation axis implies a linear dependence of the axial velocity on $z$ [@Schaeffer05; @Gillet06]. The spherical QG approximation hence restricts the computation of the evolution of the convective velocity to two dimensions only. This is a limitation compared to the 3-D QG convective models developed by [@Calkins13] which allow spatial modulations of the convective features along the rotation axis. Because of the radial distribution of the buoyancy forcing in spherical geometry, the temperature is not necessarily well-described by the quasi-geostrophic approximation. Spherical QG models with either a three-dimensional or a two-dimensional treatment of the temperature however yield very similar results [@Guervilly16]. Despite those approximations, the different implementations of the 2-D spherical QG models [e.g. @Aubert03; @Gillet06; @Calkins12; @Teed12; @Guervilly17] have been found to compare favourably to 3-D direct numerical simulations in spherical geometry [e.g. @Aubert03; @Schaeffer05; @Plaut08]. This indicates that such 2-D spherical QG models could be efficiently used to explore the turbulent QG regime of convection with $E < 10^{-8}$ and $Re \gtrsim 10^5$, a parameter regime currently inaccessible to 3-D computations.
The spatial discretisation strategy adopted in spherical QG models usually relies on a hybrid scheme with a truncated Fourier expansion in the azimuthal direction $\phi$ and second-order finite differences in the cylindrically-radial direction $s$ [e.g. @Aubert03; @Calkins12]. Note that [@Brummell93] and [@Teed12] rather employed a spectral Chebyshev collocation technique in $s$, but in the case of a Cartesian QG model. The vast majority of those numerical codes adopt a pseudo-spectral approach where the nonlinear terms are treated in the physical space and time-advanced with an explicit Adams–Bashforth time scheme, while the linear terms are time-advanced in the Fourier space using a Crank–Nicolson scheme. In contrast to 3-D models, for which several codes with active on-going developments are freely accessible [see @Matsui16], no open-source code for spherical QG convection is currently available to the community.
The purpose of this study is precisely to introduce a new open-source pseudo-spectral spherical QG code, nicknamed `pizza`. `pizza` is available at <https://github.com/magic-sph/pizza> as a free software that can be used, modified, and redistributed under the terms of the GNU GPL v3 license. The package also comes with a suite of `python` classes to allow a full analysis of the outputs and diagnostics produced by the code during its execution. The code, written in Fortran, uses a Fourier decomposition in $\phi$ and either a Chebyshev collocation or a sparse Chebyshev integration method in $s$ [e.g. @Stellmach08; @Muite10; @Marti16]. It supports a broad variety of implicit-explicit time schemes encompassing multi-step methods [e.g. @Ascher95] and implicit Runge-Kutta schemes [e.g. @Ascher97]. The parallelisation strategy relies on the Message Passing Interface (`MPI`) library.
The paper is organised as follows. Section \[sec:model\] presents the equations for spherical QG convection. Section \[sec:rschemes\] and \[sec:tschemes\] are dedicated to the spatial and temporal discretisation schemes implemented in `pizza`. The parallelisation strategy is described in section \[sec:mpi\]. The code validation and several examples are discussed in section \[sec:results\] before concluding in section \[sec:conclusion\].
A quasi-geostrophic model of convection {#sec:model}
=======================================
Because of the strong axial invariance of the flow under rapid rotation, the QG models approximate 3-D convection in spherical geometry by a 2-D fluid domain which corresponds to the equatorial plane of a spherical shell. Using the cylindrical coordinates $(s,\phi,z)$, the QG fluid domain hence corresponds to an annulus of inner radius $s_i$ and outer radius $s_o$ rotating about the $z$-axis with an angular frequency $\Omega$. In the following, we adopt a dimensionless formulation of the spherical QG equations using the annulus gap $d=s_o-s_i$ as a reference length scale and the viscous diffusion time $d^2/\nu$ as the reference time scale. The temperature contrast $\Delta T$ between both boundaries defines the temperature scale. Gravity is assumed to grow linearly with the cylindrical radius $s$ and is non-dimensionalised using its value at the external radius $g_o$.
The formulation of the QG model implemented in `pizza` is based on the spherical QG approximation introduced by [@Busse86] and further expanded by [@Aubert03] and [@Gillet06] to include the effects of Ekman pumping. Following [@Schaeffer05] and [@Gillet06] the axial velocity $u_z$ is assumed to vary linearly with $z$. Under this assumption, the Boussinesq continuity equation under the spherical QG approximation yields $$\dfrac{1}{s}\dfrac{\partial (s u_s)}{\partial s}+\dfrac{1}{s}\dfrac{\partial
u_\phi}{\partial \phi}+\beta u_s = 0\,,
\label{eq:cont}$$ where $$\beta = \dfrac{1}{h}\dfrac{\mathrm{d} h}{\mathrm{d} s} =-\dfrac{s}{h^2}\,,$$ and $h=(s_o^2-s^2)^{1/2}$ is half the height of the geostrophic cylinder at the cylindrical radius $s$. We adopt a vorticity-streamfunction formulation to fulfill the QG continuity equation (\[eq:cont\]). The cylindrically-radial and azimuthal velocity components are hence expanded as follows $$u_s = \dfrac{1}{s}\dfrac{\partial \psi}{\partial \phi},\quad
u_\phi = \overline{u_\phi}-\dfrac{\partial \psi}{\partial s}-\beta \psi,
\label{eq:vel_def}$$ where the streamfunction $\psi$ accounts for the non-axisymmetric motions, while $\overline{u_\phi}$ corresponds to the axisymmetric zonal flow component, the overbar denoting an azimuthal average. The axial vorticity $\omega$ is then expressed by $$\omega = \dfrac{1}{s}\dfrac{\partial(s\overline{u_\phi})}{\partial
s}-\mathcal{L}_\beta \psi,
\label{eq:psi}$$ where the operator $\mathcal{L}_\beta$ is defined by $$\mathcal{L}_\beta \psi = \Delta \psi+\dfrac{1}{s}\dfrac{\partial
(\beta s \psi)}{\partial s}\,.$$ In the above equation, $\Delta$ is the Laplacian operator in cylindrical coordinates. Under the QG approximation, the time evolution of the axial vorticity becomes
$$\dfrac{\partial \omega}{\partial t} + \vec{\nabla}\cdot\left( \vec{u}\,\omega
\right) = \dfrac{2}{E}\beta u_s - \dfrac{Ra}{Pr} \dfrac{1}{s_o}\dfrac{\partial
\vartheta}{\partial \phi} +\mathcal{F}(E,\vec{u},\omega)+ \Delta \omega\,,
\label{eq:vort}$$
where $\vartheta$ denotes the temperature perturbation. The reader is referred to [@Gillet06] for a comprehensive derivation of this equation. In the above equation, $\mathcal{F}(E,\vec{u},\omega)$ corresponds to the Ekman-pumping contribution [@Schaeffer05] to non-axisymmetric motions expressed by
$$\mathcal{F}(E,\vec{u},\omega) = -\Upsilon\left[
\omega-\dfrac{\beta}{2}u_\phi+\beta\left(\dfrac{\partial}{\partial
\phi}-\dfrac{5 s_o}{2h}\right) u_s\right]\,.
\label{eq:pumping_full}$$
where $$\Upsilon = \left(\dfrac{s_o}{E}\right)^{1/2}\dfrac{1}{(s_o^2-s^2)^{3/4}}\,.$$ To ensure a correct force balance in the azimuthal direction, the axial vorticity equation (\[eq:vort\]) is supplemented by an equation dedicated to the axisymmetric motions [@Plaut02]. Taking a $\phi$-average of the azimuthal component of the Navier-Stokes equations yields $$\dfrac{\partial \overline{u_\phi}}{\partial t}+\overline{u_s\omega} =
-\Upsilon\,\overline{u_\phi}+\Delta \overline{u_\phi} -
\dfrac{\overline{u_\phi}}{s^2}\,,
\label{eq:uphi}$$ where the first term in the right-hand-side corresponds to the Ekman-pumping contribution for the axisymmetric motions [@Aubert03]. The governing equations for the temperature perturbation under the QG approximation is given by $$\dfrac{\partial \vartheta}{\partial t} + \vec{\nabla}\cdot \left(
\vec{u}\,\vartheta\right)+\beta u_s \vartheta + u_s\dfrac{\mathrm{d}
T_c}{\mathrm{d} s} = \dfrac{1}{Pr}\Delta \vartheta\,,
\label{eq:temp}$$ where $T_c$ is the conducting background state [@Aubert03; @Gillet06]. In the case of a fixed-temperature contrast between $s_i$ and $s_o$, $T_c$ is given by $$T_c = \dfrac{\alpha}{\ln{\eta}}\ln [(1-\eta)s],\quad
\dfrac{\mathrm{d} T_c}{\mathrm d s} =\dfrac{\alpha}{s\ln\eta}\,,$$ where $\alpha$ is a constant coefficient that can be used to rescale the temperature contrast to get a better agreement with the $z$-average of the conducting temperature of a 3-D spherical shell [@Aubert03; @Gillet06]. In the case of fixed temperature boundary conditions, $$\alpha =
\dfrac{\eta}{1-\eta}\left\lbrace\dfrac{1}{(1-\eta^2)^{1/2}}\operatorname{arcsinh}\left[\dfrac{
(1-\eta^2)^{1/2}}{\eta}\right]-1\right\rbrace\,.$$ The dimensionless equations (\[eq:psi\]-\[eq:temp\]) are governed by the Ekman number $E$, the Rayleigh number $Ra$ and the Prandtl number $Pr$ defined by $$E = \dfrac{\nu}{\Omega d^2},\quad Ra = \dfrac{\alpha_T g_o \Delta T
d^3}{\nu\kappa}, \quad Pr= \dfrac{\nu}{\kappa}\,,
\label{eq:controls}$$ where $\alpha_T$ is the thermal expansion coefficient and $\kappa$ is the thermal diffusivity.
We assume in the following no-slip and fixed temperature at both boundaries. This yields $$u_s =u_\phi = \vartheta = 0 \quad\text{at}\quad
s=s_i,s_o\,.
\label{eq:bcs}$$ With the definition of the streamfunction (Eq. \[eq:vel\_def\]), this corresponds to $$\psi =\dfrac{\partial \psi}{\partial s} = \vartheta = \overline{u_\phi} = 0
\quad\text{at}\quad s=s_i,s_o\,.
\label{eq:bcs_psi_intro}$$
Spatial discretisation {#sec:rschemes}
======================
The unknowns $u_s$, $u_\phi$, $\omega$ and $\vartheta$ are expanded in truncated Fourier series in the azimuthal direction up to a maximum order $N_m$. For each field $f=[u_s,u_\phi,\omega,\vartheta]$, one has $$f(s,\phi_k,t) \approx \sum_{m=-N_m}^{N_m} f_m(s,t)\,e^{\mathrm { i } m\phi_k
}\, ,$$ where $\phi_k =2\pi (k-1)/N_\phi$ with $k=1, ..., N_\phi$ defines $N_\phi$ equally-spaced discrete azimuthal grid points. Since all the physical quantities are real, $f_{-m}^*=f_m$, where the star denotes a complex conjugate. Complex to real Fast Fourier Transforms (FFTs) can hence be employed to transform each quantity from a spectral representation to a grid representation $$f(s,\phi_k,t)=2\,\sideset{}{'}\sum_{m=0}^{N_m}\Re\left\lbrace f_m(s,t)
\,e^{\mathrm{ i } m\phi_k } \right\rbrace\,,
\label{eq:fft}$$ where the prime on the summation indicates that the $m=0$ coefficient needs to be multiplied by one half. The inverse transforms are handled by real to complex FFTs defined by $$f_m(s,t) = \dfrac{1}{N_\phi}\sum_{k=1}^{
N_\phi} f(s,\phi_k,t)\,e^{-\mathrm{i}m\phi_k}\,.
\label{eq:ifft}$$ Using $N_\phi \geq 3N_m$ prevents aliasing errors when treating the non-linear terms [@Orszag71; @Boyd01]. This implies discarding the Fourier modes with $N_m<m\leq N_\phi$ when computing the direct FFT (\[eq:fft\]) and padding with zeroes when computing the inverse transforms (\[eq:ifft\]).
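As an illustration of the bookkeeping involved, the short Python sketch below (written with `numpy` rather than the Fortran/`FFTW` routines used in `pizza`) performs the transforms (\[eq:fft\]-\[eq:ifft\]) together with the zero-padding and truncation steps; the array sizes are purely illustrative.

```python
import numpy as np

# Minimal sketch of the azimuthal transforms with 2/3-rule dealiasing (N_phi >= 3 N_m).
N_m, N_phi = 42, 128          # illustrative sizes only

def spec_to_grid(fm):
    """Complex coefficients f_m, m = 0..N_m  ->  real values on the phi grid."""
    padded = np.zeros(N_phi // 2 + 1, dtype=complex)
    padded[:N_m + 1] = fm                    # pad the modes m > N_m with zeroes
    return np.fft.irfft(padded, n=N_phi) * N_phi

def grid_to_spec(f):
    """Real values on the phi grid -> truncated coefficients f_m, m = 0..N_m."""
    fm = np.fft.rfft(f) / N_phi              # cf. Eq. (ifft); here the grid index starts at k = 0
    return fm[:N_m + 1]                      # discard the aliased modes m > N_m

# round-trip check
rng = np.random.default_rng(0)
fm = rng.standard_normal(N_m + 1) + 1j * rng.standard_normal(N_m + 1)
fm[0] = fm[0].real                           # the m = 0 coefficient is real
assert np.allclose(grid_to_spec(spec_to_grid(fm)), fm)
```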
In the radial direction, the Fourier coefficients $f_m$ are further expanded in truncated Chebyshev series up to degree $N_c-1$
$$f_m(s_k,t) = C \,\sideset{}{''}\sum_{n=0}^{N_c-1}\widehat{f}_{mn
} (t)\,T_{n}(x_k)\,,
\label{eq:cheb}$$
where the hat symbols are employed in the following to denote the Chebyshev coefficients. The discrete Chebyshev transform from a grid representation to a spectral representation is given by $$\widehat{f}_{mn}(t) = C\,\sideset{}{''}\sum_{k=1}^{N_r} f_m(s_k,t)
\,T_{n}(x_k)\,.
\label{eq:icheb}$$ In the above equations $C=[2/(N_r-1)]^{1/2}$ is a normalisation factor and the double primes on the summations now indicate that both the first and the last indices are multiplied by one half. $T_n(x_k)$ is the $n$th-order first-kind Chebyshev polynomial defined by $$T_n(x_k) = T_{kn} = \cos[n\arccos(x_k)]= \cos\left[
\dfrac{\pi n (k-1)}{N_r-1}\right]\,,$$ where $$x_k = \cos\left[ \dfrac{\pi(k-1)}{N_r-1}\right], \quad k=1, ..., N_r,$$ is the $k$th-point of a Gauss-Lobatto grid with $N_r$ collocation grid points. For an annulus of inner radius $s_i$ and outer radius $s_o$, the Gauss-Lobatto interval that ranges from $-1$ to $1$ is remapped to the interval $[s_i,s_o]$ by the following affine mapping $$s_k = \dfrac{s_o-s_i}{2}\,x_k+\dfrac{s_o+s_i}{2},\quad k=1, ..., N_r\,.$$ The choice of using Gauss-Lobatto grid points also ensures that fast Discrete Cosine Transforms of first kind (DCTs) can be employed to compute the transforms between Chebyshev representation and radial grid space (\[eq:cheb\]-\[eq:icheb\]). `pizza` relies on the `FFTW`[^1] library [@Frigo05] for all the FFTs and DCTs. This ensure that each single spectral transform is computed in $\mathcal{O}(N\ln N)$ operations, where $N=[N_r,N_m]$.
Spectral equations using Chebyshev collocation
----------------------------------------------
Several approaches can be employed to approximate the solution of a differential equation using Chebyshev polynomials. The most straightforward choice when dealing with a set of partial differential equations with non-constant coefficients such as Eqs. (\[eq:psi\]-\[eq:temp\]) is to resort to a Chebyshev collocation method [e.g. @CHQZ]. In this kind of approach, the unknowns can be either the Chebyshev coefficients $\widehat{f}_n$ or the values of the approximate solution at the collocation points $f(x_k)$. Both collocation techniques yield dense matrices with similar condition numbers [@Peyret02]. The first one has been widely adopted by the astrophysical and geophysical communities after the seminal work by [@Glatzmaier84].
### Semi-discrete formulation
Expanding $\omega$, $\psi$ and $\vartheta$ in Fourier and Chebyshev modes yields the following set of coupled semi-discrete equations for the time evolution of $\widehat{\omega}_m$ and $\widehat{\psi}_m$ for the non-axisymmetric modes with $m>0$
$$\begin{aligned}
C\sideset{}{''}\sum_{n=0}^{N_c-1}\left\lbrace
\left[\dfrac{\mathrm{d}}{\mathrm{d} t}T_{kn} -\mathcal{A}^C_{mkn} \right]
\widehat{\omega}_{mn}(t)
+\mathcal{B}^C_{mkn}\widehat{\psi}_{mn}(t)\right\rbrace & = \\
-\left[\dfrac{Ra}{Pr}\dfrac{\mathrm{i}m}{s_o}\right]\vartheta_{m}(s_k,t)-
{\mathcal{N}_\omega}_m(s_k,t)\,&\, \\
C \sideset{}{''}\sum_{n=0}^{N_c-1} \left\lbrace
T_{kn}\,\widehat{\omega}_{mn}(t)+\mathcal{C}^C_{mkn}
\widehat{\psi}_{mn}(t)\right\rbrace&
=0\,,
\end{aligned}
\label{eq:psiomcoll}$$
where the collocation matrices are expressed by $$\begin{aligned}
\mathcal{A}^C_{mkn} =&
T_{kn}''+\dfrac{1}{s_k}T_{kn}'-\left[\dfrac{m^2}{s_k^2}+\Upsilon_k\right]T_{
kn} , \\
\mathcal{B}^C_{mkn}
= &\dfrac{\Upsilon_k\beta_k}{2}\,T_{kn}'+\\& \beta_k\left[\dfrac{
\beta_k\Upsilon
_k}{2}+\dfrac{\mathrm{i}m}{s_k}\left(\mathrm{i}m\Upsilon
_k-\dfrac{5s_o \Upsilon_k} { 2h_k } -\dfrac { 2 } { E } \right)\right]T_{kn} ,\\
\mathcal{C}^C_{mkn} = & T_{kn}''+\left[\beta_k+\dfrac{1}{s_k}
\right]T_
{kn}'-\left[\dfrac{\mathrm{d}\beta_k}{\mathrm{d}s}+\dfrac{\beta_k}{s}+\dfrac {
m^2 } { s_k^2 }\right]T_{kn}\,,
\end{aligned}$$ In the above equations, the superscripts $^C$ have been introduced to differentiate the collocation matrices from the forthcoming sparse formulation. For clarity, a given function $f$ discretised at the collocation point $x_k$ is expressed as $f_k=f(x_k)$. $T'_{kn}$ and $T''_{kn}$ are the first and second derivative of the $n$th-order Chebyshev polynomial at the collocation point $x_k$. ${\mathcal{N}_\omega}_m(s_k,t)$ corresponds to the Fourier transform (\[eq:ifft\]) of the advection terms that enters Eq. (\[eq:vort\]) $${\mathcal{N}_\omega}_m(s_k,t) = \dfrac{1}{N_\phi}\sum_{j=1}^{N_\phi}
\left[\vec{\nabla}\cdot(\vec{u}\,\omega)\right]e^{-\mathrm{i}m\phi_j}\,.$$ where $N_\phi=3N_m$ to ensure that the nonlinear terms are alias-free in $\phi$ [@Orszag71].
Instead of introducing the intermediate variable $\omega$, we could rather have substituted its definition (\[eq:psi\]) into Eq. (\[eq:vort\]) to derive a single time-evolution equation that would depend on $\psi$ only. This would imply to solve an equation of the form $$\dfrac{\partial}{\partial t}\left(\dfrac{\partial^2 \psi}{\partial
s^2}\right)+\cdots = \dfrac{\partial^4 \psi}{\partial s^4}+\cdots$$ Though appealing, this strategy is not viable, since this kind of time-dependent problem has been shown to be unconditionally unstable when using a Chebyshev collocation discretisation [@Gottlieb77; @Hollerbach00].
We proceed the same way to discretise the equations for the mean azimuthal flow $\overline{u_\phi}$ (\[eq:uphi\])
$$\begin{aligned}
C\sideset{}{''}\sum_{n=0}^{N_c-1}&\left[
\dfrac{\mathrm{d}}{\mathrm{d} t}T_{kn}
-T_{kn}''-\dfrac{1}{s_k}T_{kn}'+\right. \\
&\left.\left(\Upsilon_k+\dfrac{1}{s_k^2}
\right)
T_{kn}\right]\widehat{{{u_\phi}}_0}_{n}(t) =
-\mathcal{N}_{u_{\phi}}(s_k,t),
\end{aligned}
\label{eq:uphicoll}$$
where the nonlinear term is expressed by $$\mathcal{N}_{u_{\phi}}(s_k,t) =\dfrac{E}{2}\Upsilon_k
u_{\phi_0}\omega_0+2\sum_{m=1}^{N_m}
\Re\left\lbrace{u_s}_m\omega^*_m\right\rbrace\,.$$ The first term in the right hand side corresponds to the self-interaction of the zonal wind [@Aubert03]. Finally, the spatial discretisation of the temperature equation (\[eq:temp\]) yields $$\begin{aligned}
C\sideset{}{''}\sum_{n=0}^{N_c-1}\left[
\dfrac{\mathrm{d}}{\mathrm{d} t}T_{kn}
-\dfrac{1}{Pr}\left(T_{kn}''+\dfrac{1}{s_k}T_{kn}'-\dfrac{m^2}{s_k^2}
T_{kn}\right)\right ] \widehat{\vartheta}_{mn}(t) = \\
\left[\dfrac{\mathrm i m}{s_k}\dfrac{\mathrm{d} T_c}{\mathrm{d}
s}\right]\psi_{m}(s_k,t) - {\mathcal{N}_\vartheta}_m(s_k,t)\,,
\end{aligned}
\label{eq:tempcoll}$$ where ${\mathcal{N}_\vartheta}_m(s_k,t)$ corresponds to the FFT of the nonlinear terms that enter Eq. (\[eq:temp\]): $${\mathcal{N}_\vartheta}_m(s_k,t) = \dfrac{1}{N_\phi}\sum_{j=1}^{N_\phi}
\left[\vec{\nabla}\cdot(\vec{u} \vartheta) +\beta_k
u_s\vartheta\right]e^{-\mathrm{i}m\phi_j}\,.$$
### Boundary conditions
In the collocation method, equations (\[eq:psiomcoll\]), (\[eq:uphicoll\]) and (\[eq:tempcoll\]) are prescribed for the $N_r-2$ internal collocation grid points. The remaining boundary points $s=s_i$ and $s=s_o$ are used to impose the boundary conditions (\[eq:bcs\_psi\_intro\]). This implies that the singularity of $\beta$ and its derivatives at the outer boundary $s_o$ is not necessarily an issue when using the collocation method since boundary conditions provide additional constraints there. When a given physical field $f=[\psi,\omega,\vartheta,\overline{u_\phi}]$ is subject to Dirichlet boundary conditions at both boundaries, the following conditions on the Chebyshev coefficients $\widehat{f}_n$ should be fulfilled [e.g. @CHQZ Eq. 3.3.19] $$\sideset{}{''}\sum_{n=0}^{N_c-1} \widehat{f}_{nm} = 0,\ s=s_o;\quad
\sideset{}{''}\sum_{n=0}^{N_c-1} (-1)^{n} \widehat{f}_{nm} = 0,\
s=s_i\,,
\label{eq:bcs_coll_dirichlet}$$ while for Neumann boundary conditions [e.g. @CHQZ Eq. 3.3.23] $$\sideset{}{''}\sum_{n=0}^{N_c-1} n^2 \widehat{f}_{nm} = 0,\
s=s_o;\ \sideset{}{''}\sum_{n=0}^{N_c-1} (-1)^{n+1} n^2 \widehat{f}_{nm} = 0,
\ s=s_i\,.
\label{eq:bcs_coll_neumann}$$
Independently of the subsequent details of the chosen implicit-explicit time scheme employed to time advance the QG equations, Eq. (\[eq:psiomcoll\]) forms a complex-type dense matrix operator of size $(2N_r\times 2N_r)$ for each Fourier mode $m$. Figure \[fig:mat\]a shows the structure of the matrix that enters the left-hand-side of Eq. (\[eq:psiomcoll\]). The top $N_r$ rows corresponds to the time-dependent vorticity equation (\[eq:vort\]), while the bottom $N_r$ rows corresponds to the streamfunction equation (\[eq:psi\]). The four mechanical boundary conditions (\[eq:bcs\_psi\_intro\]) are imposed on the first and last rows of the top-right and bottom-right quadrants of this matrix.
From a numerical implementation standpoint, Chebyshev polynomials at the collocation points $T_{kn}$ and their first and second derivatives $T'_{kn}$ and $T''_{kn}$ form dense real matrices of dimensions $(N_r\times N_r)$ that are precalculated and stored in the initialisation procedure of the code. In `pizza`, the discretised equations (\[eq:psiomcoll\]-\[eq:tempcoll\]) supplemented by the boundary conditions (\[eq:bcs\_coll\_dirichlet\]) or (\[eq:bcs\_coll\_neumann\]) are solved using `LAPACK`[^2]. The LU decomposition is handled by the routine `dgetrf` or its complex-arithmetic counterpart `zgetrf` and requires $\mathcal{O}(N_r^3)$ operations per Fourier mode $m$. This needs to be done at the initialisation stage of the code or at each iteration where a change in the time-step size occurs (see § \[sec:tschemes\]). During each time step, the routines `dgetrs` (or `zgetrs`) are employed for the matrix solve and correspond to $\mathcal{O}(N_r^2)$ operations per Fourier mode $m$. The amount of memory required to store the dense complex-type matrix that enters the left-hand-side of Eq. (\[eq:psiomcoll\]) grows as $64\,N_r^2$ bytes per azimuthal wavenumber $m$ for a double-precision calculation. This corresponds to 1 Gigabyte of memory per Fourier mode for $N_r=4096$ and hence makes the collocation approach extremely costly when $N_r \gtrsim 10^3$.
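The following Python sketch mimics this factorise-once / solve-many strategy with the `scipy` wrappers of the same `LAPACK` routines; the dense complex matrix is a random stand-in for the operator of Eq. (\[eq:psiomcoll\]), not the actual collocation matrix.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factorise once, solve many times: the collocation strategy in a nutshell.
N_r = 256
rng = np.random.default_rng(1)
A = rng.standard_normal((2 * N_r, 2 * N_r)) + 1j * rng.standard_normal((2 * N_r, 2 * N_r))

lu, piv = lu_factor(A)            # O(N_r^3): done once, or when the time step changes (getrf)
rhs = rng.standard_normal(2 * N_r) + 1j * rng.standard_normal(2 * N_r)
sol = lu_solve((lu, piv), rhs)    # O(N_r^2): done at every time step (getrs)
print(np.allclose(A @ sol, rhs))

# storage of one dense complex operator: 16 bytes x (2 N_r)^2 = 64 N_r^2 bytes
print(64 * 4096**2 / 1024**3, "GiB for N_r = 4096")
```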
![Representation of the coefficients of the left-hand-side matrices obtained for $m=4$ for a setup with $E=10^{-3}$, $Ra=3\times 10^4$ and $Pr=1$ and a CNAB2 time scheme with a fixed $\delta t = 10^{-4}$. (*a*) corresponds to the collocation method (Eq. \[eq:psiomcoll\]). $T$ corresponds to the matrix with the coefficients $T_{kn}=T_n(x_k)$. (*b*) corresponds to the Chebyshev integration method with boundary conditions imposed as the first four tau lines (Eq. \[eq:psiint\]). (*c*) corresponds to Chebyshev integration method with boundary conditions enforced via a Galerkin formulation (Eq. \[eq:psiint\_galerkin\]). For the three panels, the matrix coefficients have been normalised by their maxima such that they share the same color axis. Zero entries are displayed in white.[]{data-label="fig:mat"}](matrices){width="8.4cm"}
Spectral equations using a Chebyshev integration method
-------------------------------------------------------
To circumvent the limitations inherent in the collocation approach, several efficient Chebyshev spectral methods have been developed [e.g. @Coutsias96; @Julien09; @Olver13]. They all involve the solve of sparse matrices that are almost banded and can be inverted in $\mathcal{O}(p\,N_r)$ operations, $p$ being the number of bands of the matrices. One approach, first introduced by [@Clenshaw57], consists of integrating $q$ times a set of $q$th-order ordinary differential equations (ODEs) in Chebyshev space [see also @FoxParker68; @Phillips90; @Greengard91]. First limited to ODEs with constant coefficients, this method has been further extended by [@Coutsias96] to ODEs with rational function coefficients. The comparison of several Chebyshev methods for fourth-order ODEs carried out by [@Muite10] showed the advantages of such a Chebyshev integration method both in terms of matrix condition number and computational cost in the limit of large $N_r$. This technique has been successfully applied to the problem of rotating convection in both Cartesian [@Stellmach08] and spherical geometry [@Marti16].
### Semi-discrete formulation
The Chebyshev integration methodology relies on the following indefinite integral identity [e.g. @CHQZ Eq. 2.4.23] $$\int T_n(x) \mathrm{d}x =
\dfrac{1}{2}\left[\dfrac{T_{n+1}(x)}{n+1}-\dfrac{T_{n-1}(x)}{n-1}\right]~
\text{for}~n > 1,
\label{eq:int1}$$ which in its discrete form corresponds to the following sparse operator $$\widehat{\mathcal{I}}_{kn} =
-\dfrac{1}{2k}\delta_{k+1,n}+\dfrac{1}{2k}\delta_{k-1,n}
~\text{for}~k > 1,$$ where $\delta$ corresponds to the Kronecker symbol. Identities for multiple integration can then be easily derived by recursive applications of Eq. (\[eq:int1\]).
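A possible sparse implementation of this integration operator is sketched below in Python with `scipy.sparse`; it only illustrates the band structure implied by Eq. (\[eq:int1\]) and is not extracted from `pizza`.

```python
import numpy as np
import scipy.sparse as sp

def cheb_int(N):
    """Sparse Chebyshev integration operator of Eq. (int1): row k (k >= 2) holds
    +1/(2k) on the sub-diagonal and -1/(2k) on the super-diagonal, while the first
    two rows are left empty (they are later fixed by integration constants or used
    for boundary conditions)."""
    k_sub = np.arange(2, N)
    k_sup = np.arange(2, N - 1)
    op = sp.csr_matrix((1.0 / (2.0 * k_sub), (k_sub, k_sub - 1)), shape=(N, N)) \
       + sp.csr_matrix((-1.0 / (2.0 * k_sup), (k_sup, k_sup + 1)), shape=(N, N))
    return op

I1 = cheb_int(16)
I2 = I1 @ I1            # repeated application mimics the multiple-integration operators
print(I1.nnz, I2.nnz)   # both remain sparse (banded), unlike the dense collocation matrices
```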
Because of the singularity of $\beta$, we first need to regularise the set of equation (\[eq:psi\]-\[eq:temp\]) to make it suitable for a Chebyshev integration method. We hence adopt the following different definition for the streamfunction $\varPsi$ $$u_s = \dfrac{1}{s}\dfrac{\partial [\zeta(s) \varPsi]}{\partial \phi}; \quad
u_\phi = \overline{u_\phi}
-\dfrac{\partial [\zeta(s) \varPsi]}{\partial s}-\beta \zeta(s)\varPsi\,.$$ Using $\zeta(s) =h^2= s_o^2-s^2$ then yields $$u_s = \dfrac{h^2}{s} \dfrac{\partial \varPsi}{\partial \phi};\quad
u_\phi = \overline{u_\phi}-h^2\dfrac{\partial \varPsi}{\partial
s}+3s\,\varPsi\,.$$ From these definitions, one derives the following expression for the axial vorticity $\omega$ $$\omega =\dfrac{1}{s}\dfrac{\partial (s\overline{u_\phi})}{\partial s}
-\mathcal{L}_I \varPsi\,,$$ where the operator $\mathcal{L}_I$ is given by $$\mathcal{L}_I \varPsi=\Delta\left(h^2
\varPsi\right)-\dfrac{1}{s}\dfrac{\partial}{\partial
s}\left(s^2 \varPsi \right)\,.$$ The expansion of $\varPsi$ and $\vartheta$ in Fourier modes yields the following equation for the time evolution of $\varPsi$ for the non-axisymmetric Fourier modes $$ \left[\left(\dfrac{\partial}{\partial
t}-\Delta\right)\mathcal{L}_I-\dfrac{2}{E}\mathrm{i}m\right]\varPsi_m =
\dfrac{Ra}{Pr}\dfrac{\mathrm i m}{s_o}\vartheta_m+\mathcal{N}_{\omega
m}-\mathcal{F}_\epsilon(E,\varPsi_m)\,.
$$ In the above equation, the classical Ekman pumping term (Eq. \[eq:pumping\_full\]) has been replaced by the approximated form $\mathcal{F}_\epsilon$ defined by $$\mathcal{F}_\epsilon
=\Upsilon_\epsilon \left[\mathcal{L}
_I+\dfrac{s}{2}\dfrac { \partial} {\partial
s}-\left(\dfrac{3s^2}{2h_\epsilon^2}+m^2+\dfrac{5\mathrm{i} ms_o}{
2h_\epsilon}\right)
\right]\varPsi_m
\label{eq:approx_pump}$$ where $h_\epsilon= [(s_o+\epsilon)^2-s^2]^{1/2}$ corresponds to half the height of a geostrophic cylinder that would intersect a sphere with a slightly larger radius $s_o+\epsilon$, with $\epsilon \ll 1$. $\Upsilon_\epsilon$ is defined accordingly by $\Upsilon_\epsilon= s_o^{1/2}/E^{1/2}/h_\epsilon^{3/2}$ . This implies that $\mathcal{F}_\epsilon$ corresponds to the exact Ekman pumping contribution that would occur in a spherical QG set-up with an outer radius $s_o+\epsilon$. In other words, the approximated Ekman pumping $\mathcal{F}_\epsilon$ tends to approach the exact contribution $\mathcal{F}$ in the limit of vanishing $\epsilon$. This approximation is required when using a Chebyshev integration method to avoid the outer boundary singularity of the exact Ekman pumping term and to get a good spectral representation of this quantity once transformed to Chebyshev space. The error introduced by this approximation will be further assessed in § \[sec:results\].
In addition, the Ekman pumping term requires special care since it comprises non-rational function coefficients. In contrast to the collocation method where it can be treated implicitly without any additional cost, this term shall hence be treated as yet another non-linear term since its implicit treatment would yield a dense operator with the Chebyshev integration method [@Hiegemann97].
The equation for the time evolution of $\varPsi$ is regularised by a multiplication by $s^4$ and then integrated four times to yield $$\begin{aligned}
\int\!\!\!\!\int\!\!\!\!\int\!\!\!\!\int
s^4 \left[\left(\dfrac{\partial}{\partial
t}-\Delta\right)\mathcal{L}_I-\dfrac{2}{E}\mathrm{i}m\right]\varPsi_m =
\alpha s^3+\beta s^2+\gamma s +\delta \\
+\int\!\!\!\!\int\!\!\!\!\int\!\!\!\!\int s^4 \left[
\dfrac{Ra}{Pr}\dfrac{\mathrm i m}{s_o}\vartheta_m
+\mathcal{N}_{\omega
m}-\mathcal{F}_\epsilon(E,\varPsi_m)\right]\,,
\end{aligned}
\label{eq:psiint}$$ where $\alpha, \beta, \gamma$ and $\delta$ are constants of integration that will not be required once this equation has been supplemented by boundary conditions. At this stage, any term that enters the above equation can be written as the product $x^q \partial^p f /\partial x^p$, where $p$ and $q$ are positive integers. Following [@Marti16], this equation is then integrated by parts until no differential operator remains, such that each term has the following form $$\sum_{p=0}^4 \underbrace{\int\cdots\int}_{p\times} \left(\sum_q x^q f(x)
\right)\mathrm{d}x^p\,.$$ After expanding $f(x)$ in Chebyshev polynomials using Eq. (\[eq:cheb\]), the semi-discrete representation of Eq. (\[eq:psiint\]) can be derived by multiple application of the recurrence relation (\[eq:int1\]). This yields
$$\begin{aligned}
\sideset{}{''}\sum_{n=0}^{N_c-1}
\left(\dfrac{\mathrm{d} }{\mathrm{d} t}
\mathcal{A}^I_{mkn}-\mathcal{B}^I_{mkn}\right)
\widehat{\varPsi}_{mn}(t) = \\
\sideset{}{''}\sum_{n=0}^{N_c-1}
\mathcal{C}^I_{kn}
\left[\dfrac{Ra}{Pr}\dfrac{\mathrm{i}m}{s_o}\widehat{\vartheta}_{mn}(t)+\widehat
{
\mathcal { N }}_{\omega
mn}-\widehat{\mathcal{F}}_{\epsilon\,n}(E,\varPsi_m)\right],
\end{aligned}
\label{eq:psi_int}$$
for $k>4$. $\mathcal{A}^I_{mkn}$, $\mathcal{B}^I_{mkn}$, and $\mathcal{C}^I_{kn}$ are the discrete representations of the following operators $$\begin{aligned}
\mathcal{A}^I_m = \int\!\!\!\!\int\!\!\!\!\int\!\!\!\!\int s^4 \mathcal{L}_I;\
& \mathcal{B}^I_m = \int\!\!\!\!\int\!\!\!\!\int\!\!\!\!\int s^4 \left(\Delta
\mathcal{L}_I+\dfrac{2}{E}\mathrm{i} m\right); \\
&\mathcal{C}^I =\int\!\!\!\!\int\!\!\!\!\int\!\!\!\!\int s^4
\end{aligned}$$ The internal matrix elements are determined using the freely available `python` package developed by [@Marti16][^3] that allows the symbolic computation of those operators[^4]. Excluding boundary conditions, $\mathcal{A}^I_m$, $\mathcal{B}^I_m$ and $\mathcal{C}^I$ correspond to band matrices with $p_u$ super-diagonals and $p_\ell$ sub-diagonals that have a bandwidth defined by $$q=p_\ell+p_u+1\,.$$ The bandwidth of $\mathcal{A}^I_{m}$, $\mathcal{B}^I_m$, and $\mathcal{C}^I$ is 17, 13 and 17, respectively.
We proceed the same way to establish the equations for the axisymmetric zonal flow component and for the temperature perturbation. Eq. (\[eq:uphi\]) and Eq. (\[eq:temp\]) are multiplied by $s^2$ and integrated twice to yield $$\begin{aligned}
\sideset{}{''}\sum_{n=0}^{N_c-1}
\left(\dfrac{\mathrm{d} }{\mathrm{d} t}
\mathcal{D}^I_{kn}-\mathcal{E}^I_{kn}\right)
\widehat{u_\phi}_{0n}(t) = \\
-\sideset{}{''}\sum_{n=0}^{N_c-1}
\mathcal{D}^I_{kn}\left[ \widehat{\mathcal{N}}_{u_{\phi}mn}
+\widehat{\Upsilon_\epsilon {u_\phi}_0}_n\right],
\end{aligned}
\label{eq:uphi_int}$$ for the axisymmetric zonal flow component and $$\begin{aligned}
\sideset{}{''}\sum_{n=0}^{N_c-1}
\left(\dfrac{\mathrm{d} }{\mathrm{d} t}
\mathcal{D}^I_{kn}-\dfrac{1}{Pr}\mathcal{F}^I_{kmn}\right)
\widehat{\vartheta}_{mn}(t) = \\
-\sideset{}{''}\sum_{n=0}^{N_c-1}
\mathcal{D}_{kn}
\left[\mathrm{i}m\widehat{\left(\dfrac{h^2}{s}\dfrac{\mathrm{d}T_c}{\mathrm{d}s}
\varPsi_m\right)}_ {n}+\widehat {\mathcal { N }}_{\vartheta
mn}\right],
\end{aligned}
\label{eq:temp_int}$$ for the temperature. Both equations are only valid for $k>2$. $\mathcal{D}^I_{kn}$, $\mathcal{E}^I_{kn}$ and $\mathcal{F}^I_{mkn}$ are the discrete representation of the following operators $$\mathcal{D}^I = \int\!\!\!\!\int s^2;\
\mathcal{E}^I = s^2-3\int s;\
\mathcal{F}^I_m = \int\!\!\!\!\int s^2 \Delta\,.$$ The bandwidth of $\mathcal{D}^I$, $\mathcal{E}^I$ and $\mathcal{F}^I_m$ is 9, 5 and 5, respectively. In contrast to the semi-discrete equations obtained with the collocation approach, the right-hand-sides of Eq. (\[eq:psi\_int\]-\[eq:temp\_int\]) now involve nonlinear terms that are in Chebyshev space. To avoid aliasing errors, the Chebyshev coefficients of nonlinear terms that have $n> 2N_r/3$ are hence set to zero [@Orszag71].
### Boundary conditions
At this stage, the system of equation (\[eq:psi\_int\]-\[eq:temp\_int\]) needs to be supplemented by boundary conditions. Given the definition of $\varPsi$, the rigid mechanical boundary conditions that require the cancellation of $u_s$ and $u_\phi$ at both boundaries are already ensured by the three following identities: $$\varPsi(s=s_i)=\dfrac{\partial \varPsi}{\partial s}(s=s_i)=0, \quad
\varPsi(s=s_o)=0\,.
\label{eq:bc_varPsi}$$ An extra boundary condition on $\varPsi$ is thus required. Following [@Bardsley18], we make the ansatz $$\varPsi \sim (s_o^2-s^2)^n \ \text{when}\ s \rightarrow s_o\,.$$ This yields the following expression for the viscous term $$\Delta \mathcal{L}_I \varPsi = \dfrac{1}{s^4}(s_o^2-s^2)^{n-3}\left[8 n
\,s_o^8\,(-2n^3+3n^2+5n-6)\right]\,,$$ when $s\rightarrow s_o$. A finite solution requires either $n>3$ or the cancellation of the polynomial in $n$, which has four roots $(-3/2,0,1,2)$. $n=-3/2$ is not allowed and $n=0$ is redundant with the cancellation of $\varPsi$ at $s=s_o$. Hence the first possible solution is $n=1$ which yields $$\varPsi \sim s^2-s_o^2\ \text{when}\ s \rightarrow s_o\,.$$ This corresponds to the following additional boundary condition $$\dfrac{\partial^3 \varPsi}{\partial s^3} = 0\ \text{for}\ s=s_o\,.
\label{eq:bc_d3psi}$$
When using the Chebyshev integration method, the boundary conditions can be either enforced via the tau-Lanczos method or by setting up an adapted Galerkin basis function [@CHQZ; @Boyd01]. In the tau-Lanczos formulation, the top rows of the matrices are used to enforce the boundary conditions, which are actually identical to the ones used in the collocation method (Eqs. \[eq:bcs\_coll\_dirichlet\]-\[eq:bcs\_coll\_neumann\]). The fourth condition on $\varPsi$ given in Eq. (\[eq:bc\_d3psi\]) corresponds to the following last tau line [see @Julien09] $$\sideset{}{''}\sum_{n=0}^{N_c-1} n^2(n^2-1)(n^2-4)\,\widehat{\varPsi}_{n} =
0\,.
\label{eq:bc_d3psi_coll}$$ Figure \[fig:mat\]b shows the structure of the matrix that enters the left-hand-side of Eq. (\[eq:psi\_int\]) when the boundary conditions are enforced using a tau-Lanczos formulation. The first two rows of the matrix correspond to the Dirichlet boundary conditions (Eqs. \[eq:bcs\_coll\_dirichlet\] and \[eq:bc\_varPsi\]), the third one to the above equation and the fourth one to the Neumann boundary condition (Eqs. \[eq:bcs\_coll\_neumann\] and \[eq:bc\_varPsi\]). Below those four full lines the matrix has a banded structure with 8 sub- and super-diagonals. This corresponds to a so-called bordered matrix which can be inverted in $\mathcal{O}(17\,N_r)$ operations as long as the number of full rows is small compared to the problem size [e.g. @Boyd01]. Appendix \[sec:app1\] gives the details of the matrix inversion procedure as implemented in `pizza`.
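The structure of such a bordered system can be illustrated with the following Python sketch, in which a few dense tau rows sit on top of a banded body. For simplicity a general sparse LU is used here instead of the dedicated $\mathcal{O}(q\,N_r)$ algorithm of Appendix \[sec:app1\]; all the entries are random stand-ins.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Toy bordered matrix in the spirit of Fig. (mat)b: four dense tau rows on top of a
# banded body with 8 sub- and super-diagonals.
N, n_bc, kl, ku = 64, 4, 8, 8
rng = np.random.default_rng(0)
A = sp.lil_matrix((N, N))
A[:n_bc, :] = rng.standard_normal((n_bc, N))              # full boundary (tau) rows
for i in range(n_bc, N):                                  # banded interior rows
    j0, j1 = max(0, i - kl), min(N, i + ku + 1)
    A[i, j0:j1] = rng.standard_normal(j1 - j0)
A = A.tocsc()
b = rng.standard_normal(N)
x = splu(A).solve(b)                                      # generic sparse LU, for illustration
print(np.allclose(A @ x, b))
```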
We proceed the same way for the boundary conditions on the axisymmetric zonal flow and on the temperature. In those cases the Dirichlet boundary conditions (\[eq:bcs\_coll\_dirichlet\]) are imposed as the two first tau lines of the matrix, while the banded structure below is given by (\[eq:uphi\_int\]) and (\[eq:temp\_int\]), respectively.
Alternatively, the boundary conditions can be imposed by introducing a suitable Galerkin basis. The underlying idea is to define basis functions that satisfy the boundary conditions such that the solutions expressed on this set of functions will also directly fulfill the boundary conditions. The Galerkin basis of functions $\phi_n$ is usually defined as a linear combination of a small number $n_c$ of Chebyshev polynomials $$\phi_n(x) = \sum_{i=0}^{n_c-1} \gamma_i^n T_{n+i}(x)\,.$$ We first construct the Galerkin basis for the four boundary conditions on $\varPsi$ (Eqs. \[eq:bc\_varPsi\] and \[eq:bc\_d3psi\]). Following [@Julien09], the tau conditions (\[eq:bcs\_coll\_dirichlet\], \[eq:bcs\_coll\_neumann\], \[eq:bc\_d3psi\_coll\]) are used to establish a related Galerkin set. Appendix \[sec:app2\] gives the details of the calculation of the $\gamma_i^n$ coefficients for $0\leq i \leq 4$. $\varPsi$ is then decomposed on the Galerkin basis as follows $$\varPsi(s) = \sum_{n=0}^{N_r-5} \widetilde{\varPsi}_n \phi_n(x)\,,$$ where the tilde notation denotes the Galerkin coefficients. The Galerkin coefficients $\widetilde{\varPsi}$ relate to the Chebyshev coefficients $\widehat{\varPsi}$ via $$\widehat{\varPsi} = S_\varPsi\,\widetilde{\varPsi},$$ where $S_\varPsi$ is the stencil matrix that contains the coefficients $\gamma_i$. For the Galerkin basis employed for the equation on $\varPsi$, $S_\varPsi$ is a band matrix with four sub-diagonals. The Galerkin formulation of Eq. (\[eq:psi\_int\]) can hence be written in its matrix form as $$P_4 \left(\dfrac{\mathrm{d} \mathcal{A}^I_m }{\mathrm{d}
t}-\mathcal{B}^I_m\right)
S_\varPsi
\,\widetilde{\varPsi}_m = P_4
C^I\left[\dfrac{Ra}{Pr}\dfrac{\mathrm{i}m}{s_o}\widehat{\vartheta}_{m}+\widehat{
\mathcal { N }}_{\omega
m}-\widehat{\mathcal{F}}_{\epsilon}\right],
\label{eq:psiint_galerkin}$$ where $P_4$ is an operator that removes the top four rows of the matrices, corresponding to the number of boundary conditions [@Julien09]. Figure \[fig:mat\]c shows the structure of the matrix that enters the left-hand-side of Eq. (\[eq:psiint\_galerkin\]). Compared to the bordered matrix obtained when using the tau method, the matrix now has a purely banded structure with an increased bandwidth of 8 sub- and 12 super-diagonals. Those matrices could be solved using standard band matrix solvers. In `pizza`, the LU decomposition is handled by the `LAPACK` routine `dgbtrf` or its complex arithmetic counterpart `zgbtrf` in $\mathcal{O}(q^2\,N_r)$ operations per Fourier mode $m$. `dgbtrs` (or `zgbtrs`) routines are then employed for the matrix solve in $\mathcal{O}(q\,N_r)$ operations per Fourier mode $m$.
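The banded solves can be illustrated with the Python sketch below, which relies on `scipy.linalg.solve_banded` (a wrapper around the same `LAPACK` band routines). The bandwidths mirror Fig. \[fig:mat\]c, but the matrix entries are random stand-ins rather than the actual Galerkin operator.

```python
import numpy as np
from scipy.linalg import solve_banded

# Banded storage and solve for a matrix with kl = 8 sub- and ku = 12 super-diagonals.
N, kl, ku = 512, 8, 12
rng = np.random.default_rng(4)
ab = rng.standard_normal((kl + ku + 1, N))   # band storage: ab[ku + i - j, j] = A[i, j]
ab[ku, :] += 10.0                            # keep the toy system well conditioned
rhs = rng.standard_normal(N)
x = solve_banded((kl, ku), ab, rhs)          # cost scales with N rather than N^2

# cross-check against the equivalent dense matrix
A = np.zeros((N, N))
for i in range(N):
    for j in range(max(0, i - kl), min(N, i + ku + 1)):
        A[i, j] = ab[ku + i - j, j]
print(np.allclose(A @ x, rhs))
```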
We proceed the same way for the zonal velocity and the temperature equations by defining a Galerkin basis that ensures Dirichlet boundary conditions at both boundaries. Several different Galerkin basis sets that satisfy this type of boundary conditions have been frequently used in the context of modelling rotating convection [e.g. @Pino00; @Stellmach08]. Following [@Julien09], we decide here to adopt the following set $$\phi_n(x)=T_{n+2}(x)-T_n(x),\quad\text{for}\quad n<N_r-3\,.
\label{eq:galerkin_dirichlet}$$ In matrix form, the Galerkin formulations of equations (\[eq:uphi\_int\]) and (\[eq:temp\_int\]) yield $$P_2\left(\dfrac{\mathrm{d} \mathcal{D}^I}{\mathrm{d} t}
-\mathcal{E}^I\right) S_{\text{D}}\,
\widetilde{u_\phi}_{0} = \\
-P_2
\mathcal{D}^I\left[ \widehat{\mathcal{N}}_{u_{\phi}m}
+\widehat{\Upsilon_\epsilon {u_\phi}_0}\right],
\label{eq:uphi_galerkin}$$ for the axisymmetric zonal flow component and $$P_2\left(\dfrac{\mathrm{d} D^I }{\mathrm{d} t}
-\dfrac{\mathcal{F}^I_m}{Pr}\right)S_{\text{D}}\,
\widetilde{\vartheta}_m = -P_2
\mathcal{D}^I
\left[\mathrm{i}m\widehat{\left(\dfrac{h^2}{s}\dfrac{\mathrm{d}T_c}{\mathrm{d}s}
\varPsi_m\right)}+\widehat {\mathcal { N }}_{\vartheta m}\right],
\label{eq:temp_galerkin}$$ for the temperature, where $S_\text{D}$ is the stencil matrix (\[eq:galerkin\_dirichlet\]) and $P_2$ is an operator that removes the top two rows.
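For illustration, the stencil matrix associated with the Dirichlet Galerkin basis (\[eq:galerkin\_dirichlet\]) can be assembled as in the following Python sketch; the truncation (number of retained Galerkin modes) is left as a free parameter and is only indicative.

```python
import numpy as np
import scipy.sparse as sp

def stencil_dirichlet(n_cheb, n_gal):
    """Stencil matrix S_D mapping Galerkin coefficients of the basis
    phi_n = T_{n+2} - T_n (which vanishes at x = +/-1) to Chebyshev coefficients."""
    S = sp.lil_matrix((n_cheb, n_gal))
    for j in range(n_gal):
        S[j, j] = -1.0        # contribution of -T_n
        S[j + 2, j] = 1.0     # contribution of +T_{n+2}
    return S.tocsr()

# each column, read as Chebyshev coefficients, vanishes at both end points since
# T_n(1) = 1 and T_n(-1) = (-1)^n imply T_{n+2}(+/-1) - T_n(+/-1) = 0
S = stencil_dirichlet(8, 5)
print(S.toarray())
```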
We note that different types of boundary conditions, such as stress-free and/or fixed-flux thermal boundary conditions, would necessitate the derivation of dedicated Galerkin bases following a procedure similar to the one discussed in appendix \[sec:app2\].
Previous analysis by [@Julien09] showed that the Galerkin approach usually yields matrices with a better condition number than the bordered matrices obtained when using the tau-Lanczos method. This is particularly critical when 2-D or 3-D Chebyshev domains are considered but remains acceptable for 1-D problems such as the one considered here [see Table 1 in @Julien09]. The Galerkin approach should hence be privileged as long as homogeneous boundary conditions are enforced, while inhomogeneous boundary conditions for which a Galerkin description becomes cumbersome are easier to handle with a tau-Lanczos formulation.
Temporal discretisation {#sec:tschemes}
=======================
| Name   | Family     | Reference                  | Order | $\mathcal{I}$ | $\mathcal{E}$ | Storage | Cost | $\alpha$ |
|--------|------------|----------------------------|-------|---------------|---------------|---------|------|----------|
| SBDF4  | Multi-step | [@Wang08], Eq. (2.15)      | 4     | 1             | 1             | 8       | 1.01 | 0.19     |
| SBDF3  | Multi-step | [@Peyret02], Eq. (4.83)    | 3     | 1             | 1             | 6       | 0.97 | 0.23     |
| SBDF2  | Multi-step | [@Peyret02], Eq. (4.82)    | 2     | 1             | 1             | 4       | 0.96 | 0.21     |
| CNAB2  | Multi-step | [@Glatzmaier84], Eq. (5b)  | 2     | 1             | 1             | 4       | 1    | 0.25     |
| BPR353 | SDIRK      | [@Boscarino13], § 8.3      | 3     | 5             | 3             | 9       | 3.24 | $0.78^*$ |
| ARS443 | SDIRK      | [@Ascher97], § 2.8         | 3     | 4             | 3             | 9       | 3.66 | $0.71^*$ |
| ARS222 | SDIRK      | [@Ascher97], § 2.6         | 2     | 2             | 2             | 5       | 1.81 | $0.45^*$ |
| LZ232  | SDIRK      | [@Liu06], § 6              | 2     | 2             | 2             | 6       | 1.86 | $0.42^*$ |

\[tab:timeschemes\]
The equations discretised in space can be written as a general ordinary differential equation in time where the right-hand-side is split in two contributions $$\dfrac{\mathrm{d} y}{\mathrm{d} t} = \mathcal{E}(y,t) + \mathcal{I}(y,t),
\quad
y(t_0)=y_0,
\label{eq:EDP}$$ where $\mathcal{I}(y,t)$ corresponds to the linear terms, while $\mathcal{E}(y,t)$ corresponds to the nonlinear advective terms. The temporal stability constraints coming from the linear terms that enter Eqs. (\[eq:vort\]-\[eq:temp\]) are usually more stringent than those coming from the nonlinear terms. Except for weakly nonlinear calculations, this precludes the usage of purely explicit time schemes such as the popular fourth-order Runge-Kutta scheme [e.g. @Grooms11]. Although they offer enhanced stability, purely implicit schemes are extremely costly since they involve the coupling of all Fourier modes due to the implicit treatment of the nonlinear terms. The potential gain in time step size is hence cancelled by the numerical cost associated with the solve of large matrices. In the following, we hence only consider *implicit-explicit* schemes (hereafter IMEX) to solve Eq. (\[eq:EDP\]) and to produce the numerical approximation $y_n \simeq y(t_n)$. We first consider the general $k$-step IMEX linear multistep scheme $$y_{n+1} = \sum_{j=1}^{k} a_j y_{n+1-j}+\delta t \left(\sum_{j=1}^{k}
b^\mathcal{E}_j \mathcal{E}_{n+1-j}+\sum_{j=0}^{k} b^\mathcal{I}_j
\mathcal{I}_{n+1-j}\right),
\label{eq:multistep}$$ where $\mathcal{E}_{n+1-j}=\mathcal{E}(y_{n+1-j},t_{n+1-j})$ and $\mathcal{I}_{n+1-j}=\mathcal{I}(y_{n+1-j},t_{n+1-j})$. The vectors $\vec{a}$, $\vec{b}^\mathcal{E}$ and $\vec{b}^\mathcal{I}$ correspond to the weighting factors of the IMEX multistep scheme. For instance, the commonly-used second-order scheme assembled from the combination of a Crank-Nicolson for the implicit terms and a second-order Adams-Bashforth for the explicit terms (hereafter CNAB2) corresponds to the following vectors $\vec{a} = (1,0)$, $\vec{b}^\mathcal{I} = (1/2, 1/2)$ and $\vec{b}^\mathcal{E}=(3/2,-1/2)$ for a constant $\delta t$. In practice, Eq. (\[eq:multistep\]) is rearranged as follows $$\begin{aligned}
(I-b_0^\mathcal{I}\delta t\,\mathcal{I})\,y_{n+1} = & \sum_{j=1}^{k} a_j
y_{n+1-j}\\&+\delta t
\sum_{j=1}^{k} \left(b_j^\mathcal{E} \mathcal{E}_{n+1-j}+b_j^\mathcal{I}
\mathcal{I}_{n+1-j}\right)\,,
\end{aligned}
\label{eq:multistep1}$$ where $I$ is the identity matrix. In addition to CNAB2, `pizza` supports several semi-implicit backward differentiation schemes of second, third and fourth order that are known to have good stability properties [hereafter SBDF2, SBDF3 and SBDF4, see @Ascher95; @Garcia10]. The interested reader is referred to the work by [@Wang08] for the derivation of the vectors $\vec{a}$, $\vec{b}^\mathcal{I}$ and $\vec{b}^\mathcal{E}$ when the time step size is variable. Table \[tab:timeschemes\] summarises the main properties of the multistep schemes implemented in `pizza`.
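As an illustration, one CNAB2 step for a problem with a linear implicit operator can be written as in the Python sketch below. The toy right-hand-side is of course not the QG system, and in practice the LU factors of the implicit matrix would be reused for as long as $\delta t$ is unchanged.

```python
import numpy as np

def cnab2_step(y, E_now, E_prev, L, dt):
    """One CNAB2 step for dy/dt = L y + E(t), cf. Eq. (multistep1) with
    a = (1, 0), b^I = (1/2, 1/2), b^E = (3/2, -1/2)."""
    lhs = np.eye(len(y)) - 0.5 * dt * L
    rhs = y + dt * (1.5 * E_now - 0.5 * E_prev) + 0.5 * dt * (L @ y)
    return np.linalg.solve(lhs, rhs)   # the factorisation of lhs would be stored in practice

# toy problem: two decaying modes forced by an explicit term standing in for the nonlinearities
L = -np.diag([1.0, 2.0])
E = lambda t: np.array([np.sin(t), np.cos(t)])
dt, y = 1.0e-3, np.array([1.0, 0.5])
E_prev = E(0.0)                        # first-order start-up: E_{-1} := E_0
for n in range(1000):
    y = cnab2_step(y, E(n * dt), E_prev, L, dt)
    E_prev = E(n * dt)
print(y)
```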
Multistep schemes suffer from several possible limitations: (*i*) when the order is larger than two, they are not self-starting and hence need to be initiated with another lower-order starting scheme; (*ii*) the limitation of the time step size required to maintain stability is more severe for higher-order schemes [e.g. @Ascher95; @Carpenter05]. In contrast, the multi-stage Runge-Kutta schemes are self-starting and frequently show a stability region that grows with the order of the scheme. To examine their efficiency in the context of spherical QG convection, we have also implemented in `pizza` several Additive Runge Kutta schemes. For this type of IMEX, we restrict ourselves to the so-called *Diagonally Implicit Runge Kutta* schemes (hereafter DIRK) for which each sub-stage can be solved sequentially. For such schemes, the equation (\[eq:EDP\]) is time-advanced from $t_n$ to $t_{n+1}$ by solving $\nu$ sub-stages $$\left( I - a_{ii}^{\mathcal{I}} \delta t\,\mathcal{I} \right) y_i =
y_{n}+\delta t
\sum_{j=1}^{i-1} \left(a_{i,j}^{\mathcal{E}} \mathcal{E}_j +
a_{i,j}^{\mathcal{I}}\mathcal{I}_j
\right),\ 1\leq i\leq \nu,
\label{eq:dirks}$$ where $y_i$ is the intermediate solution at stage $i$. Finally, evaluating $$y_{n+1} = y_{n}+\delta t\sum_{j=1}^{\nu}\left(b_j^\mathcal{E}
\mathcal{E}_j+b_j^\mathcal{I}\mathcal{I}_j\right)$$ yields the time-advanced solution $y_{n+1}$. A DIRK scheme with $\nu$ stages can be represented in terms of the following so-called Butcher tables
$$\renewcommand\arraystretch{1.2}
\begin{array}{c|c}
\vec{c}^\mathcal{I} & \mathbf{A}^\mathcal{I} \\
\hline
& \vec{b}^\mathcal{I}
\end{array}
=
\begin{array}
{c|ccccc}
c_1^\mathcal{I} & a_{11}^\mathcal{I}\\
c_2^\mathcal{I} & a_{21}^\mathcal{I} & a_{22}^\mathcal{I} \\
\vdots & \vdots & \vdots& \ddots\\
c_\nu^\mathcal{I} & a_{\nu1}^\mathcal{I} & a_{\nu2}^\mathcal{I} &\cdots&
a_{\nu\nu}^\mathcal{I}\\
\hline
& b_{1}^\mathcal{I} & b_{2}^\mathcal{I} & \cdots & b_\nu^\mathcal{I}
\end{array}\,,$$ for the implicit terms, and $$\begin{array}{c|c}
\vec{c}^\mathcal{E} & \mathbf{A}^\mathcal{E} \\
\hline
& \vec{b}^\mathcal{E}
\end{array}
=
\begin{array}
{c|ccccc}
0 & 0\\
c_2^\mathcal{E} & a_{21}^\mathcal{E} & 0 \\
\vdots & \vdots & \vdots& \ddots\\
c_{\nu}^\mathcal{E} & a_{\nu1}^\mathcal{E} & a_{\nu2}^\mathcal{E} &\cdots& 0\\
\hline
& b_{1}^\mathcal{E} & b_{2}^\mathcal{E} & \cdots & b_\nu^\mathcal{E}
\end{array}\,,$$ for the explicit terms, where zero values above the diagonal have been omitted. In the following, we only consider the *stiffly accurate* DIRK schemes for which the outcome of the last stage gives the end-result, without needing any assembly stage [@Ascher97]. This corresponds to $b_j^\mathcal{I}=a_{\nu j}^\mathcal{I}$ and $b_j^\mathcal{E}=a_{\nu j}^\mathcal{E}$ for $1\leq j\leq \nu$. In addition, to minimise the memory storage, which is particularly critical in the Chebyshev collocation approach, only the DIRK schemes that involve a single matrix storage in the implicit solve are retained, i.e. $a_{ii}^\mathcal{I}$ is independent of $i$. The latter restriction corresponds to the so-called SDIRK (Singly Diagonally Implicit Runge–Kutta) schemes. In the following we discuss the convergence and stability properties of two second-order schemes (ARS222 from [@Ascher97] and LZ232 from [@Liu06]) and two third-order SDIRK schemes (ARS443 from [@Ascher97] and BPR353 from [@Boscarino13]).
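As an illustration of Eq. (\[eq:dirks\]) for a stiffly accurate scheme, the Python sketch below advances $\mathrm{d}y/\mathrm{d}t = \mathcal{E}(y,t) + L\,y$ over one step with the ARS(2,2,2) tableau of [@Ascher97] ($\gamma = 1-\sqrt{2}/2$, $\delta = 1-1/(2\gamma)$, quoted here for convenience). The dense algebra, the function names and the bookkeeping of the trivial first stage are assumptions made for this example and do not reflect the `pizza` implementation.

```python
import numpy as np

GAMMA = 1.0 - np.sqrt(2.0) / 2.0          # diagonal implicit coefficient
DELTA = 1.0 - 1.0 / (2.0 * GAMMA)
A_IMP = np.array([[0.0, 0.0, 0.0],        # implicit (DIRK) Butcher table
                  [0.0, GAMMA, 0.0],
                  [0.0, 1.0 - GAMMA, GAMMA]])
A_EXP = np.array([[0.0, 0.0, 0.0],        # explicit Butcher table
                  [GAMMA, 0.0, 0.0],
                  [DELTA, 1.0 - DELTA, 0.0]])
C = np.array([0.0, GAMMA, 1.0])

def ars222_step(y_n, explicit, L, t_n, dt):
    """One IMEX step of Eq. (eq:dirks); `explicit(y, t)` returns E(y, t)."""
    nu, n = len(C), y_n.size
    Y = np.zeros((nu, n))
    Y[0] = y_n                                         # trivial first stage
    for i in range(1, nu):
        rhs = y_n.copy()
        for j in range(i):
            rhs += dt * (A_EXP[i, j] * explicit(Y[j], t_n + C[j] * dt)
                         + A_IMP[i, j] * (L @ Y[j]))
        lhs = np.eye(n) - A_IMP[i, i] * dt * L         # same matrix at every stage (SDIRK)
        Y[i] = np.linalg.solve(lhs, rhs)
    return Y[-1]                                       # stiffly accurate: last stage = y_{n+1}
```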
The nonlinear advection terms that enter Eqs. (\[eq:psi\]-\[eq:uphi\]) are treated explicitly, while the dissipation terms and the vortex stretching term in Eq. (\[eq:vort\]) are treated implicitly. As long as the fluid domain is entirely convecting, the buoyancy term that enters the vorticity equation (\[eq:vort\]) can be treated either explicitly or implicitly without a notable change of the stability properties of the IMEX [e.g. @Stellmach08]. We can expect more significant differences when some regions of the fluid are stably stratified. An implicit treatment of the buoyancy term simply implies that the temperature equation (\[eq:temp\]) must be time-advanced first to produce $\vartheta(t_{n+1})$ before time-advancing the vorticity and streamfunction [e.g. @Glatzmaier84]. The treatment of the Ekman pumping terms depends on the spatial discretisation strategy: while they can be treated implicitly without additional cost in the collocation method, they have to be treated explicitly when using the Chebyshev integration method.
For illustrative purposes, we give here the time-stepping equation for $\widehat{\varPsi}_m$ when the Chebyshev integration method (Eq. \[eq:psi\_int\]) is used in conjunction with an SDIRK time scheme (Eq. \[eq:dirks\]) $$\begin{aligned}
\left(\mathcal{A}^I_m -a_{ii}^\mathcal{I} \delta
t\,\mathcal{B}^I_m\right)\widehat{\varPsi}_m(t_{i}) =
\mathcal{A}^I_m \widehat{\varPsi}_m(t_n)+\delta t \sum_{j=1}^{i-1}
a_{i,j}^\mathcal{I}\,
\mathcal{B}^I_m \widehat{\varPsi}_m(t_{j})\\
+ \delta t \sum_{j=1}^{i-1} a_{i,j}^\mathcal{E}\,
\mathcal{C}^I\left[\dfrac{Ra}{Pr}\dfrac{\mathrm{i}m}{s_o}\widehat{\vartheta}_{m}
(t_j)+\widehat{\mathcal{N}}_{\omega
m}(t_j)-\widehat{\mathcal{F}}_{\epsilon}(t_j)\right],
\end{aligned}$$ where the buoyancy term has been treated explicitly and $1\leq i \leq \nu$. This equation needs to be solved $\nu$ times per time step and the outcome of the final stage produces the time-advanced quantity $\widehat\varPsi_m(t_{n+1})$ for the azimuthal wavenumber $m$. A summary of the main properties of the SDIRK schemes implemented in `pizza` is also given in Table \[tab:timeschemes\].
Both families of time integrators (\[eq:multistep1\]) and (\[eq:dirks\]) have a very similar structure and can hence be implemented using a shared framework, provided the programming language supports object-oriented implementation [@Vos11]. In `pizza` we rely on the object-oriented features provided by the Fortran 2003 standard to implement an abstract framework that allows easy switching between different schemes while minimising the number of code lines.
The different time steppers have been validated by running convergence tests. To do so, we consider a physical test problem with $E=3\times 10^{-6}$, $Ra=10^7$, $Pr=0.025$ and initiate the numerical experiment with a random temperature perturbation. We then run the numerical model using an SBDF4 time stepper until a statistically steady state has been reached. This final state serves as the starting conditions of a suite of numerical simulations that use different fixed time step sizes $\delta t$ between $10^{-9}$ and $3\times 10^{-6}$ over a fixed physical timespan $t=1.2\times 10^{-3}$. Following [@Grooms11], the error associated with the time stepper is defined as the sum of the relative errors on $\vartheta$, $u_s$ and $u_\phi$, where the relative error for one physical quantity $f$ is expressed by $$e_{\text{rel}}(f) = \left[\dfrac{\left\langle (f-f_\text{ref})^2 \right\rangle
}{\left\langle f_\text{ref}^2 \right \rangle}\right]^{1/2}\,.$$ In the above equation, the angular brackets correspond to an integration over the annulus $$\langle f \rangle = \int_{0}^{2\pi}\int_{s_i}^{s_o}
f(s,\phi) \,s\,\mathrm{d}s\,\mathrm{d}\phi\,.$$ The fourth-order SBDF4 time stepper with the smallest time step size $\delta
t=10^{-9}$ has been used to define the reference solution $f_{\text{ref}}$. Figure \[fig:error\_DeltaT\] shows the error as a function of $\delta t$ for the time schemes given in Table \[tab:timeschemes\] for both the collocation method (left panel) and the Chebyshev integration method with a Galerkin approach to enforce the boundary conditions (right panel). All schemes converge with their expected theoretical order until the error reaches a plateau around $3\times 10^{-9}$ for the Chebyshev collocation and $10^{-8}$ for the Chebyshev integration method. This can be attributed to the propagation of rounding errors that occur in the spectral transforms and in the calculation of the radial derivatives [@Sanchez04]. In other words, for such small values of $\delta t$ the error becomes dominated by the spatial discretisation errors rather than by the time discretisation. For a given order, SDIRK schemes are found to be more accurate than their multistep counterparts in the majority of the cases.
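For reference, the error measure above can be evaluated with a few lines of code. The sketch below assumes the fields are available on the $(\phi, s)$ physical grid with $s$ sorted in increasing order and uses a simple trapezoidal quadrature; it illustrates the definition of $e_{\text{rel}}$, not the diagnostic actually implemented in `pizza`.

```python
import numpy as np

def annulus_average(f, s, phi):
    """<f> = int int f(s, phi) s ds dphi on an (n_phi, n_r) grid (trapezoidal rule)."""
    return np.trapz(np.trapz(f * s, s, axis=1), phi)

def relative_error(f, f_ref, s, phi):
    """e_rel(f) as defined above, with f_ref the reference solution."""
    return np.sqrt(annulus_average((f - f_ref) ** 2, s, phi)
                   / annulus_average(f_ref ** 2, s, phi))
```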
This time scheme validation has been carried out with fixed time step sizes on a physical test case that is close to the onset of convection. To examine the efficiency of the different time schemes in modelling quasi-geostrophic turbulent convection, we also perform a stability analysis on a more turbulent setup. Indeed, a precision of a fraction of a percent is usually sufficient when considering parameter studies of turbulent rotating convection. Hence, the determination of the largest time step size $\delta t$ is of practical interest to assess the efficiency of a given time scheme. To do so, we consider a problem with $E=10^{-7}$, $Ra=2\times
10^{11}$ and $Pr=1$, which is approximately 60 times supercritical. We first time-advance the solution until the nonlinear saturation has been reached using a CNAB2 time scheme. We then use the final state of this computation as the starting conditions of several numerical simulations that use different time schemes. Those simulations are computed over $3\times 10^{-4}$ viscous time, which roughly corresponds to two turnover times. Since the advection terms are treated explicitly, the maximum eligible time step size must satisfy the following Courant criterion $$\delta t \leq \alpha \min\left[ \left(\max_{s,\phi}
\dfrac{|u_s|}{\delta
s}\right)^{-1}, \left(\max_{s,\phi}\dfrac{|u_\phi|}{s\,\delta\phi}\right)^{-1}
\right],
\label{eq:cfl}$$ where $\delta s$ corresponds to the local spacing of the Gauss-Lobatto grid and $\delta \phi= 2\pi/(3 N_m)$ to the constant spacing in the azimuthal direction. In the above equation, $\alpha$ corresponds to the Courant-Friedrichs-Lewy number (hereafter CFL). To determine the CFL number of each time scheme, we compute a series of simulations with different values of $\alpha$ and let the code run with the maximum allowed $\delta t$ that fulfills Eq. (\[eq:cfl\]). This implies that $\delta t$ will change at each iteration and hence that the matrices will be rebuilt at each time step. Since LU factorisation is very demanding when using Chebyshev collocation ($\mathcal{O}(N_r^3)$ operations), we restrict the stability analysis to the sparse Chebyshev integration method with a Galerkin approach to enforce the boundary conditions. We use the time evolution of the total enstrophy $\langle
\omega^2 \rangle$ as a diagnostic to estimate the maximum CFL number $\alpha$. Because of the clustering of the Gauss-Lobatto grid points, the time step size limitation usually occurs in the vicinity of the boundaries. Since $\langle
\omega^2 \rangle$ reaches its maximum value in the viscous boundary layers, any violation of Eq. (\[eq:cfl\]) yields spurious spikes in the time evolution of the total enstrophy, well before the code actually crashes. For comparison, we define a reference solution that has been run with an SBDF4 time scheme with the smallest value of $\alpha = 0.05$.
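The advective time step limit of Eq. (\[eq:cfl\]) is straightforward to evaluate. The sketch below is an illustrative Python version; in particular, the definition of the local Gauss-Lobatto spacing attached to each node (the minimum of the two adjacent intervals) is an assumption made for this example.

```python
import numpy as np

def courant_dt(u_s, u_phi, s, n_phi, alpha):
    """Largest dt fulfilling Eq. (eq:cfl); u_s, u_phi are (n_phi, n_r) arrays."""
    dphi = 2.0 * np.pi / n_phi
    ds = np.abs(np.diff(s))                                   # Gauss-Lobatto intervals
    ds = np.concatenate((ds[:1], np.minimum(ds[:-1], ds[1:]), ds[-1:]))
    inv_dt_s = np.max(np.abs(u_s) / ds)                       # radial advection limit
    inv_dt_phi = np.max(np.abs(u_phi) / (s * dphi))           # azimuthal advection limit
    return alpha * min(1.0 / inv_dt_s, 1.0 / inv_dt_phi)
```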
Figure \[fig:pizza\_crash\]a shows the time average and the standard deviation of $\langle\omega^2\rangle$ as a function of $\alpha$ for the time schemes given in Table \[tab:timeschemes\]. The curves consist of two parts: a horizontal part where the time-averaged total enstrophy remains in close agreement with the reference case, and another featuring a rapid increase of both the time average and the standard deviation of $\langle \omega^2 \rangle$. We hence define the largest acceptable $\alpha$ for a given time scheme as the value above which the time-averaged total enstrophy becomes more than 0.3% larger than the reference value. The rightmost column of Table \[tab:timeschemes\] documents the obtained values. All multi-step schemes exhibit comparable CFL numbers with only a weak dependence on the theoretical order of the scheme. This is in agreement with the study by [@Carpenter05] who report comparable time step limitations for several SBDF schemes when the problem becomes numerically stiff. In contrast, the SDIRK schemes allow significantly larger CFL numbers with third-order schemes being more stable than the second-order ones. We quantify the efficiency of a time scheme by the ratio $$\sigma = \dfrac{\alpha}{\text{cost}}\,,
\label{eq:efficiency}$$ where the cost corresponds to the average wall time of one iteration without LU factorisation (see the second-to-last column in Table \[tab:timeschemes\]). Figure \[fig:pizza\_crash\]b shows a comparison of the relative efficiency of the time schemes compared to CNAB2. Although the CFL numbers are larger for the SDIRK schemes, they actually have a similar efficiency to multistep schemes due to their higher numerical cost per iteration. CNAB2 and ARS222 are found to be the most efficient second-order schemes, while BPR353 and SBDF3 are the best third-order schemes. The CFL numbers derived here are however only indicative since the stability of the schemes is expected to depend on the stiffness of the physical problem [e.g. @Ascher97; @Carpenter05]. It is yet unclear whether the SDIRK schemes considered here will be able to compete with the multistep methods in the limit of turbulent quasi-geostrophic convection. Addressing this question would necessitate a systematic survey of the limits of stability of the time schemes over a broad range of Reynolds and Rossby numbers.
Parallelisation strategy {#sec:mpi}
========================
![Domain decompositions used in `pizza`. The left panel corresponds to the `MPI` configuration where the radial levels are distributed among ranks and all $m$’s are in processor, while the right panel corresponds to the transposed configuration where the azimuthal wavenumbers are distributed and all radial levels are in processor. The parallel transposition between those two representations is handled by `mpi_alltoallv` collective communications.[]{data-label="fig:mpi_topo"}](mpi_topo){width="8.4cm"}
The implementation in `pizza` of the algorithms presented above has been designed to run efficiently on massively-parallel architectures. We rely on a message-passing communication framework based on the `MPI` (Message Passing Interface) standard. Several approaches have been considered to efficiently parallelise spectral transforms between physical and spectral space [e.g. @Foster97]. Here we decide to resort to a transpose-based approach, such that all the spectral transforms are applied to data that are local to each processor. Whenever needed, global transpositions of the data arrays are used to ensure that the dimension that needs to be transformed becomes local.
In `pizza` the data is distributed in two different configurations. In the first one, the radial levels are distributed among `MPI` ranks while all azimuthal wavenumbers are local to each processor. This allows the computation of the 1D Fourier transforms (Eq. \[eq:fft\]), the nonlinear terms in the physical space and the backward transforms (Eq. \[eq:ifft\]). At this stage the data are rearranged in a second `MPI` configuration such that the wavenumbers $m$ are distributed, while all radial levels are now in processor. Since each processor can possibly have a different amount of data to be sent to other processors, this parallel transposition is handled by the `MPI` variant routine `mpi_alltoallv` that offers dedicated arguments to specify the amount of data to be sent and received from each partner. This configuration is used to time-advance the solution either via Chebyshev collocation (Eqs. \[eq:psiomcoll\]-\[eq:tempcoll\]) or via the Chebyshev integration method (Eqs. \[eq:psi\_int\]-\[eq:temp\_int\]). This implies solving linear problems and possibly DCTs (Eq. \[eq:cheb\]) to transform the data from Chebyshev to radial space. Figure \[fig:mpi\_topo\] summarises the data distribution used in `pizza`.
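A minimal sketch of such a parallel transpose is given below, written with `mpi4py` purely for illustration (`pizza` itself is written in Fortran 2003 and calls `mpi_alltoallv` directly). The helper names, the balanced block distribution and the packing order are assumptions made for this example; only the use of a variable-count all-to-all follows the text.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

def balanced_counts(n):
    """Distribute n items over the nprocs ranks as evenly as possible."""
    counts = np.full(nprocs, n // nprocs, dtype=int)
    counts[: n % nprocs] += 1
    return counts

def transpose_r_to_m(data_r, n_r, n_m):
    """(n_r_loc, n_m) block with local radii -> (n_r, n_m_loc) block with local m's."""
    data_r = np.ascontiguousarray(data_r, dtype=np.complex128)
    r_counts, m_counts = balanced_counts(n_r), balanced_counts(n_m)
    n_m_loc = m_counts[rank]
    sendcounts = r_counts[rank] * m_counts            # values sent to each partner
    recvcounts = r_counts * n_m_loc                   # values received from each partner
    sdispls = np.insert(np.cumsum(sendcounts), 0, 0)[:-1]
    rdispls = np.insert(np.cumsum(recvcounts), 0, 0)[:-1]
    m_bounds = np.insert(np.cumsum(m_counts), 0, 0)
    sendbuf = np.concatenate([data_r[:, m_bounds[p]:m_bounds[p + 1]].ravel()
                              for p in range(nprocs)])
    recvbuf = np.empty(n_r * n_m_loc, dtype=np.complex128)
    comm.Alltoallv([sendbuf, sendcounts, sdispls, MPI.C_DOUBLE_COMPLEX],
                   [recvbuf, recvcounts, rdispls, MPI.C_DOUBLE_COMPLEX])
    out = np.empty((n_r, n_m_loc), dtype=np.complex128)
    r_bounds = np.insert(np.cumsum(r_counts), 0, 0)
    for p in range(nprocs):                           # unpack blocks, ordered by sender
        block = recvbuf[rdispls[p]:rdispls[p] + recvcounts[p]]
        out[r_bounds[p]:r_bounds[p + 1], :] = block.reshape(r_counts[p], n_m_loc)
    return out
```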
In the following, we examine the scalability performance of `pizza` using the `occigen` cluster[^5]. This cluster consists of more than 2000 computational nodes, each node being configured with two 12-core Intel E5-2690V3 processors with a clock frequency of 2.6 GHz. To build the executable, we make use of the Intel compiler version 17.0, Intel `MPI` version 5.1.3, Intel `MKL` version 17.0 for the linear solves and the matrix-vector products and `FFTW` version 3.3.5 for Fourier and Chebyshev transforms. We first analyse the strong scaling performance of the code by running sequences of numerical simulations with several fixed problem sizes and an increasing number of `MPI` ranks. The left panels in Figure \[fig:mpi\_perf\] show the wall time per iteration as a function of the number of cores for several problem sizes for both Chebyshev collocation and Chebyshev integration methods. The resolutions $(N_r,N_m)$ range from $(97,96)$ to $(12289,12288)$. Because of the dense complex-type matrices of size $(2N_r\times 2N_r)$ involved in the time advance of the coupled vorticity-streamfunction equation (\[eq:psiomcoll\]), we cannot use the collocation method for the largest problem sizes since it already requires more than 1 GB per rank when $N_r=1537$ and $N_m=1536$ with 128 `MPI` ranks. For the spatial resolutions that are sufficiently small to be computed on one single node, we observe an improved performance when the code is running on one single processor (i.e. up to 12 cores) with the Chebyshev collocation. This is not observed in the sparse cases and hence might be attributed to an internal speed-up of the dense matrix solver of the Intel `MKL` library. Apart from this performance shift, both methods show a scalability performance that improves with the problem size. While the efficiency of the strong scalings is quickly degraded for $N_{\text{ranks}} >
N_m/8$ for small problem sizes, `pizza` shows a very good scalability up to $N_{\text{ranks}} = N_m/2$ for the largest problem sizes. The scalability performance of the collocation method is usually better than that of the Chebyshev integration method for a given problem size. This has to do with the larger amount of computational work spent in solving the dense matrices, which comparatively reduces the fraction of the wall time that corresponds to the `MPI` global transposes.
To complement the strong scaling analyses, we also perform weak scaling tests. This consists of increasing the number of `MPI` ranks and the problem size accordingly, such that the amount of local data per rank stays constant. The spectral transforms implemented in `pizza` require $\mathcal{O}(N_r N_m \ln N_m)$ operations for the FFTs (Eq. \[eq:fft\]) and $\mathcal{O}(N_m N_r \ln N_r)$ for the DCTs (Eq. \[eq:cheb\]). The cost of solving the linear problems involved in the time advance of the equations (\[eq:psi\]-\[eq:temp\]) grows like $\mathcal{O}(N_m N_r^2)$ for the collocation method and only $\mathcal{O}(N_m
N_r)$ for the Chebyshev integration method. With the 1-D `MPI` domain decomposition discussed above, this implies that an increase of the spatial resolution while keeping a fixed amount of local data corresponds to an increase of the wall time that should scale with $\mathcal{O}(N_r)$ for the collocation method and with $\mathcal{O}(\ln N_r)$ for the Chebyshev integration method. The right panels of Fig. \[fig:mpi\_perf\] show the wall time per iteration normalised by those theoretical predictions as a function of the data volume per rank expressed by $N_r N_m/N_\text{ranks}$ for both Chebyshev methods. Using the simulations with a spatial resolution of $(N_r,N_m)=(1537,1536)$ we compute the following best fits between the normalised execution time and the local data volume for each radial discretisation scheme
$$\begin{aligned}
\dfrac{t_\text{run}^\text{coll.}}{N_r} & = 2.2\times 10^{-8}\left(\dfrac{N_r N_m}{N_\text{ranks}}\right)^{0.98}, \\
\dfrac{t_\text{run}^\text{int.}}{\ln N_r} & = 3.2\times 10^{-7}\left(\dfrac{N_r N_m}{N_\text{ranks}}\right)^{0.97},
\end{aligned}
\label{eq:best_fits}$$
where the run time is expressed in seconds. For both methods, the normalised wall time per iteration is nearly proportional to the data volume per rank, indicating a good agreement with the expected theoretical scalings. We can make use of those scalings to estimate the minimum theoretical execution time as a function of the problem size. Based on the results of the strong scaling analyses, we assume that `pizza` shows a good parallel efficiency up to $N_\text{ranks}=N_m/2$ when the collocation method is used and up to $N_\text{ranks}=N_m/4$ when a sparse Chebyshev formulation is employed. This yields
![Minimum wall time per iteration as a function of the problem size $N_r N_m$. The lines correspond to the linear fits derived from the weak scaling tests (see Fig. \[fig:mpi\_perf\]b and d) for both radial discretisation strategies assuming $N_\text{ranks}=N_m/2$ for the collocation method and $N_\text{ranks}=N_m/4$ for the Chebyshev integration method combined with a Galerkin enforcement of boundary conditions. The symbols correspond to the minimum wall times obtained in the strong scaling analyses (Fig. \[fig:mpi\_perf\]a and c).[]{data-label="fig:mpi_min_walltime"}](min_run_time){width="8.4cm"}
$$\begin{aligned}
\min(t_\text{run}^\text{coll.}) &= 4.4\times 10^{-8}\, N_r^{1.98}\,, \\
\min(t_\text{run}^\text{int.}) &= 1.2\times 10^{-6}\,N_r^{0.97} \ln N_r\,.
\end{aligned}
\label{eq:walltimes}$$
Figure \[fig:mpi\_min\_walltime\] shows a comparison between the actual minimum wall times for different spatial resolutions (see Fig. \[fig:mpi\_perf\]) and the above scalings. A good agreement is found for the sparse Chebyshev formulation and for the collocation method with $N_r N_m >
10^5$. Since the computational time of FFTs and DCTs still represents a significant fraction of one time step for small problem sizes, it is not surprising that the scaling given in Eq. (\[eq:walltimes\]) is only approached for sufficiently large problem sizes when the collocation method is employed.
Adopting a Chebyshev integration formulation for the radial scheme provides a significant speed-up over the collocation approach, with, for instance, a factor of 10 gain when $N_r N_m \simeq 10^7$. Furthermore, while the collocation method becomes intractable for problem sizes with $N_r N_m > 10^7$ because of its intrinsically large memory prerequisite, the sparse formulation can be employed for spatial resolutions larger than $10^4\times 10^4$. Global synchronisation and file lock contention can become an issue when reaching this range of problem sizes. In `pizza` this is remedied by collective calls to `MPI-IO` write operations to handle the outputting of checkpoints and snapshots.
Code validation and examples {#sec:results}
============================
Weakly-nonlinear convection
---------------------------
In the absence of a documented benchmark of spherical QG convection, we test the numerical implementation by first looking at the onset of convection. The underlying idea is to compare the results coming from a linear eigensolver with the results from `pizza`. The comparison of the different radial discretisation strategies is of particular interest to quantify the error introduced by the approximation of the Ekman pumping term involved in the sparse formulation (Eq. \[eq:approx\_pump\]). To determine the onset of spherical QG convection, we linearise the system of equations (\[eq:psi\]-\[eq:temp\]) and seek normal modes of the form $$f(s,\phi,t) = \Re \left( \sum_{m=0}^\infty f_m(s) e^{\mathrm{i} m
\phi+\lambda t}\right)\,,$$ where ${f_m}=(\psi_m,\vartheta_m)^T$ and $\lambda=\tau+\mathrm{i}\omega_d$, $\tau$ being the growth rate and $\omega_d$ the angular frequency. Since there is no coupling between the Fourier modes, we can seek the solution $f_m$ for one individual azimuthal wavenumber. This forms the following generalised eigenvalue problem $$\begin{aligned}
\lambda \mathcal{L}_\beta \psi_m & = \dfrac{Ra}{Pr}\dfrac{\mathrm{i}m
}{s_o} \vartheta_m -\dfrac{2}{E}\dfrac{\mathrm{i}m}{s}\beta\psi_m-
\mathcal{F}(E,\psi_m)+ \Delta (\mathcal{L}_\beta \psi_m)\,,
\\
\lambda \vartheta_m & = \Delta \vartheta_m-\dfrac{\mathrm{i}
m}{s}\dfrac{\mathrm{d}T_c}{\mathrm{d}s}\,\psi_m\,,
\end{aligned}
\label{eq:lin}$$ that is supplemented by the boundary conditions (\[eq:bcs\_psi\_intro\]). We solve this generalised eigenvalue problem using the `Linear Solver Builder` package (hereafter `LSB`) developed by [@Valdettaro07]. The linear operators that enter Eq. (\[eq:lin\]) are discretised on the Gauss-Lobatto grid using a Chebyshev collocation method in real space [e.g. @CHQZ]. The entire spectrum of complex eigenvalues $\lambda$ is first computed using the QZ algorithm [@Moler73]. One selected eigenvalue can then be used as a guess to accurately determine the closest eigenpair using the iterative Arnoldi-Chebyshev algorithm [e.g. @Saad92]. As indicated in Table \[tab:onset\_Gillet\], the linear solver has been tested and validated against published values of critical Rayleigh numbers for spherical QG convection with or without Ekman pumping [@Gillet07].
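For readers who do not have access to a dedicated eigensolver such as `LSB`, the generalised eigenvalue problem (\[eq:lin\]) can also be attacked directly with a dense QZ solver once the operators have been discretised, for instance as sketched below with `scipy` (an illustrative substitute, not the tool used in this work). Here `A_op` and `B_op` stand for the discretised right- and left-hand-side operators of Eq. (\[eq:lin\]) for a given $m$.

```python
import numpy as np
from scipy.linalg import eig

def leading_mode(A_op, B_op):
    """Eigenpair with the largest growth rate of A_op v = lambda B_op v (QZ algorithm)."""
    lam, vecs = eig(A_op, B_op)
    lam = np.where(np.isfinite(lam), lam, -np.inf)   # discard spurious infinite eigenvalues
    k = np.argmax(lam.real)                          # lambda = tau + i omega_d
    return lam[k], vecs[:, k]
```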
[lccc]{} & $Ra_c$ & $m$ & $\omega_d$\
\
`LSB` & $1.3851\times 10^7$ & $13$ & $-1.3028\times
10^4$\
[@Gillet07] & $1.39\times 10^7$ & $13$ & $-1.300\times 10^4$\
\
`LSB` & $1.5231\times 10^7$ & $14$ & $-1.2705\times 10^4$\
[@Gillet07] & $1.53\times 10^7$ & $14$ & $-1.268\times 10^4$\
\[tab:onset\_Gillet\]
In the following we focus on weakly nonlinear QG convection with $E=3\times
10^{-6}$ and $Pr=0.025$, and a radius ratio $r_i/r_o=0.35$, a physical setup that is quite similar to the one considered by [@Gillet07] for liquid gallium. Figure \[fig:eigenvalue\] shows the critical eigenmode (with $\tau \simeq 0$) computed with `LSB` for these parameters. The onset of convection takes the form of a thermal Rossby wave that drifts in the retrograde direction with a critical azimuthal wavenumber $m=12$, a drifting frequency $\omega_d=-9.42690\times 10^{3}$ and a critical Rayleigh number $Ra_c=9.55263\times 10^{6}$. The numerical convergence of this calculation has been assessed by computing the Chebyshev spectra of the different eigenfunctions as illustrated in Fig. \[fig:eigenvalue\]c.
To validate the numerical implementation, the growth rate and the drift frequency obtained with `pizza` are compared to the eigenvalues derived with `LSB`. This requires a finite growth rate $\tau$, hence we adopt in the following a marginally supercritical Rayleigh number $Ra=10^7$ and compute the most critical eigenmodes for this $Ra$ both in the absence and in the presence of Ekman pumping. The corresponding eigenmodes $(\psi,\vartheta)^T$ computed with `LSB` are then used as starting conditions in `pizza`. A meaningful comparison necessitates that the nonlinear calculation remains in the weakly nonlinear regime. We hence restrict the computation to a short time interval of $10^{-2}$ viscous time, which roughly corresponds to 15 periods of the most unstable drifting thermal Rossby wave. To ensure that the numerical error is dominated by the spatial discretisation rather than by the temporal one, we employ the BPR353 time scheme with a small time step size $\delta t=10^{-7}$ (see Fig. \[fig:error\_DeltaT\]). Figure \[fig:compLinDNS\] shows a comparison of the time evolution of the temperature fluctuation $\Re(\vartheta_{m=12})$ at mid depth using the linear eigenmode calculated with `LSB` and using the different radial discretisation schemes implemented in `pizza`. In the absence of Ekman pumping (left panels), the different radial schemes yield almost indiscernible time evolution curves. The zoomed-in inset reveals an agreement to six significant digits between the eigenmode and the weakly nonlinear calculations. When the Ekman pumping contribution is included (right panels), similar accuracy is recovered between the simulation computed with the collocation method and the eigenmode. The two nonlinear calculations that use the Chebyshev integration approach show a more pronounced deviation due to the approximated Ekman pumping term with $\epsilon=3\times 10^{-3}$.
[lcccccc]{} & & & \multicolumn{2}{c}{no Ekman pumping} & \multicolumn{2}{c}{Ekman pumping}\
$t$ scheme & $(N_r,N_c,N_m)$ & $\epsilon$ & $\tau$ & $\omega_d$ & $\tau$ & $\omega_d$\
\
- & $(192,192,1)$ & - &$6.149994\times 10^2$ & $-9.536952\times 10^{3}$ & $2.122883\times 10^2$ & $-9.436506\times 10^{3}$\
\
CNAB2 & (193,193,128) & - & $\underline{6.1}50091\times10^{2}$ & $\underline{-9.53695}1\times10^{3}$ & $\underline{2.12}3007\times10^{2}$ & $\underline{-9.436506}\times10^{3}$\
BPR353 & (193,193,128) & - & $\underline{6.14999}6\times10^{2}$ & $\underline{-9.53695}3\times10^{3}$ & $\underline{2.1228}92\times10^{2}$ & $\underline{-9.436506}\times10^{3}$\
SBDF3 & (193,193,128) & - & $\underline{6.1}50048\times10^{2}$ & $\underline{-9.53695}3\times10^{3}$ & $\underline{2.122}955\times10^{2}$ & $\underline{-9.436506}\times10^{3}$\
SBDF4 & (193,193,128) & - & $\underline{6.1}50092\times10^{2}$ & $\underline{-9.536952}\times10^{3}$ & $\underline{2.12}3010\times10^{2}$ & $\underline{-9.43650}7\times10^{3}$\
\
CNAB2 & (193,128,128) & $3\times 10^{-3}$ & $\underline{6.1}50015\times10^{2}$ & $\underline{-9.536952}\times10^{3}$ & $\underline{2.1}48132\times10^{2}$ & $\underline{-9.436}744\times10^{3}$\
CNAB2 & (768,512,128) & $10^{-4}$ & $\underline{6.1}50015\times10^{2}$ & $\underline{-9.536952}\times10^{3}$ & $\underline{2.12}3818\times10^{2}$ & $\underline{-9.4365}12\times10^{3}$\
BPR353\* & (193,128,128) & $3\times 10^{-3}$ & $\underline{6.14999}7\times10^{2}$ & $\underline{-9.53695}3\times10^{3}$ & $\underline{2.1}48114\times10^{2}$ & $\underline{-9.436}745\times10^{3}$\
SBDF3 & (193,128,128) & $3\times 10^{-3}$ & $\underline{6.14999}7\times10^{2}$ & $\underline{-9.53695}3\times10^{3}$ & $\underline{2.1}48114\times10^{2}$ & $\underline{-9.436}745\times10^{3}$\
\
BPR353\* & (193,128,128) & $3\times 10^{-3}$ & $\underline{6.14999}8\times10^{2}$ & $\underline{-9.53695}3\times10^{3}$ & $\underline{2.1}48113\times10^{2}$ & $\underline{-9.436}745\times10^{3}$\
BPR353\* & (769,512,128) & $10^{-4}$ & $\underline{6.14999}5\times10^{2}$ & $\underline{-9.53695}3\times10^{3}$ & $\underline{2.12}3799\times10^{2}$ & $\underline{-9.4365}13\times10^{3}$\
BPR353\* & (3073,2048,128) & $10^{-5}$ & $\underline{6.14999}6\times10^{2}$ & $\underline{-9.53695}3\times10^{3}$ & $\underline{2.122}983\times10^{2}$ & $\underline{-9.43650}7\times10^{3}$\
\[tab:growth\_rates\]
To determine the growth rate and the drift frequency in the nonlinear calculations, we fit the time evolution of $\Re(\vartheta_{m=12})$ at mid depth with the function $a_0 \cos(\omega_d t
+\phi_0) e^{\tau t}$ using least squares, the initial amplitude $a_0$ and phase shift $\phi_0$ being determined by the starting conditions. Table \[tab:growth\_rates\] shows the obtained eigenpairs for the different radial schemes tested with several time integrators and values of $\epsilon$. Overall, the best agreement with the eigenvalues is obtained when the third-order BPR353 time scheme is employed. The superiority of the SDIRK schemes likely has to do with the lack of self-starting capabilities of multistep schemes, which hence require a lower-order starting time stepper to complete the first iterations. This procedure introduces errors larger than the theoretical order of the scheme, which could account for the slightly larger inaccuracy of those schemes. The approximation of the Ekman pumping contribution when the Chebyshev integration method is used introduces an error that is more pronounced in the growth rate than in the drift frequency. This is expected since dissipation processes usually have a direct impact on the growth rate of an instability. A decrease of $\epsilon$ goes along with a proportional drop of the relative error on $\tau$. This is however accompanied by an increase of the number of radial grid points required to maintain the spectral convergence of the Ekman pumping term (\[eq:approx\_pump\]).
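The fitting procedure itself is standard nonlinear least squares. A possible implementation with `scipy.optimize.curve_fit` is sketched below; this is an illustration only, since the exact routine used for Table \[tab:growth\_rates\] is not specified in the text beyond a least-squares fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_growth_rate(t, theta, a0, phi0, tau_guess, omega_guess):
    """Fit theta(t) = a0 cos(omega_d t + phi0) exp(tau t), with a0 and phi0 held fixed."""
    def model(t, tau, omega_d):
        return a0 * np.cos(omega_d * t + phi0) * np.exp(tau * t)
    (tau, omega_d), _ = curve_fit(model, t, theta, p0=(tau_guess, omega_guess))
    return tau, omega_d
```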
This comparison validates the implementation of all the linear terms that enter Eqs. (\[eq:psi\]-\[eq:temp\]) for the different radial discretisation schemes. The approximation of the Ekman pumping contribution yields relative errors that grow with $\epsilon$. The collocation method should hence be privileged for small problem sizes. Because of its faster execution time, the sparse Chebyshev formulation is the recommended approach when dealing with larger problem sizes. A large number of radial grid points indeed makes it possible to accommodate small values of $\epsilon < 10^{-3}$, for which the error associated with the approximate Ekman pumping term becomes negligible.
Nonlinear convection
--------------------
  ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  $r$ scheme        $(N_r,N_c,N_m)$     $\epsilon$   $\overline{\delta t}$   $\overline{E_K}\pm\sigma(E_K)$              $\overline{E_Z}\pm\sigma(E_Z)$              Core hours
  ----------------- ------------------- ------------ ----------------------- ------------------------------------------- ------------------------------------------- --------------------
  Collocation       $(641,641,1280)$    -            $2.623\times 10^{-8}$   $1.448\times 10^{8}\pm6.281\times 10^{6}$   $8.075\times 10^{7}\pm4.855\times 10^{6}$   $1.8\times 10^{4}$
  Integ.+Galerkin   $(1025,682,1280)$   $10^{-3}$    $2.052\times 10^{-8}$   $1.448\times 10^{8}\pm6.381\times 10^{6}$   $8.038\times 10^{7}\pm4.884\times 10^{6}$   $3.7\times 10^{3}$
  Integ.+tau        $(1025,682,1280)$   $10^{-3}$    $2.095\times 10^{-8}$   $1.443\times 10^{8}\pm6.439\times 10^{6}$   $8.026\times 10^{7}\pm4.661\times 10^{6}$   $3.6\times 10^{3}$
  Integ.+Galerkin   $(1025,682,1280)$   $10^{-4}$    $2.100\times 10^{-8}$   $1.452\times 10^{8}\pm5.754\times 10^{6}$   $8.012\times 10^{7}\pm4.211\times 10^{6}$   $3.8\times 10^{3}$
  Integ.+tau        $(1025,682,1280)$   $10^{-4}$    $2.144\times 10^{-8}$   $1.450\times 10^{8}\pm6.663\times 10^{6}$   $8.070\times 10^{7}\pm5.231\times 10^{6}$   $3.6\times 10^{3}$
  ------------------------------------------------------------------------------------------------------------------------------------------------------------------------

\[tab:E1e7\]
To pursue the code validation procedure, we now examine another physical setup, which is no longer in the weakly nonlinear regime, with $E=10^{-7}$, $Pr=1$ and $Ra=2\times 10^{11}$, roughly 60 times the critical Rayleigh number. This corresponds to the setup that has been previously used to determine the Courant number of the different time schemes in § \[sec:tschemes\]. To compare the different radial discretisation schemes, we first compute a simulation until a statistically steady state has been reached. We then use this physical solution as a starting condition of several numerical simulations that use different radial discretisation schemes and two values of $\epsilon$ with the BPR353 time scheme. Since this is now a turbulent convection model, the time step size will change over time to satisfy the Courant condition (Eq. \[eq:cfl\]). To avoid the costly reconstruction of the matrices at each iteration, we adopt a time step size that is three quarters of the maximum eligible time step. The simulations are then computed over a timespan of roughly $0.03$ viscous time, which corresponds to more than $150$ turnover times. Figure \[fig:compE1e7\]a shows the time evolution of the total and the zonal kinetic energy defined by $$E_K = \dfrac{1}{2}\left\langle u_s^2 + u_\phi^2 \right\rangle =
E_Z+2\pi\sum_{m=1}^{N_m} \int_{s_i}^{s_o}
\left(|u_s^m|^2+|u_\phi^m|^2\right) s\, \mathrm{d}s\,,$$ where the zonal contribution is expressed by $$E_Z = \dfrac{1}{2}\left\langle \overline{u_\phi}^2 \right\rangle =
\pi \int_{s_i}^{s_o} \overline{u_\phi}^2 s\,\mathrm{d} s\,.$$ The three numerical simulations feature a very similar time evolution with roughly 50% of the energy content in the axisymmetric azimuthal motions. They show a quasi-periodic behaviour with quick energy increases followed by slower relaxations. This can be attributed to the time evolution of the zonal jets that slowly drift towards the inner boundary where they become unstable [@Rotvig07]. Panels b and c of Fig. \[fig:compE1e7\] show the time-averaged radial profiles and $m$ spectra of the kinetic energy, respectively. A good agreement is found between the three radial discretisation schemes. Typical of 2-D QG turbulence, an inverse energy cascade with an $m^{-5/3}$ slope takes place up to a typical lengthscale where the convective features are sheared apart by the zonal jets [here $m\simeq 20$, see @Rhines75]. At smaller lengthscales the spectra transition to an $m^{-5}$ slope frequently observed in Rossby wave turbulence [e.g. @Rhines75; @Schaeffer05a].
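As a small illustration of these diagnostics, $E_K$ and $E_Z$ can be evaluated from the Fourier coefficients of the velocity components on the radial grid as sketched below; the trapezoidal quadrature and the array layout are assumptions made for this example, not the quadrature actually used in `pizza`.

```python
import numpy as np

def kinetic_energies(us_m, uphi_m, s):
    """E_K and E_Z from Fourier coefficients of shape (N_m + 1, N_r); index 0 is m = 0."""
    e_z = np.pi * np.trapz(np.abs(uphi_m[0]) ** 2 * s, s)               # zonal energy
    spec = np.abs(us_m[1:]) ** 2 + np.abs(uphi_m[1:]) ** 2              # m >= 1 contributions
    e_k = e_z + 2.0 * np.pi * np.trapz(spec.sum(axis=0) * s, s)
    return e_k, e_z
```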
For a better quantification of the difference between the three radial schemes, Tab. \[tab:E1e7\] contains the time averages and the standard deviations of $E_K$ and $E_Z$ over the entire run time. Since dealiasing is also required in the radial direction when using a sparse Chebyshev formulation, the two cases that have been computed with the Chebyshev integration method have a larger number of radial grid points to ensure a number of Chebyshev modes comparable to the one used with the collocation method. Because of the change of the grid spacing (Eq. \[eq:cfl\]), this implies a decrease in the average time-step size. The time averages and standard deviations obtained for the three schemes and the two values of $\epsilon$ are found to agree to within less than 1%. Given the unsteady nature of the solution, the differences in time step size and the limited time span considered for time averaging, it is not clear whether this difference can solely be attributed to the parametrisation of the Ekman pumping contribution. Notwithstanding this possible source of error, this comparison demonstrates that turbulent convection can be accurately modelled by an efficient sparse Chebyshev formulation with an acceptable error introduced by the Ekman pumping term approximation.
Turbulent QG convection
-----------------------
To check the ability of the spectral radial discretisation schemes to model turbulent QG convection, we consider a third numerical configuration with $E=10^{-9}$, $Ra=1.5\times 10^{14}$ and $Pr=1$. This corresponds to strongly supercritical convection ($Ra > 100\,Ra_c$) at a very low Ekman number, a prerequisite to ensure that both a large Reynolds number and a small Rossby number are reached at the same time. With the dimensionless units adopted in this study, $$Re= \left[\dfrac{2 E_K}{\pi(s_o^2-s_i^2)}\right]^{1/2},\ Ro=Re\,E\,.$$ For these control parameters, convection develops in the so-called turbulent QG regime [e.g. @Julien12a] with $Re\simeq 10^5$ and $Ro\simeq 10^{-4}$. Numerical models that operate at these extreme parameters demand a large number of grid points (here $(N_r,N_m)=(6145,6144)$), which becomes intractable for the Chebyshev collocation method. We hence only compute this model using the Chebyshev integration method combined with a Galerkin approach to enforce the boundary conditions. For this physical configuration, a time integration of roughly ten convective overturns requires about $10^5$ core hours.
Figure \[fig:E1e9\] shows a snapshot of the vorticity with two zoomed-in insets that emphasise the regions close to the boundaries. The mixing of the potential vorticity $(\omega +2/E)/h$ by turbulent convective motions generates multiple zonal jets with alternated directions [e.g. @Dritschel08]. This gives rise to a spatial separation of the vortical structures with alternated concentric rings of cyclonic ($\omega > 0$) and anticyclonic ($\omega < 0$) vorticity. The typical size of these zonal jets is usually well-predicted by the Rhines scale defined by $(Ro/|\beta|)^{1/2}$ [e.g. @Rhines75; @Gastine14; @Verhoeven14; @Heimpel16; @Guervilly17]. This lengthscale marks the separation between Rossby waves at larger scales and turbulent motions at smaller scales. Because of the increase of $|\beta|$ with the cylindrical radius $s$ in spherical geometry, the zonal jets become thinner outward. Close to the outer boundary, the dynamics becomes dominated by tilted vortices elongated in the azimuthal direction, a typical pattern of the propagation of thermal Rossby waves. Because of the steepening of $\beta$ at large radii, the vortex stretching term becomes the dominant source of vorticity there, such that the propagation of thermal Rossby waves takes over from the nonlinear advective processes. This outer region is hence expected to shrink with an increase of the convective forcing [e.g. @Guervilly17]. At the interface between jets, the vortical structures are sheared apart into elongated filaments, indicating a direct cascade of enstrophy towards smaller scales.
Conclusion {#sec:conclusion}
==========
In this study, we have presented a new open-source code, nicknamed `pizza`, dedicated to the study of rapidly-rotating convection under the 2-D spherical quasi-geostrophic approximation [e.g. @Busse86; @Aubert03; @Gillet06]. The code is available at <https://github.com/magic-sph/pizza> as free software that can be used, modified, and redistributed under the terms of the GNU GPL v3 license. The spatial discretisation relies on a decomposition in Fourier series in the azimuthal direction and in Chebyshev polynomials in the radial direction. For the latter, both a classical Chebyshev collocation method [e.g. @Glatzmaier84; @Boyd01] and a sparse integration method [e.g. @Stellmach08; @Muite10; @Marti16] are supported. We adopt a pseudo-spectral approach where the nonlinear advective terms are treated in the physical space and transformed to the spectral space using fast discrete Fourier and Chebyshev transforms. `pizza` supports several implicit-explicit time schemes encompassing multi-step schemes as well as diagonally-implicit Runge-Kutta schemes [e.g. @Ascher97] that have been validated by convergence tests. The parallelisation strategy relies on a message-passing communication framework based on the `MPI` standard. The code has been tested and validated against the onset of quasi-geostrophic convection.
The comparison of the two radial discretisation schemes has revealed the superiority of the Chebyshev integration method. In contrast to the collocation technique that requires the storage and the inversion of dense matrices, the integration method indeed only involves sparse operators. As a consequence, the memory requirements only grow with $\mathcal{O}(N_r)$ and the operation count with $\mathcal{O}(N_r\ln N_r)$, as compared to $\mathcal{O}(N_r^2)$ when using a collocation approach. Multi-step and diagonally-implicit Runge-Kutta schemes have shown comparable efficiency, defined in this study by the ratio of the maximum CFL number over the numerical cost of one iteration. Additional parameter studies with various Reynolds and Rossby numbers are however required to assess the differences between both families of time integrators. We have found a good parallel scaling up to roughly four radial grid points per `MPI` task. This implies that large spatial resolutions up to $\mathcal{O}(10^4\times 10^4)$ grid points can be reached with a reasonable wall time if one uses several thousand `MPI` tasks. Such large grid resolutions allow the study of turbulent quasi-geostrophic convection at low Ekman numbers. Preliminary results for a numerical model with $E=10^{-9}$, $Ra=1.5\times 10^{14}$ and $Pr=1$ show the formation of multiple zonal jets, when the Reynolds number is large, $\mathcal{O}(10^5)$, and the Rossby number is small, $\mathcal{O}(10^{-4})$. This specific combination of $Re
\gg 1$ and $Ro \ll 1$ is a prerequisite to study the turbulent quasi-geostrophic convection regime [@Julien12a], an important milestone to better understand the internal dynamics of planetary interiors.
Future developments of the code include the implementation of the time evolution of chemical composition to study double-diffusive convection under the spherical QG framework. In the longer term, the QG flow and temperature computed in the equatorial plane of the spherical shell will be coupled to an induction equation computed in the entire shell using a classical 3-D pseudo-spectral discretisation [e.g. @Schaeffer06].
I wish to thank Alexandre Fournier for his comments, which helped improve the manuscript. Stephan Stellmach and Benjamin Miquel are acknowledged for their fruitful advice about Galerkin bases and Philippe Marti for his help with the symbolic `python` package used to assemble the sparse Chebyshev matrices. I also wish to thank Michel Rieutord for sharing the `Linear Solver Builder` eigensolver. Numerical computations have been carried out on the S-CAPAD platform at IPGP and on the `occigen` cluster at GENCI-CINES (Grant A0020410095). All the figures have been generated using `matplotlib` [@Hunter07]. All the post-processing tools that have been used to construct the different figures are part of the source code of `pizza` and are hence freely accessible. This is IPGP contribution 4015.
, U. M., [Ruuth]{}, S. J., & [Wetton]{}, B. T. R., 1995. , [*SIAM Journal on Numerical Analysis*]{}, [**32**]{}(3), 797–823.
, U. M., [Ruuth]{}, S. J., & [Spiteri]{}, R. J., 1997. , [*Applied Numerical Mathematics*]{}, [**25**]{}, 151–167.
, J., [Gillet]{}, N., & [Cardin]{}, P., 2003. , [*Geochemistry, Geophysics, Geosystems*]{}, [**4**]{}, 1052.
, J. M., [Calkins]{}, M. A., [Cheng]{}, J. S., [Julien]{}, K., [King]{}, E. M., [Nieves]{}, D., [Soderlund]{}, K. M., & [Stellmach]{}, S., 2015. , [ *Physics of the Earth and Planetary Interiors*]{}, [**246**]{}, 52–71.
, O. P., 2018. , [ *Proc. R. Soc. A*]{}, [**474**]{}(2213), 20180119.
, S., [Pareschi]{}, L., & [Russo]{}, G., 2013. , [*SIAM Journal on Scientific Computing*]{}, [**35**]{}, A22–A51.
, J. P., 2001. , Second Revised Edition. Dover books on mathematics (Mineola, NY: Dover Publications), ISBN 0486411834.
, N. H. & [Hart]{}, J. E., 1993. , [*Geophysical & Astrophysical Fluid Dynamics*]{}, [**68**]{}, 85–114.
, F. H., 1970. , [*Journal of Fluid Mechanics*]{}, [**44**]{}, 441–460.
, F. H. & [Carrigan]{}, C. R., 1974. , [*Journal of Fluid Mechanics*]{}, [**62**]{}, 579–592.
, F. H. & [Or]{}, A. C., 1986. , [*Journal of Fluid Mechanics*]{}, [**166**]{}, 173–187.
, M. A., [Aurnou]{}, J. M., [Eldredge]{}, J. D., & [Julien]{}, K., 2012. , [*Earth and Planetary Science Letters*]{}, [**359**]{}, 55–60.
, M. A., [Julien]{}, K., & [Marti]{}, P., 2013. , [*Journal of Fluid Mechanics*]{}, [**732**]{}, 214–244.
, C., [Hussaini]{}, M. Y., [Quarteroni]{}, A. M., & [Zang]{}, T. A., 2006. , Springer, Berlin, Heidelberg.
, P. & [Olson]{}, P., 1994. , [*Physics of the Earth and Planetary Interiors*]{}, [**82**]{}, 235–259.
, M. H., [Kennedy]{}, C. A., [Bijl]{}, H., [Viken]{}, S. A., & [Vatsa]{}, V. N., 2005. , [*Journal of Scientific Computing*]{}, [**25**]{}, 157–194.
, J. S., [Stellmach]{}, S., [Ribeiro]{}, A., [Grannan]{}, A., [King]{}, E. M., & [Aurnou]{}, J. M., 2015. , [*Geophysical Journal International*]{}, [**201**]{}, 1–17.
, C. W., 1957. , [*Mathematical Proceedings of the Cambridge Philosophical Society*]{}, [**53**]{}(1), 134–149.
, E., [Hagstrom]{}, T., & [Torres]{}, D., 1996. , [*Mathematics of Computation of the American Mathematical Society*]{}, [**65**]{}(214), 611–635.
, E., [Soward]{}, A. M., [Jones]{}, C. A., [Jault]{}, D., & [Cardin]{}, P., 2004. , [ *Journal of Fluid Mechanics*]{}, [**501**]{}, 43–70.
, D. G. & [McIntyre]{}, M. E., 2008. , [*Journal of the Atmospheric Sciences*]{}, [**65**]{}, 855–874.
, C., [Beyer]{}, W., [Bonhage]{}, A., [Hollerbach]{}, R., & [Beltrame]{}, P., 2003. , [*Advances in Space Research*]{}, [**32**]{}, 171–180.
, I. T. & [Worley]{}, P. H., 1997. , [*SIAM Journal on Scientific Computing*]{}, [**18**]{}, 806–837.
, L. & [Parker]{}, I. A., 1968. , Oxford mathematical handbooks, Oxford University Press, London.
, M. & [Johnson]{}, S. G., 2005. , [*Proceedings of the IEEE*]{}, [**93**]{}(2), 216–231.
, F., [Net]{}, M., [Garc[í]{}a-Archilla]{}, B., & [S[á]{}nchez]{}, J., 2010. , [*Journal of Computational Physics*]{}, [ **229**]{}, 7997–8010.
, T., [Heimpel]{}, M., & [Wicht]{}, J., 2014. , [*Physics of the Earth and Planetary Interiors*]{}, [**232**]{}, 36–50.
, T., [Wicht]{}, J., & [Aubert]{}, J., 2016. , [ *Journal of Fluid Mechanics*]{}, [**808**]{}, 690–732.
, N. & [Jones]{}, C. A., 2006. , [*Journal of Fluid Mechanics*]{}, [**554**]{}, 343–369.
, N., [Brito]{}, D., [Jault]{}, D., & [Nataf]{}, H. C., 2007. , [*Journal of Fluid Mechanics*]{}, [**580**]{}, 83.
, P. A., 1977. , [*GAFD*]{}, [**8**]{}, 93–135.
, G. A., 1984. , [*Journal of Computational Physics*]{}, [**55**]{}, 461–484.
, D. & [Orszag]{}, S. A., 1977. , CBMS-NSF Regional Conference Series in Applied Mathematics, Society for Industrial and Applied Mathematics, ISBN 9780898710236.
, L., 1991. , [ *SIAM Journal on Numerical Analysis*]{}, [**28**]{}, 1071–1080.
, I. & [Julien]{}, K., 2011. , [*Journal of Computational Physics*]{}, [**230**]{}, 3630–3650.
, C. & [Cardin]{}, P., 2016. , [*Journal of Fluid Mechanics*]{}, [**808**]{}, 61–89.
, C. & [Cardin]{}, P., 2017. , [*Geophysical Journal International*]{}, [**211**]{}, 455–471.
, J. E., [Glatzmaier]{}, G. A., & [Toomre]{}, J., 1986. , [*Journal of Fluid Mechanics*]{}, [**173**]{}, 519–544.
, M., [Gastine]{}, T., & [Wicht]{}, J., 2016. , [*Nature Geoscience*]{}, [**9**]{}, 19–23.
, M., 1997. , [*Acta mechanica*]{}, [**122**]{}, 231–242.
, R., 2000. , [*International Journal for Numerical Methods in Fluids*]{}, [**32**]{}, 773–797.
, S. & [Shishkina]{}, O., 2015. , [*Journal of Fluid Mechanics*]{}, [**762**]{}, 232–255.
, J. D., 2007. , [*Computing In Science & Engineering*]{}, [**9**]{}(3), 90–95.
, K. & [Watson]{}, M., 2009. , [*Journal of Computational Physics*]{}, [**228**]{}, 1480–1503.
, K., [Knobloch]{}, E., [Rubio]{}, A. M., & [Vasil]{}, G. M., 2012. , [*Physical Review Letters*]{}, [**109**]{}(25), 254503.
, E. M., [Stellmach]{}, S., & [Buffett]{}, B., 2013. , [*Journal of Fluid Mechanics*]{}, [**717**]{}, 449–471.
, H. & [Zou]{}, J., 2006. , [ *Journal of Computational and Applied Mathematics*]{}, [**190**]{}, 74–98.
, P., [Calkins]{}, M. A., & [Julien]{}, K., 2016. , [*Geochemistry, Geophysics, Geosystems*]{}, [**17**]{}, 3031–3053.
, H., [Heien]{}, E., [Aubert]{}, J., [Aurnou]{}, J. M., [Avery]{}, M., [Brown]{}, B., [Buffett]{}, B. A., [Busse]{}, F., [Christensen]{}, U. R., [Davies]{}, C. J., [Featherstone]{}, N., [Gastine]{}, T., [Glatzmaier]{}, G. A., [Gubbins]{}, D., [Guermond]{}, J.-L., [Hayashi]{}, Y.-Y., [Hollerbach]{}, R., [Hwang]{}, L. J., [Jackson]{}, A., [Jones]{}, C. A., [Jiang]{}, W., [Kellogg]{}, L. H., [Kuang]{}, W., [Landeau]{}, M., [Marti]{}, P., [Olson]{}, P., [Ribeiro]{}, A., [Sasaki]{}, Y., [Schaeffer]{}, N., [Simitev]{}, R. D., [Sheyko]{}, A., [Silva]{}, L., [Stanley]{}, S., [Takahashi]{}, F., [Takehiro]{}, S.-i., [Wicht]{}, J., & [Willis]{}, A. P., 2016. , [*Geochemistry, Geophysics, Geosystems*]{}, [**17**]{}, 1586–1607.
, G. B., [Murray]{}, B. T., & [Boisvert]{}, R. F., 1990. , [*Journal of Computational Physics*]{}, [**91**]{}, 228–239.
, C. B. & [Stewart]{}, G. W., 1973. , [*SIAM Journal on Numerical Analysis*]{}, [**10**]{}(2), 241–256.
, C. & [Dumberry]{}, M., 2018. , [*Geophysical Journal International*]{}, [**213**]{}, 434–446.
, V. & [Dormy]{}, E., 2004. , [*Physics of Fluids*]{}, [**16**]{}, 1603–1609.
, B. K., 2010. , [*Journal of Computational and Applied Mathematics*]{}, [**234**]{}, 317–342.
, S. & [Townsend]{}, A., 2013. , [*SIAM Review*]{}, [**55**]{}(3), 462–489.
, S. A., 1971. , [*Journal of Atmospheric Sciences*]{}, [**28**]{}, 1074–1074.
, R., 2002. , Applied Mathematical Sciences 148, Springer New York, ISBN 9780387952215.
, T. N. & A., K., 1990. , [*SIAM Journal on Numerical Analysis*]{}, [**27**]{}, 823–830.
, D., [Mercader]{}, I., & [Net]{}, M., 2000. , [**]{}, [**61**]{}, 1507–1517.
, E. & [Busse]{}, F. H., 2002. , [*Journal of Fluid Mechanics*]{}, [**464**]{}, 345–363.
, E., [Lebranchu]{}, Y., [Simitev]{}, R., & [Busse]{}, F. H., 2008. , [*Journal of Fluid Mechanics*]{}, [**602**]{}, 303–326.
, P. B., 1975. , [*Journal of Fluid Mechanics*]{}, [**69**]{}, 417–443.
, J., 2007. , [**]{}, [**76**]{}, 046306.
, Y., 1992. , Manchester University Press.
, J., [Net]{}, M., [Garc[í]{}a-Archilla]{}, B., & [Sim[ó]{}]{}, C., 2004. , [*Journal of Computational Physics*]{}, [**201**]{}, 13–33.
, N. & [Cardin]{}, P., 2005. , [*Physics of Fluids*]{}, [ **17**]{}(10), 104111–104111–12.
, N. & [Cardin]{}, P., 2005. , [*Nonlinear Processes in Geophysics*]{}, [**12**]{}, 947–953.
, N. & [Cardin]{}, P., 2006. , [*Earth and Planetary Science Letters*]{}, [**245**]{}, 595–604.
, N., [Jault]{}, D., [Nataf]{}, H.-C., & [Fournier]{}, A., 2017. , [ *Geophysical Journal International*]{}, [**211**]{}, 1–29.
, W. L. & [Lathrop]{}, D. P., 2005. , [*Physics of the Earth and Planetary Interiors*]{}, [**153**]{}, 136–149.
, S. & [Hansen]{}, U., 2008. , [*Geochemistry, Geophysics, Geosystems*]{}, [**9**]{}, Q05003.
, S., [Lischper]{}, M., [Julien]{}, K., [Vasil]{}, G., [Cheng]{}, J. S., [Ribeiro]{}, A., [King]{}, E. M., & [Aurnou]{}, J. M., 2014. , [*Physical Review Letters*]{}, [**113**]{}(25), 254501.
, I. & [Olson]{}, P., 2003. , [*Journal of Fluid Mechanics*]{}, [**492**]{}, 271–287.
, R. J., [Jones]{}, C. A., & [Hollerbach]{}, R., 2012. , [*Physics of Fluids*]{}, [ **24**]{}(6), 066604–066604–21.
, L., [Rieutord]{}, M., [Braconnier]{}, T., & [Fraysse]{}, V., 2007. , [*Journal of Computational and Applied Mathematics*]{}, [**205**]{}, 382–393.
, J. & [Stellmach]{}, S., 2014. , [**]{}, [**237**]{}, 143–158.
, P. E. J., [Eskilsson]{}, C., [Bolis]{}, A., [Chun]{}, S., [Kirby]{}, R. M., & [Sherwin]{}, S. J., 2011. , [*International Journal of Computational Fluid Dynamics*]{}, [**25**]{}, 107–125.
, D. & [Ruuth]{}, S. J., 2008. , [*Journal of Computational Mathematics*]{}, [**26**]{}(6), 838–855.
Direct solve of a bordered matrix {#sec:app1}
=================================
Suppose one wants to solve the following linear problem which involves a so-called bordered matrix $\mathcal{A}$ $$\mathcal{A} \psi = f,$$ where $\mathcal{A}$ comprises $p$ full top rows and a banded structure underneath. The matrix problem is sub-divided as follows $$\left(
\begin{array}{cc}
A & B\\ C & D
\end{array}\right) \left( \begin{array}{c} \psi_1 \\ \psi_2 \end{array}
\right) =
\left(\begin{array}{c} g \\ h \end{array}\right),$$ where $A$ is a full square matrix of size $(p\times p)$, $B$ is a full matrix of size $(p\times (n-p))$, $C$ is a sparse matrix of size $((n-p)\times p)$ and $D$ is a square band matrix of size $(n-p)$ with a bandwidth $q$, $q$ being the total number of bands. One first solves the following two banded linear problems $$D x = h\,, \quad D y = C\,.$$ The LU factorisation of the band matrix $D$ requires $\mathcal{O}(q^2\,n)$ operations, while the solve requires $\mathcal{O}(q\,n)$ operations [e.g. @Boyd01 Appendix B2]. We then assemble the Schur complement of the banded block $D$ $$M = A - B D^{-1} C = A -By\,,$$ before solving the small dense problem of size $(p,p)$ $$M \psi_1 = g-Bx\,.$$ This requires $\mathcal{O}(p^3)$ operations for the LU factorisation and $\mathcal{O}(p^2)$ for the solve. This cost remains negligible as long as $p
\ll n$, which is the case for the linear problems considered in the Chebyshev integration method. We finally evaluate $$\psi_2 = x - y\,\psi_1\,,$$ to assemble the final solution given by $\psi=(\psi_1,\psi_2)^T$.
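A compact illustration of this bordered solve is given below in Python with `scipy`; the Fortran implementation in `pizza` relies on banded LAPACK routines instead, and the sparse LU used here is only a convenient stand-in for the banded factorisation of $D$.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_bordered(A, B, C, D, g, h):
    """Solve [[A, B], [C, D]] [psi1, psi2]^T = [g, h]^T with D banded/sparse."""
    lu = spla.splu(sp.csc_matrix(D))                 # factorisation of the banded block
    x = lu.solve(np.asarray(h))                      # D x = h
    y = lu.solve(C.toarray() if sp.issparse(C) else np.asarray(C))   # D y = C (p right-hand sides)
    M = A - B @ y                                    # Schur complement of D, size (p, p)
    psi1 = np.linalg.solve(M, g - B @ x)             # small dense solve
    psi2 = x - y @ psi1                              # recover the banded part of the solution
    return np.concatenate([psi1, psi2])
```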
Galerkin basis for streamfunction boundary conditions {#sec:app2}
=====================================================
In this section, we derive a Galerkin basis function for the following combination of boundary conditions that is used in the Chebyshev integration method for the streamfunction equation $$\varPsi = \dfrac{\partial \varPsi}{\partial
s}= 0, \quad\text{for}\quad s=s_i\,,$$ and $$\varPsi = \dfrac{\partial^3 \varPsi}{\partial
s^3}= 0, \quad\text{for}\quad s=s_o\,.$$ We start by defining the following ansatz for the Galerkin set $$\phi_n(x) = \sum_{i=0}^{4} \gamma_{i}^n\, T_{n+i}(x)\,.$$ Following [@McFadden90] and [@Julien09] we then make use of the tau boundary conditions (Eqs. \[eq:bcs\_coll\_dirichlet\],\[eq:bcs\_coll\_neumann\] and \[eq:bc\_d3psi\_coll\]) to form the following system of equations $$\begin{aligned}
\phi_n(1) & = \sum_{i=0}^4 \gamma_i^n &=0, \\
\phi_n(-1) & = \sum_{i=0}^4 (-1)^{i} \gamma_i^n &=0 , \\
\dfrac{\partial^3\phi_n}{\partial x^3}(1)& =
\sum_{i=0}^4 (n+i)^2[(n+i)^2-1][(n+i)^2-4] \gamma_{i}^n
& = 0, \\
\dfrac{\partial\phi_n}{\partial x}(-1) &= \sum_{i=0}^{4} (-1)^{i+1} (n+i)^2
\gamma_{i}^n &= 0,
\end{aligned}$$ Since there are only four equations for five unknowns, there is a degree of freedom in the determination of the coefficients. We thus choose in the following $$\gamma_0^n = 1,$$ which yields the following identities for the other coefficients: $$\begin{aligned}
\gamma_1^n &=\frac{8 \left(n + 1\right) \left(n^{2} + 4 n + 5\right)}{2 n^{4}
+
20 n^{3} + 78 n^{2} + 140 n + 95}, \\
\gamma_2^n &=- \frac{2 \left(n + 2\right) \left(2 n^{4} + 16 n^{3} + 58 n^{2}
+
104 n + 75\right)}{\left(n + 3\right) \left(2 n^{4} + 20 n^{3} + 78 n^{2} + 140
n + 95\right)}, \\
\gamma_3^n &=- \frac{8 \left(n + 1\right) \left(n^{2} + 4 n + 5\right)}{2
n^{4}
+ 20 n^{3} + 78 n^{2} + 140 n + 95},
\end{aligned}$$ and $$\gamma_4^n= \frac{\left(n + 1\right) \left(2 n^{4} + 12 n^{3} + 30 n^{2} + 36 n
+ 15\right)}{\left(n + 3\right) \left(2 n^{4} + 20 n^{3} + 78 n^{2} + 140 n +
95\right)}\,.$$
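The linear system above is small enough to be solved symbolically. The following sketch, which only assumes the sympy package referenced in the footnotes, solves the four tau conditions with $\gamma_0^n=1$ and should reproduce the coefficients $\gamma_1^n,\dots,\gamma_4^n$ printed above.

```python
import sympy as sp

n = sp.symbols('n', integer=True, nonnegative=True)
g = sp.symbols('gamma0:5')   # gamma_0^n, ..., gamma_4^n

def k(i):
    # Degree of the Chebyshev polynomial T_{n+i} appearing in the ansatz.
    return n + i

# Endpoint values: T_k(1) = 1, T_k(-1) = (-1)^k, T_k'(-1) = (-1)^(k+1) k^2,
# T_k'''(1) proportional to k^2 (k^2-1)(k^2-4); common factors (-1)^n and 1/15 are dropped.
eqs = [
    sum(g[i] for i in range(5)),                                            # phi_n(1) = 0
    sum((-1)**i * g[i] for i in range(5)),                                  # phi_n(-1) = 0
    sum(k(i)**2 * (k(i)**2 - 1) * (k(i)**2 - 4) * g[i] for i in range(5)),  # phi_n'''(1) = 0
    sum((-1)**(i + 1) * k(i)**2 * g[i] for i in range(5)),                  # phi_n'(-1) = 0
]

sol = sp.solve([e.subs(g[0], 1) for e in eqs], list(g[1:]), dict=True)[0]
for i in range(1, 5):
    print(f"gamma_{i} =", sp.factor(sp.together(sol[g[i]])))
```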
\[lastpage\]
[^1]: <http://fftw.org/>
[^2]: <http://www.netlib.org/lapack/>
[^3]: It can be downloaded as part of the supplementary materials of the study by [@Marti16] [here](https://agupubs.onlinelibrary.wiley.com/action/downloadSupplement?doi=10.1002%2F2016GC006438&file=ggge21074-sup-0002-2016GC006438-s02.zip).
[^4]: <https://www.sympy.org/>
[^5]: <https://www.cines.fr/calcul/materiels/occigen>
---
abstract: |
The *self-repelling Brownian polymer* model (SRBP) initiated by Durrett and Rogers in [@durrett_rogers_92] is the continuous space-time counterpart of the *myopic (or ’true’) self-avoiding walk* model (MSAW) introduced in the physics literature by Amit, Parisi and Peliti in [@amit_parisi_peliti_83]. In both cases, a random motion in space is pushed towards domains less visited in the past by a kind of negative gradient of the occupation time measure.
We investigate the asymptotic behaviour of SRBP in the non-recurrent dimensions. First, extending 1$d$ results from [@tarres_toth_valko_09], we identify a natural stationary (in time) and ergodic distribution of the environment (essentially, smeared-out occupation time measure of the process), as seen from the moving particle. As main result we prove that in three and more dimensions, in this stationary (and ergodic) regime, the displacement of the moving particle scales diffusively and its finite dimensional distributions converge to those of a Wiener process. This result settles part of the conjectures (based on non-rigorous renormalization group arguments) in [@amit_parisi_peliti_83].
The main tool is the non-reversible version of the Kipnis–Varadhan-type CLT for additive functionals of ergodic Markov processes and the *graded sector condition* of Sethuraman, Varadhan and Yau, [@sethuraman_varadhan_yau_00].
[MSC2010:]{} 60K37, 60K40, 60F05, 60J55
[Key words and phrases:]{} self-repelling random motion, local time, central limit theorem
author:
- |
[Illés Horváth]{}\
Institute of Mathematics, Budapest University of Technology\
Egry József u. 1, Budapest, H-1111, Hungary\
email: [{pollux,balint,vetob}@math.bme.hu]{}
title: |
Diffusive limit for\
self-repelling Brownian polymers in $d\ge3$
---
Introduction and background {#s:intro}
===========================
The asymptotic scaling behaviour of *self-repelling* random motions with long memory has been a mathematical challenge since the early eighties. The two basic models considered in the physical and probabilistic literature are the so-called *myopic (or ’true’) self-avoiding random walk* (MSAW), which appeared first in the physics literature in [@amit_parisi_peliti_83], and the so-called *self-repelling Brownian polymer* (SRBP) model, which was initiated in the probabilistic literature in [@norris_rogers_williams_87], [@durrett_rogers_92]. The two models (or, better said, families of models), although having their origins in different cultures and having different motivations, are phenomenologically very similar.
The simplest formulation of the MSAW model is as follows: Let $X(n)$ be a nearest neighbour random walk on $\Z^d$ and $$\ell(n,y):=\ell(0,y)+{\left|\,{\{0<m\le n: X(m)=y\}}\,\right|}$$ its occupation time measure, taken with some (possibly signed) initial values $\ell(0,y)\in\Z$. The walk is governed by the following law: $$\begin{aligned}
\label{law}
&
{\ensuremath{\mathbf{P}\big(X(n+1)=y\bigm|\text{past, }X(n)=x\big)}}=
\\
\notag & \hskip5cm {\ensuremath{{1\!\!1}{\big\{\,{\left|\,{x-y}\,\right|}=1\,\big\}}}}
\frac{r(\ell(n,x)-\ell(n,y))}{\sum_{z:{\left|\,{z-x}\,\right|}=1} r(\ell(n,x)-\ell(n,z))}\end{aligned}$$ where $r:\Z\to(0,\infty)$ is a non-decreasing (and non-constant) weight function. (In the original [@amit_parisi_peliti_83] and the subsequent physics papers, the specific choice $r(u)=\exp\{\beta u\}$, $\beta>0$, was made.) This means phenomenologically that the random walk $X(n)$ is pushed by the negative gradient of its occupation time measure.
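For illustration, the transition law above is straightforward to simulate. The following sketch runs the $1d$ walk with the exponential weights $r(u)=\exp\{\beta u\}$ mentioned in parentheses and zero initial occupation times; it is purely illustrative and plays no role in the analysis below.

```python
import numpy as np
from collections import defaultdict

def msaw_1d(steps, beta=1.0, seed=0):
    """Myopic self-avoiding walk on Z with weights r(u) = exp(beta * u)."""
    rng = np.random.default_rng(seed)
    ell = defaultdict(int)          # occupation time measure, ell(0, .) = 0
    x, path = 0, [0]
    for _ in range(steps):
        # Unnormalised jump weights r(ell(x) - ell(y)) for the neighbours y = x - 1, x + 1.
        w = np.exp(beta * np.array([ell[x] - ell[x - 1], ell[x] - ell[x + 1]]))
        x += rng.choice([-1, 1], p=w / w.sum())
        ell[x] += 1                 # the new site is visited at the next time step
        path.append(x)
    return np.array(path)

path = msaw_1d(20000)
print(path[-1], np.abs(path).max())  # in d = 1 the displacement is expected to grow like steps**(2/3)
```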
The SRBP model is defined as follows. Let $V:\R^d\to\R_+$ be an approximate identity, that is a smooth ($C^{\infty}$), spherically symmetric function with sufficiently fast decay at infinity, and $$\label{FisgradV}
F:\R^d\to\R^d,
\qquad
F(x):=-\grad \, V(x).$$ For reasons which will be clarified later, we also impose the condition of *positive definiteness* of $V$: $$\label{Vposdef}
\wh V(p):=(2\pi)^{-d/2}\int_{\R^d} e^{i p\cdot x}V(x) \,{\mathrm d}x
\ge0.$$ A particular choice could be $V(x):=\exp\{-{\left|\,{x}\,\right|}^2/2\}$.
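For this Gaussian example, positive definiteness is immediate: the $d$-dimensional transform factorises into one-dimensional Gaussian integrals, each of which is again a nonnegative Gaussian. A small symbolic check of one factor, using the unitary convention above (for the even function $V$ the exponential can be replaced by a cosine), is the following sketch.

```python
import sympy as sp

x, p = sp.symbols('x p', real=True)

V1 = sp.exp(-x**2 / 2)                                   # one Cartesian factor of V
# Since V1 is even, e^{ipx} may be replaced by cos(px) in the transform.
V1hat = sp.integrate(V1 * sp.cos(p * x), (x, -sp.oo, sp.oo)) / sp.sqrt(2 * sp.pi)
print(sp.simplify(V1hat))                                # exp(-p**2/2) >= 0
```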
Let $t\mapsto B(t)\in\R^d$ be standard $d$-dimensional Brownian motion and define the stochastic process $t\mapsto X(t)\in\R^d$ as the solution of the SDE $$\label{Brpoly} X(t)= B(t)+\int_0^t\int_0^s F(X(s)-X(u))\,{\mathrm d}u\,{\mathrm d}s,$$ or $$\label{Brpolydiff}
{\mathrm d}X(t)= {\mathrm d}B(t)+
\big(\int_0^t F(X(t)-X(u))\,{\mathrm d}u\big)\,{\mathrm d}t.$$
[**Remark:**]{} Other types of self-interaction functions $F$ give rise to various different asymptotics. For the few rigorous results (mostly in 1d), see [@norris_rogers_williams_87], [@durrett_rogers_92], [@cranston_lejan_95], [@cranston_mountford_96] and in particular [@mountford_tarres_08] which also contains a survey of the results. Recent 1d results appear in [@tarres_toth_valko_09].
Now, introducing the occupation time measure $$\ell(t, A):=\ell(0,A) + {\left|\,{\{0<s\le t: X(s)\in A\}}\,\right|}$$ where $A\subset \R^d$ is any measurable domain, and $\ell(0,A)$ is some signed initialization, we can rewrite the SDE as follows: $$\label{Brpolydiff2} {\mathrm d}X(t)= {\mathrm d}B(t) - \grad \big(V*\ell
(t,\cdot)\big)(X(t))\,{\mathrm d}t$$ where $*$ stands for convolution in $\R^d$. We assume that $\ell(0,A)$ is a signed Borel measure on $\R^d$ with slow increase: for any $\varepsilon>0$ $$\label{slowincreaseinitially}
\lim_{N\to\infty} N^{-(d+\varepsilon)}{\left|\,{\ell}\,\right|}(0,[-N,N]^d)=0.$$ This form, compared with the transition law of the MSAW, shows explicitly the phenomenological similarity of the two models.
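The SDE can be explored numerically with a simple Euler–Maruyama scheme in which the drift integral is approximated by a Riemann sum over the past of the trajectory. The following sketch uses the Gaussian $V$ above, the empty initial profile $\ell(0,\cdot)\equiv0$ and hypothetical step sizes; it is purely illustrative and is not used anywhere in the proofs.

```python
import numpy as np

def srbp(T=100.0, dt=0.01, d=3, seed=0):
    """Euler-Maruyama sketch of dX = dB + (int_0^t F(X(t) - X(u)) du) dt
    with V(x) = exp(-|x|^2/2), F = -grad V, started from X(0) = 0."""
    rng = np.random.default_rng(seed)
    nsteps = int(T / dt)
    X = np.zeros((nsteps + 1, d))
    for k in range(nsteps):
        diff = X[k] - X[:k + 1]                                       # X(t_k) - X(t_j), t_j <= t_k
        F = diff * np.exp(-0.5 * np.sum(diff**2, axis=1))[:, None]    # F(x) = x exp(-|x|^2/2)
        drift = F.sum(axis=0) * dt                                    # Riemann sum for the du-integral
        X[k + 1] = X[k] + drift * dt + np.sqrt(dt) * rng.standard_normal(d)
    return X

X = srbp()
print(np.sum(X[-1]**2) / 100.0)   # |X(T)|^2 / T; of order d in the diffusive regime
```

The naive evaluation of the drift costs of order $k$ operations at step $k$, which is acceptable for this illustration but would be replaced by a truncated or binned sum in any serious simulation.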
Non-rigorous (but nevertheless convincing) scaling and renormalization group arguments originally formulated for the MSAW, but equally well applicable to the SRBP suggest the following dimension-dependent asymptotic scaling behaviour (see e.g. [@amit_parisi_peliti_83], [@obukhov_peliti_83], [@peliti_pietronero_87]):
1. in $d=1$: $X(t)\sim t^{2/3}$, with intricate, non-Gaussian scaling limit;
2. in $d=2$: $X(t)\sim t^{1/2}(\log t)^\zeta$, with some controversy about the value of the exponent $\zeta$ in the logarithmic correction, and Gaussian (that is Wiener) scaling limit expected;
3. in $d\ge3$: $X(t)\sim t^{1/2}$, with Gaussian (i.e. Wiener) scaling limit expected.
In $d=1$, for some particular cases of the MSAW model (MSAW with edge, rather than site repulsion and MSAW with continuous time and site repulsion), the limit theorem for $X(n)/n^{2/3}$ was established in [@toth_95], respectively, [@toth_veto_09], with the truly intricate limiting distribution identified. The scaling limit of the *process* $t\mapsto X(nt)/n^{2/3}$ was constructed and analyzed in [@toth_werner_98]. The proofs in [@toth_95] and [@toth_veto_09] have some built-in combinatorial elements which make it difficult (if possible at all) to extend these proofs robustly to a full class of 1d models of random motions pushed by the negative gradient of their occupation time measure. However, more recently, a robust proof was given for the super-diffusive behaviour of the 1d models: in [@tarres_toth_valko_09], inter alia, it is proved that for the $1d$ SRBP models $\varliminf_{t\to\infty}t^{-5/4}{\ensuremath{\mathbf{E}\big(X(t)^2\big)}}>0$, $\varlimsup_{t\to\infty}t^{-3/2}{\ensuremath{\mathbf{E}\big(X(t)^2\big)}}<\infty$. These are robust super-diffusive bounds (not depending on microscopic details) but still far from the expected $t^{2/3}$ scaling.
In $d=2$, very little is proved rigorously. For a modified version of MSAW where self-repulsion acts only in one spatial (say, the horizontal) direction, the marginally super-diffusive lower bound $\varliminf_{t\to\infty} t^{-1}(\log t)^{-1/2} {\ensuremath{\mathbf{E}\big(X(t)^2\big)}}>0$ holds, cf. [@valko_09].
In the present paper, we address the $d\ge3$ case of the SRBP model. We identify a stationary and ergodic distribution of the environment as seen from the position of the moving point and in this particular stationary regime, we prove *diffusive limit* (that is non-degenerate CLT with normal scaling) for the displacement. Our general approach is that of martingale approximation for additive functionals of ergodic Markov processes, initiated for reversible processes in the classic Kipnis–Varadhan paper [@kipnis_varadhan_86] and extended to non-reversible cases in [@toth_86], [@varadhan_96], [@sethuraman_varadhan_yau_00]. We shall refer to this approach as the *Kipnis–Varadhan theory*. In particular, validity of the efficient martingale approximation will rely on checking the *graded sector condition* of [@sethuraman_varadhan_yau_00].
Similar results for the MSAW model on the lattice $\Z^d$, $d\ge3$, will be presented in [@horvath_toth_veto_10].
Next, we describe our results in plain words. For precise formulations, see subsection \[ss:setup\_and\_results\].
As a first step, we note that the environment profile appearing on the right-hand side of , , as seen in a moving coordinate frame tied to the current position of the process, $t\mapsto\eta(t,\cdot)$: $$\begin{aligned}
\label{envir}
\eta(t,x)
:=&\,\,
\eta(0,X(t)+x)+
\int_0^t V(X(t)+x-X(u))\,{\mathrm d}u
\\[2pt]
\notag
=&\,\,
\eta(0,X(t)+x)+\big(V*\ell (t,\cdot)\big)(X(t)+x)\end{aligned}$$ is a Markov process in a properly chosen function space $\Omega$, to be specified later. As a first step, we identify a natural *time-stationary and ergodic distribution* of this process . Rather surprisingly, this is the Gaussian (scalar) field $x\mapsto\omega(x)\in\R$ with expectation and covariances $$\label{cov} {\ensuremath{\mathbf{E}\big(\omega(x)\big)}}=0, \qquad C(x-y):= {\ensuremath{\mathbf{E}\big(\omega(x)\omega(y)\big)}} =
g*V(x-y)$$ where $$\label{greenf} g:\R^d\to\R, \qquad g(x):= {\left|\,{x}\,\right|}^{2-d}$$ is the Green function of the Laplacian in $\R^d$. Note that throughout this paper $d\ge3$. This is the *massless free Gaussian field* whose ultraviolet singularity is smeared out by convolution with the smooth and rapidly decaying approximate identity $V$. The Fourier transform of the covariance is $$\label{covarFour}
\wh C(p)={\left|\,{p}\,\right|}^{-2} \wh V(p).$$ See Theorem \[thm:stat\_erg\]. All further results will be meant for the process being in this stationary regime. From this result, by ergodicity, the law of large numbers for the process $X(t)$ drops out, see Corollary \[cor:lln\].
The main result of the paper refers to the *diffusive limit* of the process $t\mapsto X(t)$. From the SDE and the definition of the environment process, the displacement can be written as $$\label{displ} X(t)= B(t) + \int_0^t\varphi(\eta(s))\,{\mathrm d}s$$ where $\varphi:\Omega\to\R^d$ is a function of the state of the stationary and ergodic Markov process $t\mapsto\eta(t)$: $$\label{phidef}
\varphi(\omega)= -\grad \omega(0).$$ So, the natural approach to the diffusive limit of $X(t)$ is the Kipnis–Varadhan theory. We will prove validity of an efficient martingale approximation by checking Sethuraman–Varadhan–Yau’s *graded sector condition*, cf. [@sethuraman_varadhan_yau_00]. It is easy to see that due to the spherical symmetry of the problem, we get $$\label{sphe}
{\ensuremath{\mathbf{E}\big(X_k(t)X_l(t)\big)}}
=
\delta_{k,l}d^{-1}{\ensuremath{\mathbf{E}\big({\left|\,{X(t)}\,\right|}^2\big)}}.$$ We prove that for $d\ge3$, the limiting variance $$\label{variance}
\sigma^2:=d^{-1}\lim_{t\to\infty}t^{-1}{\ensuremath{\mathbf{E}\big({\left|\,{X(t)}\,\right|}^2\big)}}\in(0,\infty)$$ exists and the finite dimensional marginals of the diffusively rescaled process $$\label{rescaled}
X_N(t)
:=
\frac{X(Nt)}{\sigma \sqrt N}$$ converge to those of a standard $d$-dimensional Brownian motion. See Theorem \[thm:clt\]. The main result shows similarity in spirit and techniques with those of [@komorowski_olla_03], but the differences are also clear.
The results are meant *in probability with respect to the initial profile* $\eta(0,x)$ sampled from the stationary (and ergodic) initial distribution hinted at above. Recent results by Cuny and Peligrad [@cuny_peligrad_09] raise the hope that the Kipnis–Varadhan theory could be enhanced to CLT for *almost all* initial conditions sampled according to the stationary distribution.
The rest of the paper is structured as follows: in section \[s:setup\_and\_results\], we give the formal definitions, introduce notation, identify the stationary measure and formulate our main results precisely. In section \[s:spaces\_and\_operators\], we give the functional analytic background: the Hilbert spaces, the (bounded and unbounded) linear operators involved, and the infinitesimal generator are presented. Ergodicity and the LLN for the displacement drop out for free. The short section \[s:KV\] is devoted to recalling the martingale approximation in Kipnis–Varadhan theory and the *graded sector condition* of [@sethuraman_varadhan_yau_00]. Finally, in section \[s:proof\], we check the abstract functional analytic conditions for our particular problem, and we conclude with the proof of the CLT for the displacement.
Formal setup and results {#s:setup_and_results}
========================
The stationary measure {#ss:stat_meas}
----------------------
We start with the *Ansatz* that the stationary distribution of the process $t\mapsto\eta(t,\cdot)$ is a translation-invariant, zero-mean Gaussian scalar field with some covariance $$\label{ansatzcov}
{\ensuremath{\mathbf{E}\big(\eta(t,x)\eta(t,y)\big)}}=C(y-x),$$ to be identified at the end of the following computations.
In order to prove that this is indeed time-stationary, we have to show that for any test function $x\to u(x)\in\R$, the moment generating functional $$\phi(t,u):={\ensuremath{\mathbf{E}\big(\exp\{\la u,\eta(t)\ra\}\big)}}$$ is actually constant in time. In the present subsection, we use the notation $$\la u, v\ra := \int_{\R^d} v(x)u(x)\,{\mathrm d}x.$$
In the forthcoming computations of the present section all *repeated subscripts* are summed from $1$ to $d$. Using the evolution equation of the environment process $\eta(t,\cdot)$, note that, by standard Itô calculus, $$\begin{aligned}
\label{ito} {\mathrm d}\la u, \eta(t)\ra = - \la \partial_l u, \eta(t)\ra\,{\mathrm d}B_l(t) +
\frac12 \la \partial^2_{ll} u, \eta(t)\ra\,{\mathrm d}t - \la \partial_l u, \eta(t) \ra
\partial_l\eta(t,0) \,{\mathrm d}t + \la u, V\ra \,{\mathrm d}t.\end{aligned}$$ Hence $$\begin{aligned}
\label{condchar}
&
{\ensuremath{\mathbf{E}\big({\mathrm d}\exp\{\la u,\eta(t)\ra\} \bigm|\cF_t\big)}}
=
\\[3pt]
\notag
&\qquad
\exp\{\la u,\eta(t)\ra\}
\left(
\frac12
\la \partial^2_{ll}u, \eta(t)\ra
+
\frac12
\la \partial_l u, \eta(t)\ra^2
-
\la \partial_l u, \eta(t) \ra \partial_l\eta(t,0)
+
\la u, V \ra \right){\mathrm d}t.\end{aligned}$$ Now, using the Ansatz that $x\mapsto\eta(t,x)$ (with $t$ fixed!) is a Gaussian field with covariance $C$, by standard computations of Gaussian expectations, we obtain from the previous display $$\begin{aligned}
\label{char}
\frac{{\mathrm d}{\ensuremath{\mathbf{E}\big(\exp\{\la u,\eta(t)\ra\}\big)}}}{{\mathrm d}t}
=
&
\exp\{\la u, C*u\ra/2\}
\Big(
\frac12
\la \partial^2_{ll}u, C*u\ra
+
\frac12
\la \partial_l u, C*\partial_l u\ra
+
\\
\notag
&
+
\frac12
\la \partial_l u, C*u\ra^2
-
\la \partial_{l} u,\partial_{l} C\ra
-
\la \partial_l u, C*u\ra \la u, \partial_lC\ra
+
\la u, V\ra
\Big)\,{\mathrm d}t.\end{aligned}$$ On the right-hand side the first two terms cancel out by an integration by parts. The third and fifth terms cancel one by one due to the simple fact that for any test function $u$, $$\la \partial_l u, C*u\ra=0.$$ Thus, the right-hand side is canceled out completely iff $$\label{condition}
V = - \partial^2_{ll}C.$$ This is equivalent to the covariance $C=g*V$ with the Green function $g$ introduced above. Note that $d\ge3$ was assumed.
Formal setup and results {#ss:setup_and_results}
------------------------
### State space and Gaussian measure
The proper state space of our basic processes will be the space of smooth scalar fields of slow increase at infinity: $$\label{Omega}
\Omega := \big\{\omega\in C^{\infty}(\R^d\to\R)\,:\,
{\left\|\,{\omega}\,\right\|}_{m,r}<\infty \big\}$$ where ${\left\|\,{\omega}\,\right\|}_{m,r}$ are the seminorms $$\label{seminorms}
{\left\|\,{\omega}\,\right\|}_{m,r} := \sup_{x\in\R^d} \,
\big(1+{\left|\,{x}\,\right|}\big)^{-1/r} \, {\left|\,{\partial^{{\left|\,{m}\,\right|}}_{m_1,\dots,m_d}\omega(x)}\,\right|}$$ defined for the multiindices $m=(m_1,\dots,m_d)$, $m_j\ge0$; and $r\ge1$. The space $\Omega$ endowed with these seminorms is a Fréchet space.
From Minlos’s theorem (see [@simon_74]), it follows that there exists a unique Gaussian probability measure $\pi({\mathrm d}\omega)$ on the space of tempered distributions $\cS^{\prime}(\R^d\to\R)$ with characteristic functional $$\label{charfctnl}
{\ensuremath{\mathbf{E}\big(\exp\{i\la u,\omega\ra\}\big)}}=
\exp\left\{-\frac12\int_{\R^d}\int_{\R^d} u(x)C(x-y)u(y)\,{\mathrm d}x {\mathrm d}y\right\},$$ and from smoothness of the covariance function $C(x)$, it follows that the probability measure $\pi({\mathrm d}\omega)$ is actually concentrated on the space $\Omega\subset\cS^{\prime}(\R^d\to\R)$ and that the covariance formula ${\ensuremath{\mathbf{E}\big(\omega(x)\omega(y)\big)}}=C(x-y)$ holds pointwise.
The Gaussian field $\omega(x)$ is realized e.g. as a moving average of white noise: $$\omega(x)=\int_{\R^d}U(x-y)w(y)\,{\mathrm d}y,$$ where $U$ is the unique positive definite function for which $U*U=V$ and $w$ is $d$-dimensional white noise.
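The field can also be visualised numerically by spectral synthesis on a periodic grid: the Fourier transform of white noise is multiplied by the square root of the spectral density $\wh C(p)={\left|\,{p}\,\right|}^{-2}\wh V(p)$ and transformed back. The sketch below does this for the Gaussian $V$ in $d=3$; it is a different construction from the moving average above, it is illustrative only, discretisation and normalisation constants are not tracked, and the singular zero mode is simply dropped.

```python
import numpy as np

def sample_field(N=64, L=20.0, seed=0):
    """Sample, up to constants, a periodic approximation of the Gaussian field
    with spectral density |p|^{-2} Vhat(p), Vhat(p) = exp(-|p|^2/2), in d = 3."""
    rng = np.random.default_rng(seed)
    freqs = 2 * np.pi * np.fft.fftfreq(N, d=L / N)         # angular wave numbers
    px, py, pz = np.meshgrid(freqs, freqs, freqs, indexing='ij')
    p2 = px**2 + py**2 + pz**2
    spec = np.exp(-p2 / 2) / np.where(p2 > 0, p2, 1.0)     # |p|^{-2} Vhat(p)
    spec.flat[0] = 0.0                                     # drop the singular p = 0 mode
    w = rng.standard_normal((N, N, N))                     # real white noise on the grid
    omega = np.fft.ifftn(np.sqrt(spec) * np.fft.fftn(w)).real
    return omega

omega = sample_field()
print(omega.mean(), omega.std())    # mean close to 0; empirical spread of the smoothed field
```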
The group of spatial translations $$\label{shift}
\R^d\ni z\mapsto \tau_z:\Omega\to\Omega,
\qquad
(\tau_z\omega)(x):=\omega(x+z)$$ acts naturally on $\Omega$ and preserves the probability measure $\pi({\mathrm d}\omega)$. Actually, the dynamical system $(\Omega, \pi({\mathrm d}\omega), \tau_z:z\in\R^d)$ is *ergodic*.
### Processes
First, we consider the process $t\mapsto(X(t),
\zeta(t,\cdot))\in\R^d\times\Omega$ defined as follows: $$\begin{aligned}
\label{Xeq}
X(t)
&=
X(0) + B(t) - \int_0^t\grad\zeta(s,X(s))\,{\mathrm d}s,
\\[5pt]
\label{zetaeq}
\zeta(t,x)
&=
\zeta(0,x)+\int_0^t V (x-X(s))\,{\mathrm d}s\end{aligned}$$ where $t\mapsto B(t)$ is a standard $d$-dimensional Brownian motion, and $X(0)\in\R^d$, $\zeta(0,\cdot)\in\Omega$ are the initial data for the process $t\mapsto(X(t),\zeta(t,\cdot))$.
Written as a single SDE for the process $t\mapsto X(t)$, the two equations above combine to give $$\label{Xeq2}
X(t)
=
X(0) + B(t) +
\int_0^t\left\{\zeta(0,X(s))+
\int_0^s F(X(s)-X(u))\,{\mathrm d}u\right\}{\mathrm d}s.$$ The SDE differs from the original SDE only by the presence of the initial profile $\zeta(0,x)$, which is a natural modification.
From these equations, it follows that $$\begin{aligned}
\label{Xeqctd}
X(t_0+t)
&=
X(t_0) + \big(B(t+t_0)-B(t_0)\big) -
\int_{t_0}^{t_0+t}\grad\zeta(s,X(s))\,{\mathrm d}s,
\\[5pt]
\label{zetaeqctd}
\zeta(t_0+t,x)
&=
\zeta(t_0,x)+\int_{t_0}^{t_0+t}
V(x- X(s))\,{\mathrm d}s.\end{aligned}$$ From this form, it is apparent that the process $t\mapsto(X(t),\zeta(t,\cdot))\in\R^d\times\Omega$ is Markovian.
The environment profile as seen from the moving point $X(t)$ is $$\label{etadef}
x\mapsto \eta(t,x):=\zeta(t, X(t)+x).$$ From the preceding pair of equations, we readily obtain that $t\mapsto \eta(t):=\eta(t,\cdot)$ is itself a Markov process on the state space $\Omega$.
We define the function $\varphi:\Omega\to\R^d$ by $\varphi(\omega)=-\grad\,\omega(0)$, and we readily recover the representation $X(t)=X(0)+B(t)+\int_0^t\varphi(\eta(s))\,{\mathrm d}s$ of the displacement.
### Results
\[thm:stat\_erg\] The Gaussian probability measure $\pi({\mathrm d}\omega)$ on $\Omega$, with mean $0$ and covariance $C=g*V$ defined above, is time-invariant and ergodic for the $\Omega$-valued Markov process $t\mapsto\eta(t)$.
\[cor:lln\] For $\pi$-almost all initial profiles $\zeta(0,\cdot)$, $$\label{lln2}
\lim_{t\to\infty}\frac{X(t)}{t}=0
\quad
\mathrm{a.s.}$$
[**Remarks:**]{} It is clear that, in dimensions $d\ge3$, other stationary distributions of the process $t\mapsto\eta(t)$ exist. In particular, due to transience of the process $t\mapsto X(t)$, the stationary measure (presumably) reached from starting with “empty” initial conditions $\eta(0,x)\equiv0$ certainly differs from our ${\mathrm d}\pi$. Our methods and results are valid for the particular stationary distribution ${\mathrm d}\pi$.
The main result of the present paper is the following theorem:
\[thm:clt\] In dimensions $d\ge3$, the following hold:\
(i) The limiting variance $$\label{variance2}
\sigma^2:=d^{-1}\lim_{t\to\infty}t^{-1}{\ensuremath{\mathbf{E}\big({\left|\,{X(t)}\,\right|}^2\big)}}$$ exists and $$\label{bounds}
1 \le \sigma^2 \le 1+\rho^2$$ where $$\label{sumcond}
\rho^2:= d^{-1}\int_{\R^d} {\left|\,{p}\,\right|}^{-2}\wh V(p)\,{\mathrm d}p<\infty.$$ (ii) The finite dimensional marginal distributions of the diffusively rescaled process $$\label{rescaled2}
X_N(t)
:=
\frac{X(Nt)}{\sigma \sqrt N}$$ converge to those of a standard $d$-dimensional Brownian motion. The convergence is meant in probability with respect to the starting state $\eta(0)$ sampled according to ${\mathrm d}\pi$.
Theorem \[thm:clt\] will be proved by use of the martingale approximation of the Kipnis–Varadhan theory and the so-called *graded sector condition* of [@sethuraman_varadhan_yau_00].
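As a quick sanity check of the finiteness requirement on $\rho^2$: for the Gaussian choice $\wh V(p)=\exp\{-{\left|\,{p}\,\right|}^2/2\}$ in $d=3$, the defining integral reduces to a one-dimensional radial integral with the explicit value $\tfrac{4\pi}{3}\sqrt{\pi/2}$. The following small computation (illustrative only, not part of the proof) confirms this numerically.

```python
import numpy as np
from scipy.integrate import quad

# rho^2 = d^{-1} * int_{R^3} |p|^{-2} exp(-|p|^2/2) dp
#       = (1/3) * 4*pi * int_0^infty exp(-r^2/2) dr   (spherical coordinates, d = 3)
radial, _ = quad(lambda r: np.exp(-r**2 / 2), 0, np.inf)
rho2 = 4 * np.pi / 3 * radial
print(rho2, 4 * np.pi / 3 * np.sqrt(np.pi / 2))   # numerical value vs closed form
```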
Spaces and operators {#s:spaces_and_operators}
====================
The natural formalism for the proofs of our theorems is that of Fock spaces and Gaussian Hilbert spaces, and linear operators over them. For basics of Gaussian Hilbert spaces and Wick products, see [@janson_97], [@simon_74]. Our main Hilbert space is $\cH:=\cL^2(\Omega,\pi)$. This is a Gaussian Hilbert space, and has very natural unitarily equivalent representations as Fock spaces. We follow the usual notation of Euclidean quantum field theory, see e.g. [@simon_74]. In subsection \[ss:spaces\], we give the formal definition of the three unitarily equivalent representations of the Hilbert space $\cL^2(\Omega,\pi)$. In subsection \[ss:operators\], we define the linear operators which are relevant for our purposes and we present their action on the three unitarily equivalent formulations. In subsection \[ss:infgen\], the infinitesimal generator of the semigroup of the stationary Markov process $t\mapsto\eta(t,\cdot)\in \Omega$, acting on $\cL^2(\Omega,\pi)$, and its adjoint are computed and the first consequences (ergodicity, LLN) are settled.
Spaces {#ss:spaces}
------
Throughout this paper, we use the convention of unitary Fourier transform $$\label{FourierTransform}
\wh u(p):= (2\pi)^{-d/2}\int_{\R^d}e^{i p\cdot
x}u(x)\,{\mathrm d}x.$$ and the shorthand notation $$\begin{aligned}
\label{notation1}
&
\vx=(x_1,\dots,x_n)\in\big(\R^d\big)^n,
&&
x_m=(x_{m1},\dots,x_{md})\in\R^d,
&&
\partial_{ml}:=\frac{\partial}{\partial x_{ml}},
\\[5pt]
\label{notation3}
&
\vp=(p_1,\dots,p_n)\in\big(\R^d\big)^n,
&&
p_m=(p_{m1},\dots,p_{md})\in\R^d,
&&\end{aligned}$$ $m=1,\dots,n,\,l=1,\dots,d$.
We denote by $\cS_n$, respectively, $\wh \cS_n$, the *symmetric* Schwartz spaces $$\begin{aligned}
\label{Sndef} \cS_n:= & \{u:\R^{dn}\to\C: u(\varpi\vx)=u(\vx),\,
\varpi\in\operatorname*{Perm}(n)\},
\\[5pt]
\label{hSndef} \wh \cS_n:= & \{\wh u:\R^{dn}\to\C: \wh u(\varpi\vp)=\wh
u(\vp),\, \varpi\in\operatorname*{Perm}(n)\}.\end{aligned}$$ In the preceding formulas $\operatorname*{Perm}(n)$ denotes the group of permutations on the $n$ indices.
The spaces $\cS_n$, respectively, $\wh \cS_n$ are endowed with the following scalar products $$\begin{aligned}
\label{Knscprod}
&
\langle u,v\rangle:=
\int_{\R^{dn}}\int_{\R^{dn}}
\overline{u(\vx)}C(\vx-\vy)v(\vy) \,{\mathrm d}\vx{\mathrm d}\vy,
\\[5pt]
\label{hKnscprod}
&
\langle \wh u,\wh v\rangle:=
\int_{\R^{dn}}
\overline{\wh u(\vp)} \wh C(\vp) \wh v(\vp) \,{\mathrm d}\vp\end{aligned}$$ where $$\label{CnhCndef}
C(\vx-\vy):=\prod_{m=1}^n C(x_m-y_m),
\qquad
\wh C(\vp):=\prod_{m=1}^n \wh C(p_m).$$ Let $\cK_n$ and $\wh \cK_n$ be the closures of $\cS_n$, respectively, $\wh \cS_n$ with respect to the Euclidean norms defined by these inner products. The Fourier transform realizes an isometric isomorphism between the Hilbert spaces $\cK_n$ and $\wh \cK_n$.
These Hilbert spaces are actually the symmetrized $n$-fold tensor products $$\label{KnhKn}
\cK_n:=\mathrm{symm}\big(\cK_1^{\otimes n}\big),
\qquad
\cK_n:=\mathrm{symm}\big(\wh\cK_1^{\otimes n}\big).$$ Finally, the full Fock spaces are $$\label{Fockspaces}
\cK:=\overline{\oplus_{n=0}^\infty \cK_n},
\qquad
\wh\cK:=\overline{\oplus_{n=0}^\infty \wh\cK_n}.$$
The Hilbert space of our true interest is $\cH=\cL^2(\Omega,\pi)$. This is itself a graded Gaussian Hilbert space $$\label{Hgraded} \cH=\overline{\oplus_{n=0}^\infty \cH_n}$$ where the subspaces $\cH_n$ are isometrically isomorphic with the subspaces $\cK_n$ of $\cK$ through the identification $$\label{HKisometry} \phi_n: \cK_n\to\cH_n, \quad
\phi_n(u):=\frac{1}{\sqrt{n!}}\int_{\R^{dn}}
u(\vx){\ensuremath{:\!\! \omega(x_1)\dots\omega(x_n) \!\!:\,}}\,{\mathrm d}\vx.$$ Here and in the rest of this paper, we denote by [$:\!\! X_1\dots X_n \!\!:\,$]{} the Wick product of the jointly Gaussian random variables $(X_1,\dots,X_n)$. In order to ease notation the mapping $\phi_1: \cK_1\to\cH_1$ will be simply denoted by $\phi$.
As the graded Hilbert spaces $$\cH:=\overline{\oplus_{n=0}^\infty \cH_n}, \quad
\cK:=\overline{\oplus_{n=0}^\infty \cK_n}, \quad
\wh \cK:=\overline{\oplus_{n=0}^\infty \wh\cK_n}$$ are isometrically isomorphic in a natural way, we shall move freely between the various representations.
Operators {#ss:operators}
---------
### General notation {#sss:general_notation}
We use the standard notation of Fock spaces. First, we give a general framework of notation and identities formulated over the Gaussian Hilbert space $\cH$. Then, we turn to our relevant linear operators and we give their representations in all three Hilbert spaces $\cH$, $\cK$ and $\wh \cK$.
The action of linear operators over $\cH=\overline{\oplus_{n=0}^\infty\cH_n}$ will be typically given in terms of Wick monomials. It is understood that their action is extended by linearity and graph closure.
Given a (bounded or unbounded) densely defined and closed linear operator $A$ over the basic Hilbert space $\cK_1$, its second quantized version, acting over the graded Gaussian Hilbert space $\cH$ will be denoted by ${\mathrm d}\Gamma(A)$. This latter one acts over Wick monomials as follows, ${\mathrm d}\Gamma(A):\cH_n\to\cH_n$, $$\label{sqop}
{\mathrm d}\Gamma(A) {\ensuremath{:\!\! \phi(v_1) \cdots \phi(v_n) \!\!:\,}} = \sum_{m=1}^n
{\ensuremath{:\!\! \phi(v_1)\cdots \phi(Av_m) \cdots \phi(v_n) \!\!:\,}}.$$
Given a vector $u$ from the basic Hilbert space $\cK_1$, the creation and annihilation (raising and lowering) operators associated to it, acting over the Gaussian Hilbert space $\cH$, will be denoted by $a^*(u):\cH_n\to\cH_{n+1}$, respectively, $a(u):\cH_n\to\cH_{n-1}$, acting on Wick monomials as $$\begin{aligned}
\label{cropdef}
a^*(u){\ensuremath{:\!\! \phi(v_1)\dots\phi(v_n) \!\!:\,}}
&=\,\,\,
{\ensuremath{:\!\! \phi(u)\phi(v_1)\dots\phi(v_n) \!\!:\,}},
\\[3pt]
\label{anopdef}
a(u){\ensuremath{:\!\! \phi(v_1)\dots\phi(v_n) \!\!:\,}}
&=\,\,\,
\sum_{m=1}^n\langle u,v_m\rangle
{\ensuremath{:\!\! \phi(v_1) \dots \phi(v_{m-1}) \phi(v_{m+1}) \dots \phi(v_n) \!\!:\,}}.\end{aligned}$$ For basics about creation, annihilation and second quantized operators, see e.g. [@simon_74] or [@janson_97].
We also define the unitary involution $J$ on $\cH$: $$\label{invop}
Jf(\omega):=f(-\omega),
\qquad
J\upharpoonright_{\cH_n}=(-1)^n I\upharpoonright_{\cH_n}.$$
The well-known canonical commutation relations between the operators introduced are: $$\label{ccr1}
[a(u),a(v)]=0,
\qquad
[a^*(u),a^*(v)]=0,
\qquad
[a(u),a^*(v)]=\langle u,v\rangle I,$$ $$\label{ccr2} [{\mathrm d}\Gamma(A),a^*(u)]=a^*(Au), \qquad\qquad
[{\mathrm d}\Gamma(A),a(u)]=-a(A^*u),$$ $$\label{Jcomm} [J,{\mathrm d}\Gamma(A)]=0, \qquad \{J,a^*(u)\}=0, \qquad \{J,a(u)\}=0.$$
Two more operators will be needed: given an element $u\in\cK_1$, *multiplication by* $\phi(u)$ will be denoted $M(u)$, that is, formally, for $f\in\cL^2(\Omega, \pi)$, $$\label{mu}
\big(M(u) f\big)(\omega):= \phi(u)(\omega)f(\omega).$$ Finally, for a fixed element $\vartheta\in\Omega$, we introduce *differentiation in the direction* $\vartheta$: formally $$\label{differ}
D_{\vartheta}f(\omega):=
\lim_{\varepsilon\to0}\varepsilon^{-1}
\big(f(\omega+\varepsilon\vartheta)-f(\omega)\big).$$ Both operators are well-defined on Wick monomials, and are extended by linearity and graph closure.
Given $u\in\cK_1$, the identities in (1) and (2) below hold:
(1) The multiplication operator $M(u)$ is actually $$\label{muidentity}
M(u)=a^*(u)+a(u).$$
(2) If $C*u\in\Omega$ then $$\label{differidentity}
D_{C*u}=a(u).$$
Both identities are checked by direct computation on Wick monomials. The second identity is a particular case of the *directional derivative* of Malliavin calculus, see [@janson_97].
### Specific linear operators {#sss:specific_operators}
The most relevant operators for our present purposes are $$\label{operators1}
\nabla_l:={\mathrm d}\Gamma(\partial_l),
\qquad
\Delta:=\sum_{l=1}^d\nabla_l^2,
\qquad
a_l:=a(\partial_l\delta_0),
\qquad
a^*_l:=a^*(\partial_l\delta_0)$$ where $\partial_l=\frac{\partial}{\partial x_l}$ and $\delta_0$ is Dirac’s delta concentrated on $0\in\R^d$. Note that $\delta_0$ and all its partial derivatives are in the Hilbert space $\cK_1$.
We now give their action on the spaces $\cH_n$, $\cK_n$ and $\wh\cK_n$. The point is that we are interested primarily in their action on the space $\cL^2(\Omega,\pi)=\overline{\oplus_{n=0}^\infty\cH_n}$, but explicit computations in later sections are most conveniently carried out in the unitarily equivalent representation over the space $\wh\cK=\overline{\oplus_{n=0}^\infty\wh\cK_n}$. The action of the various operators over $\cH_n$ will be given in terms of the Wick monomials ${\ensuremath{:\!\! \omega(x_1)\dots\omega(x_n) \!\!:\,}}$ and it is understood that the operators are extended by linearity and graph closure.
- The operators $\nabla_l$, $l=1,\dots,d$: $$\begin{aligned}
\label{nablaonHn}
&
\nabla_l:\cH_n\to\cH_n,
&&
\nabla_l{\ensuremath{:\!\! \omega(x_1)\dots\omega(x_n) \!\!:\,}}=
-\sum_{m=1}^n {\ensuremath{:\!\! \omega(x_1)\dots\partial_l\omega(x_m)\dots\omega(x_n) \!\!:\,}},
\\[3pt]
\label{nablaonKn}
&
\nabla_l:\cK_n\to\cK_n,
&&
\nabla_l u(\vx)
=
\sum_{m=1}^n
\frac{\partial u}{\partial x_{ml}}(\vx),
\\[3pt]
\label{nablaonhKn}
&
\nabla_l:\wh\cK_n\to\wh\cK_n,
&&
\nabla_l \wh u(\vp)
=
i \big(\sum_{m=1}^n p_{ml}\big) \wh u(\vp).\end{aligned}$$ Note that these are actually unbounded, closed, skew self-adjoint operators. They are densely defined on $\cH_n$, $\cK_n$, respectively, $\wh\cK_n$.
- The operator $\Delta$: $$\begin{aligned}
\label{DeltaonHn}
&
\Delta:\cH_n\to\cH_n,
&&
\Delta{\ensuremath{:\!\! \omega(x_1)\dots\omega(x_n) \!\!:\,}}
=
\\[2pt]
\notag
&&&\hskip16mm
\sum_{l=1}^d\sum_{m,m'=1}^n
{\ensuremath{:\!\! \omega(x_1) \dots \partial_{l}\omega(x_m) \dots \partial_{l}\omega(x_{m'}) \dots \omega(x_n) \!\!:\,}},
\\[3pt]
\label{DeltaonKn}
&
\Delta:\cK_n\to\cK_n,
&&
\Delta u(\vx)
=
\sum_{l=1}^d\sum_{m,m^{\prime}=1}^n
\frac{\partial^2 u}
{\partial x_{ml}\partial x_{m^{\prime}l}}(\vx),
\\[3pt]
\label{DeltaonhKn}
&
\Delta:\wh\cK_n\to\wh\cK_n,
&&
\Delta \wh u(\vp)
=
-{\left|\,{\sum_{m=1}^n p_m}\,\right|}^2 \wh u(\vp).\end{aligned}$$ The operator $\Delta$ is unbounded, densely defined, self-adjoint and positive. Note that $\Delta$ is *not* the second quantized Laplacian.
- The operator ${\left|\,{\Delta}\,\right|}^{-1/2}=(-\Delta)^{-1/2}$: $$\begin{aligned}
\label{sqrtgreenoponHn}
&
{\left|\,{\Delta}\,\right|}^{-1/2}:\cH_n\to\cH_n,
&&
\text{no explicit formula},
\\[3pt]
\label{sqrtgreenoponKn}
&
{\left|\,{\Delta}\,\right|}^{-1/2}:\cK_n\to\cK_n,
&&
\text{no explicit formula},
\\[3pt]
\label{sqrtgreenoponhKn}
&
{\left|\,{\Delta}\,\right|}^{-1/2}:\wh\cK_n\to\wh\cK_n,
&&
{\left|\,{\Delta}\,\right|}^{-1/2} \wh u(\vp)
=
{\left|\,{\sum_{m=1}^n p_m}\,\right|}^{-1} \wh u(\vp).\end{aligned}$$ The operator ${\left|\,{\Delta}\,\right|}^{-1/2}$ is unbounded, densely defined, self-adjoint and positive.
- The operators ${\left|\,{\Delta}\,\right|}^{-1/2}\nabla_l$, $l=1,\dots,d$: $$\begin{aligned}
\label{nonameopsonHn}
&
{\left|\,{\Delta}\,\right|}^{-1/2}\nabla_l:\cH_n\to\cH_n,
&&
\text{no explicit formula},
\\[3pt]
\label{nonameopsonKn}
&
{\left|\,{\Delta}\,\right|}^{-1/2}\nabla_l:\cK_n\to\cK_n,
&&
\text{no explicit formula},
\\[3pt]
\label{nonameopsonhKn} & {\left|\,{\Delta}\,\right|}^{-1/2}\nabla_l:\wh\cK_n\to\wh\cK_n, &&
{\left|\,{\Delta}\,\right|}^{-1/2}\nabla_l \wh u(\vp) = \frac{i\sum_{m=1}^n p_{ml}
}{{\left|\,{\sum_{m=1}^n p_m}\,\right|}} \wh u(\vp).\end{aligned}$$ These are *bounded* skew self-adjoint operators with operator norm $$\label{nonameopnorm}
{\left\|\,{ {\left|\,{\Delta}\,\right|}^{-1/2}\nabla_l }\,\right\|} =1.$$
- The creation operators $a^*_l$, $l=1,\dots,d$: $$\begin{aligned}
\label{astaronHn}
&
a^*_l:\cH_n\to\cH_{n+1},
&&
a^*_l{\ensuremath{:\!\! \omega(x_1)\dots\omega(x_n) \!\!:\,}}
=
{\ensuremath{:\!\! \partial_l\omega(0)\omega(x_1)\dots\omega(x_n) \!\!:\,}},
\\[5pt]
\label{astaronKn}
&
a^*_l:\cK_n\to\cK_{n+1},
&&
a^*_lu(x_1,\dots,x_{n+1})
=
\\[2pt]
\notag
&&&\hskip10mm
\frac{1}{\sqrt{n+1}}
\sum_{m=1}^{n+1}
\partial_l\delta(x_m) u(x_1,\dots, x_{m-1},x_{m+1},\dots, x_{n+1}) ,
\\[3pt]
\label{astaronhKn}
&
a^*_l:\wh\cK_n\to\wh\cK_{n+1},
&&
a^*_l\wh u(p_1,\dots,p_{n+1})
=
\\[2pt]
\notag
&&&\hskip18mm
\frac{1}{\sqrt{n+1}}
\sum_{m=1}^{n+1}
ip_{ml}\wh u(p_1,\dots, p_{m-1},p_{m+1},\dots, p_{n+1}).\end{aligned}$$ The creation operators $a_l^*$, restricted to the subspaces $\cH_n$, $\cK_n$, respectively, $\wh\cK_n$ are bounded, with operator norm $$\label{astaropnorm}
{\left\|\,{ a^*_l\upharpoonright_{\cH_n} \!\!\!\phantom{\Big|}}\,\right\|}
=
{\left\|\,{ a^*_l\upharpoonright_{\cK_n} \!\!\!\phantom{\Big|}}\,\right\|}
=
{\left\|\,{ a^*_l\upharpoonright_{\wh\cK_n} \!\!\!\phantom{\Big|}}\,\right\|}
=
\sqrt{C(0)}
\sqrt{n+1}.$$
- The annihilation operators $a_l$, $l=1,\dots,d$: $$\begin{aligned}
\label{aonHn}
&
a_l:\cH_n\to\cH_{n-1},
&&
a_l{\ensuremath{:\!\! \omega(x_1)\dots\omega(x_n) \!\!:\,}}
=
\\[2pt]
\notag &&& \hskip20mm \sum_{m=1}^n \partial_lC(x_m)
{\ensuremath{:\!\! \omega(x_1)\dots\omega(x_{m-1}) \omega(x_{m+1}) \dots \omega(x_n) \!\!:\,}},
\\[5pt]
\label{aonKn} & a_l:\cK_n\to\cK_{n-1}, && a_lu(x_1,\dots,x_{n-1}) = \sqrt{n}
\int_{\R^d} u(x_1,\dots,x_{n-1},y) \partial_lC(y) \,{\mathrm d}y,
\\[3pt]
\label{aonhKn} & a_l:\wh\cK_n\to\wh\cK_{n-1}, && a_l\wh u(p_1,\dots,p_{n-1}) =
\sqrt{n} \int_{\R^d} \wh u(p_1,\dots,p_{n-1},q)iq_l\wh C(q) \,{\mathrm d}q.\end{aligned}$$ The annihilation operators $a_l$ restricted to the subspaces $\cH_n$, $\cK_n$, respectively, $\wh\cK_n$ are bounded with operator norm $$\label{astaropnorm}
{\left\|\,{ a_l\upharpoonright_{\cH_n} \!\!\!\phantom{\Big|}}\,\right\|}
=
{\left\|\,{ a_l\upharpoonright_{\cK_n} \!\!\!\phantom{\Big|}}\,\right\|}
=
{\left\|\,{ a_l\upharpoonright_{\wh\cK_n} \!\!\!\phantom{\Big|}}\,\right\|}
=
\sqrt{C(0)}\sqrt{n}.$$ Furthermore, as the notation $a^*_l$ and $a_l$ suggests, these operators are adjoint of each other.
Since all computations will be performed in the representation $\wh\cK$, we give a common core for all the unbounded operators defined above – and some others to appear in future sections: $$\label{coredefin}
\wh\cC:=\oplus_{n=0}^\infty \wh\cC_n,
\qquad
\wh\cC_n:=
\{\wh u \in \wh\cK_n: \sup_{\vp\in\R^{dn}}{\left|\,{\wh u(\vp)}\,\right|} < \infty \}.$$ Note that the operator ${\left|\,{\Delta}\,\right|}^{-1/2}$ is defined on the dense subspace $\wh\cC$ *only for* $d\ge3$. Furthermore, in dimensions $d\ge3$, the operators ${\left|\,{\Delta}\,\right|}^{-1/2}\upharpoonright_{\wh\cK_n}$ defined on the dense subspaces $\wh\cC_n$, are *essentially self-adjoint*. This follows, e.g., from Propositions VIII.1, VIII.2 of [@reed_simon_vol1_80].
Notice also that $\nabla$ is the infinitesimal generator of the *unitary group of spatial translations* while $\Delta$ is the infinitesimal generator of the Markovian semigroup of *diffusion in random scenery* $$\begin{aligned}
\label{shiftgroup}
&
\exp\{z\nabla\}=T_z, \qquad
&&
T_zf(\omega):=f(\tau_z\omega),
\\[5pt]
\label{drscesemigroup}
&
\exp\{t\Delta\}=Q_t, \qquad
&&
Q_tf(\omega):=\int\frac{\exp\{-z^2/(2t)\}}{\sqrt{2\pi t}}
f(\tau_z\omega)\,{\mathrm d}z.\end{aligned}$$
The infinitesimal generator, stationarity,\
Yaglom reversibility, ergodicity {#ss:infgen}
-------------------------------------------
We denote by $P_t$ the semigroup of the process $\eta(t)$: $$\label{semigroup}
P_t:\cH\to\cH,
\qquad
P_t f(\omega)
:=
{\ensuremath{\mathbf{E}\big(f(\eta(t))\bigm|\eta(0)=\omega\big)}}.$$ Then $[0,\infty)\ni t\mapsto P_t\in\cB(\cH)$ is a Markovian contraction semigroup on $\cH$. In order to identify its infinitesimal generator, note that the infinitesimal change in the state of the Markov process $\eta(t)$ is due to the following three terms:
1. infinitesimal spatial shift due to ${\mathrm d}B(t)$;
2. infinitesimal spatial shift due to $-\grad\eta(t,0){\mathrm d}t$;
3. infinitesimal local change in $\eta$ due to increase of local time.
Altogether $$\label{ito2}
\eta(t+{\mathrm d}t,x)=
\eta(t, x+{\mathrm d}B(t) - \grad \eta(t,0) {\mathrm d}t) + V(x){\mathrm d}t.$$
Hence, given a sufficiently regular function on the state space $f:\Omega\to\R$, we compute $$\label{infgencomp}
\lim_{t\to0}\frac{{\ensuremath{\mathbf{E}\big(f(\eta(t)-f(\eta(0)))\bigm|\eta(0)=\omega\big)}}}{t}
=
\left(
\frac12\Delta -
\sum_{l=1}^d M(\partial_l\delta_0) \nabla_l + D_{V}
\right)f(\omega).$$ Recall that $V=-\sum_{l=1}^d\partial^2_{ll}C$ and note that hence $$V=-C*\sum_{l=1}^d\partial^2_{ll}\delta_0$$ with $$-\sum_{l=1}^d\partial^2_{ll}\delta_0\in\cK_1.$$ Using the identities $M(u)=a^*(u)+a(u)$ and $D_{C*u}=a(u)$ (in this order), we readily obtain the following expression for the infinitesimal generator of the semigroup $P_t$: $$\label{infgen}
G
:=
\frac12\Delta +
\sum_{l=1}^d \big(a^*_l \nabla_l + \nabla_l a_l\big).$$ This operator is well defined on Wick polynomials of the field $\omega(x)$ and is extended by linearity and graph closure. It is not difficult to see that it satisfies the criteria of the Hille–Yoshida theorem (see [@reed_simon_vol1_80]) and thus it is indeed the infinitesimal generator of a Markovian semigroup. We omit these technical details.
The adjoint generator is $$\label{adjinfgen}
G^*
:=
\frac12\Delta -
\sum_{l=1}^d \big(a^*_l \nabla_l + \nabla_l a_l\big).$$ Note that due to the inner coherence of the model the last two terms on the right-hand side of the expression for the generator combine to give the tidy skew self-adjoint part of the infinitesimal generators $G$ and $G^*$.
For later use, we introduce notation for the symmetric (self-adjoint) and anti-symmetric (skew self-adjoint) parts of the generator $$\begin{aligned}
\label{symgen}
S
&:=
-\frac12(G+G^*)= -\frac12 \Delta,
\\[5pt]
\label{Adecomp}
A
&:=
\phantom{-}\frac12(G-G^*)=
\sum_{l=1}^d \big(a^*_l \nabla_l + \nabla_l a_l\big)
=: A_++A_-.\end{aligned}$$ It is a standard – though not completely trivial – exercise to check that the operators $S$ and $A$, a priori defined on the dense subspace $\wh\cC$ are indeed essentially self-adjoint, respectively, essentially skew self-adjoint.
Note that $$\label{grading}
S:\cH_n\to\cH_n,
\quad
A_+:\cH_n\to\cH_{n+1},
\quad
A_-:\cH_n\to\cH_{n-1},
\quad
A_{\mp}=-A_{\pm}^*,$$ and $$\label{van_H0_H1}
S\upharpoonright_{\cH_0}=0,
\qquad
A_+\upharpoonright_{\cH_0}=0,
\qquad
A_-\upharpoonright_{\cH_0\oplus\cH_1}=0.$$
It is clear that $$\label{stateq}
G^*\one = 0,$$ and hence, it follows that $\pi$ is indeed a stationary distribution of the process $t\mapsto\eta(t)$ and that $G^*$ is itself the infinitesimal generator of the stochastic semigroup $P^*_t$ of the time-reversed process.
Actually, so-called *Yaglom reversibility* holds. From the (anti)commutation relations of $J$ with the operators above, it follows that $$\label{yaglom}
G^{*}=JGJ.$$ This identity means that the stationary forward process $(-\infty,\infty)\ni t\mapsto\eta(t)$ and the *flipped backward process* $$\label{revproc}
(-\infty,\infty)\ni t\mapsto\wt\eta(t):=-\eta(-t)$$ obey the same law. This is a special kind of time-reversal symmetry called Yaglom reversibility, see [@yaglom_47], [@yaglom_49], [@dobrushin_suhov_fritz_88].
Proving ergodicity is easy: The Dirichlet form of the process $t\mapsto\eta(t)$ is $$\label{df}
\cD(f):=
-(f,G f)=
-\frac12 (f, \Delta f)=
\frac12 \sum_{l=1}^d{\left\|\,{\nabla_l f}\,\right\|}^2.$$ So, $$\label{erg}
\big\{ \cD(f)=0 \big\}
\ \iff\
\big\{ \nabla_l f=0,\ l=1,\dots, d \big\}
\ \iff\
\big\{ f=\text{const.} \ \pi\text{-a.s.} \big\},$$ since $z\mapsto\tau_z$ acts ergodically on $(\Omega,\pi)$.
This proves Theorem \[thm:stat\_erg\]. Corollary \[cor:lln\] follows directly from the representation $X(t)=B(t)+\int_0^t\varphi(\eta(s))\,{\mathrm d}s$ by the ergodic theorem.
CLT for additive functionals of ergodic Markov processes, graded sector condition {#s:KV}
=================================================================================
In the present short section we recall the non-reversible version of the Kipnis–Varadhan CLT for additive functionals of ergodic Markov processes and the *graded sector condition* of Sethuraman, Varadhan and Yau, [@sethuraman_varadhan_yau_00].
Let $(\Omega, \cF, \pi)$ be a probability space: the state space of a *stationary and ergodic* Markov process $t\mapsto\eta(t)$. We put ourselves in the Hilbert space $\cH:=\cL^2(\Omega, \pi)$. Denote the *infinitesimal generator* of the semigroup of the process by $G$, which is a well-defined (possibly unbounded) closed linear operator on $\cH$. The adjoint generator $G^*$ is the infinitesimal generator of the semigroup of the reversed (also stationary and ergodic) process $\eta^*(t)=\eta(-t)$. It is assumed that $G$ and $G^*$ have a *common core of definition* $\cC\subseteq\cH$. Let $f\in\cH$, such that $(f, \one) = \int_\Omega f\,{\mathrm d}\pi=0$. We ask about CLT/invariance principle for $$\label{rescaledintegral}
N^{-1/2}\int_0^{Nt} f(\eta(s))\,{\mathrm d}s$$ as $N\to\infty$.
We denote the *symmetric* and *anti-symmetric* parts of the generators $G$, $G^*$, by $$S:=-\frac12(G+G^*),
\qquad
A:=\frac12(G-G^*).$$ These operators are also extended from $\cC$ by graph closure and it is assumed that they are well-defined self-adjoint, respectively, skew self-adjoint operators $$S^*=S\ge0, \qquad A^*=-A.$$ Note that $-S$ is itself the infinitesimal generator of a Markov semigroup on $\cL^2(\Omega,\pi)$, for which the probability measure $\pi$ is reversible (not just stationary). We assume that $-S$ is itself ergodic: $$\label{Sergodic}
\mathrm{Ker}(S)=\{c1\!\!1 : c\in\C\}.$$
We denote by $R_\lambda\in\cB(\cH)$ the resolvent of the semigroup $s\mapsto e^{sG}$: $$R_\lambda
:=
\int_0^\infty e^{-\lambda s} e^{sG}{\mathrm d}s
=
\big(\lambda I-G\big)^{-1}, \qquad \lambda>0,$$ and given $f\in\cH$ as above, we will use the notation $$u_\lambda:=R_\lambda f.$$
The following theorem yields the efficient martingale approximation of the additive functional :
\[thm:kv\] With the notation and assumptions as before, if the following two limits hold in $\cH$ $$\begin{aligned}
\label{conditionA}
&
\lim_{\lambda\to0}
\lambda^{1/2} u_\lambda=0,
\\[5pt]
\label{conditionB}
&
\lim_{\lambda\to0} S^{1/2} u_\lambda=:v\in\cH,\end{aligned}$$ then $$\label{kv_variance}
\sigma^2:=2\lim_{\lambda\to0}(u_\lambda,f)\in[0,\infty),$$ and there exists a zero mean, $\cL^2$-martingale $M(t)$, adapted to the filtration of the Markov process $\eta(t)$ with stationary and ergodic increments and variance $${\ensuremath{\mathbf{E}\big(M(t)^2\big)}}=\sigma^2t$$ such that $$\label{kv_martappr} \lim_{N\to\infty} N^{-1} {\ensuremath{\mathbf{E}\big(\big(\int_0^N
f(\eta(s))\,{\mathrm d}s-M(N)\big)^2\big)}} =0.$$ In particular, if $\sigma>0$, then the finite dimensional marginal distributions of the rescaled process $t\mapsto \sigma^{-1} N^{-1/2}\int_0^{Nt}f(\eta(s))\,{\mathrm d}s$ converge to those of a standard $1d$ Brownian motion.
[**Remarks:**]{}
#### (1)
The two conditions of the theorem are jointly equivalent to the following $$\label{conditionC}
\lim_{\lambda,\lambda'\to0}(\lambda+\lambda')(u_\lambda,u_{\lambda'})=0.$$ Indeed, straightforward computations yield: $$\label{A+B=C}
(\lambda+\lambda')(u_\lambda,u_{\lambda'}) =
{\left\|\,{S^{1/2}(u_\lambda-u_{\lambda'})}\,\right\|}^2 + \lambda {\left\|\,{u_\lambda}\,\right\|}^2 +
\lambda' {\left\|\,{u_{\lambda'}}\,\right\|}^2.$$
#### (2)
The theorem is a generalization to the non-reversible setup of the celebrated Kipnis–Varadhan theorem, [@kipnis_varadhan_86]. To the best of our knowledge, the non-reversible formulation, proved with resolvent rather than spectral calculus, appears first – in a discrete-time Markov chain rather than continuous-time Markov process setup, and with the condition of Remark (1) – in [@toth_86], where it was applied, with bare hand computations, to obtain a CLT for a particular random walk in random environment. Its proof follows the original proof of the Kipnis–Varadhan theorem with the difference that spectral calculus is to be replaced by resolvent calculus.
#### (3)
In the continuous-time Markov process setup, it was formulated in [@varadhan_96] and applied to tagged particle motion in non-reversible zero-mean exclusion processes. In that paper, the *(strong) sector condition* was formulated, which, together with an $H_{-1}$-bound on the function $f\in\cH$, provides a sufficient condition for the two conditions of Theorem KV to hold.
#### (4)
In [@sethuraman_varadhan_yau_00], the so-called *graded sector condition* is formulated and Theorem KV is applied to tagged particle diffusion in general (non-zero mean) non-reversible exclusion processes, in $d\ge3$.
#### (5)
For a more complete list of applications of Theorem KV together with the strong and graded sector conditions, see the surveys [@olla_01], [@komorowski_landim_olla_09].\
Checking the two conditions of Theorem KV (or, equivalently, the condition of Remark (1)) in particular applications is typically not easy. In the applications to RWRE in [@toth_86], the conditions were checked by some tricky bare hand computations. In [@varadhan_96], respectively, [@sethuraman_varadhan_yau_00], the so-called *sector condition*, respectively, the *graded sector condition* were introduced and checked for the respective models.
We recall from [@sethuraman_varadhan_yau_00] the graded sector condition. Assume that the Hilbert space $\cH=\cL^2(\Omega, \pi)$ is graded $$\label{grading2}
\cH=\overline{\oplus_{n=0}^\infty\cH_n},$$ and the infinitesimal generator is consistent with this grading in the sense that $S:\cH_n\to\cH_n$ and $A_\pm:\cH_n\to\cH_{n\pm1}$.
\[thm:svy\] Assume that the Hilbert space and the infinitesimal generator $G=-S+A$ are graded in the sense specified above and, in addition, there exist $\gamma\in[0,1)$ and $C<\infty$ such that for any $n\in\N$ and any $g\in\cH_n$, $h\in\cH_{n+1}$ $$\label{gsc}
{\left|\,{(h, A_+ g)}\,\right|}\le C n^{\gamma}\sqrt{(h,Sh)}\sqrt{(g,Sg)}.$$ If $f\in\cH$ with $(f,\one)=0$ is such that $$\label{H-1}
{\left\|\,{S^{-1/2}f}\,\right\|}:=
\lim_{\lambda\to0}(f,u_\lambda)<\infty,$$ then the two conditions of Theorem KV hold and, as a consequence, the conclusions of Theorem KV are valid.
Proof of the CLT {#s:proof}
================
The proof of Theorem \[thm:clt\] consists of three parts. In paragraph \[sss:lower\_bound\], we prove a diffusive lower bound on the variance of the displacement $X(t)$. We need this in order to exclude the possibility that the a priori martingale part of the displacement and the martingale approximation of the compensator just cancel out in the limit. (As is well known, this happens for example in tagged particle diffusion in the 1d simple symmetric exclusion process with nearest neighbour jumps, [@arratia_83].) In paragraph \[sss:upper\_bound\], we prove the $H_{-1}$-bound for our particular case. Finally, in subsection \[ss:gsc\], we check the conditions of Theorem SVY for our particular model.
Diffusive bounds {#ss:diffusive_bounds}
----------------
### Lower bound {#sss:lower_bound}
For $s,t\in\R$ with $s<t$, let $$\label{Mstdef}
M(s,t):=X(t)-X(s)-\int_s^t \varphi(\eta(u))\,{\mathrm d}u=B(t)-B(s).$$
\[lemma:forwbackw\] (1) Fix $s\in\R$. The process $[s,\infty)\ni t\mapsto M(s,t)$ is a forward martingale with respect to the forward filtration $\{\cF_{(-\infty,t]}:t\ge s\}$ of the process $t\mapsto\eta(t)$.\
(2) Fix $t\in\R$. The process $(-\infty,t]\ni s\mapsto M(s,t)$ is a backward martingale with respect to the backward filtration $\{\cF_{[s,\infty)}:s\le t\}$ of the process $t\mapsto\eta(t)$.
There is nothing to prove about the first statement: the integral on the right-hand side was chosen exactly so that it compensates the conditional expectation of the infinitesimal increments of $X(t)$.
We turn to the second statement, which does need a proof. This consists of the following ingredients:
(1) The displacements are reverted on the flipped backward trajectories $t\mapsto\wt\eta(t)$ defined earlier: $$\wt X(t)-\wt X(s)=-X(t)+X(s).$$
(2) The forward process $t\mapsto\eta(t)$ and flipped backward process $t\mapsto\wt\eta(t)$ are identical in law (Yaglom reversibility).
(3) The function $\omega\mapsto\varphi(\omega)$ is odd with respect to the flip-map $\omega\mapsto -\omega$.
Putting these facts together (in this order), we obtain $$\begin{aligned}
\lim_{h\to0}h^{-1}{\ensuremath{\mathbf{E}\big(X(s-h)-X(s)\bigm|\cF_{[s,\infty)}\big)}}
& =
\lim_{h\to0}h^{-1}{\ensuremath{\mathbf{E}\big(-\wt X(-s+h)+\wt X(-s)\bigm|\wt
\cF_{(-\infty,-s]}\big)}}\notag
\\[5pt]
\label{bwmart}
& =
-\varphi(\wt\eta(-s)) = \varphi(\wt\eta(s)).\end{aligned}$$
From Lemma \[lemma:forwbackw\], it follows directly that for any $s<t$, the random variables $M(s,t)$ and $\int_s^t \varphi(\eta(u))\,{\mathrm d}u$ are *uncorrelated*, and therefore $$\begin{aligned}
\label{variancesum}
{\ensuremath{\mathbf{E}\big((X(t)-X(s))^2\big)}}
&=
{\ensuremath{\mathbf{E}\big((M(s,t))^2\big)}}+
{\ensuremath{\mathbf{E}\big(\big(\int_s^t \varphi(\eta(u))\,{\mathrm d}u\big)^2\big)}}
\\[5pt]
\notag
&=
(t-s) +
{\ensuremath{\mathbf{E}\big(\big(\int_s^t \varphi(\eta(u))\,{\mathrm d}u\big)^2\big)}}.\end{aligned}$$ Hence, the lower bound $1\le\sigma^2$ of Theorem \[thm:clt\] follows.
### Upper bound: $H_{-1}$-bound {#sss:upper_bound}
We recall a general result proved in [@sethuraman_varadhan_yau_00]. See also the surveys [@olla_01], [@komorowski_landim_olla_09] and further references cited therein.
Let $t\mapsto\xi(t)$ be the *reversible* Markov process on the same state space $(\Omega, \pi)$ as the original $\eta(t)$ which has the infinitesimal generator $-S$.
\[lemma:v\] Let $\varphi\in \cL^2(\Omega,\pi)$ with $\int \varphi\,{\mathrm d}\pi=0$. Then $$\label{svy_bound}
\limsup_{t\to\infty}
t^{-1}{\ensuremath{\mathbf{E}\big(\big(\int_0^t \varphi(\eta(s))\,{\mathrm d}s\big)^2\big)}}
\le
\lim_{t\to\infty}
t^{-1}{\ensuremath{\mathbf{E}\big(\big(\int_0^t \varphi(\xi(s))\,{\mathrm d}s\big)^2\big)}}
=
2\Vert S^{-1/2} \varphi\Vert^2.$$
In our case, $$\label{drsgen}
S=- \frac12 \Delta,$$ and the reversible process $t\mapsto\xi(t)$ is the so-called *diffusion in random scenery* process. That means: $$\label{rwrs}
\xi(t):=\tau_{Z_t}\omega$$ where $t\mapsto Z_t$ is a Brownian motion in $\R^d$ of covariance $\delta_{ij}$, independent of the field $\omega$. The function $\varphi:\Omega\to\R$ is $\varphi(\omega)=\omega(0)$. Thus, the upper bound in will be $$\label{ourbound}
\lim_{t\to\infty}
t^{-1}{\ensuremath{\mathbf{E}\big(\big(\int_0^t\varphi(\xi(s))\,{\mathrm d}s\big)^2\big)}} = \lim_{t\to\infty}
t^{-1}{\ensuremath{\mathbf{E}\big(\big(\int_0^t\omega(Z_s)\,{\mathrm d}s\big)^2\big)}} =
\int_{\mathbb{R}^d} {\left|\,{p}\,\right|}^{-2}\,\wh V(p)\,{\mathrm d}p.$$ Here, the last step is a straightforward computation with expectation taken over the Brownian motion $Z(t)$ *and* over the random scenery $\omega$. The integral on the right-hand side is the same as the one appearing in the definition of $\rho^2$ in Theorem \[thm:clt\], and thus yields the upper bound $\sigma^2\le1+\rho^2$.
Graded sector condition {#ss:gsc}
-----------------------
As a first remark, note that the graded sector condition is equivalent to $$\label{gsc2}
{\left\|\,{S^{-1/2}A_+S^{-1/2}\upharpoonright_{\cH_n}}\,\right\|}\le Cn^{\gamma},$$ where the operator $S^{-1/2}A_+S^{-1/2}\upharpoonright_{\cH_n}$ is meant as first defined on a dense subspace of $\cH_n$ and extended by continuity. In our case, the dense subspace will be $\wh \cC_n$ specified in the previous section, and $$S^{-1/2}A_+S^{-1/2}
=
\sum_{l=1}^d {\left|\,{\Delta}\,\right|}^{-1/2} a^*_l \nabla_l{\left|\,{\Delta}\,\right|}^{-1/2}.$$ The operators $\nabla_l{\left|\,{\Delta}\,\right|}^{-1/2}$ map the subspaces $\wh \cC_n$ to themselves and are bounded with operator norm $1$, as noted in the previous section. In order to bound the norm of the operator ${\left|\,{\Delta}\,\right|}^{-1/2} a^*_l:\cH_n\to\cH_{n+1}$, let $\wh u\in\wh \cC_{n}$, then $${\left|\,{\Delta}\,\right|}^{-1/2} a^*_l \wh u(p_1,\dots,p_{n+1})
=
\frac{i}{\sqrt{n+1}}
\frac{1}{{\left|\,{\sum_{m=1}^{n+1} p_m}\,\right|}}
\sum_{m=1}^{n+1}p_{ml}\wh u(p_1,\dots,p_{m-1},p_{m+1},\dots,p_{n+1}).$$ Hence $$\begin{aligned}
\label{bou}
&
(n+1)
{\left\|\,{{\left|\,{\Delta}\,\right|}^{-1/2} a^*_l \wh u}\,\right\|}^2
=
\\
\notag
&=
\int_{\R^d}\!\!...\!\!\int_{\R^d}
\frac1{{\left|\,{\sum_{m=1}^{n+1} p_m}\,\right|}^2}
{\left|\,{\sum_{m=1}^{n+1} p_{ml}\wh u(p_1,\!...,p_{m-1},p_{m+1},\!...,p_{n+1})}\,\right|}^2
\prod_{m=1}^{n+1}\frac{\wh V(p_m)}{{\left|\,{p_m}\,\right|}^2}
{\mathrm d}p_1\!...{\mathrm d}p_{n+1}
\\
\notag
&\le
(n+1)^2
\int_{\R^d}\!\!...\!\!\int_{\R^d}
\frac1{{\left|\,{\sum_{m=1}^{n+1} p_m}\,\right|}^2}
{\left|\,{p_{n+1,l}}\,\right|}^2
{\left|\,{\wh u(p_1,\!...,p_{n})}\,\right|}^2
\prod_{m=1}^{n+1}\frac{\wh V(p_m)}{{\left|\,{p_m}\,\right|}^2}
{\mathrm d}p_1\!...{\mathrm d}p_{n+1}
\\
\notag
&=
(n+1)^2
\int_{\R^d}\!\!...\!\!\int_{\R^d}
{\left|\,{\wh u(p_1,\!\!...,p_{n})}\,\right|}^2
\prod_{m=1}^{n}\frac{\wh V(p_m)}{{\left|\,{p_m}\,\right|}^2}
\left(\int_{\R^d}
\frac{p_{n+1,l}^2}{{\left|\,{p_{n+1}}\,\right|}^2}
\frac{\wh V(p_{n+1})}{{\left|\,{\sum_{m=1}^{n+1} p_m}\,\right|}^2}
{\mathrm d}p_{n+1}\right)
{\mathrm d}p_1\!...{\mathrm d}p_{n}.\end{aligned}$$ In the second line, Schwarz’s inequality and the symmetry of the function $\wh u(p_1,\dots,p_n)$ are used.
The innermost integral in the last expression is bounded above by $$\begin{aligned}
C^2:=
\sup_{p\in\R^d}\int_{\R^d}\frac{\wh V(p+q)}{{\left|\,{q}\,\right|}^2} \,{\mathrm d}q
<\infty.\end{aligned}$$ Thus, for $\wh u\in\wh \cC_n$ $${\left\|\,{{\left|\,{\Delta}\,\right|}^{-1/2} a^*_l \wh u}\,\right\|}^2 \le C^2 (n+1){\left\|\,{\wh u}\,\right\|}^2 .$$ Hence, by continuous extension, $${\left\|\,{{\left|\,{\Delta}\,\right|}^{-1/2} a^*_l\upharpoonright_{\cH_n}}\,\right\|} \le C \sqrt{n+1},$$ and the graded sector condition with $\gamma=1/2$ follows.
[**Acknowledgement.**]{} We thank Benedek Valkó for his remarks on the first draft of this paper. BT thanks the kind hospitality of the Mittag-Leffler Institute, Stockholm, where part of this work was done. The work of all authors was partially supported by OTKA (Hungarian National Research Fund) grant K 60708.
---
abstract: 'Gradient descent methods have been widely used for organizing multi-agent systems, in which they can provide decentralized control laws with provable convergence. Often, the control laws are designed so that two neighboring agents repel/attract each other at a short/long distance of separation. When the interactions between neighboring agents are moreover nonfading, the potential function from which they are derived is radially unbounded. Hence, LaSalle’s principle is sufficient to establish the convergence of the system. This paper investigates, in contrast, a more realistic scenario where interactions between neighboring agents have fading attractions. In such a setting, LaSalle-type arguments may not be sufficient. To tackle the problem, we introduce a class of partitions, termed *dilute partitions*, of formations which cluster agents according to the inter- and intra-cluster interaction strengths. We then apply dilute partitions to trajectories of formations generated by the multi-agent system, and show that each of the trajectories remains bounded along the evolution, and converges to the set of equilibria.'
author:
- 'Xudong Chen$^*$[^1]'
bibliography:
- 'FC.bib'
title: '**Swarm Aggregation under Fading Attractions**'
---
Introduction
============
The use of gradient descent for organizing a group of mobile autonomous agents has been widely appreciated in mathematics and in its real-world applications. Descent equations often provide the most direct demonstration of the existence of local minima, and provide easily implemented algorithms for finding the minima. Furthermore, in the context of multi-agent control, gradient descent can be interpreted as providing decentralized control laws for pairs of neighboring agents in the system. Specifically, we consider a class of multi-agent systems in which pairs of neighboring agents attract/repel each other in a reciprocal way, depending [*only*]{} on the distances of separation. Then, the resulting dynamics of the agents evolve as a gradient flow over a Euclidean space. We describe below the model in precise terms:\
[**Model**]{}. Let $G = (V,E)$ be an undirected connected graph of $N$ vertices, with $V = \{v_1,\ldots, v_N\}$ the vertex set, and $E$ the edge set. We denote by $(v_i,v_j)$ an edge of $G$. Let $V_i$ be the set of neighbors of $v_i$, i.e., $$V_i = \{v_j\in V\mid (v_i,v_j) \in E\}.$$ To each vertex $v_i$, we assign an agent $i$, with $ x_i \in {\mathbb{R}}^n$ its coordinate. With a slight abuse of notation, we refer to agent $i$ as $x_i$. For every edge $(v_i,v_j) \in E$, we let $d_{ij}$ be the distance between $x_i$ and $x_j$, i.e., $d_{ij}:= \| x_i- x_j\|$. The equations of motion of the $N$ agents $ x_1,\cdots, x_N$ in $\mathbb{R}^n$ are given by $$\label{MODEL}
\frac{d}{dt}{ x}_i=\sum_{v_j\in V_i}g_{ij}(d_{ij})(x_j-x_i),\hspace{10pt} \forall\, v_i \in V.$$ Each scalar function $g_{ij}$ is assumed to be continuously differentiable; we refer to $g_{ij}$, for $(v_i,v_j)\in E$, as the [**interaction functions**]{} associated with system . An important property associated with system is that the dynamics of the agents evolve as a gradient flow. A direct computation yields that the associated potential function is given by $$\label{POTENTIAL}
\Psi(x_1,\ldots,x_N):=\sum_{(v_i,v_j)\in E}\int_1^{\|x_j - x_i\|} s g_{ij}(s)ds.$$ The design of interaction functions necessary for organizing such multi-agent systems has been widely investigated: questions about swarm aggregation and avoidance of collisions [@GP; @chu2003self; @XC2014ACC], questions about local/global stabilization of targeted configurations [@chu2003self; @krick2009; @dimarogonas2008stability; @xudongchen2015CDCtriangulatedformationcontrol; @zhiyongsun2015ECC], questions about robustness issues of control laws under perturbations [@AB2012CDC; @sun2014CDC; @USZB; @mou2014CDC], questions about counting the number of critical formations [@BDO2014CT; @UH2013E] have all been treated to some degree. We also refer to [@xudongchen2015CDCformationcontroltimevaryinggraph; @XC2014CDC; @JB2006cdc; @cao2008control; @baillieul2007combinatorial; @AB2013TAC; @AL2014ECC; @lin2014distributed; @chen2015decentralized] for other types of models for multi-agent control, as variants of system . For the purpose of achieving swarm aggregation, the interaction functions $g_{ij}$’s are often designed so that neighboring agents attract each other at a long distance. In particular, we note here that if the underlying graph $G$ is connected, and the interaction functions between neighboring agents have non-fading attractions (as considered in most of the literature: see, for example, [@GP; @chu2003self; @zhiyongsun2015ECC; @krick2009; @dimarogonas2008stability]); then, for any initial condition, the resulting gradient flow will converge to the set of equilibria. In other words, there is no escape of agents to infinity along the evolution of the multi-agent system. Indeed, in any such case, the associated potential function is [*radially unbounded*]{}, i.e., it approaches infinity as the size of a formation tends to infinity. Hence, each trajectory of system has to remain bounded, and hence converges to the set of equilibria.
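To make the gradient-flow structure concrete, the following is a minimal simulation sketch of our own; the forward-Euler discretization, the Lennard-Jones-type choice of the interaction functions, and all parameter values are illustrative assumptions rather than prescriptions from the paper.

```python
# A minimal numerical sketch (ours, not from the paper) of the gradient flow
# above, assuming a forward-Euler discretization and a Lennard-Jones-type
# interaction; all parameter values are illustrative.
import numpy as np

def g(d, sigma1=1.0, sigma2=1.0, n1=4, n2=3):
    # Negative (repulsive) at short range, positive but fading at long range.
    return -sigma1 / d**n1 + sigma2 / d**n2

def simulate(x0, edges, dt=1e-3, steps=10000):
    """Integrate dx_i/dt = sum_{j in V_i} g(d_ij) (x_j - x_i) by forward Euler."""
    x = np.array(x0, dtype=float)      # shape (N, n): one row per agent
    for _ in range(steps):
        f = np.zeros_like(x)
        for i, j in edges:             # undirected edges (v_i, v_j)
            diff = x[j] - x[i]
            gij = g(np.linalg.norm(diff))
            f[i] += gij * diff         # contribution of agent j to agent i
            f[j] -= gij * diff         # reciprocal contribution of i to j
        x += dt * f
    return x

# Example: a path graph on four agents in the plane.
edges = [(0, 1), (1, 2), (2, 3)]
x0 = [[0.0, 0.0], [1.5, 0.0], [3.0, 0.5], [4.5, 0.0]]
print(simulate(x0, edges))
```

Since the pairwise interactions are reciprocal, the two updates per edge implement exactly the symmetric coupling of the model, and the scheme conserves the centroid of the formation up to discretization error.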
On the other hand, it is more realistic to assume that the magnitude of an attraction between two neighboring agents fades away as their mutual distance grows. We refer to [@cucker2007emergent], as an example, for modeling the flocking behavior with fading interactions. Specifically, the authors there considered a second order model: $$\left\{
\begin{array}{l}
\dot x_i = v_i\\
\dot v_i = \sum^N_{j = 1} g(d_{ij}) (v_j - v_i),
\end{array}
\right.$$ with the graph $G$ being complete and without repulsions, i.e., the function $g(d)$ is positive at all distances $d>0$. Also, we recall that the Lennard-Jones force, which describes the interaction between a pair of neutral molecules/atoms, has strong repulsion and fading attraction. We note here that, under the assumption of fading attraction, the potential function associated with system may remain bounded as the size of a formation grows; indeed, one may find a continuous path of formations along which the potential function decreases while the size of the formation approaches infinity. In particular, conventional techniques for proving convergence of gradient flows, such as using the potential function as a Lyapunov function and then appealing to LaSalle’s principle [@lasalle1960some], may not work in this case. Nevertheless, we are still able to show that all the trajectories generated by system converge to the set of equilibria. The proof of the system convergence relies on the use of a class of partitions, termed [*dilute partitions*]{}, of formations introduced in section III. Roughly speaking, dilute partitions decompose formations into different clusters of agents according to certain combinatorial and metric conditions. We apply dilute partitions to trajectories of formations generated by system , and investigate how clusters of agents evolve over time and interact with each other. In particular, we show that each trajectory generated by system has to remain bounded, and hence converges to the set of equilibria. This approach to multi-agent systems, via the use of dilute partitions, might be of independent interest for studying other problems that involve large-sized formations.
This paper expands on some preliminary result presented in [@XC2014ACC] by, among others, providing an analysis of system with an arbitrary connected graph (whereas in [@XC2014ACC], we dealt only with the complete graph), a finer description of the dilute partitions and the associated properties, and a considerable amount of analyses and proofs that were left out. The remainder of the paper is organized as follows. In section II, we introduce definitions and notations and describe some preliminary results about system . We also state the main theorem of the paper. In particular, the main theorem states that the equilibria of system have bounded size, and moreover, all trajectories generated by system converge to the set of equilibria under the assumption of fading attractions. Sections III and IV are devoted to establishing properties of system that are needed for proving the main theorem. A detailed organization of these two sections will be given after the statement of the theorem. We provide conclusions in the last section. The paper ends with Appendices containing proofs of some technical results.
Backgrounds and Main Theorem
============================
In this section, we introduce the main definitions used in this work, describe some preliminary results, and state the main theorem of the paper.
Backgrounds and notations
-------------------------
Let $G = (V,E)$ be an undirected graph of $N$ vertices. Let $V'$ be a subset of $V$; a subgraph $G' = (V', E')$ of $G$ is said to be [*induced*]{} by $V'$ if the following condition is satisfied: for $v_i, v_j \in V'$, the edge $(v_i,v_j)$ is in $E'$ if and only if $(v_i,v_j)$ is in $E$.
Given a formation of $N$ agents in ${\mathbb{R}}^n$, with states $x_1,\ldots, x_N$, respectively, we set $p := (x_1,\ldots,x_N) \in {\mathbb{R}}^{nN}$. We call $p$ a [**configuration**]{}; a configuration $p \in P_G$ can be viewed as an [*embedding*]{} of the graph $G$ in ${\mathbb{R}}^n$ by assigning vertex $v_i$ to $x_i$. We call the pair $(G, p)$ a [**framework**]{}. We define the [*configuration space*]{} $P_{G}$, associated with the graph $G$, as follows: $$P_{G}:=\left\{( x_1,\cdots, x_N)\in \mathbb{R}^{nN} \mid x_i\neq x_j, \, \forall \, (v_i,v_j)\in E \right\}.$$ Equivalently, $P_{G}$ is the set of embeddings of the graph $G$ in $\mathbb{R}^n$ whose neighboring vertices have distinct positions. Let $(G, p)$ be a framework, with $p = (x_1,\ldots, x_N) \in P_G$. Let $G' = {\left (}V',E' {\right )}$ be a subgraph of $G$, with $V' = \{v_{i_1},\ldots,v_{i_k}\}$. We call $p'\in P_{G'}$ a [**sub-configuration**]{} of $p$ associated with $G'$ if $p' = (x_{i_1},\ldots, x_{i_k})$, and correspondingly $(G',p')$ a sub-framework of $(G,p)$.\
[**Attraction/Repulsion functions**]{}. We now introduce the class of interaction functions, termed [*attraction/repulsion functions*]{}, that are considered in the paper. Roughly speaking, an attraction/repulsion function between a pair of agents is such that the two agents attract/repel each other at a long/short distance. Furthermore, we require that the repulsion go to infinity as the mutual distance between the agents approaches to zero and that the attraction fade away as the distance grows. A typical example of such function is the Lennard-Jones type interaction: $$\label{eq:typicalexample}
g(d) = -\frac{\sigma_1}{d^{n_1}} + \frac{\sigma_2}{d^{n_2}}$$ with $\sigma_1$, $\sigma_2$ positive real numbers, and $n_1$, $n_2$ positive integers satisfying $n_1 > n_2 > 1$. We now define attraction/repulsion functions in precise terms. Let ${\mathbb{R}}_+$ be the set of strictly positive real numbers. We denote by $\operatorname{C}({\mathbb{R}}_+,{\mathbb{R}})$ the set of continuous functions from ${\mathbb{R}}_+$ to ${\mathbb{R}}$. We have the following definition:
\[def:fadingattraction\] A function $g$ in $\operatorname{C}({\mathbb{R}}_+,{\mathbb{R}})$ is an [**attraction/repulsion function**]{} if $g$ satisfies the following conditions:
1. [**Strong repulsion**]{}: $$\lim_{d\to 0+} dg(d)=-\infty,$$ and moreover, $$\displaystyle \lim_{d\to 0+}\int^1_d sg(s)ds=-\infty.$$
2. [**Fading attraction**]{}: There exists a number $\alpha_+ > 0$ such that $$g(d) > 0, \hspace{10pt} \forall \, d \ge \alpha_+,$$ and moreover, $$\lim_{d\to\infty} d g(d)=0.$$
Note that the function $dg_{ij}(d)$ shows up in Definition \[def:fadingattraction\] because $| dg_{ij}(d) |$ represents the actual magnitude of attraction/repulsion between $x_i$ and $x_j$.
We assume in the remainder of the paper that all the interaction functions $g_{ij}$, for all $(v_i,v_j) \in E$, are attraction/repulsion functions. Furthermore, we assume, without loss of generality, that the positive number $\alpha_+$ in Definition \[def:fadingattraction\] can be applied to all $g_{ij}$, i.e., $$\label{eq:defx+}
g_{ij}(d) > 0, \hspace{10pt} \forall \, d \ge \alpha_+,$$ for all $(v_i,v_j) \in E$.
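As an illustration we add here (not part of the original), the two conditions of Definition \[def:fadingattraction\] can be checked directly for the Lennard-Jones-type example; the parameter values below are arbitrary choices satisfying $n_1 > n_2 > 1$, and the closed-form antiderivative is specific to those choices.

```python
# A small sketch (ours) checking strong repulsion and fading attraction for the
# Lennard-Jones-type example g(d) = -sigma1/d^n1 + sigma2/d^n2; the parameter
# values are illustrative.
sigma1, sigma2, n1, n2 = 1.0, 1.0, 4, 3      # integers with n1 > n2 > 1

def g(d):
    return -sigma1 / d**n1 + sigma2 / d**n2

def g_bar(d):
    # d*g(d): the signed magnitude of the interaction between two agents.
    return d * g(d)

# Fading attraction: g(d) > 0 for d >= alpha_plus, and d*g(d) -> 0 as d grows;
# here alpha_plus is the unique zero of g.
alpha_plus = (sigma1 / sigma2) ** (1.0 / (n1 - n2))
print(alpha_plus, g(2 * alpha_plus) > 0, g_bar(1e6))

# Strong repulsion: d*g(d) -> -infinity, and the integral of s*g(s) from d to 1
# (antiderivative 1/(2 s^2) - 1/s for these parameters) diverges to -infinity.
F = lambda s: 1.0 / (2.0 * s**2) - 1.0 / s
for d in (1e-1, 1e-2, 1e-3):
    print(d, g_bar(d), F(1.0) - F(d))
```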
In the paper, we often deal with sub-systems of , especially, the subsystems induced by subgraphs of $G$. We thus have the following definition:
Let $G' = (V',E')$ be a subgraph of $G$. A multi-agent system is a [**sub-system induced by $G'$**]{} if it is comprised of agents $ x_i$, for $v_i\in V'$, together with the interaction functions $g_{ij}$, for $(v_i,v_j)\in E'$. Specifically, the dynamics of the agents in the induced sub-system are given by: $$\label{eq:inducedsub-system}
\dot{ x}_{i} = \sum_{v_j\in V'_i}g_{ij}(d_{ij}) ( x_j- x_i),\hspace{10pt} \forall v_i\in V'$$ with $V'_i$ the neighbors of $i$ in $G'$.
For each configuration $p\in P_G$, we denote by $f(p)$ the vector field of system . The configuration $p$ is said to be an [**equilibrium**]{} of system if $f(p) = 0$. For each $v_i\in V$, we let $f_i(p)\in {\mathbb{R}}^n$ be defined by restricting $f(p)$ to agent $x_i$, i.e., $$f_i(p):= \sum_{v_j \in V_i} g_{ij}(d_{ij}) ( x_j- x_i).$$ Similarly, for a subgraph $G' = (V', E')$ of $G$, we denote by $f_{V'}(p)\in {\mathbb{R}}^{n|V'|}$ the restriction of $f(p)$ to the sub-configuration $p'$ associated with $G'$.
Preliminaries and the main result
---------------------------------
In this subsection, we describe some preliminary results, and then state the main theorem of the paper. Recall that the dynamics of system is a gradient flow of $\Psi$ defined in . By assuming that all $g_{ij}$, for $(v_i,v_j) \in E$, are attraction/repulsion functions, we have the following fact:
\[lem:phiboundedbelow\] The potential function $\Psi: P_G \to \mathbb{R}$ is bounded below, i.e., $$\inf\{\Psi(p) \mid p\in P_G\} > -\infty.$$
First, note that from the condition of [*strong repulsion*]{}, there is a positive number $\alpha_-> 0$ such that $$g_{ij}(d) < 0, \hspace{10pt} \forall \, d \le \alpha_- \mbox{ and } \forall\, (v_i,v_j)\in E.$$ We also recall that $\alpha_+$ is defined in such that $$g_{ij}(d) > 0, \hspace{10pt} \forall \, d \ge \alpha_+ \mbox{ and } \forall\, (v_i,v_j)\in E.$$ This, in particular, implies that for all $(v_i,v_j) \in E$, $$\min_{d\in [\alpha_-, \alpha_+]} \int^d_{1} sg_{ij}(s) ds = \inf_{d \in {\mathbb{R}}_+} \int^d_{1} sg_{ij}(s)ds.$$ Now, let $$\label{eq:defpsi0}
\psi_0:= \min_{(v_i,v_j) \in E}\,\left\{ \min_{d\in [\alpha_-, \alpha_+]} \int^d_{1} s g_{ij}(s) ds \right\};$$ then, we have $$\Psi(p) \ge |E|\, \psi_0, \hspace{10pt} \forall\, p\in P_G,$$ which completes the proof.
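As a concrete illustration added here (an example of our own, not taken from the original), suppose every edge carries the Lennard-Jones-type interaction $g_{ij}(d) = -2/d^{4} + 1/d^{3}$, so that $\alpha_+ = 2$. Then $$\int^d_{1} s\, g_{ij}(s)\, ds = \int^d_1 \left(-\frac{2}{s^{3}} + \frac{1}{s^{2}}\right) ds = \frac{1}{d^{2}} - \frac{1}{d},$$ which attains its infimum $-\tfrac14$ at $d = 2 = \alpha_+$; hence $\psi_0 = -\tfrac14$ for this example, and $\Psi(p) \ge -|E|/4$ for every $p\in P_G$.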
It is well known that along a trajectory of a gradient flow, the potential function is non-increasing. On the other hand, the condition of [*strong repulsion*]{} implies that the potential function $\Psi$ is infinite if the distance of separation of two neighboring agents is zero. This, in particular, implies that there is no collision of neighboring agents along the evolution, and hence solutions of system exist for all time. Furthermore, for a configuration $p = (x_1,\ldots, x_N)\in P_G$, if we let $d_-(p)$ and $d_+(p)$ be defined as follows: $$\label{eq:defd-d+}
\left \{
\begin{array}{l}
d_-(p) := \min \left \{ \|x_j - x_i\| \mid (v_i, v_j)\in E \right \} \vspace{3pt} \\
d_+(p) := \max \left \{ \|x_j - x_i\| \mid (v_i, v_j)\in E \right \},
\end{array}
\right.$$ then we have the following fact:
\[ELB\] Let $p(0)\in P_G$ be the initial condition of system , and $p(t)$ be the trajectory generated by the system. Then, $$\inf \left\{ d_-(p(t)) \mid t\ge 0 \right \} > 0.$$
Let $\psi_0$ be defined in . Then, from the condition of [*strong repulsion*]{}, there exists a number $d > 0$ such that $$\int^d_{1} s g_{ij}(s) ds + {\left (}|E| - 1{\right )}\, \psi_0 > \Psi(p(0))$$ for all $(v_i,v_j) \in E$. We now show that $
d_-(p(t)) > d$ for all $t \ge 0$. Suppose that, to the contrary, there exists an instant $t\ge 0$ such that $\|x_j(t) - x_i(t)\| = d$ for some $(v_i,v_j)\in E$. Then, by definition of $\psi_0$, we have $$\Psi(p(t)) \ge \int^d_{1} sg_{ij}(s) ds + {\left (}|E| - 1{\right )}\, \psi_0 > \Psi(p(0))$$ which contradicts the fact that $\Psi(p(t))$ is non-increasing in $t$. This completes the proof.
Note that from Lemmas \[lem:phiboundedbelow\] and \[ELB\], if the potential function $\Psi$ is such that $$\label{eq:potentialconferencepaper}
\lim_{d_+(p) \to \infty} \Psi(p) = \infty,$$ then each trajectory $p(t)$ of system has to remain bounded, and hence converges to the [set of equilibria]{}. Yet, this condition may not hold under the condition of [*fading attraction*]{}. For example, if each $g_{ij}$ is a Lennard-Jones type interaction, i.e., $$g_{ij}(d) = - \frac{\sigma_{ij,1}}{d^{n_{ij,1}}} + \frac{\sigma_{ij,2}}{d^{n_{ij,2}}}$$ with $n_{ij,1} > n_{ij,2} > 2$; then, for all $(v_i,v_j) \in E$, we have $$\int^{\infty}_{1} sg_{ij}(s)ds = -\frac{\sigma_{ij,1}}{n_{ij,1} - 2} + \frac{\sigma_{ij,2}}{n_{ij,2} - 2} < \infty,$$ and hence $\Psi(p)$ may remain bounded as $d_+(p)$ diverges. Nevertheless, we are still able to establish the convergence of system . We state below the main result of the paper.
\[thm:MAIN\] Let $G = (V, E)$ be a connected undirected graph, and let $g_{ij}$, for $(v_i,v_j) \in E$, be attraction/repulsion functions. Then, the multi-agent system satisfies the following properties:
1. There exist two positive numbers $D_-$ and $D_+$ such that if $p$ is an equilibrium of system , then $$D_- \le d_-(p) \le d_+(p) \le D_+$$ with $d_-(p)$ and $d_+(p)$ defined in .
2. For any initial condition $p(0)\in P_G$, the trajectory $p(t)$ of system converges to the set of equilibria.
In the remainder of the paper, we establish properties of system that are needed to prove Theorem \[thm:MAIN\]. In section \[sec:dp\], we introduce a class of partitions, termed [*dilute partitions*]{}, of frameworks, which decomposes frameworks into disjoint sub-frameworks satisfying certain combinatorial and metric properties. This is a rich question, related to the $k$-means clustering [@macqueen1967some] and its variants. We then apply dilute partitions to unbounded sequences of frameworks, and describe relevant properties associated with it. In section \[sec:cgf\], we apply dilute partitions to frameworks along a class of trajectories generated by system , and establish certain path behavior of the trajectories, which is relevant to the proof of system convergence.
Dilute Partitions and Diluting Sequences {#sec:dp}
========================================
Let $(G, p)$ be a framework. We say that $\sigma = \{(G_i, p_i)\}^m_{i=1}$, with $G_i = (V_i, E_i)$, is a [**partition**]{} of $(G,p)$ if $\sigma$ satisfies the following conditions:
1. The subsets $\{V_1,\ldots,V_m\}$ of $V$ form a partition: $$\label{eq:inducedpartitionofV}
V = \sqcup^m_{i=1} V_i.$$
2. Each $G_i$ is a subgraph of $G$ induced by $V_i$, and each $p_i$ is a sub-configuration of $p$ associated with $G_i$.
We refer to this as the partition of $V$ [*induced by $\sigma$*]{}.
Now, for a partition $\sigma = \{(G_i, p_i)\}^m_{i=1}$ of a framework $(G,p)$, let the [diameter]{} of a sub-configuration $p_i$ be $$\phi(p_i) : = \max \left \{\| x_k - x_j\| \mid v_j, v_k \in V_i \right\}.$$ We then define the [**intra-cluster distance**]{} of $\sigma$ by $$\label{eq:intradistance}
{\mathcal}{L}_-(\sigma): = \max\left \{ \phi(p_i) \mid 1\le i \le m \right\}.$$ Given two distinct sub-frameworks $(G_i,p_i)$ and $(G_j, p_j)$, let $d(p_i,p_j)$ be the distance between $p_i$ and $p_j$: $$d(p_i,p_j): = \min\left\{\|x_{i'} - x_{j'}\| \mid v_{i'} \in V_i, \, v_{j'} \in V_j \right \}.$$ We say that $(G_i,p_i)$ and $(G_j, p_j)$ are [**adjacent**]{} if there is an edge $(v_{i'},v_{j'})$ of $G$, with $v_{i'}\in V_i$ and $v_{j'} \in V_j$. We then define the [**inter-cluster distance**]{} of $\sigma$ by $$\label{eq:interdistance}
{\mathcal}{L}_+(\sigma) := \min_{(i,j)} \left \{ d(p_i,p_j) \right \},$$ where the minimum is taken over the pairs $(i,j)$ for $(G_i,p_i)$ and $(G_j,p_j)$ to be adjacent. With the definitions and notations above, we define dilute partitions:
\[def:dilutepartition\] Let $(G, p)$ be a framework, with $G$ connected and $p\in P_G$. A partition $\sigma = \{(G_i, p_i)\}^m_{i=1}$ of $(G,p)$ is a **dilute partition** with respect to a positive number $l$ if it satisfies the following two conditions:
1. Each $G_i$ is connected.
2. If $(G_i,p_i)$ and $(G_j,p_j)$ are adjacent, then $d(p_i,p_j) > l$ and ${\max\{\phi(p_i), \phi(p_j)\}} < {d(p_i,p_j)}$.
In the remainder of the section, we fix a connected graph $G$, and assume that $G$ has at least two vertices. For a positive number $l$ and a configuration $p\in P_G$, we let $\Sigma(l{\,;\,}p)$ be the set of dilute partitions of $(G,p)$ with respect to $l$. Note that $\Sigma(l{\,;\,}p)$ is nonempty because $\Sigma(l{\,;\,}p)$ always contains the [**trivial partition**]{}, namely, the partition which has only one cluster containing all the agents. A partition of $(G,p)$ is said to be [*nontrivial*]{} if it is not the trivial partition. We also note that from Definition \[def:dilutepartition\], if $\sigma \in \Sigma(l{\,;\,}p)$ and $l \ge l' >0$, then $\sigma \in \Sigma(l'{\,;\,}p)$. In other words, we have $\Sigma(l{\,;\,}p) \subseteq \Sigma(l'{\,;\,}p)$ for all configurations $p\in P_G$.
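To make Definition \[def:dilutepartition\] concrete, the following is a sketch of our own (the function and helper names are hypothetical) that checks whether a given clustering of the vertices is a dilute partition with respect to a threshold $l$; it assumes the clusters already form a partition of the vertex set.

```python
# A sketch (ours) checking the two conditions of the dilute-partition definition.
import numpy as np
from itertools import combinations

def _connected(vertices, edges):
    """Connectivity of the subgraph induced by `vertices` (depth-first search)."""
    vertices = set(vertices)
    adj = {v: set() for v in vertices}
    for i, j in edges:
        if i in vertices and j in vertices:
            adj[i].add(j)
            adj[j].add(i)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == vertices

def is_dilute_partition(x, edges, clusters, l):
    """x: (N, n) array of positions; edges: list of pairs (i, j); clusters: a
    partition of {0, ..., N-1} given as a list of vertex lists; l > 0."""
    x = np.asarray(x, dtype=float)
    if not all(_connected(C, edges) for C in clusters):          # condition 1
        return False
    sets = [set(C) for C in clusters]
    diam = [max((np.linalg.norm(x[a] - x[b]) for a, b in combinations(C, 2)),
                default=0.0) for C in clusters]
    for a, b in combinations(range(len(clusters)), 2):
        adjacent = any((i in sets[a] and j in sets[b]) or
                       (i in sets[b] and j in sets[a]) for i, j in edges)
        if adjacent:                                             # condition 2
            dist = min(np.linalg.norm(x[i] - x[j])
                       for i in clusters[a] for j in clusters[b])
            if dist <= l or max(diam[a], diam[b]) >= dist:
                return False
    return True
```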
We will now state the main result of the section, which relates dilute partitions to sequences of diverging configurations:
[**Diluting sequence**]{}. Let $\{p(i)\}_{i\in{\mathbb{N}}}$ be a sequence of configurations in $P_G$. We say that $\{p(i)\}_{i\in{\mathbb{N}}}$ is [**unbounded**]{} if for any $d > 0$, there exists an $i\in{\mathbb{N}}$ such that $\phi(p(i)) > d$. We now formalize in detail the following fact: for any unbounded sequence $\{p(i)\}_{i\in{\mathbb{N}}}$, there is a subsequence $\{p(n_i)\}_{i\in{\mathbb{N}}}$ such that (i) the agents in $p(n_i)$, for $i\in{\mathbb{N}}$, are clustered in the same way; (ii) the inter-cluster distances diverge while the intra-cluster distances remain bounded. Precisely, we state the following result:
\[CCOSODC\] Let $\{p(i)\}_{i\in{\mathbb{N}}}$ be an unbounded sequence in $P_G$, and $\{l_i\}_{i\in {\mathbb{N}}} $ be a sequence of positive real numbers, with $\lim_{i\to \infty} l_i = \infty $. Then, there is a subsequence $\{p(n_i)\}_{i\in\mathbb{N}}$ out of $\{p(i)\}_{i\in{\mathbb{N}}}$, together with a sequence of nontrivial dilute partitions $$\left \{\sigma_i\in \Sigma(l_i{\,;\,}p(n_i)) \right \}_{ i\in{\mathbb{N}}},$$ such that the following properties are satisfied:
1. All partitions $\sigma_i$ induce the same partition of $V$.
2. There is a positive number $L_0$ such that $${\mathcal}{L}_-(\sigma_i) \le L_0, \hspace{10pt} \forall i\in \mathbb{N}.$$
We refer to $\{p(n_i)\}_{i\in \mathbb{N}}$ as a [**diluting sequence**]{}.
The remainder of the section is organized to establish Theorem \[CCOSODC\]. In particular, we establish in Subsection \[ssec:nontrivialclustering\] a sufficient condition for a framework $(G,p)$ to admit a nontrivial dilute partition. We then provide, in Subsection \[ssec:ProofIII\], a proof of Theorem \[CCOSODC\].
Existence of nontrivial dilute partitions {#ssec:nontrivialclustering}
-----------------------------------------
Naturally, given a framework $(G,p)$, there is a partial order defined over the set of partitions of $(G,p)$: Let $\sigma$ and $\sigma'$ be two partitions of $(G,p)$. Let $V = \sqcup^m_{i = 1} V_i$ and $V = \sqcup^{m'}_{i = 1} V'_i$ be the partitions of the vertex set $V$ induced by $\sigma$ and $\sigma'$, respectively. We say that $\sigma'$ is [**coarser**]{} than $\sigma$, or simply write $\sigma' \prec \sigma$, if $m > m'$, and moreover, each $V_i$ is a subset of $V'_j$ for some $j = 1,\ldots, m'$. Recall that given a configuration $p$, $\phi(p)$ is the diameter of $p$. Now, fix a positive number $l>0$, we establish the following result:
\[NC\] Let $(G,p)$ be a framework, with $G$ a connected graph of at least two vertices and $p\in P_G$. For a positive number $l$, there exists a threshold $d>0$ such that if $\phi(p) > d$, then there is a nontrivial partition in $\Sigma(l{\,;\,}p)$.
The proof will be carried out by contradiction: we assume that for any $d>0$, there exists a configuration $p\in P_G$, with $\phi(p) \ge d$, such that $\Sigma(l{\,;\,}p)$ is a singleton, comprised only of the trivial partition.
Pick any such configuration $p$, with $\phi(p)$ sufficiently large. To proceed, first note that there exists at least a pair of agents $x_i$ and $x_j$ in $p$, with $(v_i,v_j)\in E$, such that $\|x_j -x_i\|\le l$. This holds because otherwise, the agent-wise partition of $(G,p)$— $\left\{{\left (}\{i\}, \varnothing{\right )}, x_i\right\}^N_{i = 1}$, with $(\{i\}, \varnothing)$ a graph of one single vertex $i$ and no edge— is a nontrivial dilute partition in $\Sigma(l{\,;\,}p)$. Now, define a partition $\sigma = \{(G_i,p_i)\}^{m}_{i=1}$ of $(G,p)$, with $G_i = (V_i,E_i)$, as follows: Two vertices $v_{j}$ and $v_{j'}$ are in the same subset $V_i$ if, and only if, there is a chain of vertices $v_{j_1},\ldots,v_{j_q}$ in $V$, with ${v_{j_1}}=v_j$ and $v_{j_q}=v_{j'}$, such that $$\label{eq:definingrule}
{\left (}v_{j_k}, v_{j_{k+1}} {\right )}\in E \hspace{10pt} \mbox{ and } \hspace{10pt} \| x_{j_k} - x_{j_{k+1}}\| \le l$$ for all $k = 1,\ldots, q-1$.
We describe below some properties of the newly constructed partition $\sigma$. First, note that from , each subgraph $G_i$ is connected and $\phi(p_i)$ is bounded above; indeed, we have $$\label{eq:evaluatesize}
\phi(p_i)< l', \hspace{10pt} \mbox{ for } \hspace{5pt} l' := (N-1 )l.$$ Furthermore, we have $$\label{eq:1mN}
1< m < N.$$ To see this, first note that there exists at least an edge $(v_i,v_j)\in E$ such that $\|x_j - x_i\| \le l$, and hence there is at least a subgraph $G_k$ having more than one vertex, which implies that $m < N$. Also, note that $\phi(p)$ can be made sufficiently large; in particular, if we let $\phi(p)\ge l'$, then, from , we must have $m > 1$.
Now, suppose that for any two adjacent frameworks $(G_i,p_i)$ and $(G_j,p_j)$, we have $
d(p_i,p_j) > l'$; then, $\sigma$ is a nontrivial partition in $\Sigma(l' {\,;\,}p)$. Since $l' \ge l$, we have $\sigma \in \Sigma(l{\,;\,}p)$, which is a contradiction. We thus assume that there are two adjacent frameworks $(G_i,p_i)$ and $(G_j,p_j)$ such that $d(p_i,p_j) \le l'$. Similarly, using this condition, we define a partition $\sigma' = \{(G'_i, \, p'_i)\}^{m'}_{i=1}$ of $(G,p)$, with $G'_i = (V'_i, E'_i)$, as follows: Each $V'_i$ is a union of certain subsets $V_j$, and two subsets $V_{j}$ and $V_{j'}$ belong to the same set $V'_i$ if, and only if, there is a chain of subsets $V_{j_1},\ldots,V_{j_{q'}}$, with $V_{j_1}=V_j$ and $V_{j_{q'}}=V_{j'}$, such that $(G_{j_k}, p_{j_k})$ and $(G_{j_{k+1}}, p_{j_{k+1}})$ are adjacent, and moreover, $$d(p_{j_k},p_{j_{k+1}}) \le l', \hspace{10pt} \forall \, k = 1,\ldots, q'-1.$$ Similarly, by construction, we have that for each $i = 1,\ldots, m'$, the subgraph $G'_i$ is connected, and $$\phi(p'_i) \le l'', \hspace{10pt} \mbox{ for } \hspace{5pt} l'' := (2m -1) \, l'.$$ Furthermore, by applying the same arguments as used to prove , we obtain $1 < m' < m$; indeed, we have $\sigma \succ \sigma'$. We then repeat the argument as above. Specifically, we assume that there is at least one pair of adjacent frameworks $(G'_i, p'_i)$ and $(G'_j, p'_j)$ with $d(p'_i,p'_j) \le l''$. Using this as the defining condition, we obtain another nontrivial partition $\sigma''$ of $(G,p)$, with $\sigma'' \prec \sigma' $. Continuing with this process, we obtain a chain of partitions of $(G,p)$ as $
\sigma \succ \sigma' \succ \sigma'' \succ\cdots
$. Since there are only finitely many partitions of $(G,p)$, the chain terminates after finitely many steps. For simplicity, but without loss of generality, we assume that the chain stops at $\sigma'$. In other words, for any two adjacent frameworks $(G'_i, p'_i)$ and $(G'_j, p'_j)$, we have $d(p'_i,p'_j)> l'' $. But then, $\sigma'$ is in $\Sigma(l''{\,;\,}p)$. Since $l'' \ge l$, we have $\sigma'\in \Sigma(l{\,;\,}p)$, which is a contradiction. This completes the proof.
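The clustering rule used in the proof above, namely grouping vertices into connected components of the subgraph formed by the edges of length at most $l$, can be sketched as follows; this is our own illustration with hypothetical names, using a union-find structure.

```python
# A sketch (ours) of the threshold clustering from the proof of Proposition [NC]:
# vertices joined by an edge of length <= l end up in the same cluster, so each
# cluster induces a connected subgraph of diameter at most (N-1)*l.
import numpy as np

def threshold_clusters(x, edges, l):
    """x: (N, n) array of agent positions; edges: list of pairs (i, j); l > 0."""
    N = len(x)
    parent = list(range(N))                 # union-find over the vertices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i, j in edges:
        if np.linalg.norm(np.asarray(x[i], float) - np.asarray(x[j], float)) <= l:
            parent[find(i)] = find(j)       # merge the two components

    clusters = {}
    for v in range(N):
        clusters.setdefault(find(v), []).append(v)
    return list(clusters.values())
```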
\[rmk:2\] Note that for any configuration $p$, we have $\phi(p) \ge d_+(p)$. Thus, from Proposition \[NC\], we have that for any $l > 0$, there exists $d > 0$ such that if $d_+(p)> d$, then there is a nontrivial partition in $\Sigma(l{\,;\,}p)$.
Proof of Theorem \[CCOSODC\] {#ssec:ProofIII}
----------------------------
For simplicity but without loss of generality, we assume that both sequences $\{ \phi(p(i)) \}_{i\in {\mathbb{N}}}$ and $\{l_i\}_{i\in {\mathbb{N}}}$ monotonically increase and approach infinity. Note that in the general case, the condition of monotonicity can be achieved by passing the original sequences to subsequences. The proof of Theorem \[CCOSODC\] is carried out by induction on the number of vertices of $G$.
For the base case $N = 2$, we write $p(i) = (x_1(i),x_2(i))$; then $\phi(p(i)) = \|x_2(i) - x_1(i)\|$. For simplicity, we assume that $
\phi(p(i)) > l_i
$ for all $ i\in{\mathbb{N}}$ (without passing to a subsequence). Let $\sigma_i$ be the agent-wise partition of $(G,p(i))$. Then, the following hold: (i) each $\sigma_i$ is in $\Sigma(l_i{\,;\,}p(i))$; (ii) ${\mathcal}{L}_-(\sigma_i) = 0$ for all $i\in {\mathbb{N}}$; and (iii) all the $\sigma_i$ induce the same partition of $V$, i.e., $V = \{v_1\} \cup\{v_2\}$. This establishes the base case.
For the inductive step, we assume that Theorem \[CCOSODC\] holds for $N \le k-1$, and prove for $N = k$. Since $\{\phi(p(i))\}_{i\in{\mathbb{N}}}$ monotonically increases and approaches infinity, from Proposition \[NC\], we have that for each $i\in {\mathbb{N}}$, there is a number $j_i\in \mathbb{N}$ such that if $j \ge j_i$, then there is a nontrivial partition of $(G,p(j))$ in $\Sigma(l_i{\,;\,}p(j))$. Without loss of generality, we assume that $j_i = i$ for all $i\in {\mathbb{N}}$. So, for each framework $(G,p(i))$, there is a nontrivial partition $\sigma_i$ in $\Sigma(l_i{\,;\,}p(i))$. Since there are only finitely many partitions of $V$, there must be a subsequence of $\{\sigma_i\}_{i\in {\mathbb{N}}}$ such that all the partitions in the subsequence induce the same partition of $V$. Again, for simplicity but without loss of generality, we assume that the subsequence can be chosen as the original sequence $\{\sigma_i\}_{i\in {\mathbb{N}}}$ itself.
Now, suppose that $\{{\mathcal}{L}_-(\sigma_i)\}_{i\in{\mathbb{N}}}$ is bounded; then, the sequence $\{p(i)\}_{i\in{\mathbb{N}}}$ is a diluting sequence, and hence we complete the proof. We thus assume that $\{{\mathcal}{L}_-(\sigma_i)\}_{i\in {\mathbb{N}}}$ is unbounded. First, let $V = \sqcup^m_{j=1} V_j$ be the partition of $V$ induced by $\sigma_i$, for all $i\in {\mathbb{N}}$. Let $G_j$ be the subgraph induced by $V_j$, and write $$\sigma_i = \{(G_j, p_j(i))\}^m_{j=1}.$$ Without loss of generality, we assume that $\{p_1(i)\}_{i\in {\mathbb{N}}}$ is unbounded. For simplicity, we assume that all the other sequences $\{p_j(i)\}_{i\in{\mathbb{N}}}$, for $j = 2,\ldots, m$, are bounded. But the arguments below can be used to prove general cases.
Since all the partitions $\sigma_i$ are nontrivial, we have that $G_1$ is a proper subgraph of $G$. For a framework $(G_1, p_1)$, with $p_1\in P_{G_1}$, and a positive number $l$, let $\Sigma_1(l{\,;\,}p_1)$ be the set of dilute partitions of $(G_1, p_1)$ with respect to $l$. Appealing to the induction hypothesis, we obtain a subsequence of configurations $\{p_1(n_i)\}_{i\in {\mathbb{N}}}$, with $n_i \ge i$, and a sequence of nontrivial partitions of $(G_1, p_1(n_i))$: $$\left \{\sigma'_i \in \Sigma_1(l_i{\,;\,}p_1(n_i)) \right\}_{ i\in {\mathbb{N}}}.$$ The partitions above satisfy the following two conditions:
1. All $\sigma'_i$ induce the same partition of $V_1$: $
V_1 = \sqcup^{m'}_{j = 1} V_{1_j}
$.
2. There is a positive number $L'_0$ such that $${\mathcal}{L}_-(\sigma'_i) \le L'_0 \hspace{10pt}
\forall i\in{\mathbb{N}}.$$
We now use $\sigma'_i$ and $\sigma_{n_i}$ to construct a new partition $\sigma_i^*$ of $(G, p(n_i))$: First, note that since $\{l_i\}_{i\in {\mathbb{N}}}$ monotonically increases and $n_i \ge i$ for all $i \in {\mathbb{N}}$, we have $ l_{n_i} \ge l_i$ for all $i\in{\mathbb{N}}$. So, if we write $$\sigma'_i = \{(G_{1_j}, p_{1_j}(n_i))\}^{m'}_{j = 1},$$ and define a partition of $(G,p(n_i))$ as follows: $$\sigma^*_i := \{G_{1_j}, p_{1_j}(n_i)\}^{m'}_{j = 1} \cup \{G_j, p_{j}(n_i)\}^{m}_{j = 2},$$ then $\sigma^*_i$ is in fact an element in $\Sigma(l_i{\,;\,}p(n_i))$. Furthermore, from condition b) above, we have $${\mathcal}{L}_-(\sigma^*_i) \le \max \left\{ L'_0, \, \phi(p_2(n_i)),\ldots, \phi(p_m(n_i))\right\}.$$ Since each sequence $\{\phi(p_j(n_i))\}_{i\in{\mathbb{N}}}$, for $j= 2,\ldots, m$, is by assumption bounded above, we conclude that there exists a positive number $L_0$ such that $
{\mathcal}{L}_-(\sigma^*_i) \le L_0
$ for all $i\in{\mathbb{N}}$. We have thus shown that $\{p(n_i)\}_{i\in {\mathbb{N}}}$ is a diluting sequence.
Analysis and Proof of Theorem \[thm:MAIN\] {#sec:cgf}
==========================================
This section is devoted to the proof of Theorem \[thm:MAIN\]. We start with a brief outline of the proof. In subsection \[sec:BSE\], we establish the first part of Theorem \[thm:MAIN\], i.e., we show that the distances between neighboring agents in an equilibrium are bounded both above and below. In subsection \[ssec:sct\], we introduce a class of trajectories generated by system , termed [*self-clustering trajectories*]{}. Roughly speaking, a [*self-clustering trajectory*]{} is such that the agents in the configuration evolve over time to form disjoint clusters, with the intra- and inter-cluster distances bounded above and below, respectively, by certain prescribed thresholds. We show that any self-clustering trajectory remains bounded if the interactions between agents in different clusters are all attractions. In subsection \[ssec:sotgf\], we prove the convergence of system . The proof is carried out by contradiction: we show that if there were an unbounded trajectory generated by system , then it would be a self-clustering trajectory, and moreover, the interactions between agents in different clusters are all attractions after a finite amount of time. Then, by appealing to the results derived in subsection \[ssec:sct\], we conclude that any such trajectory is bounded, which is a contradiction.
Bounded sizes of equilibria {#sec:BSE}
---------------------------
In this subsection, we show that there exist positive numbers $D_+$ and $D_-$ such that if $p$ is an equilibrium of system , then $$\label{D-dpD+}
D_- \le d_-(p) \le d_+(p) \le D_+.$$ [**Existence of an upper bound**]{}. Recall that $\alpha_+$ (defined in ) is such that $$g_{ij}(d) > 0, \hspace{10pt} \forall \, d \ge \alpha_+ \mbox{ and } \forall\, (v_i,v_j)\in E;$$ we then set $$\label{eq:eqfirstdefD+}
D_+ := (N - 1)\,\alpha_+.$$ We show that if $p$ is an equilibrium of system , then $d_+(p) \le D_+$. The proof is carried out by contradiction.
Let $p$ be an equilibrium with $d_+(p) > (N - 1)\alpha_+$. Without loss of generality, we assume that $\|x_N - x_1\| = d_+(p)$. Let $x^j_i$, for $j =1,\ldots, n$, be the $j$-th coordinate of $x_i$; by rotating and/or translating $p$ if necessary, we assume that both $x_1$ and $x_N$ are on the first-coordinate, with $x^1_1 < x^1_N$. In other words, we have $$\label{eq:condition1Nov1}
x^1_N - x^1_1 = d_+(p) > (N - 1) \alpha_+ .$$ Since there are only $N$ agents, there must be a partition $
V = V' \sqcup V''
$, with $v_1\in V'$ and $v_N\in V''$, such that $$x^1_{j} - x^1_{i} > \alpha_+, \hspace{10pt} \forall \, v_i\in V' \mbox{ and } \forall\, v_j\in V''.$$ Indeed, if such bi-partition does not exist, then there is a chain $v_{i_1}, \ldots,v_{i_N}$, with $v_{i_1} = v_1$ and $v_{i_N} = v_N$, such that $
x^1_{i_{j+1}} - x^1_{i_j} \le \alpha_+
$ for all $j = 1,\ldots, N-1$. But then, $$x^1_N - x^1_1 = \sum^{N-1}_{j=1} {\left (}x^1_{i_{j+1}} - x^1_{i_{j}} {\right )}\le (N - 1) \alpha_+,$$ which contradicts .
Following the partition $V = V' \sqcup V''$, we define a subset of $E$ as follows: $$E^*:= \left\{ (v_i, v_j) \in E \mid v_i\in V',\, v_j\in V'' \right \},$$ which is nonempty because $G$ is connected. We further define two variables as follows: $$s'(p):= \sum_{v_i \in V'} x^1_i \hspace{10pt} \mbox{ and } \hspace{10pt} s''(p):= \sum_{v_i \in V''} x^1_i.$$ The dynamics of $s'(p)$ and $s''(p)$ are given by $$\frac{d}{dt} s'(p) = -\frac{d}{dt} s''(p) = \sum_{(v_i,v_j) \in E^*} g_{ij}(d_{ij}) (x^1_j - x^1_i ).$$ Note that for each $(v_i,v_j)\in E^*$, we have $
x^1_j - x^1_i > \alpha_+
$, and hence $g_{ij}(d_{ij}) > 0$. So, $$\frac{d}{dt} s'(p) = -\frac{d}{dt} s''(p) > 0,$$ which contradicts the fact that $p$ is an equilibrium of system . We have thus shown that $D_+$, defined in , is an upper bound for $d_+(p)$ for $p$ an equilibrium.
[**Existence of a lower bound**]{}. We first have some notations. Let $(v_i,v_j)\in E$ be an edge of $G$; for any positive number $d$, we define a function in $\operatorname{C}({\mathbb{R}}_+, {\mathbb{R}})$ as follows: $$\label{eq:defbargij}
{\overline}g_{ij}(d) := dg_{ij}(d).$$ Let $S$ be a subset of ${\mathbb{R}}$, and let ${\overline}g^{-1}_{ij}(S)$ be a subset of ${\mathbb{R}}_+$ defined by $${\overline}g_{ij}^{-1}(S) := \left\{ d \in {\mathbb{R}}_+ \mid {\overline}g_{ij}(d) \in S \right\}.$$ With the notations above, we establish the following fact:
\[lem:lem2\] Let $g_{ij}$, for $(v_i,v_j) \in E$, be an attraction/repulsion function. Then, $$\lim_{\eta \to \infty} \sup \left \{d \in {\overline}g^{-1}_{ij}{\left (}\pm\, \eta {\right )}\right \} = 0.$$
It suffices to show that for any $d\in {\mathbb{R}}_+$, there exists a positive number $\eta_d$ such that if $\eta > \eta_d$, then ${\overline}g^{-1}_{ij}{\left (}\pm \, \eta{\right )}$ is a nonempty subset of the open interval $(0,d)$. First, from the condition of [*fading attraction*]{}, we have $$\sup \left \{|{\overline}g_{ij}(d')| \mid d' \ge d \right \} < \infty.$$ We can thus define $$\eta_d := \sup \left \{|{\overline}g_{ij}(d')| \mid d' \ge d\right \} + 1.$$ Then, by the fact that $\lim_{d \to 0+} {\overline}g_{ij}(d) = -\infty$, we conclude that if $\eta > \eta_d$, then ${\overline}g^{-1}_{ij}{\left (}\pm \, \eta {\right )}$ is nonempty, and is contained in $(0,d)$.
Let $d$ be a positive number; we define a subset of $P_G$ as follows: $$\label{eq:defZGd}
Z_G(d) := \left \{ p\in P_G\mid d_-(p) = d \right \}.$$ Recall that $f(p)$ is the vector field of system at $p$. With Lemma \[lem:lem2\], we establish the following fact:
\[pro:nmvecfld\] Let $Z_G(d)$ be defined in . Then, $$\lim_{d\to 0+} \inf \left \{ \|f(p)\| \mid p\in Z_G(d) \right \} = \infty.$$
The proof will be carried out by induction on the number of vertices of $G$. For the base case $N = 2$, we have $$Z_G(d) = \left \{ (x_1, x_2) \in P_G\mid \|x_2 - x_1\| = d \right \}.$$ We also have for any $p\in P_G$, $$\|f(p)\| = \sqrt{2}\, \|f_1(p)\| = \sqrt{2} \, \|f_2(p)\| = \sqrt{2}\, | {\overline}g_{12}(d_{12}) |.$$ From the condition of [*strong repulsion*]{}, we have $
\lim_{d\to 0+}\sqrt{2}\, | {\overline}g_{12}(d)| = \infty
$, which establishes the base case.
For the inductive step, we assume that Proposition \[pro:nmvecfld\] holds for $N \le k-1$, and prove for $N = k$. The proof will be carried out by contradiction: we assume that there exists a number $\eta > 0$ such that for any $d > 0$, there is a number $d_* \in (0, d)$ and a configuration $p\in Z_{G}{\left (}d_*{\right )}$ such that $\|f(p)\| \le \eta$.
Choose $d$, and hence $d_*$, arbitrarily small; let $p\in Z_G(d_*)$ be such that $ \| f(p) \| \le \eta$. Without loss of generality, we assume that $(v_1,v_2)$ is an edge of $G$, and moreover, $ d_{12} = d_*$. Since $G$ is connected, there is a connected subgraph $G' = (V', E')$ of $G$ which has $(k - 1)$ vertices and contains the edge $(v_1,v_2)$. Label the vertices of $G$ such that $V' = \{v_1,\ldots,v_{k-1}\}$. Note that if we let $p'$ be the sub-configuration of $p$ associated with $G'$, then $p' \in Z_{G'}(d_*)$. Let $S'$ be the sub-system induced by $G'$, and $f'(p')$ be the associated vector field at $p'$. From the induction hypothesis, if we let $\omega \in {\mathbb{R}}_+$ be such that $$\label{eq:defgamma}
\| f'(p') \| = \omega\, \eta;$$ then, $\omega$ can be made arbitrarily large by taking $d_*$ sufficiently small.
For each $v_i \in V'$, let $f'_i(p')$ be defined by restricting $f'(p')$ to $x_i$. From , there exists at least a vertex $v_i\in V'$ such that $$\|f'_i(p') \| \ge \omega' \, \eta, \hspace{10pt} \mbox{for } \omega' := \omega/ \sqrt{k - 1}.$$ Note that $\omega'$ can be made arbitrarily large by increasing $\omega$. Without loss of generality, we assume that $p$ is rotated in a way such that $$\label{eq:assumptiononf'_i}
f'_i(p) = {\left (}\| f'_i(p) \|, 0, \ldots, 0{\right )}\in {\mathbb{R}}^n.$$ There are two cases:
[*Case I*]{}. Suppose that $(v_i,v_k)\notin E$; then, $f_i(p) = f'_i(p)$. In particular, if we let $\omega' > 1$, then $$\|f(p)\| \ge \|f_i(p)\| = \omega'\, \eta > \eta$$ which is a contradiction. The proof is then complete.
[*Case II*]{}. We assume that $(v_i,v_k)\in E$. Recall that $x^1_i$ is the first coordinate of $x_i$. Following , the dynamics of $x^1_i$, in system , is given by $$\dot x^1_i =\| f'_i(p) \| + g_{ik}(d_{ik}) (x^1_k - x^1_i ).$$ Since $|\dot x^1_i | \le \|f(p)\|$, we have $$g_{ik}(d_{ik}) (x^1_k - x^1_i ) \le \| f(p) \| - \| f'_i(p) \|.$$ Using the fact that $\|f(p)\|\le \eta$ and $\|f'_i(p) \|\ge \omega'\, \eta$, we obtain $$\label{eq:ineqforgik}
g_{ik}(d_{ik}) (x^1_k - x^1_i ) \le - (\omega' -1)\, \eta < 0,$$ which further implies that $ |{\overline}g_{ik}(d_{ik}) | \ge (\omega' - 1)\,\eta$. Then, from Lemma \[lem:lem2\], we have that $d_{ik}$ can be made arbitrarily small by increasing $\omega'$. In particular, if $d_{ik}$ is sufficiently small such that $g_{ik}(d_{ik}) < 0$, then, from , we have $x^1_k > x^1_i$.
We next consider the dynamics of $x^1_k$ in system : $$\dot x^1_k = g_{ik}(d_{ik})( x^1_i - x^1_k ) + \sum_{v_j\in V_k - \{v_i\}} g_{jk}(d_{jk})(x^1_j - x^1_k ).$$ Combining with the fact that $|\dot x^1_k | \le \eta$, we know that there is at least a vertex $v_j \in V_k - \{v_i\}$ such that $$\label{eq:3:16pmNov1}
g_{jk}(d_{jk})( x^1_j - x^1_k ) \le -\omega'' \, \eta, \hspace{10pt} \mbox{for } \omega'' := \frac{\omega' - 2 } {k - 1}.$$ The right hand side of the equation can be made negative by increasing $\omega'$, and hence $\omega''$. Appealing again to Lemma \[lem:lem2\], we know that by increasing $\omega''$, we can make $d_{jk}$ arbitrarily small such that $g_{jk}(d_{jk}) < 0$. It then follows from that $
x^1_j > x^1_k$.
Now, for the dynamics of $x^1_j$, we can apply the same arguments as used above for the dynamics of $x^1_k$. By doing so, we obtain another vertex $v_{j'}\in V_j$ such that $x^1_{j'} > x^1_{j}$. Furthermore, by repeatedly applying these arguments, we obtain an infinite sequence as follows: $$x^1_i < x^1_k < x^1_j < x^1_{j'} < x^1_{j''} < \cdots.$$ This contradicts the fact that $G$ has only $k$ vertices, which completes the proof.
The existence of a lower bound $D_-$ then directly follows from Proposition \[pro:nmvecfld\]; indeed, from Proposition \[pro:nmvecfld\], we can choose $D_-$ to be such that if $d\le D_-$ and $p\in Z_G(d)$, then $\|f(p)\| > 1$. We have thus established the first part of Theorem \[thm:MAIN\].
Self-clustering trajectories {#ssec:sct}
----------------------------
We introduce in this subsection [*self-clustering trajectories*]{} of system . Let $(G,p)$ be a framework, and $\sigma = \{(G_i,p_i)\}^m_{i=1}$ be a partition of $(G,p)$. Recall that the intra- and inter-cluster distances of the partition $\sigma$ are defined (in and , respectively) as follows: $$\left\{
\begin{array}{l}
{\mathcal}{L}_-(\sigma)= \max\left \{ \phi(p_i) \mid 1\le i \le m \right\}, \\
{\mathcal}{L}_+(\sigma) = \min_{(i,j)} \left \{ d(p_i,p_j) \right \}
\end{array}
\right.$$ where the minimum is taken over pairs $(i,j)$, for $(G_i,p_i)$ and $(G_j,p_j)$ adjacent. We then have the following definition:
\[def:selfclustering\] Let $l_0$ and $l_1$ be positive numbers. A trajectory $p(t)$ of system is [**self-clustering**]{}, with respect to $(l_0,l_1)$, if there exists a nontrivial partition $V = \sqcup^{m}_{i =1} V_i$ such that the following condition is satisfied: Let $G_i$ be the subgraph induced by $V_i$, and let $\sigma_t = \{(G_i, p_i(t))\}^m_{i = 1}$ be a partition of $(G,p(t))$. Then, there exists an instant $t_0\ge 0$ such that for all $t \ge t_0$, we have $$\label{eq:defselfclustering}
{\mathcal}{L}_-(\sigma_t) < l_0 \hspace{10pt} \mbox{ and } \hspace{10pt} {\mathcal}{L}_+(\sigma_t) > l_1.$$
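The self-clustering condition of Definition \[def:selfclustering\] can be checked numerically along a sampled trajectory. The following sketch is our own illustration (the helper names are hypothetical), assuming the trajectory is given as a list of sampled configurations and the candidate partition of $V$ is fixed.

```python
# A sketch (ours) checking the self-clustering condition along sampled configurations.
import numpy as np

def intra_inter(x, edges, clusters):
    """Return (L_-(sigma), L_+(sigma)) for the partition `clusters` of a
    configuration x; L_+ is taken over adjacent clusters only."""
    x = np.asarray(x, dtype=float)
    L_minus = 0.0
    for C in clusters:
        for a in C:
            for b in C:
                L_minus = max(L_minus, np.linalg.norm(x[a] - x[b]))
    label = {v: k for k, C in enumerate(clusters) for v in C}
    L_plus = np.inf
    for k1 in range(len(clusters)):
        for k2 in range(k1 + 1, len(clusters)):
            if any({label[i], label[j]} == {k1, k2} for i, j in edges):
                d = min(np.linalg.norm(x[i] - x[j])
                        for i in clusters[k1] for j in clusters[k2])
                L_plus = min(L_plus, d)
    return L_minus, L_plus

def is_self_clustering(traj, edges, clusters, l0, l1, start=0):
    """traj: list of (N, n) configurations sampled along the trajectory;
    `start` is the index of the first sample to check (playing the role of t_0)."""
    return all(Lm < l0 and Lp > l1
               for Lm, Lp in (intra_inter(x, edges, clusters) for x in traj[start:]))
```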
Recall that the number $\alpha_+$ (defined in ) is chosen such that $
g_{ij}(d) > 0
$ for all $d \ge \alpha_+$ and for all $(v_i,v_j)\in E$. We prove in this subsection that if a trajectory $p(t)$ is self-clustering, with inter-cluster distances sufficiently large (greater than $\alpha_+$); then, $p(t)$ remains bounded along time $t$. Precisely, we have the following fact:
\[pro:selfclustering\] Let $l_0$ and $l_1$ be positive numbers, and assume that $l_1 > \alpha_+$. Suppose that $p(t)$ is a self-clustering trajectory with respect to $(l_0, l_1)$; then, $p(t)$ remains bounded along the evolution, i.e., $$\sup\{\phi(p(t)) \mid t \ge 0\} < \infty.$$
To prove Proposition \[pro:selfclustering\], we first introduce some notations. Let the centroid of a configuration $p(t)$ be defined as $$c(p(t)) := \sum_{v_i \in V} x_i(t)/ |V|.$$ Then, by computation, we have $d c(p(t))/dt = 0$. So, for simplicity but without loss of generality, we can assume that $c(p(t)) = 0$ for all $t\ge 0$. Now, for each $t\ge 0$, let $$\label{eq:defsigmatforproposition5}
\sigma_t = \{(G_i, p_i(t))\}^m_{i = 1}, \hspace{10pt} \mbox{ with } \hspace{5pt} G_i = (V_i, E_i),$$ be a nontrivial partition of $(G, p(t))$. Because $p(t)$ is a self-clustering trajectory, we can assume that the self-clustering condition holds for the partitions $\sigma_t$ defined above for all $t \ge t_0$. Further, for simplicity, we assume that $t_0 = 0$, i.e., the condition holds from the starting time. Let ${\mathcal}{I} := \{1,\ldots, m\}$ be the index set for the frameworks $\{(G_i,p_i(t))\}^m_{i=1}$ associated with $\sigma_t$. For a subset ${\mathcal}{I}'$ of ${\mathcal}{I}$, let $$G_{{\mathcal}{I}'} = (V_{{\mathcal}{I}'}, E_{{\mathcal}{I}'}), \hspace{10pt} \mbox{ with } \hspace{5pt} V_{{\mathcal}{I}'}: =\sqcup_{j\in {\mathcal}{I}'} V_j$$
p_{{\mathcal}{I}'}(t)
$ be the sub-configuration of $p(t)$ associated with $G_{{\mathcal}{I}'}$. Similarly, let the centroid of $p_{{\mathcal}{I}'}(t)$ be $c(p_{{\mathcal}{I}'}(t)) := \sum_{v_{i}\in V_{{\mathcal}{I}'}} x_i(t) /|V_{{\mathcal}{I}'}|$.
Next, we introduce a set of time-dependent variables, encoding certain metric properties of $p(t)$ along time $t$. First, for a subset ${\mathcal}{I}'\subset{\mathcal}{I}$, let a continuously differentiable function in $t$ be defined as $
\pi({\mathcal}{I}'{\,;\,}t):= \| c(p_{{\mathcal}{I}'}(t)) \|
$. Then, for an integer $k = 1,\ldots, m$, we define a continuous function by $$\label{eq:defPIkt}
\Pi(k{\,;\,}t) := \max_{{\mathcal}{I}'}\{\pi({\mathcal}{I}'{\,;\,}t) \mid |{\mathcal}{I}'| = k\}.$$ For example, if $k = 1$, then $$\label{eq:Pi1thehe}
\Pi(1{\,;\,}t) = \max\{\|c(p_i(t))\| \mid i = 1,\ldots, m\};$$ and if $k = m$, then $$\Pi(m{\,;\,}t) = \|c(p(t))\| = 0.$$ We note here the following fact: for any $t\ge 0$, $$\label{eq:1:30pm}
\Pi(1{\,;\,}t) \ge \ldots \ge \Pi(m{\,;\,}t) = 0;$$ indeed, for any index set ${\mathcal}{I}'$ with $|{\mathcal}{I}'| \ge 2$, the centroid $c(p_{{\mathcal}{I}'}(t))$ is a convex combination of the centroids $c(p_{{\mathcal}{I}'\setminus \{j\}}(t))$, $j \in {\mathcal}{I}'$, which yields the monotonicity in $k$. Now, fixing an integer $k = 1,\ldots, m-1$, we relate below $\Pi(k{\,;\,}t)$ and $\Pi(k+1{\,;\,}t)$ by formalizing the following fact: if $\Pi(k{\,;\,}t)$ is expanding at a certain instant $t$, then $\Pi(k+1{\,;\,}t)$ cannot be too small. Precisely, we have the following result:
\[lem:boundedforPI\] Let $p(t)$ be the self-clustering trajectory, with respect to $(l_0,l_1)$, in Proposition \[pro:selfclustering\]. Fix an instant $t > 0$, and let $r>0$ be such that $\Pi(1{\,;\,}t') \le r$ for all $t' \le t$. Suppose that there is an integer $k= 1,\ldots, m-1$ such that $\Pi(k{\,;\,}t') \le \Pi(k{\,;\,}t)$ for all $t' \le t$; then, $$\Pi(k+1{\,;\,}t) \ge r - N (r - \Pi(k{\,;\,}t) ) - 2 l_0.$$
We refer to Appendix A for a proof of Lemma \[lem:boundedforPI\]. With Lemma \[lem:boundedforPI\], we prove Proposition \[pro:selfclustering\].
Let $\sigma_t$ be defined in as the nontrivial partitions associated with the self-clustering trajectory $p(t)$. Let $\Pi(k, t)$, for $k = 1,\ldots, m$, be defined in . We first show that $$\label{eq:phiptcaonima}
\phi(p(t)) <2(\Pi(1{\,;\,}t) + l_0), \hspace{10pt} \forall\, t \ge 0.$$ Let $v_{i}$, $v_{j}$ be any two vertices in $V$; we assume that $v_{i} \in V_{i'}$ and $v_{j}\in V_{j'}$. Then, by the triangle inequalities, the distance $d_{ij}(t)$ between $x_i(t)$ and $x_j(t)$ is bounded above by the sum of three terms: $$\begin{array}{lll}
d_{ij}(t) & \le & \|x_{i}(t) - c(p_{i'}(t))\| + \| c(p_{j'}(t)) - x_{j}(t) \| \\
& & + \| c(p_{i'}(t)) - c(p_{j'}(t)) \|;
\end{array}$$ for the first two terms, we have $$\left\{
\begin{array}{l}
\|x_{i}(t) - c(p_{i'}(t))\| < \phi(p_{i'}(t)) < l_0, \\
\| c(p_{j'}(t)) - x_{j}(t) \| < \phi(p_{j'}(t)) < l_0;
\end{array}
\right.$$ for the last term, we have $$\| c(p_{i'}(t)) - c(p_{j'}(t))\| \le 2 \Pi(1{\,;\,}t).$$ It then follows that $d_{ij}(t) < 2(\Pi(1{\,;\,}t) + l_0)$, and hence holds.
It thus suffices to show that $\sup \{\Pi(1, t)\mid t \ge 0 \} < \infty$. The proof is carried out by contradiction. Suppose that, to the contrary, for any $r \ge 0$, there exists an instant $t_1$ such that $\Pi(1,t_1) = r$. Choose $r$ sufficiently large such that $r > \Pi(1{\,;\,}0)$, and let $t_1$ be such that $$\Pi(1{\,;\,}t) \le \Pi(1{\,;\,}t_1) = r, \hspace{10pt} \forall\, t \le t_1.$$ Then, from Lemma \[lem:boundedforPI\], we have $$\Pi(2{\,;\,}t_1) \ge r - 2 l_0.$$ We may increase $r$, if necessary, so that $r - 2 l_0 > \Pi(2{\,;\,}0)$. Choose an instant $t_2 \in (0, t_1]$ such that $$\Pi(2{\,;\,}t) \le \Pi(2{\,;\,}t_2) = r - 2 l_0, \hspace{10pt} \forall\, t \le t_2.$$ Then, appealing again to Lemma \[lem:boundedforPI\], we obtain $$\Pi(3{\,;\,}t_2) \ge r - 2(N + 1) l_0.$$ Repeating this argument, we then obtain a time sequence $
t_1 \ge \ldots \ge t_{m-1}
$ such that for all $k = 1,\ldots, m-1$, we have $$\Pi(k+1{\,;\,}t_k) \ge r - 2 \sum^{k-1}_{i = 0} N^i l_0.$$ In particular, for $k= m$, we have $$\label{eq:8:24pm}
0 = \Pi(m{\,;\,}t_{m-1}) \ge r - 2 \sum^{m - 2}_{i = 0} N^i l_0,$$ which is a contradiction because $r$ can be chosen arbitrarily large, and hence the right hand side of is positive. This completes the proof.
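For completeness, the quantities $\Pi(k{\,;\,}t)$ used in the proof above can be computed by brute force for a sampled, clustered configuration; the following is a small sketch of our own (function and variable names are hypothetical), assuming the overall centroid has been translated to the origin.

```python
# A brute-force sketch (ours) of Pi(k; t) for one configuration clustered as
# V = V_1 u ... u V_m.
import numpy as np
from itertools import combinations

def Pi(x, clusters, k):
    """x: (N, n) positions with overall centroid at the origin; clusters: list of
    vertex lists. Returns the maximum, over index sets I' of size k, of the norm
    of the centroid of the union of the clusters indexed by I'."""
    x = np.asarray(x, dtype=float)
    best = 0.0
    for subset in combinations(range(len(clusters)), k):
        vertices = [v for i in subset for v in clusters[i]]
        best = max(best, np.linalg.norm(x[vertices].mean(axis=0)))
    return best

# With the centroid at the origin one observes Pi(x, clusters, 1) >= ... >=
# Pi(x, clusters, m) == 0, consistent with the monotonicity noted earlier.
```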
Convergence of the gradient flow {#ssec:sotgf}
--------------------------------
We now return to the proof of Theorem \[thm:MAIN\]. We show that for any initial condition $p(0)$ in $P_G$, the trajectory $p(t)$ converges to the set of equilibria. The proof will be carried out by contradiction, i.e., we assume that there is an initial condition $q(0)\in P_G$ such that the trajectory $q(t)$ of system is unbounded. In the remainder of the section, we fix the trajectory $q(t)$, and derive contradictions.
Since $q(t)$ is unbounded, there is a time sequence $\{t_i\}_{i\in {\mathbb{N}}}$, with $\lim_{i\to \infty} t_i = \infty$, such that $\{q(t_i)\}_{i\in{\mathbb{N}}}$ is unbounded. Choose a monotonically increasing sequence $\{l_i\}_{i\in {\mathbb{N}}}$ out of ${\mathbb{R}}_+$, and let $\lim_{i\to \infty} l_i = \infty$. From Theorem \[CCOSODC\], there is a diluting sequence $\{q(t_{n_i})\}_{i\in {\mathbb{N}}}$, as a subsequence of $\{q(t_i)\}_{i\in {\mathbb{N}}}$, together with a sequence of nontrivial partitions $\{\sigma_{t_i}\}_{i\in{\mathbb{N}}}$ satisfying the following properties:
1. All partitions $\sigma_{t_i}$, for $i\in {\mathbb{N}}$, induce the same partition of $V$: $
V = \sqcup^m_{i=1} V_i
$.
2. There exists $L_0 > 0$ such that $
{\mathcal}{L}_-(\sigma_i) \le L_0
$ for all $ i\in {\mathbb{N}}$.
Without loss of generality, we assume that $n_i = i$ for all $i\in{\mathbb{N}}$, i.e., the subsequence $\{q(t_{n_i})\}_{i\in{\mathbb{N}}}$ can be chosen as $\{q(t_i)\}_{i\in{\mathbb{N}}}$ itself. We can also assume that $L_0$ is large enough so that $L_0 \ge D_+$, with $D_+ = (N - 1) \, \alpha_+$ defined in .
Following the partition $V = \sqcup^m_{i=1} V_i$, we let $G_i = (V_i, E_i)$ be the subgraph induced by $V_i$. For each framework $(G,q(t))$, we let $\sigma_t$ be the nontrivial partition of $(G,q(t))$ defined as $$\label{eq:defsigmat}
\sigma_t:= \{(G_i, q_i(t))\}^m_{i = 1};$$ note that for each $i \in {\mathbb{N}}$, we have $\sigma_{t_i} \in \Sigma(l_i{\,;\,}q(t_i))$. We show below that if $q(t)$ is unbounded, then it [*has to*]{} be a [*self-clustering*]{} trajectory with respect to $(L_0,l_i)$ for all $i\in {\mathbb{N}}$. Precisely, we state the following result:
\[pro:relatepartitiontodynamicalsystem\] Suppose that $q(t)$ were an unbounded trajectory generated by system ; let $\sigma_{t}$, for $t\ge 0$, be the partition of $(G,q(t))$ defined in . Then, for each $i\in {\mathbb{N}}$, there would be a $j_i\in {\mathbb{N}}$ such that for all $t \ge t_{j_i}$, we have $$\label{eq:contradictioncondition1}
\sigma_t\in \Sigma(l_i{\,;\,}q(t)) \hspace{5pt} \mbox{ and } \hspace{5pt} {\mathcal}{L}_-(\sigma_t) < L_0.$$ In particular, $q(t)$ would be a self-clustering trajectory with respect to $(L_0, l_i)$ for all $i\in {\mathbb{N}}$.
We refer to Appendix B for a proof of Proposition \[pro:relatepartitiontodynamicalsystem\]. With Propositions \[pro:selfclustering\] and \[pro:relatepartitiontodynamicalsystem\], we prove the second part of Theorem \[thm:MAIN\].
The proof is carried out by contradiction; we assume that there exists an unbounded trajectory $q(t)$ of system . But then, by combining Propositions \[pro:selfclustering\] and \[pro:relatepartitiontodynamicalsystem\], we derive a contradiction: First, choose an $i\in {\mathbb{N}}$ such that $l_i > \alpha_+$, then from Proposition \[pro:relatepartitiontodynamicalsystem\], $q(t)$ is a self-clustering trajectory with respect to $(L_0, l_i)$; On the other hand, from Proposition \[pro:selfclustering\], we have $
\sup_{t \ge 0} \phi(q(t)) < \infty
$, which contradicts the assumption that $q(t)$ is unbounded. We thus conclude that for any initial condition $p(0)\in P_G$, the trajectory $p(t)$ is bounded, and hence converges to the set of equilibria. This completes the proof.
Conclusions
===========
We have established in this paper the convergence of the multi-agent system under the assumption that the interaction functions $g_{ij}$, for $(v_i,v_j) \in E$, have fading attractions. To tackle this problem, we introduced dilute partitions in section III as a new tool to characterize the behavior of trajectories generated by system . The use of dilute partitions enabled us to grasp the qualitative properties of the formation dynamics that are needed to prove the convergence results: on one hand, it reveals that self-clustering trajectories are all bounded, as shown in Proposition \[pro:selfclustering\]; on the other hand, it precludes the possibility of system having unbounded trajectories, as implied by Proposition \[pro:relatepartitiontodynamicalsystem\]. Further, we note that the class of dilute partitions is itself a rich object of study. We have exhibited in section III some intriguing facts about it: for example, the existence of a nontrivial dilute partition in Proposition \[NC\], and the existence of a diluting sequence in Theorem \[CCOSODC\]. These facts are independent of the dynamical system , and hence can be used to address other difficult multi-agent control problems that involve large formations.

Future work may focus on establishing system convergence under the assumption that the interaction functions $g_{ij}$ have not only fading attractions, but also [*finite repulsions*]{}. We are also interested in studying the system behavior when the network topology $G$ is directed and/or time-varying. In either of these two cases, system is no longer a gradient system, and it is thus interesting to know whether or not its trajectories still converge. This also lies within the scope of our future research.
Acknowledgements {#acknowledgements .unnumbered}
================
The author is grateful for discussions with Prof. Roger Brockett of Harvard University, and with Prof. Tamer Başar and Prof. M.-A. Belabbas of the University of Illinois at Urbana-Champaign.
Appendix A {#appendix-a .unnumbered}
==========
Let ${\mathcal}{I}' \subset {\mathcal}{I}$, with $|{\mathcal}{I}'| = k$, be chosen such that $
\pi({\mathcal}{I}'{\,;\,}t) = \Pi(k{\,;\,}t)
$. Let $\langle\cdot, \cdot \rangle $ be the standard inner-product in ${\mathbb{R}}^n$. For a vector $v\in {\mathbb{R}}^n$, let $$\hat v :=
\left\{
\begin{array}{ll}
v /\|v\| & \mbox{if } v \neq 0, \\
0 & \mbox{otherwise}.
\end{array}
\right.$$ We first show that for all $i\in {\mathcal}{I}'$, $$\label{eq:idon'tsleepwelltoday}
\langle \hat c(p_{{\mathcal}{I}'}(t)), c(p_{i}(t)) \rangle \ge r - N(r - \Pi(k{\,;\,}t)).$$ Let $w_i := |V_i| / |V_{{\mathcal}{I}'}|$; then, we can express $\pi({\mathcal}{I}'{\,;\,}t)$ as $$\pi({\mathcal}{I}'{\,;\,}t) =\langle \hat c(p_{{\mathcal}{I}'}(t)),\, \sum_{i\in {\mathcal}{I}'}w_i \, c(p_i(t)) \rangle.$$ Then, using the fact that for all $j\in {\mathcal}{I}'$, $$\langle \hat c(p_{{\mathcal}{I}'}(t)), c(p_j(t)) \rangle \le \|c(p_j(t))\| \le r,$$ we obtain $$\langle \hat c(p_{{\mathcal}{I}'}(t)), c(p_i(t)) \rangle \ge r - \frac{1}{w_i} {\left (}r - \pi({\mathcal}{I}'{\,;\,}t){\right )}.$$ Since $1/w_i \le N$, we establish .
To proceed, we consider the time derivative of $\pi({\mathcal}{I}'{\,;\,}t)^2$ at $t$: First, let $$E_{{\mathcal}{I}'}:=\{(v_{a},v_{b}) \in E \mid v_{a}\in V_{{\mathcal}{I}'},\, v_{b} \notin V_{{\mathcal}{I}'} \}.$$ Note that $E_{{\mathcal}{I}'}$ is nonempty because (i) $V_{{\mathcal}{I}'}$ is a proper subset of $V$ since ${\mathcal}{I}'$ is a proper subset of ${\mathcal}{I}$, and (ii) $G$ is connected. Further, for an edge $(v_a, v_b)\in E_{{\mathcal}{I}'}$, let $$\rho_{ab}(t) := \left\langle c(p_{{\mathcal}{I}'}(t)) , x_{b}(t) - x_{a}(t) \right\rangle.$$ Then, with the definitions of $E_{{\mathcal}{I}'}$ and $\rho_{ab}(t)$, we have $$\label{eq:lajitong}
\frac{d}{dt}\pi({\mathcal}{I}'{\,;\,}t)^2 = 2\sum_{(v_{a},v_{b}) \in E_{{\mathcal}{I}'} } g_{ab}(d_{ab}(t))\, \rho_{ab}(t).$$ Note that $d\pi({\mathcal}{I}'{\,;\,}t)^2/dt \ge 0$ because $\Pi(k{\,;\,}t') \le \Pi(k{\,;\,}t) $ for all $t' \le t$. We also note that $g_{ab}(d_{ab}(t)) > 0$ for all $(v_a,v_b) \in E_{{\mathcal}{I}'}$. This holds because $p(t)$ is a self-clustering trajectory with respect to $(l_0, l_1)$ and $l_1 > \alpha_+$, which in particular implies that $d_{ab}(t) > \alpha_+$. These facts together imply that there is at least one edge $(v_a,v_b) \in E_{{\mathcal}{I}'}$ such that $\rho_{ab}(t) \ge 0$.
Choose any such edge $(v_a,v_b) \in E_{{\mathcal}{I}'}$, and let indices $i, j\in {\mathcal}{I}$ be such that $v_a\in V_{i}$ and $v_{b} \in V_{j}$. It should be clear that $i\in {\mathcal}{I}'$ and $j\notin {\mathcal}{I}'$. Note that since ${\mathcal}{I}'$ is chosen such that $\Pi(k{\,;\,}t) = \pi({\mathcal}{I}'{\,;\,}t)$, we have $$\langle \hat c(p_{{\mathcal}{I}'}(t)), c(p_j(t)) \rangle \le \| c(p_{{\mathcal}{I}'}(t))\|.$$ Because otherwise, we can first find an $i'\in {\mathcal}{I}'$ with $\langle \hat c(p_{{\mathcal}{I}'}(t)), c(p_i'(t))\rangle \le \|c(p_{{\mathcal}{I}'}(t))\|$, and then replace this $i'\in {\mathcal}{I}'$ with $j$. By doing so, we obtain a strictly larger $\pi({\mathcal}{I}'{\,;\,}t)$, which is a contradiction. On the other hand, we show that $$\label{eq:12:22pm}
\left\langle \hat c(p_{{\mathcal}{I}'}(t)),\, c(p_{j}(t)) \right\rangle > \langle \hat c(p_{{\mathcal}{I}'}(t)), c(p_{i}(t)) \rangle - 2l_0. $$ To prove , first note that $$\left\{
\begin{array}{l}
\| x_{a}(t) - c(p_{i}(t))\| < \phi(p_{i}(t)) < l_0 \vspace{3pt}\\
\| x_{b}(t) - c(p_{j}(t))\| < \phi(p_{j}(t)) < l_0.
\end{array}
\right.$$ So, we obtain $$\label{eq:1:59pm}
\left\{
\begin{array}{l}
\langle \hat c(p_{{\mathcal}{I}'}(t)), c(p_{i}(t)) - x_{a}(t) \rangle < l_0 \vspace{3pt}\\
\langle \hat c(p_{{\mathcal}{I}'}(t)), x_{b}(t) - c(p_{j}(t)) \rangle < l_0,
\end{array}
\right.$$ which implies that $$\left\langle \hat c(p_{{\mathcal}{I}'}(t)),\, c(p_{j}(t)) \right\rangle > \langle \hat c(p_{{\mathcal}{I}'}(t)), c(p_{i}(t)) \rangle - 2l_0 + \rho_{ab}(t).$$ Since $\rho_{ab}(t) \ge 0$, we establish .
Now, by combining and , we obtain the following inequality: $$\label{eq:7:27pmatcaffebene}
\left\langle \hat c(p_{{\mathcal}{I}'}(t)),\, c(p_{j}(t)) \right\rangle \ge r - N (r - \Pi(k{\,;\,}t) ) - 2 l_0.$$ Let ${\mathcal}{I}'':= {\mathcal}{I}' \sqcup \{j\}$. Since $j\notin {\mathcal}{I}'$, we have $|{\mathcal}{I}''| = k+1$. It now suffices to show that $$\pi({\mathcal}{I}''{\,;\,}t) \ge \left\langle \hat c(p_{{\mathcal}{I}'}(t)),\, c(p_{j}(t)) \right\rangle.$$ Let $\tilde w_{{\mathcal}{I}'}:= |V_{{\mathcal}{I}'}|/|V_{{\mathcal}{I}''}|$ and $\tilde w_j := |V_j| / |V_{{\mathcal}{I}''}|$. It should be clear that $\tilde w_{{\mathcal}{I}'} + \tilde w_{j} = 1$, and $c(p_{{\mathcal}{I}''}(t)) = \tilde w_{{\mathcal}{I}'} c(p_{{\mathcal}{I}'}(t)) + \tilde w_{j} c(p_j(t))$. We now express $\pi({\mathcal}{I}''{\,;\,}t)$ as $$\pi({\mathcal}{I}''{\,;\,}t) = \tilde w_{j} \langle \hat c(p_{{\mathcal}{I}''}(t)),c(p_j(t))\rangle + \tilde w_{{\mathcal}{I}'} \langle \hat c(p_{{\mathcal}{I}''}(t)),c(p_{{\mathcal}{I}'}(t)) \rangle.$$ For the first inner-product, we have $$\langle \hat c(p_{{\mathcal}{I}''}(t)), c(p_j(t)) \rangle \ge \langle \hat c(p_{{\mathcal}{I}'}(t)),\, c(p_{j}(t)) \rangle,$$ and the equality holds if and only if $c(p_{{\mathcal}{I}'}(t))$ and $c(p_j(t))$ are aligned. For the second inner-product, first note that $$\|c(p_{{\mathcal}{I}''}(t))\| \le \Pi(k+1{\,;\,}t) \le \Pi(k{\,;\,}t) = \|c(p_{{\mathcal}{I}'}(t))\|,$$ and hence $$\langle \hat c(p_{{\mathcal}{I}''}(t)), c(p_{{\mathcal}{I}'}(t)) \rangle \ge\langle \hat c(p_{{\mathcal}{I}'}(t)), c(p_{{\mathcal}{I}''}(t)) \rangle.$$ Then, using the fact that $$\langle \hat c(p_{{\mathcal}{I}'}(t)), c(p_j(t)) \rangle \le \| c(p_{{\mathcal}{I}'}(t))\|,$$ we obtain $$\langle \hat c(p_{{\mathcal}{I}'}(t)), c(p_{{\mathcal}{I}''}(t)) \rangle \ge \langle \hat c(p_{{\mathcal}{I}'}(t)),\, c(p_{j}(t)) \rangle.$$ Combining the facts above, we conclude that $\pi({\mathcal}{I}''{\,;\,}t) \ge \left\langle \hat c(p_{{\mathcal}{I}'}(t)),\, c(p_{j}(t)) \right\rangle$, which completes the proof.
Appendix B {#appendix-b .unnumbered}
==========
We establish here Proposition \[pro:relatepartitiontodynamicalsystem\]. We need to first introduce a class of subsets of $P_G$, termed [*dissipation zones*]{}, and establish properties that are needed to prove Proposition \[pro:relatepartitiontodynamicalsystem\].
Dissipation zones {#ssec:dz}
-----------------
Let $d$ be a positive number. For each $(v_i,v_j) \in E$, define a subset $X_{G,ij}(d)$ of $P_G$ as follows: $$\label{eq:defXGijd}
X_{G,ij}(d) :=\left \{ p\in P_G \mid \|x_j - x_i\| = d \right \};$$ We further define $X_{G}(d):= \cup_{(v_i,v_j) \in E}\, X_{G,ij}(d)$. Note that if $d > D_+$, then $X_{G}(d)$ does not contain any equilibrium of system . We call any such set $X_{G}(d)$, for $d > D_+$, a [**dissipation zone**]{}. Define a function $\mu:{\mathbb{R}}_+\longrightarrow {\mathbb{R}}$ as follows: $$\label{eq:defmud}
\mu(d) := \inf\{\|f(p)\| \mid p\in X_{G}(d) \};$$ we establish in this subsection the following fact:
\[pro:defnud\] Let $\mu:{\mathbb{R}}_+ \longrightarrow {\mathbb{R}}$ be defined in . Then,
1. $\mu$ is continuous.
2. $\mu(d) > 0$ for all $d \ge D_+$.
Recall that $d_-(p)$ (defined in ) is the minimum distance between a pair of neighboring agents in $p$. For a positive number $d > 0$, we define a subset of $P_G$ as follows: $$\label{eq:defQd}
Q_G(d):= \left\{ p\in P_G \mid d_-(p) \ge d \right \}.$$ We now establish the following fact:
\[lem:evaluatedifferenceoftwovectorfields\] Let $d> 0$ be a fixed number. Then, for any $\epsilon > 0$, there is a $\delta > 0$ such that if $p$ and $p'$ are in $Q_G(d)$ with $\|p - p'\| \le \delta$, then $\|f(p) - f(p')\| \le \epsilon$.
Let $p = (x_1,\ldots, x_N)$ and $p' = (x'_1,\ldots, x'_N)$. Denote by $d_{ij}:= \|x_j - x_i\|$ and $d'_{ij}:= \|x'_j - x'_i\|$. Note that $$\|f(p) - f(p')\|^2 = \sum_{v_i\in V}\|f_i(p) - f_i(p')\|^2,$$ with each term $\|f_i(p) - f_i(p')\|$ bounded above by $$\sum_{v_j\in V_i} \| g_{ij}(d_{ij})(x_j - x_i) - g_{ij}(d'_{ij})(x'_j - x'_i) \|.$$ We also note that if $\|p - p'\| < \delta$, then $$\|(x_j - x_i) - (x'_j - x'_i) \| < 2\delta.$$ It thus suffices to show that for any $\epsilon' > 0$, there is a $\delta' > 0$ such that if two vectors $u$, $u'\in {\mathbb{R}}^n$ satisfy $$\label{eq:forcontinuity0}
\min\{ \|u\|, \|u'\|\} \ge d \hspace{10pt} \mbox{ and } \hspace{10pt} \|u - u'\| < \delta',$$ then, for all $(v_i,v_j) \in E$, we have $$\label{eq:forcontinuity1}
\|g_{ij}(\|u\|) u - g_{ij}(\|u'\|) u' \| < \epsilon'.$$ Recall that ${\overline}g_{ij}(d)$ (defined in ) is given by ${\overline}g_{ij}(d) = dg_{ij}(d)$. From the condition of [*fading attraction*]{}, there exists a number $d_*$ such that if $\|u\| \ge d_*$, then $ {\overline}g_{ij}(\|u\|) < \epsilon' / 2$ for all $(v_i,v_j)\in E$. Hence, if $\min\{\|u\|, \|u'\| \}\ge d_*$, then, $$\|g_{ij}(\|u\|) u - g_{ij}(\|u'\|) u' \| \le {\overline}g_{ij}(\|u\|) + {\overline}g_{ij}(\|u'\|) < \epsilon'.$$ Define a subset $K$ of ${\mathbb{R}}^n$ as follows: $$K:= \{u\in {\mathbb{R}}^n \mid d \le \|u\| \le d_* + 1\}.$$ Since $K$ is compact and the map $$\widetilde g_{ij}: u\mapsto g_{ij}(\|u\|) u$$ is continuous, there exists a $\delta'\in (0,1)$ such that if $u$ and $u'$ are in $K$, with $\|u - u'\| < \delta'$, then $
\|\widetilde g_{ij}(u) - \widetilde g_{ij}(u') \| < \epsilon'
$ for all $(v_i,v_j) \in E$. Now, let $u, u'\in {\mathbb{R}}^n$ satisfy , and let $\delta' < 1$. Then, either $\min\{\|u\|, \|u'\|\} \ge d_*$ or $\{u, u' \}\subset K$. Since holds in either of the two cases, we complete the proof.
With Lemma \[lem:evaluatedifferenceoftwovectorfields\], we prove below Proposition \[pro:defnud\]:
We first show that $\mu$ is continuous, and then show that $\mu(d) > 0$ for all $d\ge D_+$.
[*1). Proof that $\mu$ is continuous*]{}. We fix a distance $d>0$, and show that $\mu$ is continuous at $d$. Specifically, we show that for any $\epsilon > 0$, there is a $\delta > 0$ such that if $|d' - d|< \delta$, then $|\mu(d) - \mu(d')| < \epsilon$.
Let $p\in X_G(d)$ and $B$ be a closed neighborhood of $p$ in $P_G$. Then, there is an open neighborhood $I$ of $d$ in ${\mathbb{R}}_+$ such that $X_G(d')$ intersects $B$ for all $d'\in I$. From Proposition \[pro:nmvecfld\], there exists a $d_*$ such that if $p'' \in X_G(d)$, with $d_-(p'')< d_*$, then $\|f(p'')\| \ge \|f(p')\| $ for all $p'\in B$. This, in particular, implies that for all $d' \in I$, we have $$\label{eq:alihuiwoyoujian}
\inf\{ \|f(p)\| \mid p\in X_{G}(d') \cap Q_G(d_*) \} = \mu(d').$$ Let $\delta>0$ be sufficiently small such that if $|d' - d| < \delta$, then $d'\in I$.
Choose a number $d'$ with $|d' - d| < \delta$; without loss of generality, we assume that $\mu(d) \le \mu(d')$. From , there is a sequence $\{p(i)\}_{i\in{\mathbb{N}}}$, with each $p(i)\in X_{G}(d)\cap Q_G(d_*)$, such that $
\lim_{i\to \infty} \|f(p(i))\| = \mu(d)
$. Note that if $\delta$ is chosen sufficiently small, then for each $p(i)$ in the sequence, there exists a $p'(i)$ in the intersection of $X_{G}(d')$ and $Q_G(d_*/ 2)$ such that $
\|p'(i) -p(i)\| = |d' - d|
$. To see this, let $p(i) = (x_1(i),\ldots, x_N(i))$; without loss of generality, we assume that $\|x_2(i) - x_1(i)\| = d$. We then set $p'(i)$ as follows: let $$x'_1(i) := x_1(i) + (d'/ d - 1) (x_1(i) - x_2(i)),$$ and let $x'_j(i) := x_j(i) $ for $v_j \neq v_1$. Then, by construction, $\|x'_2(i) - x'_1(i)\| = d'$, and $$\|p'(i) - p(i)\| = \|x'_1(i) - x_1(i)\| = |d' - d| < \delta.$$ Moreover, if we let $\delta < d_* /2$, then from the fact that $p(i)\in Q_G(d_*)$, we have $p'(i) \in Q_G(d_*/2)$.
From Lemma \[lem:evaluatedifferenceoftwovectorfields\], we can choose $\delta$ sufficiently small such that if $p$ and $p'$ are in $Q_G(d_*/2)$, with $\|p' - p\| \le \delta$, then $\|f(p) - f(p')\| \le \epsilon$. Since $\|p(i) - p'(i)\| = |d' - d| < \delta$, we have $$\label{eq:boundedfpi}
\| f(p'(i)) - f(p(i)) \| \le \epsilon, \hspace{10pt} \forall\, i\in {\mathbb{N}}.$$ This, in particular, implies that the sequence $\{\| f(p'(i)) \|\}_{i\in {\mathbb{N}}}$ is bounded, and hence there is a converging subsequence $\{\|f(p'(j_i))\|\}_{i\in{\mathbb{N}}}$. We thus let $
\mu':= \lim_{i\to \infty} \| f(p'(j_i)) \|
$. By definition, we have $\mu' \ge \mu(d')$. On the other hand, we also have $\mu(d') \ge \mu(d)$, and hence $
0\le \mu(d') - \mu(d) \le \mu' - \mu(d)
$. Furthermore, from , we have $
\mu' - \mu(d) \le \epsilon
$, and hence $\mu(d') - \mu(d) \le \epsilon$. This establishes the continuity of $\mu$.

[*2). Proof that $\mu(d) > 0$ for all $d\ge D_+$*]{}. The proof is carried out by induction on the number of vertices of $G$. For the base case $N = 2$, we have $D_+ = \alpha_+$. The set $X_G(d)$ is nothing but $$X_G(d) = \left \{ (x_1, x_2)\in {\mathbb{R}}^4 \mid \|x_2 - x_1\| = d \right \}.$$ So then, for all $d > \alpha_+$, we have $\mu(d) = \sqrt{2} \, | {\overline}g_{12}(d) | > 0$.
For the inductive step, we assume that the statement holds for all $N \le (k-1)$, and prove it for $N = k$. We first introduce some notation. Let $G' = (V', E')$ be a connected, proper subgraph of $G$. Let $S'$ be the sub-system induced by $G'$, and $f'(p')$ be the associated vector field of $S'$ at $p'\in P_{G'}$. Similarly, let $$X_{G', ij}(d) := \left\{ p'\in P_{G'} \mid \|x_{j} - x_{i}\| = d \right\};$$ and let $X_{G'}(d) := \cup_{(v_i,v_j)\in E'} X_{G', ij}(d)$. We then define $$\mu_{G'}(d) := \inf\left\{ \|f'(p')\| \mid p' \in X_{G'}(d) \right\}.$$ Note that if $d > (|V| -1) \, \alpha_+$, then $
d > (|V'| - 1) \, \alpha_+
$. Thus, we can appeal to the induction hypothesis and obtain $\mu_{G'}(d) > 0$. We further define $$\nu(d) := \min_{G'}\{ \mu_{G'}(d) \},$$ where the minimum is taken over all connected proper subgraphs of $G$. Then, $\nu(d) > 0$ for all $d > D_+ $.
We now fix a number $d > D_+ $, and prove that $\mu(d) > 0$. First, from Proposition \[pro:nmvecfld\], there exists a $d_0 > 0$ such that $$\label{eq:proveforsmallconfiguration}
\| f(p) \| > 1, \hspace{10pt} \mbox{ if } \hspace{5pt} p \in X_{G}(d) \mbox{ and } d_-(p) < d_0.$$ We also claim that there exists a $d_1 > 0$ such that $$\label{eq:proveforlargeconfiguration}
\|f(p)\| > \nu(d) /2, \hspace{10pt} \mbox{ if } \hspace{5pt} p \in X_{G}(d) \mbox{ and } d_+(p) > d_1.$$ Note that if this holds, then the proof is complete. Indeed, let $K$ be a subset of $X(d)$ defined as follows: $$K := \left\{ p\in X_G(d) \mid d_0 \le \|x_j - x_i\| \le d_1, \, \forall (v_i,v_j) \in E\right\}.$$ It is known that system is an equivariant system with respect to the special Euclidean group. In particular, $\|f(p)\| = \|f(p')\|$ if $p$ and $p'$ are related by translation and/or rotation. On the other hand, $K$ is compact modulo translation and rotation, and moreover, $f(p)$ does not vanish over $K$. We thus have that $
\inf_{p\in K} \| f(p) \|> 0
$. Combining this fact with and , we obtain $\mu(d) > 0$.
It thus remains to show that there exists a $d_1 > 0$ such that holds. First, from the condition of [*fading attraction*]{}, there exists an $l > 0$ such that for all $(v_i,v_j) \in E$, we have $$\label{eq:verylargedistance}
0 < {\overline}g_{ij}(d') < \nu(d) / (2k^2), \hspace{10pt} \, \forall\, d' \ge l.$$ Without loss of generality, we assume that $l$ is large enough so that $l > d$. We now define $d_1$ as follows: let $d_1$ be such that if a configuration $p\in X_{G}(d)$ satisfies $d_+(p) > d_1$; then, there is a nontrivial partition $\sigma$ of $(G,p)$ in $\Sigma(l{\,;\,}p)$. Note that from Remark \[rmk:2\], such number $d_1$ exists. We show below that holds for this choice of $d_1$. Let $(v_i, v_j) \in E$ be chosen such that $\|x_j - x_i\| = d$. Since $l> d$, there is a sub-framework $(G',p')$ associated with the partition $\sigma$ such that $v_{i}$ and $v_j$ are vertices of $G'$ (and $x_i$ and $x_j$ are agents in $p'$). Let $S'$ be the sub-system induced by $G'$, and $f'(p')$ be the vector field associated with $S'$ at $p'$. Since $p'\in X_{G'}(d)$ and $G'$ is a proper subgraph of $G$, by definition of $\nu(d)$, we have $\|f'(p')\| \ge \nu(d)$. Let $V'$ be the vertex set of $G'$; without loss of generality, we assume that $V'= \{v_1,\ldots, v_{k'}\}$, with $k' < k$. Define a vector $
h := {\left (}h_1,\ldots, h_{k'}{\right )}\in {\mathbb{R}}^{nk'}
$, with each $h_i\in {\mathbb{R}}^n$ given by $$h_i := \sum_{v_j\in V_i - V'} g_{ij}(d_{ij}) (x_j - x_i).$$ By the fact that $\sigma$ is in $\Sigma(l{\,;\,}p)$, we have $d_{ij} \ge l$ for $v_i\in V'$ and $v_j\notin V'$. Appealing to , we obtain $$\|h_i\| \le \sum_{v_j\in V_i - V'} {\overline}g_{ij}(d_{ij}) < \nu(d) / (2k),$$ which implies that $\|h\| < \nu(d) / 2$. Recall that $f_{V'}(p)$ is the restriction of $f(p)$ to $p'$. By construction, we have $
f_{V'}(p) = f'(p') + h
$, and hence $$\|f(p)\| \ge \|f_{V'}(p)\| \ge \|f'(p')\| - \|h\| > \nu(d) /2.$$ We have thus established . This completes the proof.
Proof of Proposition \[pro:relatepartitiontodynamicalsystem\]
-------------------------------------------------------------
Recall that the subset $X_{G,ij}(d)$ (defined in ) is given by $$X_{G,ij}(d) = \left\{ p\in P_G \mid \|x_j - x_i\| = d \right\}.$$ Now, let $d'$ and $d''$ be two positive numbers, and let $d(X_{G,ij}(d'), X_{G,ij}(d''))$ be the distance between $X_{G,ij}(d')$ and $X_{G,ij}(d'')$, which is defined as follows: $$\inf\left\{ \|p' - p''\| \mid p'\in X_{G,ij}(d'), \, p''\in X_{G,ij}(d'') \right\}.$$ We have the following fact:
\[lem:boundedbelowdistance\] Let $d'$ and $d''$ be two positive numbers. Then, $$d(X_{G,ij}(d'), X_{G,ij}(d'')) = |d' - d''|/ \sqrt{2}.$$
First, note that there are configurations $p' \in X_{G,ij}(d')$ and $p''\in X_{G,ij}(d'')$ such that $$\label{eq:equalitysatisfied}
\|p' - p''\| = |d' - d''|/ \sqrt{2}.$$ Indeed, let $p' = (x'_1,\ldots, x'_N)$ and $p'' = (x''_1,\ldots, x''_N)$; we then set $$\left\{
\begin{array}{l}
x'_i = -x'_j = (d', 0, \ldots, 0) / 2, \vspace{1pt}\\
x''_i = - x''_j = (d'',0,\ldots,0) / 2. \vspace{1pt}\\
\end{array}
\right.$$ For the other agents, we set $x'_k = x''_k$ for all $v_k \in V-\{v_i, v_j\}$, subject to the constraint that $x'_{a} \neq x'_{b}$ and $x''_{a}\neq x''_{b}$, for all $(v_{a},v_{b})$ in $E$.
We now show that if $p'\in X_{G,ij}(d')$ and $p'' \in X_{G,ij}(d'')$, then $$\|p' - p''\| \ge |d' - d''|/ \sqrt{2}.$$ It suffices to show that $$\label{eq:lem81}
\|x'_i - x''_i\|^2 + \|x'_j - x''_j\|^2 \ge \frac{1}{2} (d' - d'')^2.$$ Let $x':= (x'_i + x'_j) / 2$, $x'':= (x''_i + x''_j) / 2$, and let $$\left\{
\begin{array}{ll}
y'_i := x'_i - x' & y'_j := x'_j - x', \vspace{3pt}\\
y''_i := x''_i - x'' & y''_j := x''_j - x''.
\end{array}
\right.$$ First, note that $ y'_i + y'_j = y''_i + y''_j = 0$, and hence $$\label{eq:lem82}
\|x'_i - x''_i\|^2 + \|x'_j - x''_j\|^2 = 2 \|y'_i - y''_i\|^2 + 2\|x' - x''\|^2.$$ We also note that $\|y'_i\| = d'/2 $ and $\|y''_i\| = d''/2$, and hence by the triangle inequality, $$\label{eq:lem83}
\|y'_i - y''_i\| \ge | \|y'_i\| - \|y''_i \|| = |d' - d''|/ 2.$$ Combining and , we then establish .
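As an illustrative aside (not part of the original argument), the explicit construction used in the proof can be checked numerically. The following sketch, written in Python under the assumed ambient dimension $n = 2$ and a hypothetical three-agent configuration, verifies that the constructed pair of configurations attains the distance $|d' - d''|/\sqrt{2}$.

```python
import numpy as np

# Numerical check of the construction in the proof: for the edge (v_i, v_j),
# place x_i = -x_j = (d/2, 0) in both configurations and keep the remaining
# agents identical; the configuration-space distance is then |d' - d''|/sqrt(2).
def paired_configurations(d1, d2, n_agents=3):
    p1 = np.zeros((n_agents, 2))
    p2 = np.zeros((n_agents, 2))
    p1[0], p1[1] = (d1 / 2, 0.0), (-d1 / 2, 0.0)
    p2[0], p2[1] = (d2 / 2, 0.0), (-d2 / 2, 0.0)
    # remaining agents coincide in the two configurations, placed so that
    # no two neighbouring agents overlap
    for k in range(2, n_agents):
        p1[k] = p2[k] = (0.0, 1.0 + k)
    return p1, p2

d1, d2 = 3.0, 5.0
p1, p2 = paired_configurations(d1, d2)
print(np.linalg.norm(p1 - p2), abs(d1 - d2) / np.sqrt(2))  # both ~1.4142
```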
To prove Proposition \[pro:relatepartitiontodynamicalsystem\], we further need the following fact:
\[lem:upperboundonvelocity\] Let $p(t)$ be a trajectory generated by system . Then, the following hold:
1. $\sup_{t \ge 0} \|f(p(t))\| < \infty$.
2. For any $\epsilon > 0 $, there exists an instant $T_{\epsilon}$ such that $$\label{eq:defTepsilon}
\Psi{\left (}p(T_{\epsilon}){\right )}- \Psi{\left (}p(\infty){\right )}\le \epsilon.$$
We first prove part 1. Let $d_{ij}(t):= \|x_j(t) -x_i(t)\|$. It suffices to show that for all $(v_i,v_j) \in E$, $$\label{eq:Novsecond8:45am}
\sup \{d_{ij}(t)g_{ij}(d_{ij}(t)) \mid t \ge 0 \} < \infty.$$ From Lemma \[ELB\], there is a number $d_*> 0$ such that $d_-(p(t)) \ge d_*$ for all $t \ge 0$. Then, from the condition of [*fading attraction*]{}, we have $$\sup\{dg_{ij}(d) \mid d \ge d_*\} < \infty,$$ which implies that holds. We now prove part 2. From Lemma \[lem:phiboundedbelow\], the potential function $\Psi$ is bounded below. On the other hand, $\Psi(p(t))$ is non-increasing. Hence, the limit $
\Psi(p(\infty)) := \lim_{t \to \infty}\Psi(p(t))
$ exists, which then implies the existence of $T_{\epsilon}$ such that holds.
With Lemmas \[lem:boundedbelowdistance\] and \[lem:upperboundonvelocity\], we prove Proposition \[pro:relatepartitiontodynamicalsystem\].
Fix an $i\in {\mathbb{N}}$; we prove Proposition \[pro:relatepartitiontodynamicalsystem\] by first exhibiting a $j'_i\in {\mathbb{N}}$ such that $$\label{eq:firstproof}
{\mathcal}{L}_-(\sigma_t) < L_0, \hspace{10pt} \forall\, t\ge t_{j'_i},$$ and then, exhibiting a $j''_i\in {\mathbb{N}}$ such that $$\label{eq:secondproof}
{\mathcal}{L}_+(\sigma_t) > \max\{l_i, L_0\}, \hspace{10pt} \forall\, t \ge t_{j''_i}.$$ Note that if such indices $j'_i$ and $j''_i$ exist, then the proof is complete; indeed, let $j_i := \max\{j'_i, j''_i\}$, then $\sigma_t$ is in $\Sigma(l_i{\,;\,}q(t))$ for all $t \ge t_{j_i}$. We now establish and , respectively.
[*1). Proof of existence of $j'_i$*]{}. We first make some definitions. By assumption, we have $L_0 > D_+$, and hence from Proposition \[pro:defnud\], $\mu(L_0) > 0$. Since $\mu$ is continuous, there exists a $\delta_0 > 0$ such that if we let $I_0: = [L_0 - \delta_0, L_0+ \delta_0]$, then $\mu(d) \ge \mu(L_0)/2$ for all $d \in I_0$. Let $\xi:= \sup_{t \ge 0} \| f(q(t)) \|$, which is a positive real number by Lemma \[lem:upperboundonvelocity\], and let $
\tau_0 := {\delta_0} / (\sqrt{2} \, \xi )
$.
Let $
X_{G,ij}(I_0) := \cup_{d\in I_0} \, X_{G,ij}(d)
$. We show below that if, at certain instant $t_0\ge 0$, we have $q(t_0)\in X_{G,ij}(L_0)$ for some $(v_i,v_j) \in E$; then, $$\label{eq:evaluatept'}
q(t) \in X_{G,ij}(I_0), \hspace{10pt} \forall\, t \in [t_0, t_0 + \tau_0 ].$$ First, note that if the trajectory $q(t)$ leaves $X_{G,ij}(I_0)$ at $t' > t_0$, then it [*has to*]{} intersect either $ X_{G,ij}(L_0 + \delta_0)$ or $ X_{G,ij}(L_0 - \delta_0)$. On the other hand, from Lemma \[lem:boundedbelowdistance\], if $p', p''\in P_G$ are such that $p'\in X_{G,ij}(L_0)$ and $
p''\in X_{G,ij}(L_0 \pm\delta_0)
$, then, $
\|p' - p''\| \ge \delta_0/ \sqrt{2}
$. Furthermore, we have $\|\dot q(t)\| \le \xi$ for all $t \ge 0$. Hence, starting from $q(t_0)\in X_{G,ij}(L_0)$, the trajectory $q(t)$ has to remain within $X_{G,ij}(I_0)$ for at least $\tau_0$ units of time. We have thus established .
On the other hand, we have $$\Psi(q(t_0)) - \Psi(q(t_0 + \tau_0)) = \int^{t_0+ \tau_0}_{t_0}\| f(q(t)) \|^2 \, dt.$$ Combining with the fact that $\|f(p)\| \ge \mu(L_0) /2 $ for all $p\in X_{G,ij}(I_0)$, we obtain $$\label{eq:epsilon0decrease}
\Psi(q(t_0)) - \Psi(q(t_0 + \tau_0)) \ge \epsilon_0, \hspace{5pt}\mbox{for} \hspace{5pt} \epsilon_0 := \frac{\mu(L_0)^2\tau_0}{4}.$$ From Lemma \[lem:upperboundonvelocity\], there is an instant $T_{\epsilon_0}$ such that $$\label{eq:deltaisepsilon0}
\Psi(q(T_{\epsilon_0})) - \Psi(q(\infty)) = \epsilon_0.$$ Since the sequence $\{t_i\}_{i\in {\mathbb{N}}}$ monotonically increases and approaches infinity, there is a $j'_i\in {\mathbb{N}}$ such that $
t_{j'_i} > T_{\epsilon_0}
$. We now show that holds for the choice of $j'_i$. The proof is carried out by contradiction. Suppose that, to the contrary, there is an instant $t_0 \ge t_{j'_i}$ such that $
{\mathcal}{L}_-(\sigma_{t_0}) = L_0
$. Then, $q(t_0) \in X_{G}(L_0)$, and hence by the arguments above, we have $\Psi(q(t_0 + \tau_0))\le \Psi(q(t_0)) - \epsilon_0$. Moreover, since $t_0 \ge t_{j'_i} > T_{\epsilon_0}$, and by the fact that $\Psi(q(t))$ strictly monotonically decreases in $t$, we have $ \Psi(q(t_0)) < \Psi(q(T_{\epsilon_0}))$. Combining these facts, we obtain $$\Psi(q(t_0 + \tau_0)) < \Psi(q(T_{\epsilon_0})) - \epsilon_0 = \Psi(q(\infty)),$$ which is a contradiction. We have thus shown that holds for the choice of $j'_i$.
[*2). Proof of existence of $j''_i$*]{}. The proof here is similar to the proof of existence of $j'_i$. Let $L_1:=\max\{l_i, L_0\}$; from Proposition \[pro:defnud\], there is a closed interval $I_1:= [L_1 - \delta_1, L_1+ \delta_1]$, for some $\delta_1 > 0$, such that $\mu(d) \ge \mu(L_1)/2$ for all $d \in I_1$. Let $\tau_1 := {\delta_1} / (\sqrt{2} \, \xi )$. Suppose that at certain instant $t_1$, $q(t_1)\in X_{G,ij}(L_1)$ for some $(v_i,v_j)\in E$, then from Lemma \[lem:boundedbelowdistance\], $
q(t) \in X_{G,ij}(I_1)
$ for all $t \in [t_1, t_1 + \tau_1 ]$. It then follows that $$\Psi(q(t_1)) - \Psi(q(t_1 + \tau_1)) \ge \epsilon_1, \hspace{5pt} \mbox{for} \hspace{5pt} \epsilon_1 := \frac{\mu(L_1)^2\tau_1}{4}.$$ Appealing again to Lemma \[lem:upperboundonvelocity\], we obtain an instant $T_{\epsilon_1}$ such that $
\Psi(q(T_{\epsilon_1})) - \Psi(q(\infty)) = \epsilon_1
$. Since both sequences $\{t_i\}_{i\in {\mathbb{N}}}$ and $\{l_i\}_{i\in {\mathbb{N}}}$ monotonically increase and approach infinity, there is a $j''_i\in {\mathbb{N}}$ such that $
t_{j''_i} > T_{\epsilon_1}$ and $l_{j''_i} > L_1$. Then, holds for the choice of $j''_i$ because otherwise, there would be an instant $t_1$, with $t_1 \ge t_{j''_i}$, such that ${\mathcal}{L}_+(\sigma_{t_1}) = L_1$, and hence $\Psi(q(t_1 + \tau_1)) < \Psi(q(\infty))$, which is a contradiction. This completes the proof.
[^1]: $^*$X. Chen is with the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign. email: [email protected].
---
author:
- 'N.A. Webb'
- 'J.-F. Olive'
- 'D. Barret'
- 'M. Kramer'
- 'I. Cognard'
- 'O. Löhmer'
date: 'Received / Accepted '
title: 'XMM-Newton spectral and timing analysis of the faint millisecond pulsars PSR J0751+1807 and PSR J1012+5307'
---
Introduction
============
X-ray emission from millisecond pulsars (MSPs) is thought to be from: charged relativistic particles accelerated in the pulsar magnetosphere (non-thermal emission indicated by a hard power-law spectrum and sharp pulsations); and/or thermal emission from hot polar caps; and/or emission from a pulsar driven synchrotron nebula; or interaction of relativistic pulsar winds with either a wind from a close companion star or the companion star itself [see e.g. @beck02 for a more thorough review]. Until now, X-ray pulsations and spectra have often not been observable for faint MSPs e.g. [@beck96] and [@halp97]. Thus it has been difficult to discriminate between competing neutron star models. However, taking advantage of the large collecting area of [*XMM-Newton*]{} [@jans01], it is becoming possible to observe not only the X-ray spectra but also put a limit on the presence of X-ray pulsations of these faint MSPs.
We have observed two faint MSPs with [*XMM-Newton*]{}. PSR J0751+1807 was first detected in an EGRET source error box, in September 1993 [@lund93], using the radio telescope at Arecibo. [@lund95] combined the mass function, eccentricity, orbital size and age of the pulsar, determined from radio data, to predict the expected type of companion star to the millisecond pulsar. They proposed that the secondary star is a helium white dwarf, with a mass between 0.12-0.6M$_\odot$, in a 6.3 hour orbit with the pulsar. They determined a 3.49 ms pulse period, but from the period derivatives, the spin down energy indicated that the pulsar is not the source of the $\gamma$-rays that were originally detected by EGRET. PSR J0751+1807 was subsequently detected in the soft X-ray domain by [@beck96], using the ROSAT PSPC. However, there were too few counts detected to fit a spectrum or detect pulsations. Using the HI survey of [@star92], they deduced an interstellar absorption of 4$\times$ 10${\rm^{20}}$ cm${\rm^{-2}}$. Then using their estimated counts and assuming a powerlaw spectrum of $dN/dE \propto E^{-2.5}$, they determined an unabsorbed flux of 1 $\times$ 10${\rm^{-13}}$ erg cm${\rm^{-2}}$ s${\rm^{-1}}$ (0.1-2.4 keV). The corresponding X-ray luminosity was then calculated to be L$_x$ = 4.7 $\times$ 10${\rm^{31}}$ erg s${\rm^{-1}}$, for a distance of 2 kpc. The distance they used was calculated using the radio dispersion measure and the model of [@tayl93] for the galactic distribution of free electrons.
PSR J1012+5307, a 5.26 ms pulsar, was discovered by [@nica95] during a survey for short period pulsars conducted with the 76-m Lovell radio telescope. From the dispersion measure they found a distance of 0.52 kpc. [@call98], and references therein, confirmed a white dwarf secondary of mass 0.16$\pm$0.02M$_\odot$ in a 14.5 hour circular orbit with the pulsar. [@halp96] associated a faint (L$_x \approx $2.5$\times$ 10$^{30}$ ergs s${\rm^{-1}}$, 0.1-2.4 keV) X-ray source detected with the ROSAT PSPC (80$\pm$24 photons), with the radio MSP PSR J1012+5307. However, the number of photons was insufficient to determine a spectrum or any pulsations.
In this work we present the X-ray spectra of both PSR J0751+1807 and PSR J1012+5307 for the first time. We also present some evidence for X-ray pulsations from both of these faint pulsars, and we compare their nature with that of other millisecond pulsars, e.g. [@beck93; @zavl98; @zavl02].
Observations and data reduction
===============================
PSR J0751+1807 was observed by XMM-Newton on 2000 October 1. The observations spanned 38 ksecs (MOS cameras) and 36.8 ksecs (PN camera), but a solar flare affected approximately 8 ksecs of these observations. The second of our faint MSPs, PSR J1012+5307, was observed on 2001 April 19. The MOS observations lasted 20.8 ksecs and the PN observations 19.2 ksecs. However, the whole observation was strongly affected by a solar flare. The MOS data were reduced using Version 5.4.1 of the [*XMM-Newton*]{} SAS (Science Analysis Software). However, for the PN data we took advantage of the development track version of the SAS. Improvements have been made to the [*oal*]{} (ODF (Observation Data File) access layer) task (version 3.106) to correct for spurious and wrong values, premature increments, random jumps and blocks of frames stemming from different quadrants in the timing data in the PN auxiliary file, as well as correcting properly for the onboard delays [@kirs03]. Indeed several spurious jumps were corrected using this version. We have verified that this version improved our timing solution using the pulsar [see @webb03].
We employed the MOS cameras in the full frame mode, using a thin filter [see @turn01]. The MOS data were reduced using ‘emchain’ with ‘embadpixfind’ to detect the bad pixels. The event lists were filtered, so that 0-12 of the predefined patterns (single, double, triple, and quadruple pixel events) were retained and the high background periods were identified by defining a count rate threshold above the low background rate and the periods of higher background counts were then flagged in the event list. We also filtered in energy. We used the energy range 0.2-10.0 keV, as recommended in the document ‘EPIC Status of Calibration and Data Analysis’ [@kirs02]. The event lists from the two MOS cameras were merged, to increase the signal-to-noise.
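For concreteness, the pattern, energy and background filtering described above can be outlined as in the sketch below. This is an illustration only, written with astropy rather than the SAS tools actually used; the file name and the count-rate threshold are placeholders, and the column names assume the standard EPIC event-list format.

```python
import numpy as np
from astropy.io import fits

# Illustrative re-implementation of the event filtering described in the text
# (the real reduction used the SAS chains and event selection tasks).
events = fits.open("mos_merged_events.fits")["EVENTS"].data

good = (events["PATTERN"] <= 12) & \
       (events["PI"] >= 200) & (events["PI"] <= 10000)   # PI in eV, 0.2-10 keV

# Flag high-background periods: bin the event times into a coarse light curve
# and reject bins whose count rate exceeds a threshold above the quiescent level.
bin_s = 100.0
t = events["TIME"][good]
edges = np.arange(t.min(), t.max() + bin_s, bin_s)
counts, _ = np.histogram(t, bins=edges)
rate = counts / bin_s
quiet = rate < 0.35                                       # placeholder threshold
keep = quiet[np.clip(np.digitize(t, edges) - 1, 0, len(rate) - 1)]
filtered_times = t[keep]
```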
The PN camera was also used with a thin filter, but in timing mode which has a timing resolution of 30$\mu$s [@stru01]. The PN data were reduced using the ‘epchain’ of the SAS. Again the event lists were filtered, so that 0-4 of the predefined patterns (single and double events) were retained, as these have the best energy calibration. We also filtered in energy. The document ‘EPIC Status of Calibration and Data Analysis’ [@kirs02] recommends use of PN timing data above 0.5 keV, to avoid increased noise. We used the data between 0.6-10.0 keV as this had the best signal-to-noise. The times of the events were then converted from times expressed in the local satellite frame to Barycentric Dynamical Time, using the task ‘barycen’ and the coordinates derived from observing the pulsar 4-8 times per month, over years, with the radio telescope at Nançay, France, by one of us (IC), for PSR J0751+1807. For PSR J1012+5307 we used the results from the high-precision timing observations taken since 1996 October, using the 100-m Effelsberg radio telescope and the 76-m Lovell telescope by two of us (MK and OL).
PSR J0751+1807 {#sec:0751}
==========================

Spectral analysis {#sec:0751spectra}
-----------------
We extracted the MOS spectra of PSR J0751+1807 using an extraction radius of $\sim$1 and rebinned the data into 15 eV bins. We used a similar neighbouring surface, free from X-ray sources, to extract a background file. We used the SAS tasks ‘rmfgen’ and ‘arfgen’ to generate a ‘redistribution matrix file’ and an ‘ancillary response file’. We binned up the data to contain at least 20 counts/bin. The PN data were extracted in a similar way, following the usual XMM-Newton timing data procedure. The data in the RAWY direction were binned into a single bin and the spectrum was extracted using a rectangle of 1 $\times$ 3 pixels, which included all the photons from the pulsar, see e.g. [@kust02]. The background spectrum was extracted from a similar neighbouring surface, free from X-ray sources. Again we used the SAS tasks ‘rmfgen’ and ‘arfgen’ to generate a ‘redistribution matrix file’ and an ‘ancillary response file’. We then used Xspec (Version 11.1.0) to fit the spectrum. We tried to fit simple models to the combined PN and MOS spectra. We find the model fits as given in Table \[tab:src52specfits\] for the spectrum between 0.2-10.0 keV, when the $N_H$ was frozen at 4 $\times 10^{20} {\rm cm}^{-2}$ [see @beck96]. We find that the best fitting spectrum is a single power law, with a photon index similar to that of the X-ray spectra of other millisecond pulsars, e.g. PSR B1821-24 [@sait97]. The spectrum of PSR J0751+1807, plotted with the power law fit, can be seen in Fig. \[fig:psr0751power\]. Allowing the $N_H$ to vary gives values compatible with the above values. We determine an unabsorbed flux of 4.4 $\times 10^{-14}\ {\rm
ergs\ cm}^{-2} {\rm s}^{-1}$ (0.2-10.0 keV).
![The combined MOS and PN spectrum of PSR J0751+1807 fitted with a power law model. The fit parameters can be found in Table \[tab:src52specfits\].[]{data-label="fig:psr0751power"}](fig1.ps){width="6cm"}
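To make the relation between the fitted photon index and the quoted unabsorbed flux explicit, the sketch below integrates a power-law photon spectrum $dN/dE = K E^{-\Gamma}$ over the 0.2-10.0 keV band. The normalization $K$ is not quoted in the text, so the value used in the example (roughly what would reproduce the quoted flux for $\Gamma = 1.59$) is an inferred assumption.

```python
import numpy as np

KEV_TO_ERG = 1.602e-9

def powerlaw_energy_flux(K, gamma, e_lo=0.2, e_hi=10.0):
    """Unabsorbed energy flux (erg cm^-2 s^-1) of dN/dE = K * E**-gamma,
    with K in photons cm^-2 s^-1 keV^-1 at 1 keV and energies in keV."""
    if np.isclose(gamma, 2.0):
        band = K * np.log(e_hi / e_lo)                                  # keV cm^-2 s^-1
    else:
        band = K * (e_hi**(2.0 - gamma) - e_lo**(2.0 - gamma)) / (2.0 - gamma)
    return band * KEV_TO_ERG

# With Gamma = 1.59, an assumed normalization of ~5.5e-6 gives a band flux of
# ~4.4e-14 erg cm^-2 s^-1 over 0.2-10.0 keV.
print(powerlaw_energy_flux(5.5e-6, 1.59))
```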
  ------------------------ ---------------- ------------------------------------------- ----------------- --------------- --------------- ------------------ ----- ------------------------------------------------------------- -----------------------------------
  Object                   Spectral model   N$_H$ ($\times$ 10${\rm^{22}}$ cm$^{-2}$)   kT (keV)          Photon Index    Abundance       $\chi^{2}_{\nu}$   dof   Flux ($\times 10^{-13}$ ${\rm ergs\ cm}^{-2} {\rm s}^{-1}$)   Luminosity (${\rm ergs\ s}^{-1}$)
  [**PSR J0751+1807**]{}   Power law        0.04                                        -                 1.59$\pm$0.20   -               1.33               14    0.44                                                          2.1$\times 10^{31}$
                           Bremsstrahlung   0.04                                        12.36$\pm$12.31   -                               1.52               14
                           Blackbody        0.04                                        0.32$\pm$0.04     -                               2.21               14
  [**Source 52**]{}        Power law        0.13$\pm$0.06                               -                 1.46$\pm$0.13   -               0.86               32    2.0                                                           1.9$\times 10^{43}$
                           Bremsstrahlung   0.09$\pm$0.05                               17.31$\pm$10.42   -                               0.87               32
                           Raymond Smith    0.07$\pm$0.04                               17.63$\pm$12.78   -               0.16$\pm$1.20   0.89               31
  [**PSR J1012+5307**]{}   Power law        0.007                                       -                 1.78$\pm$0.36   -               1.27               9     1.2                                                           3.9$\times 10^{30}$
                           Bremsstrahlung   0.007                                       1.95$\pm$1.45     -                               1.29               9
                           Blackbody        0.007                                       0.26$\pm$0.04     -                               1.37               9
  ------------------------ ---------------- ------------------------------------------- ----------------- --------------- --------------- ------------------ ----- ------------------------------------------------------------- -----------------------------------

  : Results of the spectral fits to the combined MOS and PN data described in the text.[]{data-label="tab:src52specfits"}

Timing analysis {#sec:0751pntiming}
---------------
We have reduced and analysed the timing data in the same way as the timing analysis that we carried out on the MSP studied in [@webb03], which is an MSP known to show pulsations in both the radio and X-rays. We corrected the timing data for the orbital movement of the pulsar and the data were folded on the radio ephemeris, see Table \[tab:0751parameters\], taking into account the time-delays due to the orbital motion. We used the data between 0.6-7.0 keV, as we found that the majority of the emission from PSR J0751+1807 was in this energy band (see Sect. \[sec:0751spectra\]) and thus the signal-to-noise in this band was the best. We tested the hypothesis that there is no pulsation in the MSP PSR J0751+1807. We searched frequencies at and around the expected frequency and we found the largest peak in the $\chi^{\scriptscriptstyle 2}_{\scriptscriptstyle \nu}$ versus change in frequency from the expected frequency at the expected value, see Fig \[fig:0751chisquare\] where we have taken the resolution of the data (n) and plotted 4n bins in the range -1.75$\times 10^{-4}<
\Delta \rm f < 1.75\times 10^{-4}$. Testing the significance of the peak [@bucc85], we find that it is significant at 1.7$\sigma$. However, as this is the largest peak when searching 700 frequencies about the expected frequency and it falls at the expected value, we tried to fold the data on this frequency. The folded lightcurve (0.6-7.0 keV), counts versus phase, is shown in Fig \[fig:0751foldedlc\]. We find one broad pulse per period. Fitting the lightcurve with a Lorentzian [as @kuip02] we find that the FWHM of the pulse is $\delta\phi_1$=0.311$\pm$0.1, centred at phase $\phi_1$=0.38$\pm$0.04 (errors are 90% confidence). Fitting with a Gaussian gives similar results. Using a Z$^{\scriptscriptstyle
2}_{\scriptscriptstyle 2}$ test [@bucc83], which is independent of binning, we determine a value of 5.5, which corresponds to a probability that the pulse-phase distribution deviates from a statistically flat distribution of 0.94. The pulsed percentage is 52$\pm$8%.
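The folding and the test statistics used above (the $\chi^2$ of the folded profile over trial frequencies, and the binning-independent $Z^2_n$ test of [@bucc83]) can be summarised by the following sketch. It assumes that barycentric and binary corrections have already been applied to the event times; the spin parameters would be taken from Table \[tab:0751parameters\].

```python
import numpy as np

def fold_phases(times, f0, fdot=0.0, fddot=0.0, t0=0.0):
    """Rotational phase in [0,1) for barycentred, binary-corrected event
    times (s), using a Taylor expansion of the spin frequency about t0."""
    dt = times - t0
    return (f0 * dt + 0.5 * fdot * dt**2 + fddot * dt**3 / 6.0) % 1.0

def chi2_flat(phases, nbins=6):
    """Pearson chi^2 of the folded profile against a flat distribution."""
    counts, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    expected = counts.sum() / nbins
    return ((counts - expected)**2 / expected).sum()

def z2_n(phases, n=2):
    """Z^2_n statistic (Buccheri et al. 1983); independent of binning."""
    z2 = 0.0
    for k in range(1, n + 1):
        c = np.cos(2.0 * np.pi * k * phases).sum()
        s = np.sin(2.0 * np.pi * k * phases).sum()
        z2 += c**2 + s**2
    return 2.0 * z2 / len(phases)
```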
Parameter Value
------------------------------------- --------------------------------------------------------------------------------------------
Right Ascension (J2000) 07$^{\rm h}$ 51$^{\rm m}$ 09${\scriptstyle .}\hspace*{-0.05cm}^{\scriptstyle \rm s}$156312
Declination (J2000) 18$^{\circ}$ 07$^{'}$ 38${\scriptstyle .}\hspace*{-0.05cm}^{\scriptstyle ''}$590620
Period (P) 0.003478770781560571 s
  Period derivative ($\dot{P}$)            0.726912 $\times 10^{-20}$ s s$^{-1}$
  Second period derivative ($\ddot{P}$)    2.48928 $\times 10^{-30}$ s s$^{-2}$
  Frequency ($\nu$)                        287.457858811 Hz
  Frequency derivative ($\dot{\nu}$)       -6.0066207 $\times 10^{-16}$ Hz s$^{-1}$
  Second frequency derivative ($\ddot{\nu}$)   -2.0569423 $\times 10^{-25}$ Hz s$^{-2}$
Epoch of the period (MJD) 49301.5
Orbital period 22735.664643 s
a.sin i 0.39660728
Eccentricity 0.0000566981
Time of ascending node (MJD) 49460.430540
: Ephemeris of PSR J0751+1807 from the Nançay radio timing data.[]{data-label="tab:0751parameters"}
![$\chi^{\scriptscriptstyle 2}_{\scriptscriptstyle \nu}$ versus change in frequency from the expected pulsation frequency (shown as the solid vertical line at $\Delta$ f = 0.0) for PSR J0751+1807.[]{data-label="fig:0751chisquare"}](fig2.ps){width="8cm"}
![Lightcurve of PSR J0751+1807 folded on the radio ephemeris and binned into 6 bins, each of 0.58 msecs. Two cycles are shown for clarity. A typical $\pm$1$\sigma$ error bar is shown. The dashed line shows the background level, where the error bar represents the $\pm$1$\sigma$ error.[]{data-label="fig:0751foldedlc"}](fig3.ps){width="8cm"}
Taking the 261$\pm$60 counts above the background in our observation, we are also able to analyse the variations in the lightcurves from different energy bands. We chose the energy bands: 0.6-1.0 keV; 1.0-2.0 keV; and 2.0-7.0 keV. The data were folded as before and binned into 5 bins, and two cycles in phase are presented for clarity. These lightcurves can be seen in Fig. \[fig:0751energybands\]. As can also be seen from the spectral fitting (see Sect. \[sec:0751spectra\]), PSR J0751+1807 emits mostly at lower energies. The lightcurves do not vary dramatically, although the peak may be slightly narrower at lower energies. The pulsed percentage is possibly lower at lower energies. We find 32$\pm16$% for each of the energy bands 0.6-1.0 keV and 1.0-2.0 keV, and 55$\pm23$% for the 2.0-7.0 keV band.
![Lightcurves of PSR J0751+1807 from different energy bands, folded on the radio ephemeris and binned into 5 bins. Two cycles are shown for clarity. (a) 0.6-1.0 keV; (b) 1.0-2.0 keV; (c) 2.0-7.0 keV. A typical $\pm$1$\sigma$ error bar is shown. []{data-label="fig:0751energybands"}](fig4.ps){width="11cm"}
The MOS field of view {#sec:0751mosfov}
---------------------
We have detected 46 sources in total using the task ‘emldetect’, with a maximum detection likelihood threshold of 4.5$\sigma$ and ignoring those sources not found on both cameras (unless they lay outside the FOV or were in a chip gap). These sources can be seen in Fig. \[fig:0751field\] and details such as their position, count rates and likelihood of detection can be found in Table \[tab:0751fieldsources\]. PSR J0751+1807 is labelled as source 47. [*ROSAT*]{} detected 10 sources in the [*XMM-Newton*]{} FOV [@beck96]. We calculate an unabsorbed flux limit of our observation of 3.0 $\times 10^{-15} {\rm ergs\ cm}^{-2} {\rm s}^{-1}$ (0.2-10.0 keV), using PIMMS (Mission Count Rate Simulator) Version 3.3a, with a power law, $\Gamma$ = 2 [as @hasi01].
----- ---------------- ---------------- ----------------- --------
Src R.A. (2000) dec. (2000) Counts/s Like-
ID $^h$ $^m$ $^s$ $^{\circ}$ ’ ” $\times10^{-3}$ lihood
3 7 50 54.1 17 57 59.3 5.6$\pm$0.8 62
4 7 50 54.8 17 58 27.9 5.6$\pm$0.8 63
5 7 51 12.9 17 58 44.3 4.4$\pm$0.6 53
13 7 51 9.7 18 0 1.04 3.3$\pm$0.5 40
17 7 51 46.8 18 0 36.0 6.4$\pm$0.8 99
19 7 51 44.0 18 1 7.05 3.8$\pm$0.7 32
20 7 51 13.8 17 57 21.3 3.7$\pm$0.7 23
23 7 50 45.1 18 1 30.8 3.5$\pm$0.6 31
24 7 50 25.2 18 1 48.4 6.6$\pm$1.1 40
25 7 51 26.9 18 2 0.08 6.4$\pm$0.6 134
26 7 50 20.9 18 2 25.6 5.3$\pm$1.0 34
27 7 50 57.4 18 3 19.2 2.8$\pm$0.4 39
28 7 51 1.6 18 3 20.7 3.7$\pm$0.5 65
29 7 51 35.6 18 3 29.6 4.9$\pm$0.6 64
31 7 50 36.6 18 3 28.8 3.0$\pm$0.7 18
33 7 51 7.9 18 4 21.4 3.6$\pm$0.4 67
34 7 50 37.9 18 5 2.7 3.3$\pm$0.6 28
36 7 50 52.3 18 5 19.1 2.6$\pm$0.4 32
37 7 50 38.0 18 5 40.8 4.0$\pm$0.6 48
38 7 50 48.1 18 5 55.1 3.3$\pm$0.5 60
39 7 51 40.4 18 6 12.9 9.6$\pm$1.1 106
40 7 50 49.2 18 6 39.9 2.8$\pm$0.4 43
41 7 52 1.1 18 6 42.6 5.3$\pm$0.9 38
42 7 51 48.5 18 6 44.6 8.3$\pm$0.8 141
43 7 50 52.7 18 6 55.7 6.9$\pm$0.6 202
44 7 51 13.6 17 56 50.7 6.1$\pm$0.9 56
45 7 51 37.2 18 7 20.4 9.6$\pm$0.7 255
47 7 51 9.1 18 7 36.3 8.9$\pm$0.6 327
51 7 51 4.2 18 8 45.0 12.6$\pm$0.7 466
52 7 51 17.8 18 8 56.4 36.3$\pm$1.2 2017
55 7 51 51.5 18 9 59.2 4.2$\pm$0.6 44
56 7 50 46.5 18 9 45.2 4.6$\pm$0.7 56
59 7 50 41.7 18 10 26.7 3.0$\pm$0.5 37
60 7 51 27.3 18 10 40.2 3.0$\pm$0.5 41
62 7 51 14.9 18 11 30.0 3.1$\pm$0.4 56
63 7 50 40.1 18 12 14.3 17.2$\pm$1.2 355
64 7 51 17.5 18 12 2.9 2.7$\pm$0.4 47
65 7 51 56.1 18 13 55.7 5.4$\pm$1.0 37
66 7 51 54.1 18 13 54.9 8.6$\pm$1.2 73
67 7 51 20.3 18 14 0.7 6.6$\pm$0.6 117
68 7 51 37.5 17 55 46.9 6.9$\pm$1.1 53
71 7 51 13.5 18 15 33.4 3.7$\pm$0.6 40
72 7 50 43.2 18 15 41.3 4.4$\pm$0.8 28
73 7 51 36.8 18 16 0.5 6.3$\pm$0.8 96
75 7 51 24.2 18 17 43.5 5.2$\pm$0.8 38
76 7 51 17.5 17 54 10.7 6.1$\pm$1.0 41
----- ---------------- ---------------- ----------------- --------
: X-ray sources in the MOS field of view. Information includes the source identification number, right ascension and declination of the source, count rate and detection likelihood. The detection likelihood value is the value given by ‘emldetect’ for detection in the two MOS cameras, corrected for the coding error in versions 4.11.15 and earlier.[]{data-label="tab:0751fieldsources"}
![The MOS field of view. The sources detected with a likelihood $>$4.5$\sigma$ are encircled and labelled with a number. These numbers correspond to the source identification numbers in Table \[tab:0751fieldsources\]. PSR J0751+1807 is labelled 47.[]{data-label="fig:0751field"}](fig5.ps){width="8.8cm"}
The brightest source in the field of view is source 52, which was not detected by [*ROSAT*]{}, where the deepest observation had a detection limit of 1.5 $\times 10^{-14}\ {\rm ergs\ cm}^{-2} {\rm s}^{-1}$ (0.2-10.0 keV). We detect this source with an unabsorbed flux of 2.0 $\times 10^{-13}\ {\rm ergs\ cm}^{-2} {\rm s}^{-1}$ (0.2-10.0 keV), more than 13 times brighter than the [*ROSAT*]{} detection limit. This source has been observed as a part of the AXIS (An XMM-Newton International Survey) project [see http://www.ifca.unican.es/$\sim$xray/AXIS/ and also @barc02]. The source identified as the optical counterpart of our source 52 in the i-band photometry is elongated and the spectrum shows calcium H and K lines and the G-band. From the spectral line shifts, a redshift of z=0.255 has been determined by the AXIS group and they propose that this source could be a galaxy. We extracted the MOS spectra of source 52 in the same way as described in Section \[sec:0751spectra\]. We have fitted the spectrum with the extra-galactic origin in mind and the results can be found in Table \[tab:src52specfits\]. The data fitted with a power law model can be found in Fig. \[fig:src52power\]. We investigated both models with the redshift frozen at 0.255 and also models with the redshift as a free parameter. We found that the fits for the frozen and the variable redshift (z) were the same within the error bars, hence we present only one set of results, those where the redshift was fixed at 0.255. We find a reasonably hard X-ray spectrum, which can not be fitted with a black body. We have also investigated a temporal variation, binning the data into 1ks and 5ks bins. However, we find no significant variation during our observation. From the X-ray spectral fitting and short time scale temporal analysis, we can not support nor disregard the galaxy hypothesis.
![The spectrum of the source 52, in the field of view, fitted with a power law model. The fit parameters can be found in Table \[tab:src52specfits\].[]{data-label="fig:src52power"}](fig6.ps){width="6cm"}
PSR J1012+5307 {#section}
=========================

Spectral analysis {#sec:1012spectra}
-----------------
Because the observation of PSR J1012+5307 was affected by a solar flare throughout, we detect only one source in the whole field of view. This source is at the centre of the field of view, with the same coordinates as those of PSR J1012+5307, and is therefore taken to be the MSP. We have not carried out any filtering for periods of higher background, as the whole observation would be included in the high-background periods. Thus the signal-to-noise is poor for this observation.
We extracted the spectra in the same way as described in Sect. \[sec:0751\]. We tried to fit simple models to the combined PN and MOS spectra. We find the model fits as given in Table \[tab:src52specfits\] for the spectrum between 0.2-10.0 keV, when the $N_H$ was frozen at 7 $\times 10^{19} {\rm cm}^{-2}$ [see @snow94]. We cannot discriminate which of these fits is the best; however, we present the spectrum of PSR J1012+5307 with a power law fit in Fig. \[fig:psr1012power\]. Allowing the $N_H$ to vary gives values compatible with the above values. We determine an unabsorbed flux of 1.2 $\times 10^{-13} {\rm
ergs\ cm}^{-2} {\rm s}^{-1}$ (0.2-10.0 keV).
![The combined MOS and PN spectrum of PSR J1012+5307 fitted with a power law model. The fit parameters can be found in Table \[tab:src52specfits\].[]{data-label="fig:psr1012power"}](fig7.ps){width="6cm"}
Timing analysis {#sec:1012pntiming}
---------------
We corrected the timing data, as before, for the orbital movement of the pulsar and the data were folded on new radio ephemerides calculated to be correct for our observation (see Table \[tab:1012parameters\]), taking into account the time-delays due to the orbital motion. We used the data between 0.6-5.0 keV, as the signal-to-noise was best in this band. Also from the spectral fitting (see Sect. \[sec:1012spectra\]) we found that the majority of the emission from PSR J1012+5307 was in this energy band. We tested the hypothesis that there is no pulsation in the MSP PSR J1012+5307. The largest peak in the $\chi^{\scriptscriptstyle 2}_{\scriptscriptstyle \nu}$ versus change in frequency from the expected frequency is at $\Delta \nu \simeq 8
\times 10^{-6} {\rm s}^{-1}$, see Fig \[fig:1012chisquare\]. This implies a $\Delta {\rm f}$/f (=$\Delta {\rm
P}$/P) of $\sim$4 $\times 10^{-8}$, similar to the value that we found for [@webb03] ($\sim$1 $\times
10^{-8}$) and approaching the value of $< 10^{-9}$ determined from the analysis of two revolutions of data of the MSP the [@kirs03; @kirs03b]. It is also well inside the resolution of this dataset ($\sim$1/T$_{obs}$), which is 7 $\times 10^{-5} {\rm s}^{-1}$, thus we can conclude that the data reduction and analysis made to the dataset are reliable. Testing the significance of the peak [@bucc85], we find that it is significant at 3$\sigma$. We folded the data on the radio period given in Table \[tab:1012parameters\]. The folded lightcurve (0.6-5.0 keV), counts versus phase, is shown in the top panel of Fig \[fig:1012foldedlc\]. We find some evidence for two pulses per period. Fitting the lightcurve with two Lorentzians [as @kuip02] we find that the FWHM of the pulses is $\delta\phi_1$=0.10$\pm$0.07, centred at phase $\phi_1$=0.36$\pm$0.04 and $\delta\phi_2$=0.09$\pm$0.03, centred at phase $\phi_2$=0.80$\pm$0.03. Fitting with two Gaussians gives similar results. Using a Z$^{\scriptscriptstyle 2}_{\scriptscriptstyle
4}$ test [@bucc83] on the X-ray data, we determine a value of 13, which corresponds to a probability that the pulse-phase distribution deviates from a statistically flat distribution of 0.99. We find a pulsed fraction of 77$\pm$13%.
In the lower panel in Fig \[fig:1012foldedlc\] we have plotted the radio lightcurve from the timing observations taken using the Effelsberg and Lovell telescopes.
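The orbital time-delay correction applied before folding can be sketched, to first order and for the nearly circular orbit described by Table \[tab:1012parameters\], as below. The sign and zero-phase conventions are assumptions and should be checked against the timing model actually used.

```python
import numpy as np

def roemer_delay(t, pb, asini, t_asc):
    """First-order Roemer delay (s) across a circular orbit.
    t, t_asc are barycentric times (s), pb the orbital period (s),
    asini the projected semi-major axis in light-seconds."""
    return asini * np.sin(2.0 * np.pi * (t - t_asc) / pb)

# Sketch of the correction applied before folding (convention assumed here:
# pulses arrive late when the pulsar is on the far side of its orbit):
# t_emission = t_barycentre - roemer_delay(t_barycentre, pb, asini, t_asc)
```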
Parameter Value
------------------------------------- ----------------------------------------------------------------------------------------------------
Right Ascension (J2000) 10$^{\rm h}$ 12$^{\rm m}$ 33${\scriptstyle .}\hspace*{-0.05cm}^{\scriptscriptstyle \rm s}$43463677
Declination (J2000) 53$^{\circ}$ 07$^{'}$ 02${\scriptstyle .}\hspace*{-0.05cm}^{\scriptscriptstyle ''}$4965199
Period (P) 0.00525605523 s
  Period derivative ($\dot{P}$)            0.726912 $\times 10^{-20}$ s s$^{-1}$
  Second period derivative ($\ddot{P}$)    2.48928 $\times 10^{-30}$ s s$^{-2}$
  Frequency ($\nu$)                        190.267837551251880(508) Hz
  Frequency derivative ($\dot{\nu}$)       -6.200023(202)$\times 10^{-16}$ Hz s$^{-1}$
  Second frequency derivative ($\ddot{\nu}$)   1.871(294) $\times 10^{-27}$ Hz s$^{-2}$
Epoch of the period (MJD) 52018.324635450415
Orbital period 52243.7224464(17) s
a.sin i 0.581817416(121)
Eccentricity $<$1.3 $\times 10^{-6}$
Time of ascending node (MJD) 52018.268144542(23)
: Ephemeris of PSR J1012+5307 from the Effelsberg and the Lovell radio timing data. Errors on the last digits are shown in parentheses after the values.[]{data-label="tab:1012parameters"}
![$\chi^{\scriptscriptstyle 2}_{\scriptscriptstyle \nu}$ versus change in frequency from the expected pulsation frequency (shown as the solid vertical line at $\Delta$ f = 0.0) for PSR J1012+5307.[]{data-label="fig:1012chisquare"}](fig8.ps){width="8cm"}
![Upper panel: Lightcurve folded on the radio ephemeris and binned into 12 bins, each of 0.44 msecs. Two cycles are shown for clarity. A typical $\pm$1$\sigma$ error bar is shown. The dashed line shows the background level, where the error bar represents the $\pm$1$\sigma$ error. Lower panel: Radio profile from the Effelsberg and the Lovell radio timing data. Again two cycles are shown for clarity. []{data-label="fig:1012foldedlc"}](fig9.ps){width="12cm"}
Discussion
==========
We have investigated the two faint millisecond pulsars PSR J0751+1807 and PSR J1012+5307 in the X-ray band 0.2-10.0 keV, to ascertain the nature of their X-ray spectra. Only a handful of MSPs have been seen to pulse in both the radio and X-rays to date [@beck02]; we have therefore also investigated whether these two MSPs show pulsations in X-rays.
[@sait97] showed that the MSP PSR B1821-24 has a magnetic field value at the light cylinder radius (B$_{\rm L}$) similar to that of the Crab pulsar. If this value can be taken to indicate the high-energy magnetospheric activity, they state that one can expect to see similar pulses from pulsars with a similar value of B$_{\rm L}$, and thus such pulsars are good candidates amongst the MSPs in which to look for magnetospheric X-ray pulsations in high energy bands. We have calculated B$_{\rm L}$ for PSR J0751+1807 and PSR J1012+5307, using the values for the magnetic field strength from [@zhan00]. We find 8.3$\times 10^4$ and 3.5$\times 10^4$ G respectively. The value calculated for PSR J0751+1807 indicates that it is amongst the top 10 values for a millisecond pulsar. That of PSR J1012+5307 places it in the top half. Thus, if a high B$_{\rm L}$ indicates high-energy magnetospheric activity, this lends support to the X-ray pulsations detected, at least tentatively, from both pulsars in our observations.
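Since the text adopts surface-field values from [@zhan00], which include corrections not reproduced here, the following standard vacuum-dipole sketch is intended only as an order-of-magnitude cross-check; the 10 km stellar radius and the numerical prefactor are the conventional assumptions, so the result need not match the quoted values exactly.

```python
import numpy as np

C_CM_S = 2.998e10

def b_surface_dipole(P, Pdot):
    """Conventional dipole estimate of the surface field (G); P in s."""
    return 3.2e19 * np.sqrt(P * Pdot)

def b_light_cylinder(P, Pdot, r_ns_cm=1.0e6):
    """Dipole field extrapolated as r^-3 out to R_L = c P / (2 pi)."""
    r_lc = C_CM_S * P / (2.0 * np.pi)
    return b_surface_dipole(P, Pdot) * (r_ns_cm / r_lc)**3
```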
We have also calculated the spin down energy (Ė) and the luminosity in the 0.1-2.4 keV band of these two pulsars, to compare the results with the correlation found between these two parameters by [@beck97]. [@beck97] suggest that pulsars that obey this relationship emit X-rays produced in the co-rotating magnetosphere. For PSR J0751+1807 we find log(Ė) of 34.35. To calculate the luminosity, we have used the distance calculated using the radio dispersion measure and the model of [@tayl93]. This gives a value of log(L$_x$) of 31.0$\pm$0.2, which places the point close to the expected value of 31.2 using the correlation proposed by [@beck97]. However, using the more recent model of [@cord03a; @cord03b] diminishes the distance by almost a factor 2, to 1.1 kpc. This gives a log(L$_x$) of 30.5$\pm$0.2 and thus displaces the point further from the expected value. For the MSP PSR J1012+5307 we find a log(Ė) of 34.2 and a log(L$_x$) of 30.4$\pm$0.3, where the expected log(L$_x$) is approximately 31.1. This places the point quite some distance from the expected value, which may indicate that the relationship is not a hard and fast rule. Indeed [@poss02] found recently that when analysing pulsars in the 2.0-10.0 keV band, this relationship does not hold true.
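The spin-down power and X-ray luminosity used in this comparison follow from the standard relations sketched below. The moment of inertia ($10^{45}$ g cm$^2$) is the usual assumption, and because the quoted values rely on particular distance models and possibly on kinematically corrected period derivatives, the sketch is not expected to reproduce them exactly.

```python
import numpy as np

KPC_CM = 3.086e21

def spin_down_power(P, Pdot, I=1.0e45):
    """E-dot = 4 pi^2 I Pdot / P^3 in erg s^-1 (P in s, I in g cm^2)."""
    return 4.0 * np.pi**2 * I * Pdot / P**3

def xray_luminosity(flux_cgs, d_kpc):
    """Isotropic L_x (erg s^-1) from an unabsorbed flux (erg cm^-2 s^-1)."""
    return 4.0 * np.pi * (d_kpc * KPC_CM)**2 * flux_cgs
```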
PSR J0751+1807
--------------
We find that the best fitting model to the X-ray spectrum of PSR J0751+1807 is a power law. This is indicative of a magnetospheric origin of the X-ray emission. We have shown evidence that there appears to be a single broad pulse emitted from this MSP, where the pulsation appears to change only slightly with increasing energy (it becomes slightly broader). The observations also indicate that the pulsed fraction possibly increases at higher energies. A broad pulsation can be observed from pulsars showing hard magnetospheric emission or soft thermal emission, and thus we cannot yet discern the origin of the X-ray emission from PSR J0751+1807 with this data set. Indeed, this pulsar may be best fitted by a multi-component model, as is the case with several other brighter MSPs, e.g. [@webb03] or [@zavl98; @zavl02]. Longer observations of PSR J0751+1807 will help to distinguish the true nature of the spectrum.
PSR J1012+5307
--------------
We can not discriminate which of the model fits to the MSP is the best, however we find that the single power law has a similar photon index to the X-ray spectrum of (see Sect. \[sec:0751spectra\]), which could indicate a magnetospheric origin of the X-ray emission. However, the temperature of the blackbody (1.9$\pm$0.5 $\times 10^6$ K) is consistent with that emitted from the heated polar caps of a millisecond pulsar [10$^6$-10$^7$ K e.g. @zhan03; @zavl98 and references therein]. Calculating the radius of the emission area from the blackbody model fit, we find a radius of 0.05$\pm^{\scriptscriptstyle 0.01}_{\scriptscriptstyle 0.02}$ km, which is smaller than the expected radius of emission from polar caps [$\sim$1 km e.g. @zhan03; @zavl98 and references therein]. [@zavl96] state however, that spectral fits with simplified blackbody models can produce higher temperatures and smaller sizes due to the fact that the X-ray spectra emerging from light-element atmospheres are harder than blackbody spectra. Alternatively, [@zavl98] and [@zavl02] suggest that the thermal emission can be from non uniform polar caps and we may therefore be seeing the emission from the hotter central region of the caps.
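The emitting radius quoted above follows from the normalization of the blackbody fit; the underlying inversion is simply $L = 4\pi R^2 \sigma T^4$, sketched below. Any numbers plugged in here would be assumptions, since the bolometric luminosity of the blackbody component is not quoted directly in the text.

```python
import numpy as np

SIGMA_SB = 5.670e-5          # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4

def blackbody_radius_km(L_bol, T_K):
    """Radius (km) of a uniform spherical blackbody of bolometric
    luminosity L_bol (erg s^-1) and temperature T_K (K)."""
    r_cm = np.sqrt(L_bol / (4.0 * np.pi * SIGMA_SB * T_K**4))
    return r_cm / 1.0e5
```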
Folding the timing data on the radio frequency, we find some evidence for two pulses emitted from this MSP, separated by approximately 0.4 in phase. This is similar to other millisecond pulsars, e.g. [@webb03] and [@sait97].
Conclusions
===========
We have presented XMM-Newton data of the faint millisecond pulsars and . Both of these pulsars have a reasonably large magnetic field at the light cylinder radius, which could indicate that both should show pulsations in X-rays. We present for the first time the X-ray spectra of these two faint millisecond pulsars. We find that a power law model best fits the spectrum of , $\Gamma$=1.59$\pm$0.20, with an unabsorbed flux of 4.4 $\times 10^{-14} {\rm ergs\ cm}^{-2} {\rm s}^{-1}$ (0.2-10.0 keV). A power law is also a good description of the spectrum of , $\Gamma$=1.78$\pm$0.36, with an unabsorbed flux of 1.2 $\times 10^{-13} {\rm ergs\ cm}^{-2} {\rm s}^{-1}$ (0.2-10.0 keV). However, a blackbody model cannot be excluded as the best fit to the data. We have also shown evidence to suggest that both of these MSPs may show X-ray pulsations: appears to show a single pulse, whereas may show some evidence for two pulses per pulse period.
We wish to thank A. Marcowith for his advice pertaining to the work in this manuscript. This article was based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. The authors also acknowledge the CNES for its support in this research.
Barcons, X., Carrera, F.J., Watson, M.G., et al., 2002, A&A, 382, 522
Becker, W., Trümper, J., 1993, Nature, 365, 528
Becker, W., Trümper, J., Lundgren, S.C., 1996, MNRAS, 282, L33
Becker, W., Trümper, J., 1997, A&A, 326, 682
Becker, W., Aschenbach, B., 2002, Proceedings of the 270. WE-Heraeus Seminar on Neutron Stars, Pulsars, and Supernova Remnants. MPE Report 278. Eds. W. Becker, H. Lesch, and J. Trümper
Buccheri, R., Bennett, K., Bignami, G., et al., 1983, A&A, 128, 245
Buccheri, R., Sacco, B., Damico, N., Hermsen, W., 1985, Nature, 316, 131
Callanan, P.J., Garnavich, P.M., Koester, D., 1998, MNRAS, 298, 207
Cordes, J.M., Lazio, T.J.W., 2003a, ApJ, submitted
Cordes, J.M., Lazio, T.J.W., 2003b, ApJ, submitted
Halpern, J.P., 1996, ApJ, 459, L9
Halpern, J.P., Wang, F.Y.-H., 1997, BAAS, 29, 1391
Hasinger, G., Altieri, B., Arnaud, M., et al., 2001, A&A, 365, L45
Jansen, F., Lumb, D., Altieri, B., et al., 2001, A&A, 365, L1-6
Kirsch, M., and the EPIC Consortium, 2002, XMM-SOC-CAL-TN-0018
Kirsch M.G.F., Becker W., Benlloch-Garcia S., et al., 2003, Proc. SPIE, 5165
Kirsch M.G.F., 2003, PhD Thesis, Univ. of Tübingen, ISBN3-89959-070-8
Kuiper, L., Hermsen, W., Verbunt, F., Thompson, D.J., Stairs, I.H., Lyne, A.G., Strickman, M.S., Cusumano, G., 2000, A&A, 359, 615
Kuiper, L., Hermsen, W., Verbunt, F., Ord, S., Stairs, I.H., Lyne, A.G., 2002a, ApJ, 577, 917
Kuiper, L., Hermsen, W., 2002b, “The Gamma-Ray Universe”, Eds. A. Goldwurm, D. Neumann, J. Tran, V. Thanh, The Gioi Publishers (Vietnam)
Kuster, M., Kendziorra, M., Benlloch, S., Becker, W., Lammers, U., Vacanti, G., Serpell, E., New Visions of the Universe in the XMM-Newton and Chandra Era, ESA-SP488, Ed. F. Jansen
Lange, Ch., Camilo, F., Wex, N., Kramer, M., Backer, D.C., Lyne, A.G., Doroshenko, O., 2001, MNRAS, 326, 274
Lundgren, S.C., Zepka, A.F., Cordes, J.M., Becker, W.E., Kanbach, G., Trümper, J., Fierro, J.M., 1993, AAS, 183, 3703
Lundgren, S.C., Zepka, A.F., Cordes, J.M., 1995, ApJ, 453, 419
Nicastro, L., Lyne, A.G., Lorimer, D.R., Harrison, P.A., Bailes, M., Skidmore, B.D., 1995, MNRAS, 273, L68
Possenti, A., Cerutti, R., Colpi, M., Mereghetti, S., 2002, A&A, 387, 993
Saito, Y., Kawai, N., Kamae, T., Shibata, S., Dotani, T., Kulkarni, S.R., 1997, ApJ, 477, 37
Stark, A.A., Gammie, C.F., Wilson, R.W., Baily, W.J., Linke, R., Heiles, C., Hurwitz, M., 1992, ApJS, 79, 77
Snowden, S.L., Hasinger, G., Jahoda, K., et al., 1994, ApJ, 430, 601
Strüder, L., Briel, U., Dennerl, K., et al., 2001, A&A, 365, L18
Taylor, J.H., Cordes, J.M., 1993, ApJ, 411, 674
Turner, M.J.L., Abbey, A., Arnaud, M. et al., 2001, A&A, 365, L27
Webb, N.A., Olive, J.-F., Barret, D., 2003, A&A, submitted
Zavlin, V.E., Pavlov, G.G., Shibanov, Y.A., 1996, A&A, 315, 141
Zavlin, V.E., Pavlov, G.G., 1998, A&A, 329, 583
Zavlin, V.E., Pavlov, G.G., Sanwal, D., Manchester, R.N., Trümper, J., Halpern, P., Becker, W., 2002, ApJ, 569, 894
Zhang, B., Harding, A., 2000, ApJ, 532, 1150
Zhang, L., Cheng, K.S., 2003, A&A, 398, 639
---
abstract: |
Constrained shortest distance (CSD) querying is one of the fundamental graph query primitives, which finds the shortest distance from an origin to a destination in a graph with a constraint that the total cost does not exceed a given threshold. CSD querying has a wide range of applications, such as routing in telecommunications and transportation. With an increasing prevalence of cloud computing paradigm, graph owners desire to outsource their graphs to cloud servers. In order to protect sensitive information, these graphs are usually encrypted before being outsourced to the cloud. This, however, imposes a great challenge to CSD querying over encrypted graphs. Since performing constraint filtering is an intractable task, existing work mainly focuses on unconstrained shortest distance queries. CSD querying over encrypted graphs remains an open research problem.
In this paper, we propose `Connor`, a novel graph encryption scheme that enables approximate CSD querying. `Connor` is built based on an efficient, tree-based ciphertext comparison protocol, and makes use of symmetric-key primitives and the somewhat homomorphic encryption, making it computationally efficient. Using `Connor`, a graph owner can first encrypt privacy-sensitive graphs and then outsource them to the cloud server, achieving the necessary privacy without losing the ability of querying. Extensive experiments with real-world datasets demonstrate the effectiveness and efficiency of the proposed graph encryption scheme.
author:
- |
    Meng Shen, Baoli Ma, Liehuang Zhu, Rashid Mijumbi, \
    Xiaojiang Du, and Jiankun Hu
bibliography:
- 'main.bib'
title: 'Cloud-Based Approximate Constrained Shortest Distance Queries over Encrypted Graphs with Privacy Protection'
---
Cloud Computing, Privacy, Graph Encryption, Constrained Shortest Distance Querying
Introduction
============
Recent years have witnessed the proliferation of applications based on graph-structured data [@Graph:Encryption:for:Approximate:Shortest:Distance:Queries; @Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries], such as online social networks, road networks, web graphs [@Shen2017Classification], biological networks, and communication networks [@Shen2014Towards; @Xu2015Achieving]. Consequently, many systems for managing, querying, and analyzing massive graphs have been proposed in both academia (e.g., GraphLab [@GraphLab:A:New:Framework:For:Parallel:Machine:Learning], Pregel [@Pregel:a:system:for:large:scale:graph:processing] and TurboGraph [@TurboGraph]) and industry (e.g., Titan, DEX and GraphBase). With the prevalence of cloud computing, graph owners (e.g., enterprises and startups for graph-based services) desire to outsource their graph databases to a cloud server, which raises a great concern regarding privacy. An intuitive way to enhance data privacy is encrypting graphs before outsourcing them to the cloud. This, however, usually comes at the price of inefficiency, because it is quite difficult to perform operations over encrypted graphs.
Shortest distance querying is one of the most fundamental graph operations, which finds the shortest distance, according to a specific criterion, for a given pair of source and destination in a graph. In practice, however, users may consider multiple criteria when performing shortest distance queries [@Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries]. Taking the road network as an example, a user may want to know the shortest distance, in terms of travelling time, between two cities within a budget for total toll payment. This problem can be represented by a constrained shortest distance (CSD) query, which finds the shortest distance based on one criterion with one or more constraints on other criteria.
In this paper, we focus on single-constraint CSD queries. This is because most practical problems can be represented as a single-constraint CSD query. For instance, such a query on a communication network could return the minimum cost from a starting node to a terminus node, with a threshold on routing delay. In addition, multi-constraint CSD queries can usually be decomposed into a group of sub-queries, each of which can be abstracted as a single-constraint CSD query. Formally, a CSD query[^1] is defined as follows: given an origin $s$, a destination $t$, and a cost constraint $\theta$, find the shortest distance between $s$ and $t$ whose total cost $c$ does not exceed $\theta$.
Existing studies in this area can be roughly classified into two categories. The *first* category mainly focuses on the CSD query problem over unencrypted graphs [@Bicriterion:path:problems; @Approximation:schemes:for:restricted:shortest:path:problem; @Multiobjective:optimization:Improved:FPTAS:shortest:paths:and:non-linear:objectives:with:applications; @Route:Planning:Bicycles-Exact:Constrained:Shortest:Paths:Made:Practical:via:Contraction:Hierarchy; @Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries]. However, these methods cannot be easily applied in the encrypted graph environment, because many operations on plain graphs required in these methods (e.g., addition, multiplication, and comparison) cannot be carried out successfully without a special design for encrypted graphs. The *second* category aims at enabling the shortest distance (or shortest path) queries over encrypted graphs [@Graph:Encryption:for:Approximate:Shortest:Distance:Queries; @Shortest:Paths:and:Distances:with:Differential:Privacy]. They usually adopt distance oracles such that the approximate distance between any two vertices can be efficiently computed, e.g., in a sublinear way. The main limitation of these approaches is that they are incapable of performing constraint filtering over the cloud-based encrypted graphs. Therefore, they cannot be directly applied to answering CSD queries.
Motivated by the limitations of existing schemes, our goal in this paper is to design a practical graph encryption scheme that enables CSD queries over encrypted graphs. As the CSD problem over plain graphs has been proved to be NP-hard [@Approximation:schemes:for:restricted:shortest:path:problem], existing studies (e.g., [@Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries]) usually resort to approximate solutions, which guarantee that the resulting distance is no longer than $\alpha$ times of the shortest distance (where $\alpha$ is an approximation ratio predefined by graph owners), subject to the cost constraint $\theta$. The encryption of graphs would make the CSD problem even more complicated. Hence, we also focus on devising an approximate solution.
Specifically, this paper presents `Connor`, a novel graph encryption scheme targeting the approximate CSD querying over encrypted graphs. `Connor` is built on a secure 2-hop cover labeling index (2HCLI), which is a type of distance oracle such that the approximate distance between any two vertices in a graph can be efficiently computed [@Graph:Encryption:for:Approximate:Shortest:Distance:Queries; @Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries]. The vertices of the graph in the secure 2HCLI are encrypted by particular pseudo-random functions (PRFs). In order to protect real values of graph attributes while allowing for cost filtering, we encrypt *costs* and *distances* (between pairs of vertices) by the order-revealing encryption (ORE) [@Order:Revealing:Encryption:New:Constructions:Applications:and:Lower:Bounds; @Practical:Order:Revealing:Encryption:with:Limited:Leakage] and the somewhat homomorphic encryption (SWHE) [@boneh2005evaluating], respectively. Based on the ORE, we design a simple but efficient tree-based ciphertexts comparison protocol, which can accelerate the constraint filtering process on the cloud side.
The main contributions of this paper are as follows.
1. We propose a novel graph encryption scheme, `Connor`, which enables the approximate CSD querying. It can answer an $\alpha$-CSD query in milliseconds and thereby achieves computational efficiency.
2. We design a tree-based ciphertexts comparison protocol, which determines the relationship between the sum of two integers and a third integer over their ciphertexts, with controlled disclosure. This protocol can also serve as a building block in other relevant application scenarios.
3. We present a thorough security analysis of `Connor` and demonstrate that it achieves the latest security definition named CQA2-security [@Structured:encryption]. We also implement a prototype and conduct extensive experiments on real-world datasets. The evaluation results show the effectiveness and efficiency of the proposed scheme.
To the best of our knowledge, this is the first work that enables the approximate CSD querying over encrypted graphs.
The rest of this paper is organized as follows. We summarize the related work in Section \[sec:related\_work\] and describe the background of the approximate CSD querying in Section \[sec:background\]. We formally define the privacy-preserving approximate CSD querying problem in Section \[sec:PROBLEM DESCRUPTION\]. After that, the construction of `Connor` is presented in Section \[sec:main scheme\], with a detailed description of the tree-based ciphertexts comparison protocol in Section \[sec:Tree:Based:Ciphertexts:Comparison:Approach\]. We exhibit the complexity and security analyses in Section \[sec:security\], evaluate the proposed scheme through extensive experiments in Section \[sec:evaluation\], and conclude this paper in Section \[sec:conclusion\].
Related Work {#sec:related_work}
============
In an era of cloud computing, security and privacy become great concerns of cloud service users [@keyword_hu; @du_infocom14; @Cheng2017A; @Wu2014MobiFish; @Wu2014Security; @Du2007An]. Here we briefly summarize the related work from two aspects, i.e., CSD querying over plain graphs and graph privacy protection.
**Plain CSD queries.** The constrained shortest distance/path querying over plain graphs has attracted considerable research attention. Hansen [@Bicriterion:path:problems] proposed an augmented Dijkstra’s algorithm for exact constrained shortest path queries without an index. This method, however, resulted in a significant computational burden. In order to improve the querying efficiency, another solution [@Multiobjective:optimization:Improved:FPTAS:shortest:paths:and:non-linear:objectives:with:applications] focused on approximate constrained shortest path queries, which was also index-free.
The state-of-the-art solution to the exact constrained shortest path querying with an index was proposed by Storandt [@Route:Planning:Bicycles-Exact:Constrained:Shortest:Paths:Made:Practical:via:Contraction:Hierarchy], which accelerated query procedure with an indexing technique called contraction hierarchies. This approach still results in impractically high query processing cost. Wang et al. [@Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries] proposed a solution to the approximate constrained shortest path querying over large-scale road networks. This method took full advantage of overlay graph techniques to construct an overlay graph based on the original graph, whose size was much smaller than that of the original one. Consequently, they built a constrained labeling index structure over the overlay graph, which greatly reduced the query cost. Unfortunately, all these solutions are merely suitable to perform queries over unencrypted graphs.
**Graph privacy protection.** Increasing concerns about graph privacy have been raised with the wide adoption of the cloud computing paradigm over the past decade. Chase and Kamara [@Structured:encryption] first introduced the notion of graph encryption, where they proposed several constructions for graph operations, such as adjacency queries and neighboring queries. Cao et al. [@Privacy:Preserving:Query:over:Encrypted:Graph:Structured:Data:in:Cloud:Computing] defined and solved the problem of privacy-preserving query over encrypted graph data in cloud computing by utilizing the principle of “filtering-and-verification”. They built the feature-based index of a graph in advance and then chose the efficient inner product to carry out the filtering procedure. Some approaches [@Analyzing:graphs:with:node:differential:privacy; @Mining:Frequent:Graph:Patterns:with:Differential:Privacy; @Shortest:Paths:and:Distances:with:Differential:Privacy] utilized the differential privacy technique to query graphs privately, which might suffer from weak security. These studies, however, introduced prohibitively great storage costs and were not practical for large-scale graphs. Meng et al. [@Graph:Encryption:for:Approximate:Shortest:Distance:Queries] proposed three computationally efficient constructions that supported the approximate shortest distance querying with distance oracles, which were provably secure against a semi-honest cloud server.
Secure multi-party computation (SMC) techniques have been widely applied to address the privacy-preserving shortest path problem [@Blanton2013Data; @Aly2013Securely; @SMC_SP_Keller2014Efficient; @SMC_Gupta2012A], as well as other secure computation problems [@Bayatbabolghani2017Secure]. Aly et al. [@Aly2013Securely] focused on the shortest path problem over traditional combinatorial graph in a general multi-party computation setting, and proposed two protocols for securely computing shortest paths in the graphs. Blanton et al. [@Blanton2013Data] designed data-oblivious algorithms to securely solve the single-source single-destination shortest path problem, which achieved the optimal or near-optimal performance on dense graphs. Keller and Scholl [@SMC_SP_Keller2014Efficient] designed several oblivious data structures (e.g., priority queues) for SMC and utilized them to compute shortest paths on general graphs. Gupta et al. [@SMC_Gupta2012A] proposed an SMC-based approach for finding policy-compliant paths that have the least routing cost or satisfy bandwidth demands among different network domains. However, existing general-purpose SMC solutions for the shortest path problem may result in heavy communication overhead.
Although there is a respectable body of work on querying encrypted graphs, the privacy-preserving CSD query problem remains unsolved. In this paper, we propose a novel and efficient graph encryption scheme for CSD queries.
Background {#sec:background}
==========
This section presents the formal definition of the CSD query problem and introduces the 2HCLI structure for graph queries.
Approximate CSD Query {#sec:definitions}
---------------------
Let $G=(V,E)$ be a directed graph[^2] with a vertex set $V$ and an edge set $E$. Each edge $e \in E$ is associated with a *distance* $d(e) \ge 0$ and a *cost* $c(e) \ge 0$. We regard the cost $c(e)$ as the constraint. We refer to a sequence of edges connecting two vertices as a *path*. For a path $P = (e_1, e_2, \dots , e_k)$, its distance $d(P)$ is defined as $d(P)= \sum_{i=1}^{k} d(e_i)$, which indicates the distance from its origin to its destination. Similarly, we define the cost of $P$ as $c(P)= \sum_{i=1}^{k} c(e_i)$. The notations throughout the paper are summarized in Table \[tab:notations\].
Given a graph $G$, an origin vertex $s \in V$, a destination vertex $t \in V$, and a cost constraint $\theta$, a CSD query is to find the shortest distance $d$ between $s$ and $t$ with the total cost no more than $\theta$. Since the CSD query problem has been proved to be NP-hard [@Approximation:schemes:for:restricted:shortest:path:problem], we keep in line with existing solutions [@Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries] and focus on proposing an approximate CSD solution in this paper.
Inspired by a common definition of the approximate shortest path query over plain graphs [@Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries], we define the approximate CSD query (i.e., $\alpha$-CSD query) as follows.
($\alpha$-CSD QUERY). *Given an origin $s$, a destination $t$, a cost constraint $\theta$ and an approximation ratio $\alpha$, an $\alpha$-CSD query returns the distance $d(P)$ of a path $P$, such that $c(P) \le \theta$ and $d(P) \le \alpha \cdot d_{opt}$, where $d_{opt}$ is the optimal answer to the exact CSD query with the origin $s$, destination $t$ and cost constraint $\theta$.*
Fig. \[fig:graph\] shows a simple graph with five vertices, where the distance and cost of each edge are marked alongside it. Given an origin $a$, a destination $c$, a cost constraint $\theta=4$, the exact CSD query returns the optimal distance $d_{opt}=6$, where the corresponding path is $(a,b,c)$. For an approximation ratio $\alpha=1.5$, a valid answer to the $\alpha$-CSD query with the same parameters (i.e., the origin $a$, the destination $c$, and $\theta=4$) is 8, with the corresponding path $P_\alpha=(a,e,b,c)$. That is because $d(P_\alpha)=8 < \alpha \cdot d_{opt}=9$ and $c(P_\alpha)=3 < \theta$.
Based on the above definition, given two paths $P_{1}$ and $P_{2}$ with the same origin and destination, we say that $P_{1}$ $\alpha$-*dominates* $P_{2}$ iff $c(P_{1}) \le c(P_{2})$ and $d(P_{1}) \le \alpha \cdot d(P_{2})$. With this principle, we can reduce the construction complexity of the graph index significantly, because a great deal of redundant entries in the index can be filtered out. We illustrate this further in the following subsection.
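To make the definitions above concrete, the following Python sketch enumerates the simple paths of a small, made-up directed graph (not the graph of Fig. \[fig:graph\], whose complete edge weights are not reproduced here), computes the exact CSD answer by brute force, and checks whether a candidate distance is a valid $\alpha$-CSD answer. It is illustrative only; none of the algorithms in this paper enumerate paths explicitly.

```python
# Toy directed graph: (u, v) -> (distance, cost). Made-up example values.
EDGES = {
    ("a", "b"): (2, 2), ("b", "c"): (4, 2), ("a", "e"): (3, 1),
    ("e", "b"): (1, 1), ("e", "c"): (6, 5),
}

def simple_paths(src, dst, visited=None):
    """Yield (distance, cost) of every simple path from src to dst."""
    visited = (visited or set()) | {src}
    for (u, v), (d, c) in EDGES.items():
        if u != src or v in visited:
            continue
        if v == dst:
            yield d, c
        else:
            for d2, c2 in simple_paths(v, dst, visited):
                yield d + d2, c + c2

def exact_csd(src, dst, theta):
    """Exact CSD answer: shortest distance subject to total cost <= theta."""
    feasible = [d for d, c in simple_paths(src, dst) if c <= theta]
    return min(feasible) if feasible else None

def is_valid_alpha_csd(answer, src, dst, theta, alpha):
    """A valid alpha-CSD answer is the distance of some cost-feasible path
    and is at most alpha times the exact optimum."""
    d_opt = exact_csd(src, dst, theta)
    feasible = {d for d, c in simple_paths(src, dst) if c <= theta}
    return d_opt is not None and answer in feasible and answer <= alpha * d_opt

print("exact CSD distance:", exact_csd("a", "c", theta=4))          # 6, via a-b-c
print("is 8 a valid 1.5-CSD answer?",
      is_valid_alpha_csd(8, "a", "c", theta=4, alpha=1.5))          # True: 8 <= 1.5 * 6
```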
![An example illustrating the $\alpha$-CSD query over a graph.[]{data-label="fig:graph"}](figure/shen1.pdf){height="3cm"}
Constructing Labeling Index {#sec:2HCLI}
---------------------------
The encrypted index designed in this paper is mainly constructed based on the well-known 2HCLI, which is a special data structure that supports the shortest distance query efficiently [@Reachability:and:Distance:Queries:Via:2:Hop:Labels; @Fast:exact:shortest:path:distance:queries:on:large:networks:by:pruned:landmark:labeling; @Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries]. Here we briefly describe the basic idea of the 2HCLI, and illustrate its application in building a constrained labeling index.
Given a graph $G=(V,E)$ with a vertex set $V$ and an edge set $E$, each vertex $v \in V$ is associated with an in-label set and an out-label set, which are denoted by $\Delta_{in}(v)$ and $\Delta_{out}(v)$, respectively. Each entity in $\Delta_{in}(v)$ corresponds to the shortest distance from a vertex $u \in V$ to $v$. It implies that $v$ is reachable from $u$ by one or more paths, but is not necessarily a neighbor, or 2-hop neighbour, of $u$. Similarly, each entity in $\Delta_{out}(v)$ corresponds to the shortest distance from $v$ to another vertex $u$ in $V$. To answer a shortest distance query from an origin $s$ to a destination $t$, we first find the common vertices in the labels $\Delta_{out}(s)$ and $\Delta_{in}(t)$, and then select the shortest distance from $s$ to $t$. Note that the entities in $\Delta_{in}(v)$ and $\Delta_{out}(v)$ are carefully selected [@Fast:exact:shortest:path:distance:queries:on:large:networks:by:pruned:landmark:labeling] so that the distance of any two vertices $s$ and $t$ can be computed by $\Delta_{out}(s)$ and $\Delta_{in}(t)$.
Considering the graph in Fig. \[fig:graph\], if we ignore the cost criterion of edges, the *basic* unconstrained shortest distance query with an origin $a$ and a destination $c$ can be answered with the help of the 2HCLI, as shown in Fig. \[fig:2HCLI\_1\]. Given the labels $\Delta_{out}(a)$ and $\Delta_{in}(c)$, it is easy to obtain the set of common vertices, which consists of vertices $b$ and $e$. The final answer to the basic shortest distance query should be 5, because $d(a,e)+d(e,c)=5 < d(a,b)+d(b,c)=6$.
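A minimal Python sketch of this lookup: intersect the out-label of the origin with the in-label of the destination and minimize over the common pivot vertices. The label contents below are illustrative values chosen only so that the query from $a$ to $c$ returns 5, as in the example above.

```python
# Illustrative 2-hop labels (pivot -> shortest distance); values are chosen
# only so that the query a -> c reproduces the answer 5 from the example.
OUT_LABEL = {"a": {"b": 2, "e": 3}}   # Delta_out(a)
IN_LABEL  = {"c": {"b": 4, "e": 2}}   # Delta_in(c)

def two_hop_distance(s, t):
    """Unconstrained shortest distance via the common pivots of the 2HCLI."""
    common = OUT_LABEL[s].keys() & IN_LABEL[t].keys()
    if not common:
        return None                   # t is not reachable from s
    return min(OUT_LABEL[s][v] + IN_LABEL[t][v] for v in common)

print(two_hop_distance("a", "c"))     # 5, via pivot e
```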
Although it is simple and straightforward to construct the 2HCLI for a graph with only the distance criterion, constructing a labeling index based on the 2HCLI for the CSD query is much more complex. That is because in the CSD query setting with two types of edge criteria, there might be multiple combinations of distance and cost for each pair of vertices in the labels $\Delta_{in}(v)$ and $\Delta_{out}(v)$. For ease of illustration, we also take as an example the graph, as well as the CSD query, in Fig. \[fig:graph\]. The corresponding 2HCLI is shown in Fig. \[fig:2HCLI\_2\], where the 2-tuple alongside each arrow represents the distance and cost from the starting vertex to the ending vertex. Note that in the shortest distance query in Fig. \[fig:2HCLI\_1\], the shortest distance from $a$ to $c$ via $e$ is unique. However, in the CSD query setting depicted in Fig. \[fig:2HCLI\_2\], there are four possible distances with different costs from $a$ to $c$ via $e$. Due to the existence of the cost criterion, the number of possible distances for each pair of vertices could increase dramatically in large-scale graphs, which results in a higher complexity in constructing the 2HCLI and calculating the answers to a CSD query.
![A 2HCLI example of the basic shortest distance query. Each entity $d$ in 2HCLI alongside the arrow indicates the shortest distance from the starting vertex to the ending vertex, e.g., the shortest distance from $a$ to $e$ is 3.[]{data-label="fig:2HCLI_1"}](figure/shen2.pdf){height="3cm"}
![A 2HCLI example of the exact CSD query. Each entity $(dis, cost)$ in the 2HCLI alongside the arrow indicates the distance and cost, respectively. The shortest distance from $a$ to $e$ with a cost constraint $\theta = 4$ is 5.[]{data-label="fig:2HCLI_2"}](figure/shen3.pdf){height="3cm"}
In order to improve the querying efficiency, we adopt a methodology that combines an *offline* filtering operation and an *online* filtering operation.
The offline filtering aims at reducing the construction complexity of the 2HCLI and decreasing the number of entries in the in-label and out-label sets as much as possible. We adopt the method proposed in [@Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries]. The entities in the 2HCLI are carefully selected in such a way that for any CSD query from $u$ to $v$ with a cost constraint $\theta$, the query can be answered correctly using only the 2HCLI. Since the construction of the 2HCLI should be independent of the cost constraint in specific CSD queries, we can use the definition of $\alpha$-*domination* to filter out redundant entries in the in- and out-label sets.
Taking for example the two entries from $e$ to $c$ with $\alpha=1.5$ in Fig. \[fig:2HCLI\_2\], the path $P_{ec}^1 = (e,b,c)$ with the $(dis,cost)$-tuple of (3,2) $\alpha$-*dominates* another path $P_{ec}^2=(e,c)$ with the $(dis,cost)$-tuple of (2,6). Therefore, the entry corresponding to the path $P_{ec}^2$ can be filtered out (as depicted by a dashed arrow), which helps to reduce the number of entries in $\Delta_{in}(c)$. The resulting 2HCLI is exhibited in Fig. \[fig:2HCLI\]. We refer the reader to [@Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries] for more construction details.
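The offline filtering rule can be phrased as a small pruning routine over plain $(distance, cost)$ tuples, as sketched below; the actual index construction of [@Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries] is considerably more involved, and ties (mutually dominating entries) would need extra care in a real implementation.

```python
def alpha_dominates(p1, p2, alpha):
    """p1 alpha-dominates p2 iff cost(p1) <= cost(p2) and dist(p1) <= alpha * dist(p2)."""
    d1, c1 = p1
    d2, c2 = p2
    return c1 <= c2 and d1 <= alpha * d2

def prune_entries(entries, alpha):
    """Keep only (dist, cost) entries not alpha-dominated by another entry.
    Note: identical (mutually dominating) entries would both be dropped here."""
    kept = []
    for i, p in enumerate(entries):
        if not any(alpha_dominates(q, p, alpha)
                   for j, q in enumerate(entries) if j != i):
            kept.append(p)
    return kept

# Entries from e to c in the example: (dist, cost) = (3, 2) and (2, 6).
print(prune_entries([(3, 2), (2, 6)], alpha=1.5))   # [(3, 2)]; (2, 6) is dominated
```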
The online filtering aims at selecting the possibly valid answers to a given CSD query, based only on the 2HCLI. For instance, given an $\alpha$-CSD query from $a$ to $c$ with a cost constraint $\theta = 4$, we can first find the common vertex set $V'$ between $\Delta_{out}(a)$ and $\Delta_{in}(c)$, and then return the minimum of $d(a, v) + d(v, c)$ over all $v \in V'$ satisfying $c(a, v) + c(v, c) \le \theta$. Since these comparisons must be conducted over the corresponding ciphertexts, an efficient online filtering approach will be devised in Section \[sec:Tree:Based:Ciphertexts:Comparison:Approach\].
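Over plaintext labels, the online filtering amounts to the following sketch: intersect the two labels, discard pivots whose total cost exceeds $\theta$, and minimize the total distance. The label values are again illustrative; Connor performs the same steps over ciphertexts, as described in the following sections.

```python
# Illustrative constrained labels: pivot -> (distance, cost).
OUT_LABEL = {"a": {"b": (2, 2), "e": (3, 1)}}   # Delta_out(a)
IN_LABEL  = {"c": {"b": (4, 2), "e": (5, 2)}}   # Delta_in(c)

def constrained_two_hop_distance(s, t, theta):
    """Shortest distance over common pivots whose total cost is <= theta."""
    best = None
    for v in OUT_LABEL[s].keys() & IN_LABEL[t].keys():
        d1, c1 = OUT_LABEL[s][v]
        d2, c2 = IN_LABEL[t][v]
        if c1 + c2 <= theta and (best is None or d1 + d2 < best):
            best = d1 + d2
    return best

print(constrained_two_hop_distance("a", "c", theta=4))   # 6, via pivot b
```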
![The resulting 2HCLI after performing the offline filtering on the original 2HCLI in Fig. \[fig:2HCLI\_2\]. Each entity $(u, d, c)$ in the 2HCLI indicates the vertex identifier, distance and cost, respectively. The answer to the approximate CSD query (i.e., the origin $a$, the destination $c$, $\alpha=1.5$, and $\theta=4$) is 6, which happens to be the answer to the exact CSD query.[]{data-label="fig:2HCLI"}](figure/shen4.pdf){height="2.6cm"}
Problem Formulation {#sec:PROBLEM DESCRUPTION}
===================
This section presents the system model and the security model of the privacy-preserving $\alpha$-CSD querying, as well as the preliminaries of the proposed graph encryption scheme.
System Model
------------
We adopt the general system model in the literature [@Structured:encryption; @Graph:Encryption:for:Approximate:Shortest:Distance:Queries] for the privacy-preserving $\alpha$-CSD querying, as illustrated in Fig. \[fig:system\_model\], which mainly involves two types of entities, namely a *user* and a *cloud server*.
The *user* constructs the secure searchable index for the graph and outsources the encrypted index along with the encrypted graph to the cloud server. When the user, say Alice, performs an $\alpha$-CSD query over her encrypted graph, she first generates a query token and then submits it to the cloud server. Upon receiving Alice’s query token, the cloud server executes the pre-designed query algorithms to match entries in the secure index with the token. Finally, the cloud server replies to the user with the answer to the $\alpha$-CSD query.
The graph encryption scheme is formally defined as follows.
(GRAPH ENCRYPTION). *A graph encryption scheme $\Pi =(KeyGen, Setup, Query)$ consists of three polynomial-time algorithms that work as follows:*
- **$(K, pk,sk) \gets KeyGen(\lambda)$***: is a probabilistic secret key generation algorithm that takes as input a security parameter $\lambda$ and outputs a secret key $K$ and a public/secret-key pair $(pk, sk)$.*
- **$\widetilde{\Delta} \gets Setup(\alpha, K, pk, sk, \phi, G)$***: is a graph encryption algorithm that takes as input an approximation ratio $\alpha$, a secret key $K$, a key pair $(pk, sk)$, an amplification factor $\phi$ and a graph $G$, and outputs a secure index $\widetilde{\Delta}$.*
- **$(dist_{q}, \bot) \gets Query((K, pk, sk, \phi, q), \widetilde{\Delta})$***: is a two-party protocol between a user that holds a secret key $K$, a key pair $(pk, sk)$, an amplification factor $\phi$ and a query $q$, and a cloud server that holds an encrypted graph index $\widetilde{\Delta}$. After executing this protocol, the user receives the distance $dist_{q}$ as the query result and the cloud server receives a terminator $\bot$.*
![The system model of privacy-preserving CSD query scheme.[]{data-label="fig:system_model"}](figure/shen5.pdf){height="2.5cm"}
Security Model
--------------
Graph encryption is a generalization of symmetric searchable encryption (SSE) [@Practical:techniques:for:searches:on:encrypted:data; @Searchable:Symmetric:Encryption:Improved:Definitions:and:Efficient:Constructions; @Dynamic:searchable:symmetric:encryption; @Dynamic:Searchable:Encryption:in:Very:Large:Databases:Data:Structures:and:Implementation; @Practical:Dynamic:Searchable:Encryption:with:Small:Leakage]. Thus, we adopt the security definition of SSE settings in our graph encryption scheme. This security definition is consistent with the latest proposed security definition in [@Searchable:Symmetric:Encryption:Improved:Definitions:and:Efficient:Constructions; @2011:Searchable:symmetric:encryption:Improved:definitions:and:efficient:constructions; @Structured:encryption], which is also known as CQA2-security (i.e., the chosen-query attack security). Now we present the formal CQA2-security definition as follows.
(CQA2-security model). *Let $\Pi =(KeyGen, Setup, Query)$ be a graph encryption scheme and consider the following probabilistic experiments where $\mathcal{A}$ is a semi-honest adversary, $\mathcal{S}$ is a simulator, and $\mathcal{L}_{Setup}$ and $\mathcal{L}_{Query}$ are (stateful) leakage functions.*
**Real$_{\Pi, \mathcal{A}}(\lambda)$**:
- $\mathcal{A}$ outputs a graph $G$, an approximation ratio $\alpha$ and an amplification factor $\phi$.
- The challenger begins by running $KeyGen(1^{\lambda})$ to generate a secret key $K$ and a public/secret-key pair $(pk, sk)$, and then computes the encrypted index $\widetilde{\Delta}$ by $Setup(\alpha, K, pk, sk, \phi, G)$. The challenger sends the encrypted index $\widetilde{\Delta}$ to $\mathcal{A}$.
- $\mathcal{A}$ makes a polynomial number of adaptive queries, and for each query $q$, $\mathcal{A}$ and the challenger execute $Query((K, pk, sk, \Phi, q), \widetilde{\Delta})$.
- $\mathcal{A}$ computes a bit $b \in \{0,1\}$ as the output of the experiment.
**Ideal$_{\Pi, \mathcal{A}, \mathcal{S}}(\lambda)$**:
- $\mathcal{A}$ outputs a graph $G$, an approximation ratio $\alpha$ and an amplification factor $\phi$.
- Given the leakage function $\mathcal{L}_{Setup}(G)$, $\mathcal{S}$ simulates a secure graph index $\widetilde{\Delta}^{*}$ and sends it to $\mathcal{A}$.
- $\mathcal{A}$ makes a polynomial number of adaptive queries. For each query $q$, $\mathcal{S}$ is given the leakage function $\mathcal{L}_{Query}(G, Q)$, and $\mathcal{A}$ and $\mathcal{S}$ execute a simulation of $Query$, where $\mathcal{A}$ is playing the role of the cloud server and $\mathcal{S}$ is playing the role of the user.
- $\mathcal{A}$ computes a bit $b \in \{0,1\}$ as the output of the experiment.
$\qquad$ We say that the graph encryption scheme $\Pi =(KeyGen, Setup, Query)$ is $(\mathcal{L}_{Setup}, \mathcal{L}_{Query})$-secure against the adaptive chosen-query attack, if for all PPT adversaries $\mathcal{A}$, there exists a PPT simulator $\mathcal{S}$ such that $$\begin{aligned}
|\textbf{Pr}[{\textbf{Real}}_{\Pi, \mathcal{A}}(\lambda) = 1] - \textbf{Pr}[{\textbf{Ideal}}_{\Pi, \mathcal{A}, \mathcal{S}}(\lambda) = 1]| \le negl(\lambda),\end{aligned}$$ where $negl(\lambda)$ is a negligible function.
Preliminaries
-------------
Now we briefly introduce an encryption technique employed in our design, i.e., the order-revealing encryption.
*Order-revealing encryption (ORE)* is a generalization of the order-preserving encryption (OPE) scheme, but provides stronger security guarantees. As pointed out by Naveed et al. [@Inference:Attacks:on:Property:Preserving:Encrypted:Databases], OPE-encrypted databases are extremely vulnerable to *inference attacks*. To address this limitation, the ORE scheme has been proposed [@Order:Revealing:Encryption:New:Constructions:Applications:and:Lower:Bounds; @Practical:Order:Revealing:Encryption:with:Limited:Leakage], which is a tuple of three algorithms $\Pi=(ORE.Setup, ORE.Encrypt, ORE.Compare)$ described as follows:
- ORE.Setup($1^{\lambda}$)$\to sk$: Input a security parameter $\lambda$, output the secret key $sk$.
- ORE.Encrypt($sk, m$)$\to ct$: Input a secret key $sk$ and a message $m$, output a ciphertext $ct$.
- ORE.Compare($ct_{1}, ct_{2}$)$\to b$: Input two ciphertexts $ct_{1}$ and $ct_{2}$, output a bit $b \in \{0,1\}$, which indicates the greater-than or less-than relationship of the corresponding plaintexts $m_{1}$ and $m_{2}$ (a toy instantiation is sketched below).
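For intuition, the toy Python sketch below gives a bit-by-bit order-revealing encoding in the spirit of the practical ORE construction cited above; it is a simplified illustration for small integers, not the exact scheme used in our implementation, and it reveals the position of the first differing bit.

```python
import hmac, hashlib

def _prf3(key: bytes, data: bytes) -> int:
    """Toy PRF: HMAC-SHA256 reduced mod 3."""
    return int.from_bytes(hmac.new(key, data, hashlib.sha256).digest(), "big") % 3

def ore_encrypt(key: bytes, m: int, nbits: int = 32) -> list:
    """Encode each bit of m as PRF(position, prefix) + bit (mod 3), MSB first."""
    bits = [(m >> (nbits - 1 - i)) & 1 for i in range(nbits)]
    ct = []
    for i, b in enumerate(bits):
        prefix = "".join(map(str, bits[:i]))
        ct.append((_prf3(key, f"{i}|{prefix}".encode()) + b) % 3)
    return ct

def ore_compare(ct1: list, ct2: list) -> int:
    """Return -1, 0 or 1 according to whether m1 < m2, m1 == m2 or m1 > m2."""
    for u1, u2 in zip(ct1, ct2):
        if u1 != u2:
            # At the first differing position the PRF values coincide,
            # so the mod-3 difference equals the difference of the bits.
            return -1 if (u2 - u1) % 3 == 1 else 1
    return 0

key = b"demo-key"
assert ore_compare(ore_encrypt(key, 5), ore_encrypt(key, 9)) == -1
assert ore_compare(ore_encrypt(key, 7), ore_encrypt(key, 7)) == 0
assert ore_compare(ore_encrypt(key, 12), ore_encrypt(key, 3)) == 1
```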
Construction of `Connor` {#sec:main scheme}
========================
In this section, we introduce our graph encryption scheme `Connor` for the privacy-preserving $\alpha$-CSD querying.
Construction Overview
---------------------
The construction process is based on two particular pseudo-random functions $h$ and $g$, and a somewhat homomorphic encryption (SWHE) scheme. In this paper, we adopt the concrete instantiation of a SWHE scheme in the literature [@boneh2005evaluating]. The parameters of $h$ and $g$ are given in Equation \[eq:parameters\],
\[eq:parameters\] $$\begin{aligned}
& h: {\{0,1\}}^{\lambda} \times {\{0,1\}}^{*} \to {\{0,1\}}^{\lambda} \label{eq:parameters-1} \\
& g: {\{0,1\}}^{\lambda} \times {\{0,1\}}^{*} \to {\{0,1\}}^{\lambda + z + k} \label{eq:parameters-2}\end{aligned}$$
where $\lambda$ is the security parameter, and $k$ and $z$ are the output lengths of the ORE and SWHE encryptions, respectively.
We start with a straightforward construction $GraphEnc_{1}=(KeyGen, Setup, Query)$, which works as follows:
- **KeyGen:** Given the security parameter $\lambda$, the *user* randomly generates a secret key $K$ and a pair of public and secret keys $(pk, sk)$ for SWHE.
- **Setup:** Given an original graph $G$, an approximation ratio $\alpha$, and an amplification factor $\phi$, the *user* obtains the encrypted graph index by using Algorithm \[Algorithm:A straightforward approach:setup\]. The 2HCLI $\Delta=\{\Delta_{out}, \Delta_{in}\}$ of $G$ can be generated by the method described in Section \[sec:2HCLI\].
Let $\mathcal{B}$ be the maximum distance over all the sketches and $N=2\mathcal{B}+1$. Motivated by the literature [@Graph:Encryption:for:Approximate:Shortest:Distance:Queries], each distance $d_{u,v}$ is encrypted as $2^{N-d_{u,v}}$ by the SWHE to protect its real value (line 8). Considering that $2^x +2^y$ is bounded by $2^{\max(x,y)+1}$, this encoding allows the minimum total distance over a set of candidate distance pairs to be recovered from a single homomorphic sum (see the sketch after this list).
Each cost $c_{u,v}$, multiplied by the amplification factor $\phi$, is encrypted by the ORE encryption (line 9). $\phi$ is a big integer and should be carefully selected to enlarge the plaintext space of $c_{u,v}$. In practice, the product of $\phi$ and the maximum cost value over all the sketches should be sufficiently large (e.g., at least $2^{80}$), which is used to provide a sufficient randomness to the inputs. Since $\phi$ is kept private by the *user*, the *cloud server* cannot learn the real values of $c_{u,v}$.
- **Query:** To perform an $\alpha$-CSD query with an origin $s$, a destination $t$, and a cost constraint $\theta$, the *user* generates query tokens $\tau_{s}=h(K, s||1)$ and $\tau_{t}=h(K, t||2)$, and sends them to the *cloud server*. The *cloud server* obtains $I_{out}[\tau_{s}]$ and $I_{in}[\tau_{t}]$ from the index. For each encrypted vertex identifier $v$ that appears in both $I_{out}[\tau_{s}]$ and $I_{in}[\tau_{t}]$, the *cloud server* performs a cost constraint filtering operation (which will be described in details in Section \[sec:Tree:Based:Ciphertexts:Comparison:Approach\]), and adds each pair $(D_{s,v}, D_{v,t})$ which satisfies the cost constraint $\phi \theta$ into a candidate set $Y$. Note that the cost constraint is multiplied by $\phi$ because we encrypt the cost $\phi c_{u,v}$, instead of $c_{u,v}$.
Then, the *cloud server* directly obtains $d = \sum_{i=1}^{|Y|}d_{i}$, where $d_{i} = $ SWHE.Eval$(\times, D_{s,v}^{i}, D_{v,t}^{i})$ for each pair $(D_{s,v}^{i}, D_{v,t}^{i})$ in $Y$. The correctness of the above calculation follows from the homomorphic properties of SWHE. We refer the reader to [@Graph:Encryption:for:Approximate:Shortest:Distance:Queries] for more details.
Finally, the *cloud server* returns $d$ to the *user*, who, in turn, obtains the answer to the $\alpha$-CSD query by decrypting $d$ with its secret key $sk$.
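The role of the $2^{N-d_{u,v}}$ encoding can be seen with plain integers: multiplying two encoded distances adds their exponents, and a sum over all candidate pairs is dominated by the pair with the smallest total distance, so the minimum can be read off from the bit length of the (decrypted) sum. The sketch below works over plaintexts; in Connor the same operations are carried out homomorphically under SWHE.

```python
def encode(d: int, N: int) -> int:
    """Encode a distance d as 2^(N - d); smaller d gives a larger encoding."""
    return 1 << (N - d)

def min_total_distance(pairs, N: int) -> int:
    """Recover (approximately) min(d1 + d2) from a single aggregated sum.

    Each product encode(d1) * encode(d2) equals 2^(2N - (d1 + d2)), so the
    sum is dominated by the smallest d1 + d2.  With k candidate pairs the
    result can undershoot the true minimum by up to floor(log2(k)).
    """
    total = sum(encode(d1, N) * encode(d2, N) for d1, d2 in pairs)
    return 2 * N - (total.bit_length() - 1)   # floor(log2(total))

# Candidate pairs (d(s,v), d(v,t)), one per common vertex v.
print(min_total_distance([(3, 2), (2, 4)], N=11))   # 5 = min(3+2, 2+4)
```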
Note that this straightforward approach not only correctly answers the $\alpha$-CSD query over encrypted graphs, but also protects the vertex identifier, distance, and cost information.
However, the encrypted graph index obtained from Algorithm \[Algorithm:A straightforward approach:setup\] still leaks information, even before any query is performed. On the one hand, it reveals the length of each encrypted sketch, i.e., of $I_{out}[u]$ and $I_{in}[u]$, as well as the order information of the ORE-encrypted costs in all sketches. On the other hand, it also discloses the number of common vertices between $I_{out}[u]$ and $I_{in}[v]$, which indicates the number of vertices that connect $u$ to $v$. In particular, if the *cloud server* observes that there is no common vertex between $I_{out}[u]$ and $I_{in}[v]$, it learns that $u$ cannot reach $v$.
**Input:** A secret key $K$, a key pair $(pk, sk)$, an approximation ratio $\alpha$, an amplification factor $\phi$, and an original graph $G$. **Output:** The encrypted graph index $\widetilde{\Delta}$.
Generate the *2-hop labeling index* $\Delta=\{\Delta_{out}, \Delta_{in}\}$ from $G$, and initialize two dictionaries $I_{out}$ and $I_{in}$. Let $\mathcal{B}$ be the maximum distance over all the sketches and set $N = 2\mathcal{B} +1$.
For each vertex $u$, set $T_{out,u}=h(K, u||1)$ and $T_{in,u}=h(K, u||2)$. For each entry $(v, d_{u,v}, c_{u,v})$ in $\Delta_{out}(u)$: compute $V = h(K, v||0)$, $D_{u,v}=$ SWHE.Enc$(pk, 2^{N-d_{u,v}})$, and $C_{u,v}=$ ORE.Enc$(K, \phi c_{u,v})$, and insert ($V,D_{u,v}, C_{u,v}$) into the dictionary $I_{out}[T_{out,u}]$. Repeat the above procedure for each sketch in $\Delta_{in}(u)$ and add entries into $I_{in}[T_{in,u}]$.
**return** $\widetilde{\Delta} = \{ I_{out}, I_{in} \}$ as the encrypted graph index.
Privacy-preserving $\alpha$-CSD Querying
----------------------------------------
In order to enhance protection of sensitive information, we construct a privacy-preserving $\alpha$-CSD querying scheme $GraphEnc_{2}=(KeyGen, Setup, Query)$, where the key generation procedure is the same as in $GraphEnc_{1}$, with improved index construction and CSD query procedures as exhibited in Algorithms \[Algorithm:GraphEnc2:setup\] and \[Algorithm:GraphEnc2:Query\], respectively.
**Input:** A secret key $K$, a key pair $(pk,sk)$, an approximation ratio $\alpha$, an amplification factor $\phi$, and an original graph $G$. **Output:** The encrypted graph index $\widetilde{\Delta}$.
Generate the 2HCLI $\Delta=\{\Delta_{out}, \Delta_{in}\}$ of $G$, and initialize two dictionaries $I_{out}$ and $I_{in}$. Let $\mathcal{B}$ be the maximum distance over the sketches and set $N = 2 \mathcal{B} +1$.
For each vertex $u$, set $S_{out,u}=h(K, u||1)$, $T_{out,u}=h(K, u||2)$, $S_{in,u}=h(K, u||3)$, and $T_{in,u}=h(K, u||4)$, and initialize a counter $\omega=0$.
For each entry $(v, d_{u,v}, c_{u,v})$ in $\Delta_{out}(u)$: compute $V=h(K, v||0)$, $D_{u,v} =$ SWHE.Enc$(pk, 2^{N-d_{u,v}})$, and $C_{u,v}=$ ORE.Enc$(K, \phi c_{u,v})$; set $T_{out,u,v} = h(T_{out,u}, \omega)$ and $S_{out,u,v} = g(S_{out,u}, \omega)$; compute $\Psi_{u,v}=S_{out,u,v} \oplus (V || D_{u,v} || C_{u,v})$; set $I_{out}[T_{out,u,v}] = \Psi_{u,v}$; and set $\omega=\omega+1$.
Repeat the above procedure for each sketch in $\Delta_{in}(u)$ and obtain $I_{in}[T_{in,u,v}]$, except that: (i) set $T_{in,u,v} = h(T_{in,u}, \omega)$ and $S_{in,u,v} = g(S_{in,u}, \omega)$, and (ii) compute $\Psi_{u,v}=S_{in,u,v} \oplus (V || D_{u,v} || C_{u,v})$.
**return** $\widetilde{\Delta} = \{ I_{out}, I_{in} \}$ as the encrypted graph index.
**Input:** The *user*’s input consists of the secret key $K$, the key pair $(pk, sk)$, the amplification factor $\phi$, and the query $q=(s, t, \theta)$; the *cloud server*’s input is the encrypted index $\widetilde{\Delta}$. **Output:** The *user*’s output is $dist_{q}$ and the *cloud server*’s output is $\bot$.
*user* generates $S_{out,s}=h(K, s||1)$, $T_{out,s}=h(K, s||2)$, $S_{in,t}=h(K, t||3)$ and $T_{in,t}=h(K, t||4)$. *user* constructs a cost constraint tree $T_{\theta}$ based on $\phi * \theta$ using secret $K$ as described in Section \[sec:Tree:Based:Ciphertexts:Comparison:Approach\]. *user* sends $\tau_{s,t} = (S_{out,s}, T_{out,s}, S_{in,t}, T_{in,t}, T_{\theta})$ to *cloud server*.
*cloud server* parses $\tau_{s,t}$ as $(S_{out,s}, T_{out,s}, S_{in,t}, T_{in,t}, T_{\theta})$.
The *cloud server* initializes a set $L_{s}$ and a counter $\omega=0$. While the key $T_{out,s,v} = h(T_{out,s}, \omega)$ exists in $I_{out}$, the *cloud server* computes $S_{out,s,v} = g(S_{out,s}, \omega)$, recovers $(V || D_{s,v} || C_{s,v}) = \Psi_{s,v} \oplus S_{out,s,v}$, adds $(V, D_{s,v}, C_{s,v})$ to $L_{s}$, and sets $\omega=\omega+1$.
The *cloud server* then initializes a set $L_{t}$ and a counter $\omega=0$. While the key $T_{in,v,t} = h(T_{in,t}, \omega)$ exists in $I_{in}$, the *cloud server* computes $S_{in,v,t} = g(S_{in,t}, \omega)$, recovers $(V || D_{v,t} || C_{v,t}) = \Psi_{v,t} \oplus S_{in,v,t}$, adds $(V, D_{v,t}, C_{v,t})$ to $L_{t}$, and sets $\omega=\omega+1$.
For each encrypted vertex identifier $v$ that appears in both $L_{s}$ and $L_{t}$, the *cloud server* performs the cost constraint filtering operation through Algorithm \[Algorithm:tree based ciphertexts comparison\], and adds each pair $(D_{s,v}, D_{v,t})$ that satisfies the cost constraint $\phi \theta$ into a set $Y$. Pairs for which Algorithm \[Algorithm:tree based ciphertexts comparison\] returns *uncertainty* are also added to $Y$.
For each pair in $Y$, the *cloud server* first computes $d_{i} = $ SWHE.Eval$(\times, D_{s,v}^{i}, D_{v,t}^{i})$, and then computes $d = \sum_{i=1}^{|Y|}d_{i}$.
*cloud server* returns $d$ to the *user*. *user* decrypts $d$ with $sk$.
**return** Decrypted value of $d$ as $dist_{q}$.
The *Setup* for $GraphEnc_{2}$ works as follows. The *user* first builds the 2HCLI $\Delta$ of graph $G$, and then encrypts the sketches associated with each vertex $u$ (i.e., $\Delta_{out}(u)$ and $\Delta_{in}(u)$), as described in lines 2-17.
Note that, in order to prevent the leakage of the sketch size in the previous straightforward approach, we split each encrypted sketch $I_{out}(u)$ and $I_{in}(u)$ and store each entry in the dictionary separately, with a size of one. More precisely, we utilize a counter $\omega$ and generate the unique $T_{out,u,v}$ and $S_{out,u,v}$ for each entity in $\Delta_{out}(u)$ (line 11). Similarly, the unique $T_{in,u,v}$ and $S_{in,u,v}$ for each entity in $\Delta_{in}(u)$ can be generated (line 16). $T_{out,u,v}$ (or $T_{in,u,v}$) indicates the position at which this entity is stored in $I_{out}$ (or $I_{in}$), which ensures that each position in the dictionary $I_{out}$ (or $I_{in}$) holds only one entity.
$S_{out,u,v}$ (or $S_{in,u,v}$) is used to make an XOR operation with $(V || D_{u,v} || C_{u,v})$. Since $S_{out,u,v}$ (or $S_{in,u,v}$) is different for each sketch, the XOR operation makes the resulting $\Psi_{u,v}$ indistinguishable, which guarantees that the *static* encrypted graph index $\widetilde{\Delta}$ reveals neither the number of common vertices between $I_{out}(u)$ and $I_{in}(v)$, nor the order information of costs.
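The per-entry token and mask derivation can be sketched with standard primitives, with HMAC standing in for the PRF $h$, a SHAKE-based expansion standing in for $g$, and opaque byte strings standing in for the SWHE and ORE ciphertexts. This is an illustrative reconstruction of the bookkeeping only, not the exact key-derivation layout of Connor.

```python
import hmac, hashlib

def h(key: bytes, data: bytes) -> bytes:
    """Stand-in for the PRF h (lambda-bit output)."""
    return hmac.new(key, data, hashlib.sha256).digest()

def g(key: bytes, data: bytes, out_len: int) -> bytes:
    """Stand-in for the PRF g, expanded to the length of (V || D || C)."""
    return hashlib.shake_256(key + b"|" + data).digest(out_len)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_out_sketch(K: bytes, u: str, sketch):
    """sketch: list of (v, D_bytes, C_bytes) with D, C assumed already
    SWHE/ORE encrypted elsewhere; here they are opaque placeholders."""
    S_out = h(K, f"{u}||1".encode())
    T_out = h(K, f"{u}||2".encode())
    I_out = {}
    for omega, (v, D, C) in enumerate(sketch):
        V = h(K, f"{v}||0".encode())
        entry = V + D + C
        mask = g(S_out, str(omega).encode(), len(entry))
        I_out[h(T_out, str(omega).encode())] = xor(entry, mask)   # Psi_{u,v}
    return I_out

index = encrypt_out_sketch(b"master-key", "a",
                           [("b", b"D" * 16, b"C" * 16), ("e", b"D" * 16, b"C" * 16)])
print(len(index), "masked entries stored under pseudorandom keys")
```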
The *Query* in Algorithm \[Algorithm:GraphEnc2:Query\] works as follows. Assume that the *user* asks for the shortest distance between $s$ and $t$ whose total cost does not exceed $\theta$. She first generates the query token $\tau_{s,t}$ and sends it to the *cloud server* (lines 1-3). Upon receiving the token $\tau_{s,t}$, the *cloud server* searches the index and obtains $L_{s}$ and $L_{t}$ (lines 5-22). That is, the *cloud server* iteratively checks whether the dictionary $I_{out}$ ($I_{in}$) contains the key $T_{out,s,v}$ ($T_{in,v,t}$); if so, it adds the corresponding entity to the set $L_{s}$ ($L_{t}$).
Once $L_{s}$ and $L_{t}$ are obtained, the *cloud server* performs the cost constraint filtering (line 23) and computes $d$ (line 24), in the same way as described for the straightforward approach. Finally, the *cloud server* returns $d$, and the *user* obtains the final answer by decrypting it with $sk$.
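On the query side, the user sends only the per-vertex tokens, and the server walks the counter until a derived key is absent from the dictionary, unmasking one entry per step. Continuing the sketch above (with the same hypothetical stand-ins for $h$ and $g$), the token generation and the server-side retrieval of $L_{s}$ could look as follows.

```python
import hmac, hashlib

def h(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def g(key: bytes, data: bytes, out_len: int) -> bytes:
    return hashlib.shake_256(key + b"|" + data).digest(out_len)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_token(K: bytes, s: str, t: str):
    """User side: tokens for the origin's out-label and the destination's in-label."""
    return (h(K, f"{s}||1".encode()), h(K, f"{s}||2".encode()),
            h(K, f"{t}||3".encode()), h(K, f"{t}||4".encode()))

def retrieve_out_entries(I_out: dict, S_out: bytes, T_out: bytes, entry_len: int):
    """Server side: walk the counter and unmask entries until a key is missing."""
    entries, omega = [], 0
    while True:
        key = h(T_out, str(omega).encode())
        if key not in I_out:
            break
        psi = I_out[key]
        entries.append(xor(psi, g(S_out, str(omega).encode(), entry_len)))  # V || D || C
        omega += 1
    return entries

# Tiny demo: one masked entry for vertex "a" -> pivot "b", then retrieval.
K = b"master-key"
S_out, T_out, _, _ = make_token(K, "a", "c")
entry = h(K, b"b||0") + b"D" * 16 + b"C" * 16          # V || D || C (placeholders)
I_out = {h(T_out, b"0"): xor(entry, g(S_out, b"0", len(entry)))}
print(retrieve_out_entries(I_out, S_out, T_out, len(entry))[0] == entry)  # True
```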
Tree-Based Ciphertexts Comparison Approach {#sec:Tree:Based:Ciphertexts:Comparison:Approach}
==========================================
This section introduces a tree-based ciphertexts comparison approach, which is used for cost constraint filtering in the graph encryption scheme described in Section \[sec:main scheme\].
Scenarios
---------
Assume that there is a *user* (i.e., $\mathcal{U}$) and a *server* (i.e., $\mathcal{R}$). $\mathcal{U}$ has many integers which are encrypted by a cryptographic algorithm and then outsourced to $\mathcal{R}$. Now, $\mathcal{U}$ wants to ask $\mathcal{R}$ for integer pairs, e.g., ($x$, $y$), whose sum does not exceed $\theta$. Note that the plaintexts of $x$, $y$ and $\theta$ must not be disclosed to $\mathcal{R}$; only the greater-than, equality, or less-than relationship may be revealed. A naive approach is to download all the integers, calculate the summation locally, and choose the integer pairs satisfying the constraint. This method, however, defeats the purpose of offloading the computation to the cloud. Hence, it is desirable to have a practical solution to this problem.
Note that this scenario is different from the well-known SMC setting. In SMC [@SMC_Ben2016; @SMC_Ben2008FairplayMP], a set of (two or more) parties with private inputs wish to compute a function of their inputs while revealing nothing but the result of the function, which is used in many practical applications, such as exchange markets. SMC is a collaborative computing problem that addresses privacy preservation among a group of mutually untrusted participants. In our setting, by contrast, the ciphertexts of all pairs ($x$, $y$) and the cost constraint $\theta$ are outsourced to the cloud server, which is responsible for the inequality tests. Furthermore, we may reveal the relationship between the sum of two ciphertexts and another ciphertext to the *server*, which is referred to as *controlled disclosure* in the literature [@Structured:encryption].
It seems that we might leverage the homomorphic encryption technique, since it supports a sum operation of calculating $x + y$. Nevertheless, as the homomorphic encryption is probabilistic, we are unable to determine the relationship between $x + y$ and $\theta$ over their ciphertexts.
Main Idea
---------
The main idea of the tree-based ciphertexts comparison protocol is to encode an integer with the ORE primitive. To the best of our knowledge, none of the existing approaches can support ORE and homomorphism properties simultaneously. Hence, we design a novel method to address this problem, which is motivated by the following facts.
If we want to compare $x+y$ with $\theta$, we can compare $x$ with $\theta /2$ and $y$ with $\theta/2$, respectively. This yields four possible cases, corresponding to the combinations of the two outcomes. If $x > \theta /2$ ($x \le \theta /2$) and $y > \theta /2$ ($y \le \theta /2$), we know that $x+y > \theta$ ($x+y \le \theta $). In the remaining two cases, i.e., $x > \theta /2$ and $y \le \theta /2$, or $x \le \theta /2$ and $y > \theta /2$, we cannot reach a deterministic result. At this point, we can halve the intervals again and compare each of $x$ and $y$ with the midpoint of its current interval (i.e., $\theta / 4$ or $3\theta / 4$).
By iteratively performing such an operation, we can determine the relationship between $x+y$ and $\theta$ with an increasing probability. Due to the ORE property, it is easy to perform the above operations over ciphertexts. Next, we will show how to implement this idea efficiently by utilizing a tree structure.
Details of Protocol
-------------------
To implement the comparison of $x+y$ and $\theta$ over their ciphertexts, we construct a *cost constraint tree*, whose nodes represent specific values that are related to $\theta$. For clarity, we define $E(m)$ as the ORE ciphertext of $m$.
An example of the tree structure is depicted in Fig. \[fig:tree\_model\]. For each node, we assign 0 to its left child path, while 1 to the right child path. If an integer is not greater than the value of this node, we take the left child path for further comparison; otherwise, we take the right child path. Thus, for any path from the root node to a leaf node, we can obtain a path code, which is an effective representation of the comparison procedure. For instance, an incoming integer $5\theta/16$ would traverse Nodes $E(\theta/2)$, $E(\theta/4)$, and $E(3\theta/8)$, and thereby end with a path code of 010. We define the length (i.e., the number of bits) of a path code as $\beta$. Note that $\beta$ is actually equal to the depth of the tree which is denoted by $d_\theta$.
Now the relationship between $x + y$ and $\theta$ can be determined as follows. We first get the ORE ciphertexts of $x$ and $y$, as well as their path codes $c_{x}$ and $c_{y}$ by traversing the tree separately. When computing $c_{x} + c_{y}$, if an overflow occurs (i.e., $c_{x} + c_{y} \ge 2^{\beta}$), we know that $x+y > \theta$ with confidence. If $c_{x} + c_{y} \le 2^{\beta}-2$, we also know that $x+y \le \theta$ with confidence. Otherwise, we are unable to determine the relationship and end up with an *uncertainty*. We summarize this procedure in Algorithm \[Algorithm:tree based ciphertexts comparison\].
![An example of the cost constraint tree with a depth of 3, where circles represent nodes. The boxes in the dashed rectangle indicate path codes for all possible comparison results. Note that these boxes are not a part of the tree. []{data-label="fig:tree_model"}](figure/shen6.pdf){height="4cm"}
**Input:** Two ORE ciphertexts $E(x)$, $E(y)$ and a cost constraint tree whose depth is $d_{\theta}$. **Output:** The relationship between $x+y$ and $\theta$.
Initialize a counter $\omega=1$ and two empty strings $c_x$ and $c_y$.
While $\omega \le d_{\theta}$: visit the $\omega$-th level of the tree with $E(x)$ and concatenate $c_x$ with the corresponding $0$ or $1$; visit the $\omega$-th level of the tree with $E(y)$ and concatenate $c_y$ with the corresponding $0$ or $1$; set $\omega = \omega + 1$.
If $c_x + c_y \ge 2^{d_{\theta}}$, **return** $>$. If $c_x + c_y \le 2^{d_{\theta}}-2$, **return** $\le$.
Otherwise, **return** [*uncertainty*]{}.
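Over plaintexts, the traversal and the decision rule of this algorithm reduce to a few lines of Python, as sketched below; in Connor the per-level comparisons are performed on ORE ciphertexts against the encrypted node values of $T_{\theta}$ (and the costs are integers scaled by $\phi$), but the control flow is identical.

```python
def path_code(x: float, theta: float, depth: int) -> int:
    """Walk the cost constraint tree: compare x against theta/2, then the
    midpoint of the surviving interval, for `depth` levels.  Returns the
    path code as an integer in [0, 2**depth - 1]."""
    lo, hi, code = 0.0, theta, 0
    for _ in range(depth):
        mid = (lo + hi) / 2
        if x <= mid:                       # go left: append bit 0
            code, hi = code << 1, mid
        else:                              # go right: append bit 1
            code, lo = (code << 1) | 1, mid
    return code

def compare_sum(x: float, y: float, theta: float, depth: int) -> str:
    cx, cy = path_code(x, theta, depth), path_code(y, theta, depth)
    if cx + cy >= 2 ** depth:
        return ">"                         # x + y >  theta, with certainty
    if cx + cy <= 2 ** depth - 2:
        return "<="                        # x + y <= theta, with certainty
    return "uncertainty"

print(path_code(5 / 16, 1.0, 3))           # 0b010 = 2, as in the example above
print(compare_sum(0.30, 0.30, 1.0, 6))     # "<="
print(compare_sum(0.70, 0.40, 1.0, 6))     # ">"
print(compare_sum(0.49, 0.51, 1.0, 6))     # boundary case -> "uncertainty"
```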
**Discussion.** Observe that each additional level of the cost constraint tree halves the probability that the relationship between $x+y$ and $\theta$ remains undetermined. We denote the probability of *uncertainty* as
Pr$[\neg certainty]={(\frac{1}{2})}^{\beta}$,
where $\beta$ is the length of the path code. The probability of certainty is therefore
Pr$[certainty] = 1-$ Pr$[\neg certainty]= 1 - {(\frac{1}{2})}^{\beta}$.
When the tree depth is 6 (i.e., $\beta=6$), the probability of certainty reaches about 0.9844.
Another observation is that the comparison procedure reveals order information between $x$ (or $y$) and $\theta$. Thus, the *server* can infer the interval that $x$ belongs to with a precision of $2^{-\beta}$. To prevent the *server* from inferring the real value of $x$, in `Connor` the *user* randomly picks a large integer $\phi$ that is applied to $x$, $y$, and $\theta$ simultaneously, which significantly enlarges the plaintext and ciphertext spaces (e.g., to $2^{128}$). The value of $\beta$ is generally a small integer (e.g., 6 in our implementation) that is determined by the *user*, and both $\phi$ and $\theta$ are kept secret by the *user*. Therefore, the *server* cannot infer the real value of $x$ (or $y$) from the order relationship among ciphertexts. We will formally analyze the leakage functions and security issues in the next section.
Complexity and Security Analyses {#sec:security}
================================
This section presents the complexity and security analyses on the proposed graph encryption scheme `Connor`.
Complexity Analysis
-------------------
`Connor` mainly consists of the *Setup* and *Query* algorithms, as described in Algorithms \[Algorithm:GraphEnc2:setup\] and \[Algorithm:GraphEnc2:Query\].
The dominant component in determining the complexity of the *Setup* algorithm is the encryption of the plain 2HCLI generated from a graph $G$. Let $\mu$ be the total sketch size over all vertices in $G$; then the time complexity and space complexity are both $\mathcal{O}(n\mu)$, where $n$ is the number of vertices in $G$.
The *Query* algorithm consists of a query token generation process on the *user* side and a CSD query process on the cloud *server* side. Let $\eta$ be the maximum size of the sketch associated with each vertex in $G$. The complexity of the query token generation process is mainly determined by the construction of a cost constraint tree, whose time complexity and space complexity are both $\mathcal{O}(2^{d_{\theta}})$. For the CSD querying process, the time complexity of getting $L_{s}$ and $L_{t}$, performing cost constraint filtering, and performing distance computation are $\mathcal{O}(\eta)$, $\mathcal{O}(\eta d_{\theta})$, and $\mathcal{O}({\eta})$, respectively. The space complexity of the above three components are $\mathcal{O}(\eta)$, $\mathcal{O}(\eta + 2^{d_{\theta}})$, and $\mathcal{O}(\eta)$, respectively. Therefore, the total time complexity and space complexity of the CSD querying process are $\mathcal{O}({\eta} d_{\theta})$ and $\mathcal{O}(\eta + 2^{d_{\theta}})$, respectively.
Security Analysis
-----------------
We now present the security analysis on `Connor`. For clarity, we first discuss the leakage functions, and then prove that `Connor` is secure under the CQA2-security model.
**Setup Leakage.** The leakage function $\mathcal{L}_{Setup}$ of our construction reveals the information that can be deduced from the secure 2HCLI $\widetilde{\Delta}$ of graph $G$, namely the total number of vertices $n$ in the graph, the maximum distance over all sketches $\mathcal{B} = \max_{u \in V}\, \max_{(v, d_{u,v},c_{u,v}) \in \Delta_{out} \cup \Delta_{in}} d_{u,v}$, and the size of $\widetilde{\Delta}$. More precisely, the size of $\widetilde{\Delta}$ consists of the total numbers of sketch entries in $I_{out}$ and $I_{in}$, denoted by $\Omega_{out}$ and $\Omega_{in}$, respectively. Thus, the leakage function is $\mathcal{L}_{Setup} = (n, \mathcal{B}, \Omega_{out}, \Omega_{in})$.
Note that the order relationships between pairwise costs, and between each cost and the cost constraint, are not included in $\mathcal{L}_{Setup}$: after encrypting each sketch entry we XOR it with a unique integer value, which makes the entries in the sketches indistinguishable from one another.
**Query Leakage.** The leakage function $\mathcal{L}_{Query}$ of our construction consists of the query pattern leakage, the sketch pattern leakage, and the cost pattern leakage. Intuitively, the query pattern leakage reveals whether a query has appeared before. The sketch pattern leakage reveals the sketch associated with a queried vertex, the common vertices between two different sketches, and the size of the sketches of queried vertices. The cost pattern leakage reveals 1) the order relationship among costs, and 2) the order relationship between costs and the cost constraint during the query procedure. We formalize these leakage functions as follows.
(QUERY PATTERN LEAKAGE). Let $\textbf{\emph{q}}=(q_{1}, q_{2}, \dots, q_{m})$ be a non-empty sequence of queries, where each query $q_{i}$ specifies a tuple ($u_{i}$, $v_{i}$, $\theta_{i}$). For any two queries $q_{i}$ and $q_{j}$, define $Sim(q_{i}, q_{j})=(u_{i}=u_{j}, v_{i}=v_{j}, \theta_{i}=\theta_{j})$, i.e., whether each element of $q_{i}=(u_{i}, v_{i}, \theta_{i})$ equals the corresponding element of $q_{j}=(u_{j}, v_{j}, \theta_{j})$. Then, the query pattern leakage function $\mathcal{L}_{QP}(\textbf{\emph{q}})$ returns an $m \times m$ (symmetric) matrix in which entry ($i$, $j$) equals $Sim(q_{i}, q_{j})$. Note that $\mathcal{L}_{QP}(\textbf{\emph{q}})$ does not leak the identities of the queried vertices.
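As a plaintext illustration (not part of the encrypted protocol), the sketch below builds the $m \times m$ matrix returned by $\mathcal{L}_{QP}$; the toy queries are assumed inputs, whereas in the real scheme the *server* only ever sees query tokens.

```python
def query_pattern(queries):
    """Build the m x m query-pattern matrix: entry (i, j) records, component by
    component, whether query i repeats the source, destination and constraint
    of query j -- without revealing the identities themselves."""
    m = len(queries)
    return [[tuple(a == b for a, b in zip(queries[i], queries[j]))
             for j in range(m)] for i in range(m)]

# Toy queries (u, v, theta).
q = [("u1", "v3", 40), ("u1", "v7", 40), ("u1", "v3", 40)]
M = query_pattern(q)
print(M[0][2])   # (True, True, True): the third query repeats the first one
print(M[0][1])   # (True, False, True): same source and constraint, new target
```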
(SKETCH PATTERN LEAKAGE). Given the secure 2HCLI $\widetilde{\Delta}$ of a graph $G$ and a query $q = (u, v, \theta)$, the sketch pattern leakage function $\mathcal{L}_{SP}(\widetilde{\Delta}, q)$ is defined as $(\Sigma, \Upsilon)$. $\Sigma$ is a list whose elements are the sketches associated with the queried vertices, and $\Upsilon$ is a pair $(X, Z)$, where $X=\{h(v):(v, d, c) \in I_{out}\}$ and $Z=\{h(v):(v, d, c) \in I_{in}\}$ are multi-sets and $h: {\{0,1\}}^{\lambda} \times {\{0,1\}}^{*} \to {\{0,1\}}^{\lambda}$ is a pseudo-random function.
(COST PATTERN LEAKAGE). The cost constraint $\theta$ in a query $q$ can essentially be represented by a number of uniform intervals. Let $d_{\theta}$ be the depth of the cost constraint tree $T_{\theta}$ (cf. Section \[sec:Tree:Based:Ciphertexts:Comparison:Approach\]). The intervals associated with $\theta$ are $[(i-1)\theta / 2^{d_{\theta}},\; i\theta / 2^{d_{\theta}}]$, where $1 \le i \le 2^{d_{\theta}}$. Each interval is associated with a list $\mu$, i.e., the $i$-th interval is associated with $\mu_{i}$, which stores all the cost values belonging to this interval. The leaked interval information forms an array $Arr$ whose $i$-th element is $\mu_{i}$ (i.e., $Arr[i] = \mu_{i}$). In addition, let $z$ be the total number of entries in the sketches of the queried vertices. For each pair of costs $c_{i}$ and $c_{j}$, the order relationship (greater-than, equality, or less-than) is represented by $1$, $0$, or $-1$, respectively. The leaked order information of costs is thus a $z \times z$ matrix $\nabla$ with entry ($i$, $j$) equal to $1$, $0$, or $-1$. Therefore, the cost pattern leakage function is $\mathcal{L}_{CP}(\widetilde{\Delta}, q)=(Arr, \nabla)$.
Thus, $\mathcal{L}_{Query}=(\mathcal{L}_{QP}(\textbf{\emph{q}}), \mathcal{L}_{SP}(\widetilde{\Delta}, q), \mathcal{L}_{CP}(\widetilde{\Delta}, q))$.
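For concreteness, the following plaintext sketch reproduces the two structures revealed by $\mathcal{L}_{CP}$ — the interval buckets $Arr$ and the pairwise-order matrix $\nabla$ — on illustrative cost values; it is purely illustrative and not part of the encrypted protocol.

```python
def cost_pattern(costs, theta, depth):
    """Plaintext illustration of L_CP: bucket each cost into one of 2**depth
    uniform sub-intervals of [0, theta] and record pairwise order relations
    as +1 / 0 / -1."""
    width = theta / 2 ** depth
    buckets = [[] for _ in range(2 ** depth)]
    for c in costs:
        if c <= theta:                          # larger costs are filtered out
            buckets[min(int(c // width), 2 ** depth - 1)].append(c)
    order = [[(ci > cj) - (ci < cj) for cj in costs] for ci in costs]
    return buckets, order

Arr, Nabla = cost_pattern([5, 12, 33, 48], theta=50, depth=3)
print(Arr)       # which of the eight sub-intervals of [0, 50] each cost falls into
print(Nabla[1])  # order of cost 12 relative to the others: [1, 0, -1, -1]
```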
The leakage functions are defined over the 2HCLI rather than the original graph. In fact, the information leaked about the original graph is limited to the minimum number of paths between the queried source and destination vertices. It can be described as an $n \times n$ (symmetric) matrix $\Lambda$, where $n$ is the number of vertices in the graph. Each element of $\Lambda$ is NULL, 0, or a positive integer, which indicates an uncertain status (i.e., the topology is well protected), disconnection, or the minimum number of paths between the two queried vertices, respectively.
For the cost values in the 2HCLI, we introduce a *user*-held amplification factor $\phi$ to enlarge the plaintext and ciphertext spaces. Thus, the *server* cannot infer the real cost values from the order information revealed by the leakage function $\mathcal{L}_{CP}(\widetilde{\Delta}, q)$. For the distance values in the 2HCLI, we use SWHE to protect their real values from the *server*.
**Theorem 1.** *If the cryptographic primitives $g$, $h$, ORE, and SWHE are secure, then the proposed graph encryption scheme $\Pi =(KeyGen, Setup, Query)$ is $(\mathcal{L}_{Setup}, \mathcal{L}_{Query})$-secure against the adaptive chosen-query attack.*
The key idea is to construct a simulator $\mathcal{S}$. Given the leakage functions $\mathcal{L}_{Setup}$ and $\mathcal{L}_{Query}$, $\mathcal{S}$ constructs a fake encrypted 2HCLI structure $\widetilde{\Delta}^{*} = \{ I_{out}^{*}, I_{in}^{*} \}$ and a sequence of queries $q^{*}$. If no PPT adversary $\mathcal{A}$ can distinguish between the two games **Real** and **Ideal**, then our graph encryption scheme is $(\mathcal{L}_{Setup}, \mathcal{L}_{Query})$-secure against the adaptive chosen-query attack.
**Simulating** $\widetilde{\Delta}^{*}$. $\mathcal{S}$ processes each vertex $u_{i}$ ($1 \le i \le n$) to generate a fake $I_{out}^{*}$ of the 2HCLI based on the leakage function $\mathcal{L}_{Setup}$. $\mathcal{S}$ randomly chooses a sketch size $w_{i}$ for $u_{i}$ such that $\sum_{i=1}^{n}w_{i} =\Omega_{out}$, and samples $l_{i} \gets \{0,1\}^{\lambda}$ and $\eta_{i} \gets \{0,1\}^{\lambda}$ uniformly without repetition. For each counter value $w$ with $0 \le w < w_{i}$, $\mathcal{S}$ takes the following steps to simulate a sketch entry: $\mathcal{S}$ computes $l_{w}=h(l_{i}, w)$ and $\eta_{w}=h(\eta_{i}, w)$, where $h$ is a pseudo-random function. Then, it encrypts each vertex $v$ in the sketch of $u_{i}$ by computing $V^{*}=h(K^{*},v||0)$, where $K^{*}$ is a fake secret key. It randomly generates two integers $d$ and $c$ and obtains ciphertexts $D^{*}$ and $C^{*}$ by encrypting $2^{N-d}$ (with $N=2\mathcal{B}+1$) and $c$ using the SWHE and ORE schemes, respectively. Let $\Psi_{w}^{*} = \eta_{w} \oplus (V^{*}||D^{*}||C^{*})$. $\mathcal{S}$ stores $\Psi_{w}^{*}$ in the index $I_{out}^{*}$, that is, $I_{out}^{*}[l_{w}]= \Psi_{w}^{*}$. Similarly, $\mathcal{S}$ generates a fake $I_{in}^{*}$ and finally obtains the fake 2HCLI $\widetilde{\Delta}^{*}=\{I_{out}^{*}, I_{in}^{*}\}$.
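The XOR masking applied to each simulated entry can be sketched as follows. This is a minimal illustration, not the code of our prototype: HMAC-SHA256 stands in for the pseudo-random function $h$, and the payload is a random placeholder for $V^{*}||D^{*}||C^{*}$.

```python
import hashlib, hmac, os

def prf(key: bytes, msg: bytes) -> bytes:
    # HMAC-SHA256 stands in for the pseudo-random function h.
    return hmac.new(key, msg, hashlib.sha256).digest()

def mask_entry(eta_i: bytes, counter: int, payload: bytes) -> bytes:
    """XOR the (already encrypted) payload V*||D*||C* with a pad derived from
    the per-vertex secret eta_i and the entry counter, as the simulator does
    for each simulated sketch entry."""
    seed = prf(eta_i, counter.to_bytes(4, "big"))
    blocks = -(-len(payload) // 32)                  # ceil(len(payload) / 32)
    pad = b"".join(prf(seed, i.to_bytes(4, "big")) for i in range(blocks))
    return bytes(p ^ q for p, q in zip(payload, pad[:len(payload)]))

eta_i = os.urandom(32)                 # per-vertex secret
payload = os.urandom(80)               # placeholder for V*||D*||C*
psi = mask_entry(eta_i, 0, payload)
assert mask_entry(eta_i, 0, psi) == payload   # XOR masking is its own inverse
```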
**Simulating** $q^{*}$. Given the leakage function $\mathcal{L}_{Query}=(\mathcal{L}_{QP}(q), \mathcal{L}_{SP}(\widetilde{\Delta}, q), \mathcal{L}_{CP}(\widetilde{\Delta}, q))$, $\mathcal{S}$ simulates the query token as follows. $\mathcal{S}$ first checks whether either of the queried vertices $s$ and $t$ has appeared in any previous query. If $s$ appeared previously, $\mathcal{S}$ sets $S_{out,s}^{*}$ and $T_{out,s}^{*}$ to the values that were previously used. Otherwise, it sets $T_{out,s}^{*}=l_{i}$ and $S_{out,s}^{*}=\eta_{i}$ for some previously unused $l_{i}$ and $\eta_{i}$, and remembers the association among $\eta_{i}$, $l_{i}$, and $s$. $\mathcal{S}$ takes the same steps for the queried vertex $t$: it sets $S_{in,t}^{*}$ and $T_{in, t}^{*}$ analogously and associates $t$ with the selected $\eta_{i}$ and $l_{i}$.
To simulate a fake cost constraint tree $T_{\theta}^{*}$, $\mathcal{S}$ first checks whether the queried $\theta$ appeared in any previous query. If so, $\mathcal{S}$ sets $T_{\theta}^{*}$ to the value that was previously used. Otherwise, $\mathcal{S}$ constructs a full binary tree based on $\theta$ and encrypts each tree node using the ORE scheme with a randomly generated key. $\mathcal{S}$ returns this encrypted tree as $T_{\theta}^{*}$.
$\mathcal{S}$ simulates the query procedure as follows. Given the query token $(S_{out,s}^{*}, T_{out,s}^{*}, S_{in,t}^{*}, T_{in,t}^{*}, T_{\theta}^{*})$, $\mathcal{S}$ first checks whether the same query has been issued before. If so, $\mathcal{S}$ returns the previously used value as the query result. Otherwise, $\mathcal{S}$ checks whether the queried vertex $s$ (or $t$) has been queried before. If $s$ has appeared in a previous query, $\mathcal{S}$ sets $L_{s}^{*}$ to the previously used values taken from $\Sigma$ of $\mathcal{L}_{SP}(\widetilde{\Delta}, q)$. Otherwise, for a newly appearing vertex $s$, $\mathcal{S}$ generates the sketch associated with $s$ as follows: it first initializes a set $L_{s}^{*}$ and a counter $\omega^{*}=0$; then it iteratively computes $T_{out,s,v}^{*}=h(T_{out,s}^{*}, \omega^{*})$ and $S_{out,s,v}^{*}=g(S_{out,s}^{*}, \omega^{*})$, adds the tuple $(V^{*}, D_{s,v}^{*}, C_{s,v}^{*}) = I_{out}^{*}[T_{out,s,v}^{*}] \oplus S_{out,s,v}^{*}$ into $L_{s}^{*}$, and increments $\omega^{*}$, until $I_{out}^{*}[T_{out,s,v}^{*}]$ does not exist. Similarly, $\mathcal{S}$ obtains the set $L_{t}^{*}$ for vertex $t$. Upon obtaining $L_{s}^{*}$ and $L_{t}^{*}$, $\mathcal{S}$ performs the cost constraint filtering operation based on $T_{\theta}^{*}$ to get the candidate set $Y^{*}$. Finally, $\mathcal{S}$ performs the SWHE computation over $Y^{*}$ and returns the query result; the indistinguishability of this last step follows from the CPA-security of SWHE.
Since the cryptographic primitives $g$, $h$, ORE, and SWHE are secure, the fake 2HCLI structure $\widetilde{\Delta}^{*}$ and the query sequence $q^{*}$ are indistinguishable from the real ones. Therefore, no PPT adversary $\mathcal{A}$ can distinguish between the two games **Real** and **Ideal**. Thus, we have $$\begin{aligned}
|\textbf{Pr}[Real_{\Pi, \mathcal{A}}(\lambda) = 1] - \textbf{Pr}[Ideal_{\Pi, \mathcal{A}, \mathcal{S}}(\lambda) = 1]| \le \mathrm{negl}(\lambda),\end{aligned}$$ where $\mathrm{negl}(\lambda)$ is a negligible function.
Performance Evaluation {#sec:evaluation}
======================
This section presents the evaluation of our graph encryption scheme through experiments on real-world datasets.
Setup
-----
**Testbed.** We implement the method introduced in [@Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries] for building the 2HCLI. The ORE and SWHE schemes in our implementation follow the methods described in [@Practical:Order:Revealing:Encryption:with:Limited:Leakage] and [@boneh2005evaluating], respectively. The GMP library is used for big-integer arithmetic. We set the security parameter $\lambda=128$ and use the OpenSSL library for all the basic cryptographic primitives. All the algorithms in our experiments are implemented in C++. The experiments are conducted on a desktop PC equipped with an Intel Xeon processor at 2.6 GHz and 8 GB of RAM.
**Graph sets.** The datasets used in our experiments are listed in Table \[tab:datasets\]. All of them are publicly available from the Stanford SNAP website (http://snap.stanford.edu/data/) and are modeled as directed graphs. For the datasets soc-Epinions1 and Email-EuAll, we randomly select subsets to make the index construction feasible with our limited computational resources. Since these graphs are unweighted, we generate a distance and a cost for each edge, whose values follow a uniform distribution between 1 and 100. The cost criterion is used as the constraint.
**Methods to compare.** Since this is the first work to address the CSD querying problem over encrypted graphs, we compare our method with querying over unencrypted graphs. We implement the latter following the state-of-the-art plaintext method introduced in [@Effective:Indexing:Approximate:Constrained:Shortest:Path:Queries]. The only difference is that we construct the 2HCLI over the original graph instead of an overlay graph. As a result, our implementation has higher query efficiency but a higher index construction cost.
**Query sets.** We randomly generate 200 queries over each dataset. The origin $s$ and destination $t$ in each query are also randomly selected. The cost constraint $\theta$ for each $(s,t)$ pair is set as follows. We define the *lower* bound $c_{min}$ as the minimum cost over all paths from $s$ to $t$, and the *upper* bound $c_{max}$ as the minimum cost over the paths with the shortest distance from $s$ to $t$. If the cost constraint $\theta < c_{min}$, there is no feasible answer to the query; and if $\theta > c_{max}$, the shortest distance is always a valid answer. To mitigate the impact of $\theta$ on the performance, we randomly choose 50 values of $\theta$ for each query, each of which falls in the interval $[c_{min}, c_{max}]$.
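For reference, $c_{min}$ and $c_{max}$ can be computed on the plaintext graph with two Dijkstra searches: one over the cost weight alone, and one with the lexicographic key (distance, cost), which yields the minimum cost among all shortest-distance paths (lexicographic tuples remain valid Dijkstra weights because both components are non-negative). The sketch below assumes a simple adjacency-list representation and is illustrative only; it is not part of `Connor`.

```python
import heapq

def lex_dijkstra(adj, s, t, key):
    """Dijkstra with an additive, lexicographically ordered weight.
    `key` maps an edge label (d, c) to the tuple accumulated along a path."""
    start = tuple(0 for _ in key((0, 0)))
    best = {s: start}
    heap = [(start, s)]
    while heap:
        w, u = heapq.heappop(heap)
        if u == t:
            return w
        if w != best[u]:
            continue                      # stale heap entry
        for v, d, c in adj.get(u, ()):
            nw = tuple(a + b for a, b in zip(w, key((d, c))))
            if v not in best or nw < best[v]:
                best[v] = nw
                heapq.heappush(heap, (nw, v))
    return None

# toy directed graph: adj[u] = [(v, distance, cost), ...]
adj = {"s": [("a", 1, 9), ("b", 2, 1)],
       "a": [("t", 1, 9)],
       "b": [("t", 1, 1)]}

c_min = lex_dijkstra(adj, "s", "t", lambda e: (e[1],))[0]   # cheapest path cost
d_min, c_max = lex_dijkstra(adj, "s", "t", lambda e: e)     # min cost among shortest paths
print(c_min, d_min, c_max)   # 2, 2, 18  -> theta is then sampled from [2, 18]
```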
Another important parameter is $\alpha$, which determines the approximation guarantees of $\alpha$-CSD queries. Since $\alpha$ is a constant value for all queries, we view it as a system parameter rather than part of specific queries. In order to achieve a balance between query accuracy and system efficiency, we set the approximation ratio $\alpha = 1.5$ for all queries.
Evaluation of Secure 2HCLI and Query Token
------------------------------------------
**Index Size and Construction Time.** The index construction for a graph is a one-time, offline computation. This process consists of two steps: constructing the plain 2HCLI, which is identical to the index construction of the original plain CSD query scheme, and encrypting the plain 2HCLI, which is the focus of this paper. We therefore take the output of the first step as the index of the unencrypted graph.
The index sizes and construction times are reported in Table \[tab:index\]. Note that they differ greatly across datasets, mainly because of differences in graph topology. Unlike the original shortest-distance query, where only one shortest path between any two vertices needs to be considered, in the CSD query problem there usually exist multiple constrained shortest paths between two vertices. Intuitively, a dense graph thus bears a higher index construction cost than a sparse one.
In general, each encrypted index is roughly 6$\times$ larger than the corresponding plain index. The most important observation is that the index construction time for encrypted graphs is only slightly higher than that for unencrypted graphs. Thus, the key to improving the index construction efficiency over an encrypted graph is to accelerate the construction of the plain 2HCLI of that graph; we leave this as future work.
**Query Token Generation.** Since the construction of query tokens is independent of the specific graph, we analyze the size and generation time of a query token directly. A query token consists of 5 elements, namely $S_{out,s}$, $T_{out,s}$, $S_{in, t}$, $T_{in,t}$, and $T_{\theta}$. Each of the first 4 elements has a length of 16 bytes. Since each ORE ciphertext is 16 bytes, a cost tree $T_{\theta}$ of depth $d_{\theta}$ has a size of $16 \times (2^{d_{\theta}}-1)$ bytes. Therefore, the total size of a query token is $16 \times (2^{d_{\theta}} + 3)$ bytes. Since $d_{\theta}$ is relatively small, the size of a query token is usually less than $1$ KB. The query token generation time with varying $d_{\theta}$ is reported in Table \[tab:TokenGenTime\]. Although the generation time increases significantly with $d_{\theta}$, the cost is moderate for typical settings (e.g., $d_{\theta} \leq 6$).
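As a quick sanity check of the size formula, the short sketch below evaluates it for the tree depths used in our experiments, assuming the 16-byte element size stated above.

```python
# Token size (bytes) as a function of the tree depth d_theta:
# 4 fixed 16-byte elements plus one 16-byte ORE ciphertext per tree node.
def token_size(d_theta, elem_bytes=16):
    tree_nodes = 2 ** d_theta - 1          # full binary tree with d_theta levels
    return elem_bytes * (tree_nodes + 4)   # = 16 * (2**d_theta + 3)

for d in range(1, 7):
    print(d, token_size(d), "bytes")
# depths 1..5 stay well below 1 KB; depth 6 gives 16 * (64 + 3) = 1072 bytes
```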
Evaluation of Query Efficiency and Accuracy
-------------------------------------------
**Query Efficiency.** To evaluate the query efficiency, for each $\theta$ we generate cost constraint trees with different depths $d_{\theta}$. The query time is defined as the time interval from the submission of a query token to the receipt of its query results. We report the average query time over the 200 queries.
The average query time with varying $d_{\theta}$ over the encrypted 2HCLI is depicted in Fig. \[fig:QueryTimeForDepth\], where $d_{\theta}$ increases from 1 to 6. The query time varies considerably across datasets. For each dataset, increasing $d_{\theta}$ reduces the query time, because a larger $d_{\theta}$ filters out more distance pairs exceeding the cost constraint and thereby reduces the number of candidates for the SWHE-based distance computation, which dominates the overall query time.
![The query time over encrypted 2HCLI with varying $d_{\theta}$.[]{data-label="fig:QueryTimeForDepth"}](figure/shen7.pdf){height="4.5cm"}
Fig. \[fig:QueryTimeForCmp\] presents the query time in the plain and encrypted scenarios for the different datasets. The query time over the encrypted 2HCLI is higher than that over the plain 2HCLI because of the time-consuming operations on ciphertexts (e.g., cost filtering and distance computation). Moreover, the time complexity of these operations is closely related to the index sizes listed in Table \[tab:index\], which explains the differences among the four datasets in Fig. \[fig:QueryTimeForCmp\].
**Query Accuracy**. In `Connor`, two components affect the query accuracy, namely the tree-based ciphertext comparison and the distance computation. The former may keep in the candidate set $Y$ some distance pairs that do not satisfy the cost constraint, while the latter leverages the properties of SWHE to obtain an approximate, rather than exact, shortest distance based on all candidates in $Y$.
{height="4.5cm"}
{height="4.5cm"}
{height="4.5cm"}
We use the well-known *Precision* metric ($\mathcal{P}$) to evaluate the accuracy of the cost constraint filtering process: $\mathcal{P} = \frac{T_{p}}{T_{p}+F_{p}}$, where $T_{p}$ and $F_{p}$ denote the numbers of distance pairs in $Y$ whose costs truly satisfy or exceed the cost constraint, respectively. We use the same query sets as introduced above and compute $\mathcal{P}$ for each query, and then report the average precision $\bar{\mathcal{P}}$ over all queries.
Fig. \[fig:QueryAccuracyCost\] presents the relationship between the query precision $\bar{\mathcal{P}}$ and the depth of the cost constraint tree $d_{\theta}$ over different datasets. We can see that for all the datasets, $\bar{\mathcal{P}}$ increases with $d_{\theta}$, because the cost constraint tree with a larger depth $d_{\theta}$ helps us to detect constraint violations with a higher probability, as discussed in Section \[sec:Tree:Based:Ciphertexts:Comparison:Approach\]. In particular, $\bar{\mathcal{P}}$ is more than $94\%$ for all datasets when $d_{\theta} = 6$.
To evaluate the accuracy of the final query results, we propose a metric named the *deviation rate*. Let $r_{e}$ and $r_{p}$ be the query results returned by `Connor` and by the algorithm over the corresponding plain graph, respectively. Then, we define the *deviation rate* as $\xi = r_{e} / r_{p}$, which indicates how far $r_{e}$ deviates from $r_{p}$; a *deviation rate* closer to 1 indicates more accurate query results.
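Both accuracy metrics are straightforward to compute once the candidate costs and the plain/encrypted answers are available; the minimal sketch below uses made-up inputs purely for illustration.

```python
def precision(candidate_costs, theta):
    """Fraction of candidates kept by the filtering step whose cost truly
    satisfies the constraint, i.e. T_p / (T_p + F_p)."""
    t_p = sum(1 for c in candidate_costs if c <= theta)
    return t_p / len(candidate_costs)

def deviation_rate(r_e, r_p):
    """Ratio of the encrypted-query answer to the plaintext answer; values
    close to 1 mean the encrypted result barely deviates from the exact one."""
    return r_e / r_p

# illustrative numbers, not taken from the experiments
print(precision([12, 30, 44, 61], theta=50))   # 0.75: one pair slipped through
print(deviation_rate(r_e=167.4, r_p=180.0))    # 0.93
```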
Fig. \[fig:QueryAccuracyR\] presents the cumulative distribution functions (CDFs) of the *deviation rate* over the dataset p2p-Gnutella04. We can see that $\xi$ is larger than $0.90$ for over $80\%$ of the query results, and larger than $0.73$ in the worst cases. Therefore, `Connor` is capable of achieving a relatively high accuracy with moderate computation complexity.
Conclusion {#sec:conclusion}
==========
In this paper, we have presented `Connor`, the first graph encryption scheme that enables cloud-based approximate CSD queries. In particular, we proposed a tree-based ciphertext comparison protocol for cost constraint filtering with controlled disclosure. The security analysis showed that `Connor` achieves CQA2-security. We implemented a prototype and evaluated its performance using real-world graph datasets; the evaluation results demonstrated the effectiveness of `Connor`. In future work, we plan to design techniques to support dynamic index updates.
\[[{width="0.8in" height="1in"}]{}\] [Meng Shen]{} received the B.Eng degree from Shandong University, Jinan, China in 2009, and the Ph.D degree from Tsinghua University, Beijing, China in 2014, both in computer science. Currently he serves in Beijing Institute of Technology, Beijing, China, as an assistant professor. His research interests include privacy protection of cloud-based services, network virtualization and traffic engineering. He received the Best Paper Runner-Up Award at IEEE IPCCC 2014. He is a member of the IEEE.
\[[{width="0.8in"}]{}\] [Baoli Ma]{} received the B.Eng degree in computer science from Beijing Institute of Technology, Beijing, China in 2015. Currently he is a master student in the School of Computer Science, Beijing Institute of Technology. His research interest is secure searchable encryption.
\[[{width="0.8in"}]{}\] [Liehuang Zhu]{} is a professor in the School of Computer Science, Beijing Institute of Technology. He is selected into the Program for New Century Excellent Talents in University from Ministry of Education, P.R. China. His research interests include Internet of Things, Cloud Computing Security, Internet and Mobile Security.
\[[{width="0.8in"}]{}\] [Rashid Mijumbi]{} received a PhD in telecommunications engineering from the Universitat Politecnica de Catalunya (UPC), Barcelona, Spain. He was a Post-Doctoral Researcher with the UPC and with the Telecommunications Software and Systems Group, Waterford, Ireland, where he participated in several Spanish national, European, and Irish National Research Projects. He is currently a Software Systems Reliability Engineer with Bell Labs CTO, Nokia, Dublin, Ireland. His current research focus is on various aspects of 5G, NFV and SDN systems. He received the 2016 IEEE Transactions Outstanding Reviewer Award recognizing outstanding contributions to the IEEE Transactions on Network and Service Management. He is a Member of IEEE.
\[[{width="0.8in"}]{}\] [Xiaojiang Du]{} is a tenured professor in the Department of Computer and Information Sciences at Temple University, Philadelphia, USA. Dr. Du received his B.S. and M.S. degree in electrical engineering from Tsinghua University, Beijing, China in 1996 and 1998, respectively. He received his M.S. and Ph.D. degree in electrical engineering from the University of Maryland College Park in 2002 and 2003, respectively. His research interests are wireless communications, wireless networks, security, and systems. He has authored over 200 journal and conference papers in these areas, as well as a book published by Springer. Dr. Du has been awarded more than \$5 million US dollars research grants from the US National Science Foundation (NSF), Army Research Office, Air Force, NASA, the State of Pennsylvania, and Amazon. He won the best paper award at IEEE GLOBECOM 2014 and the best poster runner-up award at the ACM MobiHoc 2014. He serves on the editorial boards of three international journals. Dr. Du is a Senior Member of IEEE and a Life Member of ACM.
\[[{width="0.8in"}]{}\] [Jiankun Hu]{} is a Professor at the School of Engineering and IT, University of New South Wales (UNSW) Canberra (also named UNSW at the Australian Defence Force Academy (UNSW@ADFA), Canberra, Australia). He is the invited expert of Australia Attorney-Generals Office assisting the draft of Australia National Identity Management Policy. Prof. Hu has served at the Panel of Mathematics, Information and Computing Sciences (MIC), ARC ERA (The Excellence in Research for Australia) Evaluation Committee 2012. His research interest is in the field of cyber security covering intrusion detection, sensor key management, and biometrics authentication. He has many publications in top venues including IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Computers, IEEE Transactions on Parallel and Distributed Systems (TPDS), IEEE Transactions on Information Forensics & Security (TIFS), Pattern Recognition, and IEEE Transactions on Industrial Informatics. He is the associate editor of the IEEE Transactions on Information Forensics and Security.
[^1]: For simplicity, we refer to single-constraint CSD queries as CSD queries hereafter.
[^2]: We refer to $G$ as a directed graph in this paper, unless otherwise specified.
---
abstract: |
  The Cassini mission offered us the opportunity to monitor the seasonal evolution of Titan’s atmosphere from 2004 to 2017, i.e. half a Titan year. The lower part of the stratosphere (pressures greater than 10 mbar) is a region of particular interest as there are few available temperature measurements, and because its thermal response to the seasonal and meridional insolation variations undergone by Titan remains poorly known. In this study, we measure temperatures in Titan’s lower stratosphere between 6 mbar and 25 mbar using Cassini/CIRS spectra covering the whole duration of the mission (from 2004 to 2017) and the whole latitude range. We can thus characterize the meridional distribution of temperatures in Titan’s lower stratosphere, and how it evolves from northern winter (2004) to summer solstice (2017). Our measurements show that Titan’s lower stratosphere undergoes significant seasonal changes, especially at the South pole, where temperature decreases by 19 K at 15 mbar in 4 years.\
address:
- |
School of Earth Sciences, University of Bristol, Wills Memorial Building, Queens Road,\
Bristol BS8 1 RJ, UK
- 'Laboratoire de Météorologie Dynamique (LMD/IPSL), Sorbonne Université, ENS, PSL Research University, Ecole Polytechnique, Université Paris Saclay, CNRS, 4 Place Jussieu, F 75252 Paris Cedex 05, France'
- 'LESIA, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, Univ. Paris Diderot, Sorbonne Paris Cité, 5 place Jules Janssen, 92195 Meudon, France'
- 'Atmospheric, Oceanic, & Planetary Physics, Department of Physics, University of Oxford, Clarendon Laboratory, Parks Road, Oxford OX1 3PU, UK'
author:
- 'M. Sylvestre'
- 'N. A. Teanby'
- 'J. Vatant d’Ollone'
- 'S. Vinatier'
- 'B. Bézard'
- 'S. Lebonnois'
- 'P. G. J. Irwin'
bibliography:
- 'biblio.bib'
title: 'Seasonal evolution of temperatures in Titan’s lower stratosphere'
---
Introduction
============
Titan has a dense atmosphere, composed of $\mathrm{N_2}$ and [$\mathrm{CH_4}$]{}, and many trace gases such as hydrocarbons (e.g. $\mathrm{C_2H_6}$, $\mathrm{C_2H_2}$) and nitriles (e.g. $\mathrm{HCN}$, $\mathrm{HC_3N}$) produced by its rich photochemistry. Like Earth, Titan has a stratosphere, located between 50 km ($\sim 100$ mbar) and 400 km ($\sim 0.01$ mbar), characterized by the increase of its temperature with altitude because of the absorption of incoming sunlight by methane and hazes. Titan’s atmosphere undergoes strong variations of insolation, due to its obliquity ($26.7^{\circ}$) and to the eccentricity of Saturn’s orbit around the Sun (0.0565).\
The Cassini spacecraft monitored Titan’s atmosphere during 13 years (from 2004 to 2017), from northern winter to summer solstice. Its data are a unique opportunity to study the seasonal evolution of its stratosphere, especially with mid-IR observations from Cassini/CIRS (Composite InfraRed Spectrometer, @Flasar2004). They showed that at pressures lower than 5 mbar, the stratosphere exhibits strong seasonal variations of temperature and composition related to changes in atmospheric dynamics and radiative processes. For instance, during northern winter (2004-2008), high northern latitudes were enriched in photochemical products such as HCN or [$\mathrm{C_4H_2}$]{}, while there was a “hot spot” in the upper stratosphere and mesosphere (0.1 - 0.001 mbar, @Achterberg2008 [@Coustenis2007; @Teanby2007b; @Vinatier2007]). These observations were interpreted as evidence of subsidence above the North pole during winter, which is a part of the pole-to-pole atmospheric circulation cell predicted for solstices by Titan GCMs (Global Climate Models, @Lora2015 [@Lebonnois2012a; @Newman2011]). These models also predict that the circulation pattern should reverse around equinoxes, via a transitional state with two equator-to-pole cells. These changes began to affect the South pole in 2010, when measurements showed that pressures inferior to 0.03 mbar exhibited an enrichment in gases such as HCN or $\mathrm{C_2H_2}$, which propagated downward during autumn, consistent with the apparition of a new circulation cell with subsidence above the South pole [@Teanby2017; @Vinatier2015].\
Some uncertainties remain about the seasonal evolution of the lower part of the stratosphere, i.e. at pressures from 5 mbar (120 km) to 100 mbar (tropopause, 50 km). Different estimates of radiative timescales have been calculated for this region. In @Strobel2010, the radiative timescales in this region vary from 0.2 Titan years at 5 mbar to 2.5 Titan years at 100 mbar. This means that the lower stratosphere should be the transition zone from parts of the atmosphere which are sensitive to seasonal insolation variations, to parts of the atmosphere which are not. In contrast, in the radiative-dynamical model of @Bezard2018, radiative timescales are between 0.02 Titan year at 5 mbar and 0.26 Titan year at 100 mbar, implying that this whole region should exhibit a response to the seasonal cycle.\
From northern winter to equinox, CIRS mid-IR observations showed that temperature variations were lower than 5 K between 5 mbar and 10 mbar [@Bampasidis2012; @Achterberg2011]. Temporal variations intensified after spring equinox, as @Coustenis2016 measured a cooling by 16 K and an increase in gases abundances at $70^{\circ}$S from 2010 to 2014, at 10 mbar, associated with the autumn subsidence above the South pole. @Sylvestre2018 showed that this subsidence affects pressure levels as low as 15 mbar as they measured strong enrichments in [$\mathrm{C_2N_2}$]{}, [$\mathrm{C_3H_4}$]{}, and [$\mathrm{C_4H_2}$]{} at high southern latitudes from 2012 to 2016 with CIRS far-IR observations. However, we have little information on temperatures and their seasonal evolution for pressures greater than 10 mbar. Temperatures from the surface to 0.1 mbar can be measured by Cassini radio-occultations, but the published profiles were measured mainly in 2006 and 2007 [@SchinderFlasarMaroufEtAl2011; @Schinder2012], so they provide little information on seasonal variations of temperature.\
In this study, we analyse all the available far-IR Cassini/CIRS observations to probe temperatures from 6 mbar to 25 mbar, and measure the seasonal variations of lower stratospheric temperatures. As these data were acquired throughout the Cassini mission from 2004 to 2017, and cover the whole latitude range, they provide a unique overview of the thermal evolution of the lower stratosphere from northern winter to summer solstice, and a better understanding of the radiative and dynamical processes at play in this part of Titan’s atmosphere.\
Data analysis
=============
Observations
------------
We measure lower stratospheric temperatures using Cassini/CIRS [@Flasar2004] spectra. CIRS is a thermal infrared spectrometer with three focal planes operating in three different spectral domains: 10 - 600$~\mathrm{cm^{-1}}$ (17 - 1000$~\mathrm{\mu m}$) for FP1, 600 - 1100$~\mathrm{cm^{-1}}$ (9 - 17 $~\mathrm{\mu m}$) for FP3, and 1100 - 1400$~\mathrm{cm^{-1}}$ (7 - 9$~\mathrm{\mu m}$) for FP4. FP1 has a single circular detector with an angular field of view of 3.9 mrad, which has an approximately Gaussian spatial response with a FWHM of 2.5 mrad. FP3 and FP4 are each composed of a linear array of ten detectors. Each of these detectors has an angular field of view of 0.273 mrad.\
In this study, we use FP1 far-IR observations, where nadir spectra are measured at a resolution of 0.5$~\mathrm{cm^{-1}}$, in “sit-and-stare” geometry (i.e the FP1 detector probes the same latitude and longitude during the whole duration of the acquisition). In this type of observation, the average spatial field of view is 20$^\circ$ in latitude. An acquisition lasts between 1h30 and 4h30, allowing the recording of 100 to 330 spectra. The spectra from the same acquisition are averaged together, which increases the S/N by a factor $\sqrt{N}$ (where N is the number of spectra). As a result, we obtain an average spectrum where the rotational lines of [$\mathrm{CH_4}$]{} (between 70$~\mathrm{cm^{-1}}$ and 170$~\mathrm{cm^{-1}}$) are resolved and can be used to retrieve Titan’s lower stratospheric temperature. An example averaged spectrum is shown in Fig. \[fig\_spec\].\
We analysed all the available observations with the characteristics mentioned above. As shown in table \[table\_obs\], this type of nadir far-IR observation has been performed throughout the Cassini mission (from 2004 to 2017), at all latitudes. Hence, the analysis of this dataset enables us to get an overview of Titan’s lower stratosphere and its seasonal evolution.\
![Example of average spectrum measured with the FP1 detector of Cassini/CIRS (in black) and its fit by NEMESIS (in red). The measured spectrum was obtained after averaging 106 spectra observed at $89^{\circ}$N in March 2007. The rotational lines of [$\mathrm{CH_4}$]{} are used to retrieve stratospheric temperature. The “haystack” feature is visible only at high latitudes during autumn and winter. []{data-label="fig_spec"}](Spectra_89N_0703){width="1\columnwidth"}
Retrieval method
----------------
We follow the same method as @Sylvestre2018. We use the portion of the spectrum between 70 $\mathrm{cm^{-1}}$ and 400 $\mathrm{cm^{-1}}$, where the main spectral features are: the ten rotational lines of [$\mathrm{CH_4}$]{} (between 70$~\mathrm{cm^{-1}}$ and 170$~\mathrm{cm^{-1}}$), the [$\mathrm{C_4H_2}$]{} band at $220~\mathrm{cm^{-1}}$, the [$\mathrm{C_2N_2}$]{} band at $234~\mathrm{cm^{-1}}$, and the [$\mathrm{C_3H_4}$]{} band at $327~\mathrm{cm^{-1}}$ (see Fig. \[fig\_spec\]). The continuum emission comes from the collisions between the three main components of Titan’s atmosphere (N$_2$, [$\mathrm{CH_4}$]{}, and H$_2$), and from the spectral contributions of the hazes.\
We retrieve the temperature profile using the constrained non-linear inversion code NEMESIS [@Irwin2008]. We define a reference atmosphere, which takes into account the abundances of the main constituents of Titan’s atmosphere measured by Cassini/CIRS [@Coustenis2016; @Nixon2012; @Cottini2012; @Teanby2009], Cassini/VIMS [@Maltagliati2015], ALMA [@Molter2016] and Huygens/GCMS[@Niemann2010]. We also consider the haze distribution and properties measured in previous studies with Cassini/CIRS [@deKok2007; @deKok2010b; @Vinatier2012], and Huygens/GCMS [@Tomasko2008b]. We consider four types of hazes, following @deKok2007: hazes 0 ($70~\mathrm{cm^{-1}}$ to $400~\mathrm{cm^{-1}}$), A (centred at $140~\mathrm{cm^{-1}}$), B (centred at $220~\mathrm{cm^{-1}}$) and C (centred at $190~\mathrm{cm^{-1}}$). For the spectra measured at high northern and southern latitudes during autumn and winter, we add an offset from 1 to $3~\mathrm{cm^{-1}}$ to the nominal haze B cross-sections between 190 $\mathrm{cm^{-1}}$ and 240 $\mathrm{cm^{-1}}$, as in @Sylvestre2018. This modification improves the fit of the continuum in the “haystack” which is a strong emission feature between 190 $\mathrm{cm^{-1}}$ and 240 $\mathrm{cm^{-1}}$ (see Fig. \[fig\_spec\]) seen at high latitudes during autumn and winter (e.g. in @Coustenis1999 [@deKok2007; @Anderson2012; @Jennings2012; @Jennings2015]). The variation of the offset allows us to take into account the evolution of the shape of this feature throughout autumn and winter. The composition of our reference atmosphere and the spectroscopic parameters adopted for its constituents are fully detailed in @Sylvestre2018.\
We retrieve the temperature profile and scale factors applied to the *a priori* profiles of [$\mathrm{C_2N_2}$]{}, [$\mathrm{C_4H_2}$]{}, [$\mathrm{C_3H_4}$]{}, and hazes 0, A, B and C, from the spectra using the constrained non-linear inversion code NEMESIS [@Irwin2008]. This code generates synthetic spectra from the reference atmosphere. At each iteration, the difference between the synthetic and the measured spectra is used to modify the profile of the retrieved variables, and minimise a cost function, in order to find the best fit for the measured spectrum.\
The sensitivity of the spectra to the temperature can be measured with the inversion kernels for the temperature (defined as $K_{ij}~=~\frac{\partial I_i}{\partial T_j}$, where $I_i$ is the radiance measured at wavenumber $w_i$, and $T_j$ the temperature at pressure level $p_j$) for several wavenumbers. The contribution of the methane lines to the temperature measurement can be isolated by defining their own inversion kernels $K^{CH_4}_{ij}$ as follows: $$K^{CH_4}_{ij} = K_{ij} - K^{cont}_{ij}$$ where $K^{cont}_{ij}$ is the inversion kernel of the continuum for the same wavenumber. Figure \[fig\_cf\] shows $K^{CH_4}_{ij}$ for three of the rotational methane lines in the left panel, and the comparison between the sum of the 10 $K^{CH_4}_{ij}$ (for the 10 rotational [$\mathrm{CH_4}$]{} lines) and inversion kernels for the continuum ($K^{cont}_{ij}$ at the wavenumbers of the [$\mathrm{CH_4}$]{} lines and $K_{ij}$ outside of the [$\mathrm{CH_4}$]{} lines) in the right panel. The [$\mathrm{CH_4}$]{} lines allow us to measure lower stratospheric temperatures generally between 6 mbar and 25 mbar, with a maximal sensitivity at 15 mbar. The continuum emission mainly probes temperatures at higher pressures, around the tropopause and in the troposphere. The continuum emission mostly originates from the $\mathrm{N_2}$-$\mathrm{N_2}$ and $\mathrm{N_2}$-[$\mathrm{CH_4}$]{} collisions induced absorption with some contribution from the hazes, for which we have limited constraints. However, Fig. \[fig\_cf\] shows that the continuum emission comes from pressure levels located several scale heights below the region probed by the [$\mathrm{CH_4}$]{} lines, so the lack of constraints on the hazes and tropospheric temperatures does not affect the lower stratospheric temperatures which are the main focus of this study.\
![Sensitivity of temperature measurements at $72^{\circ}N$ in April 2007. *Left panel*: Normalised inversion kernels $K^{CH_4}_{ij}$ in three of the [$\mathrm{CH_4}$]{} rotational lines. *Right panel:* Comparison between the inversion kernels in the continuum ($K^{cont}_{ij}$ for three of the [$\mathrm{CH_4}$]{} lines in dot-dashed lines, and $K_{ij}$ for other wavenumbers in the continuum in dashed lines) and the sum of the inversion kernels $K^{CH_4}_{ij}$ of the [$\mathrm{CH_4}$]{} rotational lines. [$\mathrm{CH_4}$]{} rotational lines dominate the temperature retrievals in the lower stratosphere, generally from 6 to 25 mbar (and up to 35 mbar, depending on the datasets). The continuum emission probes temperatures at pressures higher than 50 mbar, mainly in the troposphere.[]{data-label="fig_cf"}](CF_72N_0704_v2){width="1\columnwidth"}
Error sources
-------------
The main error sources in our temperature retrievals are the measurement noise and the uncertainties related to the retrieval process, such as forward modelling errors or the smoothing of the temperature profile. The total error on the temperature retrieval is estimated by NEMESIS and is of the order of 2 K from 6 mbar to 25 mbar.\
The other possible error source is the uncertainty on [$\mathrm{CH_4}$]{} abundance, as @Lellouch2014 showed that it can vary from 1% to 1.5% at 15 mbar. We performed additional temperature retrievals on several datasets, in order to assess the effects of these variations on the temperature retrievals. First, we selected datasets for which [$\mathrm{CH_4}$]{} abundance was measured by @Lellouch2014. In Figure \[fig\_TCH4\], we show examples of these tests for two of these datasets: $52^{\circ}$N in May 2007 and $15^{\circ}$S in October 2006, for which @Lellouch2014 measured respective [$\mathrm{CH_4}$]{} abundances of $q_{CH_4} = 1.20 \pm 0.15\%$ and $q_{CH_4} = 0.95 \pm 0.08 \%$ (the nominal value for our retrievals is $q_{CH_4} = 1.48 \pm 0.09\%$ from @Niemann2010). At $52^{\circ}$N, the temperature profile obtained with the methane abundance from @Lellouch2014 does not differ by more than 4 K from the nominal temperature profile. At 15 mbar (where the sensitivity to temperature is maximal in our retrievals), the difference of temperature between these two profiles is 2 K. Even a [$\mathrm{CH_4}$]{} volume mixing ratio as low as 1% yields a temperature only 4 K warmer than the nominal temperature at 15 mbar. At $15^{\circ}$S, the difference of temperature between the nominal retrieval and the retrieval with the methane abundance retrieved by @Lellouch2014 ($q_{CH_4}=0.95\%$), is approximately 9 K on the whole pressure range.\
We performed additional temperature retrievals using CIRS FP4 nadir spectra measured at the same times and latitudes as the two datasets shown in Figure \[fig\_TCH4\]. In FP4 nadir spectra, the methane band $\nu_4$ is visible between $1200~\mathrm{cm^{-1}}$ and $1360~\mathrm{cm^{-1}}$. This spectral feature allows us to probe temperature between 0.1 mbar and 10 mbar, whereas methane rotational lines in the CIRS FP1 nadir spectra generally probe temperature between 6 mbar and 25 mbar. Temperature can thus be measured with both types of retrievals from 6 mbar to 10 mbar. We performed FP4 temperature retrievals with the nominal methane abundance and the abundances measured by @Lellouch2014, as shown in Figure \[fig\_TCH4\]. FP4 temperature retrievals seem less sensitive to changes in the methane volume mixing ratio, as they yield a maximal temperature difference of 3 K at $52^{\circ}$N , and 4 K at $15^{\circ}$S between 6 mbar and 10 mbar. In both cases, FP1 and FP4 temperature retrievals are in better agreement in their common pressure range when the nominal methane abundance ($q_{CH_4}=1.48\%$) is used for both retrievals. This suggests that $q_{CH_4}=1.48\%$ is the best choice, at least in the pressure range covered by both types of temperature retrievals (from 6 mbar to 10 mbar). Changing the abundance of [$\mathrm{CH_4}$]{} in the whole stratosphere seems to induce an error on the temperature measurements between 6 mbar and 10 mbar (up to 9 K at $15^{\circ}$S), which probably affects the temperature at 15 mbar in the FP1 retrievals, because of the vertical resolution of nadir retrievals (represented by the width of the inversion kernels in Fig. \[fig\_cf\]). Consequently, assessing the effects of [$\mathrm{CH_4}$]{} abundance variations on temperature at 15 mbar by changing $q_{CH_4}$ in the whole stratosphere seems to be a very unfavourable test, and the uncertainties on temperature determined by this method are probably overestimated for the FP1 temperature retrievals. Overall, when retrieving temperature from CIRS FP1 nadir spectra with $q_{CH_4}=1\%$ for datasets spanning different times and latitudes, we found temperatures warmer than our nominal temperatures by 2 K to 10 K at 15 mbar, with an average of 5 K. In [@Lellouch2014], authors found that temperature changes by 4-5 K on the whole pressure range when varying $q_{CH_4}$ at $15^{\circ}$S, but they determined temperatures using FP4 nadir and limb data, which do not probe the 15 mbar pressure level.\
![Temperature profiles from CIRS FP1 and FP4 nadir observations at $52^{\circ}$N in May 2007 (top panel) and $15^{\circ}$S in October 2006 (bottom panel), retrieved with the methane abundances measured by @Niemann2010 (nominal value in this study) and @Lellouch2014. In both cases, the nominal value from @Niemann2010 yields a better agreement between the two types of observations.[]{data-label="fig_TCH4"}](Temperature_profiles_52N_0705 "fig:"){width="1\columnwidth"}\
![Temperature profiles from CIRS FP1 and FP4 nadir observations at $52^{\circ}$N in May 2007 (top panel) and $15^{\circ}$S in October 2006 (bottom panel), retrieved with the methane abundances measured by @Niemann2010 (nominal value in this study) and @Lellouch2014. In both cases, the nominal value from @Niemann2010 yields a better agreement between the two types of observations.[]{data-label="fig_TCH4"}](Temperature_profiles_15S_0610 "fig:"){width="1\columnwidth"}\
Results {#sect_res}
=======
![Evolution of temperatures at 6 mbar (120 km) and 15 mbar (85 km) from northern winter (2004) to summer (2017). The length of the markers shows the average size of the field of view of the CIRS FP1 detector. Temperatures exhibit similar strong seasonal changes at both pressure levels, especially at the poles.[]{data-label="fig_ev_saiso"}](carte_2d_6mbar_v2 "fig:"){width="1\columnwidth"}\
![Evolution of temperatures at 6 mbar (120 km) and 15 mbar (85 km) from northern winter (2004) to summer (2017). The length of the markers shows the average size of the field of view of the CIRS FP1 detector. Temperatures exhibit similar strong seasonal changes at both pressure levels, especially at the poles.[]{data-label="fig_ev_saiso"}](carte_2d_15mbar_v2 "fig:"){width="1\columnwidth"}
![Meridional distribution of temperatures at 6 mbar (120 km) and 15 mbar (85 km), for three different seasons: late northern winter (2007, blue triangles), mid-spring (2013, green circles), and near summer solstice (from July 2016 to September 2017, red diamonds). The plain lines are the meridional distributions given by GCM simulations at comparable seasons (see section \[sect\_discu\]). In both observations and model the meridional gradient of temperatures evolves from one season to another at both pressure levels.[]{data-label="fig_var_saiso"}](var_saiso_GCM){width="1\columnwidth"}
Figures \[fig\_ev\_saiso\] and \[fig\_var\_saiso\] show the temperatures measured with Cassini/CIRS far-IR nadir data at 6 mbar (minimal pressure probed by the CIRS far-IR nadir observations) and 15 mbar (pressure level where these observations are the most sensitive). Figure \[fig\_ev\_saiso\] maps the seasonal evolution of temperatures throughout the Cassini mission (from 2004 to 2017, i.e. from mid-northern winter to early summer), while Figure \[fig\_var\_saiso\] is focused on the evolution of the meridional gradient of temperature from one season to another. In both figures, both pressure levels exhibit significant seasonal variations of temperature and follow similar trends. Maximal temperatures are reached near the equator in 2005 (152 K at 6 mbar, 130 K at 15 mbar, at $18^{\circ}$S, at $L_S=300^{\circ}$), while the minimal temperatures are reached at high southern latitudes in autumn (123 K at 6 mbar, 106 K at 15 mbar at $70^{\circ}$S in 2016, at $L_S=79^{\circ}$).\
The maximal seasonal variations of temperature are located at the poles for both pressure levels. At high northern latitudes ($60^\circ$N - $90^\circ$N), at 15 mbar, the temperature increased overall from winter to summer solstice. For instance at $70^{\circ}$N, temperature increased by 10 K from January 2007 to September 2017. At 6 mbar, temperatures at $60^{\circ}$N stayed approximately constant from winter to spring, whereas latitudes poleward from $70^{\circ}$N warmed up. At $85^{\circ}$N, the temperature increased continuously from 125 K in March 2007 to 142 K in September 2017.\
In the meantime, at high southern latitudes ($60^\circ$S - $90^\circ$S), at 6 mbar and 15 mbar, temperatures strongly decreased from southern summer (2007) to late autumn (2016). It is the largest seasonal temperature change we measured in the lower stratosphere. At $70^{\circ}$S, temperature decreased by 24 K at 6 mbar and by 19 K at 15 mbar between January 2007 and June 2016. This decrease seems to be followed by a temperature increase toward winter solstice. At $70^{\circ}$S, temperatures varied by $+8$ K at 6 mbar from June 2016 to April 2017. Temperatures at high southern latitudes began to evolve in November 2010 at 6 mbar, and 2 years later (in August 2012) at 15 mbar.\
Other latitudes experience moderate seasonal temperature variations. At low latitudes (between $30^{\circ}$N and $30^{\circ}$S), temperature decreased overall from 2004 to 2017 at both pressure levels. For instance, at the equator, at 6 mbar temperature decreased by 6 K from 2006 to 2016. At mid-southern latitudes, temperatures stayed constant from summer (2005) to mid-autumn (June 2012 at 6 mbar, and May 2013 at 15 mbar), then they decreased by approximately 10 K from 2012-2013 to 2016. At mid-northern latitudes temperatures increased overall from winter to spring. At $50^{\circ}$N, temperature increased from 139 K to 144 K from 2005 to 2014. In Figure \[fig\_var\_saiso\], at 6 mbar and 15 mbar, the meridional temperature gradient evolves from one season to another. During late northern winter, temperatures were approximately constant from $70^{\circ}$S to $30^{\circ}$N, and then decreased toward the North pole. In mid-spring, temperatures were decreasing from equator to poles. Near the summer solstice, at 15 mbar, the meridional temperature gradient reversed compared to winter (summer temperatures constant in northern and low southern latitudes then decreasing toward the South Pole), while at 6 mbar, temperatures globally decrease from the equator to the South pole and $70^{\circ}$N, then increase slightly between $70^{\circ}$N and $90^{\circ}$N. At 15 mbar, most of these changes in the shape of the temperature distribution occur because of the temperature variations poleward from $60^{\circ}$. At 6 mbar, temperature variations occur mostly in the southern hemisphere at latitudes higher than $40^{\circ}$S, and near the North pole at latitudes higher than $70^{\circ}$N.\
![Temperature variations in the lower stratosphere during the Cassini mission for different latitudes. The blue profiles were measured during northern winter (in 2007). The red profiles were measured in late northern spring (in 2017 for $85^{\circ}$N, in 2016 for the other latitudes). The seasonal temperature variations are observed at most latitudes, and on the whole probed pressure range.[]{data-label="fig_grad_saiso_vert"}](temp_profiles){width="1\columnwidth"}
Figure \[fig\_grad\_saiso\_vert\] shows the first and the last temperature profiles measured with CIRS nadir far-IR data, for several latitudes. As in Fig. \[fig\_ev\_saiso\], the maximal temperature variations are measured at high southern latitudes for all pressure levels. At $70^{\circ}$S, the temperature decreased by 25 K at 10 mbar. Below 10 mbar the seasonal temperature difference decreases rapidly with increasing pressure until it reaches 10 K at 25 mbar, whereas it is nearly constant between 5 mbar and 10 mbar. $85^{\circ}$N also exhibits a decrease of the seasonal temperature gradient below the 10 mbar pressure level, although it is less pronounced than near the South pole. At $45^{\circ}$S, the temperature decreased by approximately 10 K from 2007 to 2016, over the whole probed pressure range. At the equator, the temperature varies by -5 K from 2005 to 2016 at 6 mbar and the amplitude of this variation seems to decrease slightly with increasing pressure until it becomes negligible at 25 mbar. However the amplitude of these variations is in the same range as the uncertainty on temperature due to potential [$\mathrm{CH_4}$]{} variations.\
Discussion {#sect_discu}
==========
Comparison with previous results
--------------------------------
![Comparison of nadir FP1 temperatures with previous studies. *Top left panel:* Comparison between CIRS nadir FP1 (triangles) and CIRS nadir FP4 temperatures at 6 mbar (circles, @Bampasidis2012\[1\], and @Coustenis2016\[2\]) in 2010 (cyan) and 2014 (purple). *Right panel:* Comparison between temperature profiles from CIRS nadir FP1 observations (thick solid lines), CIRS nadir FP4 observations (thin dot-dashed lines, @Coustenis2016\[2\]), and Cassini radio-occultation (thin dashed line, @SchinderFlasarMaroufEtAl2011\[3\]). Our results are in good agreement with CIRS FP4 temperatures, but diverge somewhat from radio-occultation profiles with increasing pressure. *Bottom left panel:* Comparison between temperatures at 15 mbar from our CIRS FP1 nadir measurements (magenta triangles), Cassini radio-occultations in 2006 and 2007 (cyan circles, @SchinderFlasarMaroufEtAl2011 [@Schinder2012], \[3\], \[4\]), and the Huygens/HASI measurement in 2005 (yellow diamond, @Fulchignoni2005, \[5\]).The dashed magenta line shows the potential effect of the [$\mathrm{CH_4}$]{} variations observed by @Lellouch2014. If we take into account this effect, the agreement between our data, the radio-occultations and the HASI measurements is good.[]{data-label="fig_prev_studies"}](comp_prev_studies_v5){width="1\columnwidth"}
Figure \[fig\_prev\_studies\] shows a comparison between our results and previous studies where temperatures have been measured in the lower stratosphere at similar epochs, latitudes and pressure levels. In the top left and right panels, our temperature measurements are compared to results from CIRS FP4 nadir observations [@Bampasidis2012; @Coustenis2016] which probe mainly the 0.1-10 mbar pressure range. In the top left panel, the temperatures measured at 6 mbar by these two types of observations are in good agreement for the two considered epochs (2009-2010 and 2014). We obtain similar meridional gradients with both types of observations, even though FP4 temperatures are obtained from averages of spectra over bins of $10^{\circ}$ of latitude (except at $70^{\circ}$N and $70^{\circ}$S where the bins are $20^{\circ}$ wide in latitude), whereas the average size in latitude of the field of view of the FP1 detector is $20^{\circ}$. It thus seems that the wider latitudinal size of the FP1 field of view has little effect on our temperature measurements. In the right panel, our temperature profiles are compared to two profiles measured by @Coustenis2016 using CIRS FP4 nadir observations (at $50^{\circ}$S in April 2010, and at $70^{\circ}$S in June 2012), and with Cassini radio-occultation measurements from @SchinderFlasarMaroufEtAl2011 [@Schinder2012], which probe the atmosphere from the surface to 0.1 mbar (0 - 300 km). CIRS FP1 and FP4 temperature profiles are in good overall agreement. The profile we measured at $28^{\circ}$S in February 2006 and the corresponding radio-occultation profile are within error bars for pressures lower than 13 mbar, then the difference between them increases up to 8 K at 25 mbar. The bottom left panel of Fig. \[fig\_prev\_studies\] shows the radio-occultation temperatures in 2006 and 2007 compared to CIRS nadir FP1 temperatures at 15 mbar, where their sensitivity to the temperature is maximal. Although the radio-occultation temperatures are systematically higher than the CIRS temperatures by 2 K to 6 K, they follow the same meridional trend. CIRS FP1 temperatures at the equator are also lower than the temperature measured by the HASI instrument at 15 mbar during the Huygens descent in Titan’s atmosphere in 2005. If we take into account the effect of the spatial variations of [$\mathrm{CH_4}$]{} at 15 mbar observed by @Lellouch2014 by decreasing the [$\mathrm{CH_4}$]{} abundance to 1% (the lower limit in @Lellouch2014) in the CIRS FP1 temperature measurements (dashed line in the bottom left panel of Fig. \[fig\_prev\_studies\]), the agreement between the three types of observations is good in the southern hemisphere. The differences between radio-occultation, HASI and CIRS temperatures might also be explained by the difference in vertical resolution. Indeed, nadir observations have a vertical resolution of the order of 50 km, while radio-occultations and HASI observations have respective vertical resolutions of 1 km and 200 m around 15 mbar.\
Effects of Saturn’s eccentricity
--------------------------------
![Temporal evolution of Titan’s lower stratospheric temperatures at the equator ($5^{\circ}$N - $5^{\circ}$S) at 6 mbar (left panel) and 15 mbar (right panel), compared with a simple model of the evolution of the temperature as a function of the distance between Titan and the Sun (green line). The reduced $\chi^2$ between this model and the observations is 0.95 at 6 mbar and 1.07 at 15 mbar. The amplitude of the temperature variations at Titan’s equator throughout the Cassini mission can be explained by the effect of Saturn’s eccentricity.[]{data-label="fig_eccentricity"}](eccentricity_equator_free_T0){width="1\columnwidth"}
Because of Saturn’s orbital eccentricity of 0.0565, the distance between Titan and the Sun varies enough to significantly affect the insolation. For instance, throughout the Cassini mission, the solar flux received at the equator decreased by 19% because of the eccentricity. We build a simple model of the evolution of the temperature $T$ at the equator as a function of the distance between Titan and the Sun. In this model, we assume that the temperature $T$ at the considered pressure level and at a given time depends only on the absorbed solar flux $F$, and we neglect the radiative exchanges between atmospheric layers:
$$\epsilon \sigma T^4 = F$$
where $\epsilon$ is the emissivity of the atmosphere at this pressure level, and $\sigma$ the Stefan-Boltzmann constant. $T$ can thus be defined as a function of the distance $d$ between Titan and the Sun:
$$T^4 = \frac{\alpha L_{\odot}}{16\epsilon\sigma\pi d^2}
\label{eq_T_dist}$$
where $L_\odot$ is the solar luminosity, and $\alpha$ the absorptivity of the atmosphere. If we choose a reference temperature $T_0$ where Titan is at a distance $d_0$ from the Sun, a relation similar to (\[eq\_T\_dist\]) can be written for $T_0$. If we assume $\epsilon$ and $\alpha$ to be constant, $T$ can then be written as:
$$T = T_0 \sqrt{\frac{d_0}{d}}$$
Figure \[fig\_eccentricity\] shows a comparison between this model and the temperatures measured between $5^{\circ}$N and $5^{\circ}$S from 2006 to 2016, at 6 mbar and 15 mbar. We choose $T_0$ as the temperature at the beginning of the observations (December 2005/January 2006) which provides the best fit between our model and the observations while being consistent with the observations at the same epoch ($T_0=151.7$ K at 6 mbar, and $T_0=129$ K at 15 mbar). At 6 mbar, we measure a temperature decrease from 2006 to 2016. This is similar to what has been measured at 4 mbar by @Bezard2018 with CIRS mid-IR observations, whereas their radiative-dynamical model predicts a small temperature maximum around the northern spring equinox (2009). At 15 mbar, equatorial temperatures are mostly constant from 2005 to 2016, with a marginal decrease in 2016. Our model predicts temperature variations of 8 K at 6 mbar and 7 K at 15 mbar from 2006 to 2016. Both predictions are consistent with the measurements and with radiative timescales shorter than one Titan year at 6 mbar and 15 mbar, as in @Bezard2018 where they are respectively equal to 0.024 Titan year and 0.06 Titan year. At both pressure levels, the model captures the magnitude of the temperature change, but does not fully match its timing or shape (especially in 2012-2014), implying that a more sophisticated model is needed. The remaining differences between our model and the temperature measurements could be decreased by adding a temporal lag to our model (2-3 years at 6 mbar and 3-4 years at 15 mbar), but the error bars on the temperature measurements are too large to constrain the lag to a value statistically distinct from zero. Even with this potential lag, the agreement between the model and the temperatures measured at 6 mbar shows that the amplitude of the temporal evolution throughout the Cassini mission may be explained by the effects of Saturn’s eccentricity. At 15 mbar, given the error bars and the lack of further far-IR temperature measurements at the equator in 2016 and 2017, it remains difficult to draw a definitive conclusion about the influence of Saturn’s eccentricity at this pressure level.\
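For concreteness, this scaling can be evaluated in a few lines of code. The sketch below is a minimal illustration of Eq. (\[eq\_T\_dist\]) rewritten as $T = T_0\sqrt{d_0/d}$; the Titan-Sun distances, "measured" temperatures and error bars are hypothetical placeholders (only $T_0=151.7$ K at 6 mbar is taken from the text), so the resulting reduced $\chi^2$ is purely illustrative. With one fitted parameter ($T_0$), the number of degrees of freedom is taken as $N-1$, mirroring the reduced $\chi^2$ quoted in Fig. \[fig\_eccentricity\].

``` python
import numpy as np

def model_T(d, d0, T0):
    """Temperature scaling with Titan-Sun distance d, from T^4 proportional to 1/d^2."""
    return T0 * np.sqrt(d0 / d)

# Hypothetical epochs: distances in AU and 6 mbar temperatures in K (placeholders)
d0 = 9.05                                        # assumed distance at the reference epoch
d = np.array([9.05, 9.3, 9.6, 9.9, 10.05])
T_obs = np.array([151.7, 149.6, 147.2, 145.3, 144.1])
T_err = np.full_like(T_obs, 1.5)                 # assumed 1-sigma uncertainties

T_mod = model_T(d, d0, T0=151.7)
chi2_red = np.sum(((T_obs - T_mod) / T_err) ** 2) / (T_obs.size - 1)
print(f"reduced chi^2 = {chi2_red:.2f}")
```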
Implication for radiative and dynamical processes of the lower stratosphere
---------------------------------------------------------------------------
In Section \[sect\_res\], we showed that in the lower stratosphere, the seasonal evolution of the temperature is maximal at high latitudes, especially at the South Pole. At 15 mbar, the strong cooling of high southern latitudes started in 2012, simultaneously with the increase in [$\mathrm{C_2N_2}$]{}, [$\mathrm{C_4H_2}$]{}, and [$\mathrm{C_3H_4}$]{} abundances measured at the same latitudes and pressure level in @Sylvestre2018. We also show that this cooling affects the atmosphere at least down to the 25 mbar pressure level (altitude of 70 km). The enrichment in these gases and the cooling are consistent with the onset of a subsidence above the South Pole during autumn, as predicted by GCMs [@Newman2011; @Lebonnois2012a], and inferred from previous CIRS observations at higher altitudes [@Teanby2012; @Vinatier2015; @Coustenis2016]. As Titan’s atmospheric circulation transitions from two equator-to-pole cells (with upwelling above the equator and subsidence above the poles) to a single pole-to-pole cell (with a descending branch above the South Pole), this subsidence drags photochemical species created at higher altitudes downward toward the lower stratosphere. @Teanby2017 showed that enrichment in trace gases may be so strong that their cooling effect combined with the insolation decrease may exceed the adiabatic heating between 0.3 mbar and 10 mbar (100 - 250 km). Our observations show that this phenomenon may be at play as deep as 25 mbar.\
We compare retrieved temperature fields with results of simulations from the IPSL 3D-GCM [@Lebonnois2012a] with an updated radiative transfer scheme [@Vatantd'Ollone2017] now based on a flexible *correlated-k* method and up-to-date gas spectroscopic data [@Rothman2013]. It does not take into account the radiative feedback of the enrichment in hazes and trace gases in the polar regions, but it nevertheless appears that there is good agreement in terms of the seasonal cycle between the model and the observations. As shown in Figure \[fig\_var\_saiso\], at 6 mbar the meridional distributions and values of temperature in the model match the observations well. It can be pointed out that in both model and observations there is a noticeable asymmetry between high southern latitudes where the temperature decreases rapidly from the equinox to winter, and high northern latitudes which evolve more slowly from winter to summer. For instance, in both CIRS data and model, between 2007 and 2013 at 6 mbar and $70^{\circ}$N the atmosphere has warmed by only about 2 K, while in the meantime at $70^{\circ}$S it has cooled by about 10-15 K. This is consistent with an increase of radiative timescales at high northern latitudes (due to lower temperatures, @Achterberg2011) which would remain cold for approximately one season even after the return of sunlight. Figure \[fig\_map\_temp\_gcm70N\] shows the temporal evolution of the temperature at $70^{\circ}$N over one Titan year in the lower stratosphere in the GCM simulations and also emphasizes this asymmetry between the ingress and egress of winter at high latitudes. In Figure \[fig\_var\_saiso\], at 15 mbar modeled temperatures underestimate the observations by roughly 5-10 K, most likely due to a lack of infrared coolers such as cloud condensates [@Jennings2015]. However, observations and simulations exhibit similar meridional temperature gradients for the three studied epochs, and similar seasonal temperature evolution. For instance, in 2016-2017 we measured a temperature gradient of -11 K between the North and South Pole, whereas GCM simulations predict a temperature gradient of -12 K. At $70^{\circ}$S, temperature decreases by 10 K between 2007 and 2016-2017 in the GCM and in our observations. Besides, at 15 mbar, the seasonal behaviour remains the same as at 6 mbar, although more damped. Indeed, comparison with GCM results also supports the idea that the seasonal effects due to the variations of insolation are damped with increasing depth in the lower stratosphere and ultimately muted below 25 mbar, as displayed in Figure \[fig\_map\_temp\_gcm70N\]. At lower altitudes the seasonal cycle of temperature at high latitudes is even inverted, with temperatures increasing in winter and decreasing in summer. Indeed at these altitudes, due to the radiative timescales exceeding one Titan year, temperature is no longer sensitive to the seasonal variations of solar forcing, but rather to the interplay of ascending and descending large-scale vertical motions of the pole-to-pole cell, inducing adiabatic heating above the winter pole and cooling above the summer pole, respectively, as previously discussed in @Lebonnois2012a.
Further analysis of the simulations (not presented here) also shows that after 2016, temperatures at high southern latitudes began to increase slightly again at 6 mbar, which is consistent with the observations, whereas at 15 mbar no change in the trend is observed, most likely due to a phase shift of the seasonal cycle between the two altitudes induced by the difference in radiative timescales, which is also illustrated in Figure \[fig\_map\_temp\_gcm70N\].\
![Seasonal evolution of Titan’s lower stratospheric temperatures modeled by the IPSL 3D-GCM at 70$^{\circ}$N - between 5 mbar and 50 mbar, starting at northern spring equinox. In the pressure range probed by the CIRS far-IR observations (from 6 mbar to 25 mbar), there is a strong asymmetry between the rapid temperature changes after autumn equinox ($L_S = 180^{\circ}$) and the slow evolution of the thermal structure after spring equinox ($L_S = 0^{\circ}$). []{data-label="fig_map_temp_gcm70N"}](Map_Temperature_70N){width="1\columnwidth"}
We also show in Figure \[fig\_grad\_saiso\_vert\] that at high southern latitudes, from 6 to 10 mbar seasonal temperature variations are approximately constant with pressure and can be larger than 10 K, whereas they decrease with increasing pressure below 10 mbar. This transition at 10 mbar may be caused by the increase of radiative timescales in the lower stratosphere. @Strobel2010 estimated that the radiative timescale increases from one Titan season at 6 mbar to half a Titan year at 12 mbar. It can thus be expected that this region should be a transition zone between regions of the atmosphere where the response to the seasonal insolation variations is significant and comes with little lag, and regions where it is negligible. However, this transition should be observable at other latitudes such as $45^{\circ}$S, whereas Figure \[fig\_grad\_saiso\_vert\] shows a seasonal gradient constant with pressure at this latitude. Furthermore, in @Bezard2018, the authors show that the method used to estimate radiative timescales in @Strobel2010 tends to overestimate them, and that in their model radiative timescales are less than a Titan season down to the 35 mbar pressure level, which is more consistent with the seasonal variations measured at $45^{\circ}$S.\
The 10 mbar transition can also be caused by the interplay between photochemical, radiative and dynamical processes at high latitudes. Indeed, as photochemical species transported downward by the subsidence above the autumn/winter pole build up and strongly cool the lower atmosphere, the condensation level of species such as HCN, $\mathrm{HC_3N}$, $\mathrm{C_4H_2}$ or $\mathrm{C_6H_6}$ may be shifted upward, toward the 10 mbar level. Hence, below this pressure level, the volume mixing ratios of these gases would rapidly decrease, along with their cooling effect. Many observations, especially during the Cassini mission, have shown that during winter and autumn, polar regions host clouds composed of ices of photochemical species. For instance, the “haystack” feature shown in Fig. \[fig\_spec\] has been studied at both poles in @Coustenis1999 [@Jennings2012; @Jennings2015], and is attributed to a mixture of condensates, possibly of nitrile origin. Moreover, HCN ice has been measured in the southern polar cloud observed by @deKok2014 with Cassini/VIMS observations. $\mathrm{C_6H_6}$ ice has also been detected by @Vinatier2018 in CIRS observations of the South Pole. The condensation curve for [$\mathrm{C_4H_2}$]{} in @Barth2017 is also consistent with the formation of [$\mathrm{C_4H_2}$]{} ice around 10 mbar with the temperatures we measured at $70^{\circ}$S in 2016. These organic ices may also have a cooling effect themselves, as @Bezard2018 showed that at 9 mbar, the nitrile haze measured by @Anderson2011 contributes to the cooling with an intensity comparable to the contribution of gases such as $\mathrm{C_2H_2}$ and $\mathrm{C_2H_6}$.\
Conclusion
==========
In this paper, we analysed all the available nadir far-IR CIRS observations to measure Titan’s lower stratospheric temperatures (6 mbar - 25 mbar) throughout the 13 years of the Cassini mission, from northern winter to summer solstice. In this pressure range, significant temperature changes occur from one season to another. Temperatures evolve moderately at low and mid-latitudes (less than 10 K between 6 and 15 mbar). At the equator, at 6 mbar we measure a temperature decrease mostly due to Saturn’s eccentricity. Seasonal temperature changes are maximal at high latitudes, especially in the southern hemisphere where they reach up to -19 K at $70^{\circ}$S between summer (2007) and late autumn (2016) at 15 mbar. The strong seasonal evolution of high southern latitudes is due to a complex interplay between photochemistry, atmospheric dynamics with the downwelling above the autumn/winter poles, radiative processes with a large contribution of the gases transported toward the lower stratosphere, and possibly condensation due to the cold autumn polar temperatures and strong enrichments in trace gases.\
Recent GCM simulations show good agreement with the observed seasonal variations in this pressure range, even though these simulations do not include coupling with variations of the opacity sources. In particular, at high latitudes the fast decrease of temperatures when entering winter and the slower increase when approaching summer are well reproduced in these simulations.
Acknowledgements {#acknowledgements .unnumbered}
================
This research was funded by the UK Science and Technology Facilities Council (grant number ST/MOO7715/1) and the Cassini project. JVO and SL acknowledge support from the Centre National d’Etudes Spatiales (CNES). GCM simulations have been performed thanks to computation facilities provided by the Grand Équipement National de Calcul Intensif (GENCI) on the *Occigen/CINES* cluster (allocation A0040110391). This research made use of Astropy, a community-developed core Python package for Astronomy, and matplotlib, a Python library for publication-quality graphics [@Hunter:2007].
Appendix. Cassini/CIRS Datasets analysed in this study {#appendix.-cassinicirs-datasets-analysed-in-this-study .unnumbered}
======================================================
Observations Date N Latitude ($^{\circ}$N) FOV ($^{\circ}$)
---------------------------------- -------------- ----- ------------------------ ------------------
CIRS\_00BTI\_FIRNADCMP001\_PRIME 12 Dec. 2004 224 16.4 20.3
CIRS\_003TI\_FIRNADCMP002\_PRIME 15 Feb. 2005 180 -18.7 18.5
CIRS\_005TI\_FIRNADCMP002\_PRIME 31 Mar. 2005 241 -41.1 25.7
CIRS\_005TI\_FIRNADCMP003\_PRIME 01 Apr. 2005 240 47.8 28.5
CIRS\_006TI\_FIRNADCMP002\_PRIME 16 Apr. 2005 178 54.7 29.9
CIRS\_009TI\_COMPMAP002\_PRIME 06 Jun. 2005 184 -89.7 21.1
CIRS\_013TI\_FIRNADCMP003\_PRIME 21 Aug. 2005 192 30.1 15.5
CIRS\_013TI\_FIRNADCMP004\_PRIME 22 Aug. 2005 248 -53.7 25.0
CIRS\_017TI\_FIRNADCMP003\_PRIME 28 Oct. 2005 119 20.1 19.8
CIRS\_019TI\_FIRNADCMP002\_PRIME 26 Dec. 2005 124 -0.0 17.6
CIRS\_020TI\_FIRNADCMP002\_PRIME 14 Jan. 2006 107 19.5 19.7
CIRS\_021TI\_FIRNADCMP002\_PRIME 27 Feb. 2006 213 -30.2 22.5
CIRS\_022TI\_FIRNADCMP003\_PRIME 18 Mar. 2006 401 -0.4 18.4
CIRS\_022TI\_FIRNADCMP008\_PRIME 19 Mar. 2006 83 25.3 24.1
CIRS\_023TI\_FIRNADCMP002\_PRIME 01 May 2006 215 -35.0 27.8
CIRS\_024TI\_FIRNADCMP003\_PRIME 19 May 2006 350 -15.5 21.6
CIRS\_025TI\_FIRNADCMP002\_PRIME 02 Jul. 2006 307 25.1 21.7
CIRS\_025TI\_FIRNADCMP003\_PRIME 01 Jul. 2006 190 39.7 25.6
CIRS\_028TI\_FIRNADCMP003\_PRIME 07 Sep. 2006 350 29.7 19.7
CIRS\_029TI\_FIRNADCMP003\_PRIME 23 Sep. 2006 312 9.5 19.4
CIRS\_030TI\_FIRNADCMP002\_PRIME 10 Oct. 2006 340 -59.1 23.4
CIRS\_030TI\_FIRNADCMP003\_PRIME 09 Oct. 2006 286 33.9 19.9
CIRS\_031TI\_COMPMAP001\_VIMS 25 Oct. 2006 160 -14.5 16.3
CIRS\_036TI\_FIRNADCMP002\_PRIME 28 Dec. 2006 136 -89.1 12.6
CIRS\_036TI\_FIRNADCMP003\_PRIME 27 Dec. 2006 321 78.6 21.0
CIRS\_037TI\_FIRNADCMP001\_PRIME 12 Jan. 2007 161 75.2 19.1
CIRS\_037TI\_FIRNADCMP002\_PRIME 13 Jan. 2007 107 -70.3 20.6
CIRS\_038TI\_FIRNADCMP001\_PRIME 28 Jan. 2007 254 86.3 16.7
CIRS\_038TI\_FIRNADCMP002\_PRIME 29 Jan. 2007 254 -39.7 22.0
CIRS\_039TI\_FIRNADCMP002\_PRIME 22 Feb. 2007 23 69.9 21.2
CIRS\_040TI\_FIRNADCMP001\_PRIME 09 Mar. 2007 159 -49.2 21.1
CIRS\_040TI\_FIRNADCMP002\_PRIME 10 Mar. 2007 109 88.8 13.3
CIRS\_041TI\_FIRNADCMP002\_PRIME 26 Mar. 2007 102 61.2 19.3
CIRS\_042TI\_FIRNADCMP001\_PRIME 10 Apr. 2007 103 -60.8 26.0
CIRS\_042TI\_FIRNADCMP002\_PRIME 11 Apr. 2007 272 71.5 22.6
CIRS\_043TI\_FIRNADCMP001\_PRIME 26 Apr. 2007 263 -51.4 24.7
CIRS\_043TI\_FIRNADCMP002\_PRIME 27 Apr. 2007 104 77.1 20.0
CIRS\_044TI\_FIRNADCMP002\_PRIME 13 May 2007 104 -0.5 18.8
CIRS\_045TI\_FIRNADCMP001\_PRIME 28 May 2007 231 -22.3 22.6
CIRS\_045TI\_FIRNADCMP002\_PRIME 29 May 2007 346 52.4 29.5
CIRS\_046TI\_FIRNADCMP001\_PRIME 13 Jun. 2007 60 17.6 28.6
CIRS\_046TI\_FIRNADCMP002\_PRIME 14 Jun. 2007 102 -20.8 19.0
CIRS\_047TI\_FIRNADCMP001\_PRIME 29 Jun. 2007 204 9.8 23.2
CIRS\_047TI\_FIRNADCMP002\_PRIME 30 Jun. 2007 238 20.1 23.7
CIRS\_048TI\_FIRNADCMP001\_PRIME 18 Jul. 2007 96 -34.8 31.4
CIRS\_048TI\_FIRNADCMP002\_PRIME 19 Jul. 2007 260 49.5 35.8
CIRS\_050TI\_FIRNADCMP001\_PRIME 01 Oct. 2007 144 -10.1 23.8
CIRS\_050TI\_FIRNADCMP002\_PRIME 02 Oct. 2007 106 29.9 19.7
CIRS\_052TI\_FIRNADCMP002\_PRIME 19 Nov. 2007 272 40.3 26.5
CIRS\_053TI\_FIRNADCMP001\_PRIME 04 Dec. 2007 223 -40.2 25.8
CIRS\_053TI\_FIRNADCMP002\_PRIME 05 Dec. 2007 102 59.4 28.3
CIRS\_054TI\_FIRNADCMP002\_PRIME 21 Dec. 2007 107 60.4 21.1
CIRS\_055TI\_FIRNADCMP001\_PRIME 05 Jan. 2008 190 18.7 30.5
CIRS\_055TI\_FIRNADCMP002\_PRIME 06 Jan. 2008 284 44.6 22.2
CIRS\_059TI\_FIRNADCMP001\_PRIME 22 Feb. 2008 172 -24.9 20.7
CIRS\_059TI\_FIRNADCMP002\_PRIME 23 Feb. 2008 98 17.1 20.0
CIRS\_062TI\_FIRNADCMP002\_PRIME 25 Mar. 2008 115 59.3 17.1
CIRS\_067TI\_FIRNADCMP002\_PRIME 12 May 2008 286 29.5 21.0
CIRS\_069TI\_FIRNADCMP001\_PRIME 27 May 2008 112 -44.6 27.3
CIRS\_069TI\_FIRNADCMP002\_PRIME 28 May 2008 112 9.5 19.3
CIRS\_093TI\_FIRNADCMP002\_PRIME 20 Nov. 2008 161 43.7 21.1
CIRS\_095TI\_FIRNADCMP001\_PRIME 05 Dec. 2008 213 -14.0 20.7
CIRS\_097TI\_FIRNADCMP001\_PRIME 20 Dec. 2008 231 -10.9 23.7
CIRS\_106TI\_FIRNADCMP001\_PRIME 26 Mar. 2009 165 -60.3 19.2
CIRS\_107TI\_FIRNADCMP002\_PRIME 27 Mar. 2009 164 33.5 30.4
CIRS\_110TI\_FIRNADCMP001\_PRIME 06 May 2009 282 -68.1 25.7
CIRS\_111TI\_FIRNADCMP002\_PRIME 22 May 2009 168 -27.1 23.1
CIRS\_112TI\_FIRNADCMP001\_PRIME 06 Jun. 2009 218 48.7 21.0
CIRS\_112TI\_FIRNADCMP002\_PRIME 07 Jun. 2009 274 -58.9 20.2
CIRS\_114TI\_FIRNADCMP001\_PRIME 09 Jul. 2009 164 -71.4 25.4
CIRS\_115TI\_FIRNADCMP001\_PRIME 24 Jul. 2009 146 50.7 20.1
CIRS\_119TI\_FIRNADCMP002\_PRIME 12 Oct. 2009 166 0.4 18.3
CIRS\_122TI\_FIRNADCMP001\_PRIME 11 Dec. 2009 212 39.8 24.7
CIRS\_123TI\_FIRNADCMP002\_PRIME 28 Dec. 2009 186 -46.1 22.3
CIRS\_124TI\_FIRNADCMP002\_PRIME 13 Jan. 2010 272 -1.2 19.0
CIRS\_125TI\_FIRNADCMP001\_PRIME 28 Jan. 2010 156 39.9 27.5
CIRS\_125TI\_FIRNADCMP002\_PRIME 29 Jan. 2010 280 -44.9 27.3
CIRS\_129TI\_FIRNADCMP001\_PRIME 05 Apr. 2010 119 -45.1 28.2
CIRS\_131TI\_FIRNADCMP001\_PRIME 19 May 2010 188 -30.0 22.1
CIRS\_131TI\_FIRNADCMP002\_PRIME 20 May 2010 229 -19.8 21.5
CIRS\_132TI\_FIRNADCMP002\_PRIME 05 Jun. 2010 167 49.4 27.4
CIRS\_133TI\_FIRNADCMP001\_PRIME 20 Jun. 2010 187 -49.7 36.1
CIRS\_134TI\_FIRNADCMP001\_PRIME 06 Jul. 2010 251 -10.0 20.0
CIRS\_138TI\_FIRNADCMP001\_PRIME 24 Sep. 2010 190 -30.1 21.2
CIRS\_139TI\_COMPMAP001\_PRIME\* 14 Oct. 2010 132 -70.9 20.6
CIRS\_139TI\_COMPMAP001\_PRIME\* 14 Oct. 2010 108 -53.8 16.7
CIRS\_148TI\_FIRNADCMP001\_PRIME 08 May 2011 200 -10.0 18.3
CIRS\_153TI\_FIRNADCMP001\_PRIME 11 Sep. 2011 227 9.9 19.0
CIRS\_158TI\_FIRNADCMP501\_PRIME 13 Dec. 2011 369 -29.9 24.7
CIRS\_159TI\_FIRNADCMP001\_PRIME 02 Jan. 2012 275 -42.2 23.7
CIRS\_160TI\_FIRNADCMP001\_PRIME 29 Jan. 2012 322 -40.0 21.7
CIRS\_160TI\_FIRNADCMP002\_PRIME 30 Jan. 2012 280 -0.2 18.3
CIRS\_161TI\_FIRNADCMP001\_PRIME 18 Feb. 2012 121 9.9 18.4
CIRS\_161TI\_FIRNADCMP002\_PRIME 19 Feb. 2012 89 -15.0 17.3
CIRS\_166TI\_FIRNADCMP001\_PRIME 22 May 2012 318 -19.9 19.9
CIRS\_167TI\_FIRNADCMP002\_PRIME 07 Jun. 2012 293 -45.4 21.7
CIRS\_169TI\_FIRNADCMP001\_PRIME 24 Jul. 2012 258 -9.7 20.7
CIRS\_172TI\_FIRNADCMP001\_PRIME 26 Sep. 2012 282 44.9 18.5
CIRS\_172TI\_FIRNADCMP002\_PRIME 26 Sep. 2012 270 -70.4 23.2
CIRS\_174TI\_FIRNADCMP002\_PRIME 13 Nov. 2012 298 -71.8 21.8
CIRS\_175TI\_FIRNADCMP002\_PRIME 29 Nov. 2012 299 -59.9 19.3
CIRS\_185TI\_FIRNADCMP001\_PRIME 05 Apr. 2013 244 15.0 20.1
CIRS\_185TI\_FIRNADCMP002\_PRIME 06 Apr. 2013 303 -88.9 16.8
CIRS\_190TI\_FIRNADCMP001\_PRIME 23 May 2013 224 -0.2 25.6
CIRS\_190TI\_FIRNADCMP002\_PRIME 24 May 2013 298 -45.0 20.0
CIRS\_194TI\_FIRNADCMP001\_PRIME 10 Jul. 2013 186 30.0 19.7
CIRS\_195TI\_FIRNADCMP001\_PRIME 25 Jul. 2013 186 19.6 24.5
CIRS\_197TI\_FIRNADCMP001\_PRIME 11 Sep. 2013 330 60.5 19.4
CIRS\_198TI\_FIRNADCMP001\_PRIME 13 Oct. 2013 187 88.9 8.7
CIRS\_198TI\_FIRNADCMP002\_PRIME 14 Oct. 2013 306 -69.8 24.0
CIRS\_199TI\_FIRNADCMP001\_PRIME 30 Nov. 2013 329 68.4 23.9
CIRS\_200TI\_FIRNADCMP001\_PRIME 01 Jan. 2014 187 49.9 19.6
CIRS\_200TI\_FIRNADCMP002\_PRIME 02 Jan. 2014 210 -59.8 21.3
CIRS\_201TI\_FIRNADCMP001\_PRIME 02 Feb. 2014 329 19.9 26.8
CIRS\_201TI\_FIRNADCMP002\_PRIME 03 Feb. 2014 234 -39.6 20.9
CIRS\_203TI\_FIRNADCMP001\_PRIME 07 Apr. 2014 187 75.0 18.0
CIRS\_203TI\_FIRNADCMP002\_PRIME 07 Apr. 2014 239 0.5 27.5
CIRS\_204TI\_FIRNADCMP002\_PRIME 18 May 2014 199 0.4 27.0
CIRS\_205TI\_FIRNADCMP001\_PRIME 18 Jun. 2014 144 -45.1 20.5
CIRS\_205TI\_FIRNADCMP002\_PRIME 18 Jun. 2014 161 30.3 19.1
CIRS\_206TI\_FIRNADCMP001\_PRIME 19 Jul. 2014 181 -50.3 17.8
CIRS\_206TI\_FIRNADCMP002\_PRIME 20 Jul. 2014 161 30.6 18.4
CIRS\_207TI\_FIRNADCMP001\_PRIME 20 Aug. 2014 179 -70.0 17.8
CIRS\_207TI\_FIRNADCMP002\_PRIME 21 Aug. 2014 163 79.7 17.6
CIRS\_208TI\_FIRNADCMP001\_PRIME 21 Sep. 2014 329 -80.0 15.6
CIRS\_208TI\_FIRNADCMP002\_PRIME 22 Sep. 2014 175 60.5 17.8
CIRS\_209TI\_FIRNADCMP001\_PRIME 23 Oct. 2014 181 -35.2 17.7
CIRS\_209TI\_FIRNADCMP002\_PRIME 24 Oct. 2014 233 50.5 18.5
CIRS\_210TI\_FIRNADCMP001\_PRIME 10 Dec. 2014 329 -70.3 25.2
CIRS\_210TI\_FIRNADCMP002\_PRIME 11 Dec. 2014 237 -19.6 27.6
CIRS\_211TI\_FIRNADCMP001\_PRIME 11 Jan. 2015 225 19.6 25.0
CIRS\_211TI\_FIRNADCMP002\_PRIME 12 Jan. 2015 258 40.0 19.3
CIRS\_212TI\_FIRNADCMP002\_PRIME 13 Feb. 2015 257 -40.0 30.1
CIRS\_213TI\_FIRNADCMP001\_PRIME 16 Mar. 2015 187 -31.6 19.6
CIRS\_213TI\_FIRNADCMP002\_PRIME 16 Mar. 2015 258 23.4 20.5
CIRS\_215TI\_FIRNADCMP001\_PRIME 07 May 2015 250 -50.0 31.0
CIRS\_215TI\_FIRNADCMP002\_PRIME 08 May 2015 232 -30.0 21.7
CIRS\_218TI\_FIRNADCMP001\_PRIME 06 Jul. 2015 249 -20.0 19.9
CIRS\_218TI\_FIRNADCMP002\_PRIME 07 Jul. 2015 232 -40.0 25.2
CIRS\_222TI\_FIRNADCMP001\_PRIME 28 Sep. 2015 125 30.0 21.7
CIRS\_222TI\_FIRNADCMP002\_PRIME 29 Sep. 2015 233 -0.1 18.6
CIRS\_230TI\_FIRNADCMP001\_PRIME 15 Jan. 2016 282 -15.0 19.5
CIRS\_231TI\_FIRNADCMP001\_PRIME 31 Jan. 2016 254 15.0 19.6
CIRS\_231TI\_FIRNADCMP002\_PRIME 01 Feb. 2016 236 0.4 18.9
CIRS\_232TI\_FIRNADCMP001\_PRIME 16 Feb. 2016 249 -50.2 24.5
CIRS\_232TI\_FIRNADCMP002\_PRIME 17 Feb. 2016 92 -19.8 21.5
CIRS\_234TI\_FIRNADCMP001\_PRIME 04 Apr. 2016 328 19.8 24.7
CIRS\_235TI\_FIRNADCMP001\_PRIME 06 May 2016 163 -60.0 19.7
CIRS\_235TI\_FIRNADCMP002\_PRIME 07 May 2016 221 15.7 20.1
CIRS\_236TI\_FIRNADCMP001\_PRIME 07 Jun. 2016 88 -70.5 20.5
CIRS\_236TI\_FIRNADCMP002\_PRIME 07 Jun. 2016 238 60.8 20.0
CIRS\_238TI\_FIRNADCMP002\_PRIME 25 Jul. 2016 220 15.4 20.5
CIRS\_248TI\_FIRNADCMP001\_PRIME 13 Nov. 2016 185 -88.9 18.3
CIRS\_248TI\_FIRNADCMP002\_PRIME 14 Nov. 2016 186 30.3 17.4
CIRS\_250TI\_FIRNADCMP002\_PRIME 30 Nov. 2016 219 -19.8 28.4
CIRS\_259TI\_COMPMAP001\_PIE 01 Feb. 2017 302 -69.0 20.6
CIRS\_270TI\_FIRNADCMP001\_PRIME 21 Apr. 2017 166 -74.7 25.4
CIRS\_283TI\_COMPMAP001\_PRIME\* 10 Jul. 2017 114 60.0 26.5
CIRS\_283TI\_COMPMAP001\_PRIME\* 10 Jul. 2017 134 67.5 24.7
CIRS\_287TI\_COMPMAP001\_PIE 11 Aug. 2017 305 88.9 9.3
CIRS\_288TI\_COMPMAP002\_PIE 11 Aug. 2017 269 66.7 23.7
CIRS\_292TI\_COMPMAP001\_PRIME 12 Sep. 2017 192 70.4 19.2
: \[table\_obs\]Far-IR CIRS datasets presented in this study. N stands for the number of spectra measured during the acquisition. FOV is the field of view. The asterisk denotes datasets where two different latitudes were observed.
References {#references .unnumbered}
==========
---
abstract: 'A numerical recipe is given for obtaining the density image of an initially compact quantum mechanical wavefunction that has expanded by a large but finite factor under free flight. The recipe given avoids the memory storage problems that plague this type of calculation by reducing the problem to the sum of a number of fast Fourier transforms carried out on the relatively small initial lattice. The final expanded state is given exactly on a coarser magnified grid with the same number of points as the initial state. An important application of this technique is the simulation of measured time-of-flight images in ultracold atom experiments, especially when the initial clouds contain superfluid defects. It is shown that such a finite-time expansion, rather than a far-field approximation is essential to correctly predict images of defect-laden clouds, even for long flight times. Examples shown are: an expanding quasicondensate with soliton defects and a matter-wave interferometer in 3D.'
address: 'Institute of Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland'
author:
- 'P. Deuar'
bibliography:
- 'TOF.bib'
title: 'A tractable prescription for large-scale free flight expansion of wavefunctions'
---
Discrete Fourier transform, Ultracold atoms, Free flight evolution, Time of flight imaging, Far-field image, Solitons, Wavefunction, Classical field
02.60.Cb, 02.70.-c, 67.85.-d, 03.75.Lm, 03.75.-b
Introduction
============
The free flight expansion of a quantum wavefunction, though physically very simple, is often a troublesome computational problem if the state that is required is not quite yet in the far field regime. The snag is that a computational lattice that both resolves the small initial cloud and encompasses the large expanded cloud can be prohibitively large. Here, it will be described how to overcome this while still using standard discrete fast Fourier transform (FFT) tools.
For example, this is commonly desired when simulating experiments in ultracold atoms. A ubiquitous experimental procedure in this field is the release of the atoms from a trap and the subsequent observation of the density of a strongly expanded cloud. Given that the imaged expanded cloud is usually much larger than the initial one pre-release, the observed expanded atom density corresponds approximately to the velocity distribution in the initial cloud. More precisely, it corresponds to the velocity distribution that is formed early on after release, when the interatomic interaction energy has been converted into kinetic energy. This is the picture that is often used to interpret the data.
This interpretation assumes that the detection is occurring in the far field where all structure is large compared to the initial cloud. However, in practice this is often not a good enough approximation, particularly if one is interested in fine structure inside the atomic cloud, such as defects or interparticle correlations. The reality is that the expansion is usually by a factor of tens or hundreds, so that interesting features such as defects or correlations that are of the order of 10% or 1% of the initial cloud in size have not yet attained a velocity profile at the time of detection. They are already distorted from their spatial profile in the initial cloud, but their shape has not yet stabilized to its far field form. Some examples where a long but not quite far-field expansion occurs include the interference pattern generated after release of a pair of elongated clouds [@Gring12; @AduSmith13], the study of Hanbury Brown-Twiss correlations in expanding clouds [@Perrin12; @Gawryluk15] and two-particle correlations in a halo of supersonically scattered atoms [@Jaskula10; @Kheruntsyan12].
The basic numerical task here is to predict the detected density image based on whatever model we are using for the atomic field. For excited or thermal gases an ensemble of classical field [@Kagan97; @Davis01; @Goral01; @Brewczyk07; @Proukakis08; @Blakie08; @book13] or truncated Wigner wavefunctions [@Steel98; @Sinatra02; @Polkovnikov10; @Martin10a] are often used. The straightforward approach is to place the whole field $\Psi({{\mathbf{x}}},t)$ from the outset in an x-space large enough to accommodate the whole expansion. However, it is rarely technically feasible to carry out the entire expansion by this method in three dimensions despite the seemingly trivial physics. A good description of the initial state in a three-dimensional lattice can easily require ${{\mathcal{O}}}(10^5)$ lattice points or more, and an expansion by a factor of 10–100 in each direction would lead to $10^8 - 10^{11}$ lattice points. This is either intractable or impractical on a simple computer, and even more so for studies of defect statistics or correlations which require surveys of hundreds or thousands of realizations.
Barring access to supercomputing resources, a standard resort in this case is to make a somewhat unsavory compromise: Simulate the expansion as far in time as the computer allows, and assume that the neglected later changes are not qualitatively important. It will be shown here how to avoid this while still using standard discrete FFT tools on a simple computer.
In Sec. \[MATTER\] the basics of the problem are described, and in Sec. \[DEFECTS\] an estimate is made of the time regime over which a careful exact expansion of clouds with defects is necessary. Sec. \[1DEX\] demonstrates this with an example. The numerical difficulty is examined in more detail in Sec. \[NUM\], and the solution derived in Sec. \[DERIV\]. The paper concludes in Sec. \[DISCUSSION\] with some discussion of practicalities and applications.
The prescription that constitutes the main result is briefly given in a stand-alone form in Sec. \[RESULT\].
The matter at hand {#MATTER}
==================
The aim is to calculate what is actually measured, the spatial density distribution at the detector, $\rho({{\mathbf{r}}},t_{\rm final})$. We assume that just before the trap is switched off at $t=0$, the trapped state is described by a complex field $\Psi_0({{\mathbf{r}}})$ that has the form of a single-particle wavefunction.
Conversion phase {#CONVERSION}
----------------
Typically the expansion can be considered as consisting of two phases: An initial “conversion” phase during which the interaction energy between the atoms is converted to kinetic energy, and later free flight of the atoms. Since the interaction energy per particle is proportional to the density, an expansion in three-dimensional space by a factor of two in size will reduce this interaction energy per particle by a factor of eight. This initial expansion can be done in a straightforward way until interactions are diluted away to become negligible. One just takes a computational lattice ${{\overline{x}}}$ in x-space that is 2–4 times wider than the initial cloud $\Psi_0$, and evolves on that. In ultracold atoms, the workhorse Gross-Pitaevskii Equation (GPE) is typically used – see the example in Sec. \[3DEX\].
The end result of this phase (at time $t_s$, say) is that we have a partly expanded wavefunction $\Psi({{\mathbf{r}}},t_s)$. Numerically, it is described as a table of complex numbers $\Psi_{{{\mathbf{n}}}}$ indexed by the set of integers ${{\mathbf{n}}} =\{n_1,\dots,n_d\}$ in $d$-dimensional space, that enumerate the points on the numerical lattice. The lattice spacings are $\Delta x_j = L_j/M_j$ where $L_j$ is the length of the box in the $j$th direction, and $M_j$ the corresponding number of lattice points, so that $n_j = 0,1,\dots,(M_j-1)$. The lattice positions are $$x_j = a_j + \Delta x_j\, n_j
\label{xj}$$ with offsets $a_j$, i.e. ${{\mathbf{x}}} = {{\mathbf{a}}}+\Delta{{\mathbf{x}}}\cdot{{\mathbf{n}}}$.
Note that the wavefunctions $\Psi({{\mathbf{r}}})$ are not in general the complete quantum many-particle wavefunction unless the particles are non-interacting. For interacting particles, one usually works with $\Psi$ in some kind of c-field approximation [@Brewczyk07; @Proukakis08; @Blakie08; @book13; @Sinatra02; @Polkovnikov10].
Lattice notation
----------------
Several numerical lattices will appear in what follows. The following notation will be used:
- Quantities with a tilde, such as ${{\widetilde{\Psi}}}({{\mathbf{k}}})$, are in k-space.
- Bold quantities are vectors in $d$ dimensions (usually $d=3$), with elements indexed by $j$ as in ${{\mathbf{x}}}=\{x_1,\dots,x_j\dots, x_d\}$.
- Undecorated quantities, such as $\Psi({{\mathbf{x}}})$ denote the lattice used to represent the starting state at $t_s$. This has a manageable number of points, $M$.
- Barred quantities, such as ${{\overline{\Psi}}}({{\overline{{{\mathbf{x}}}}}})$ will be on a magnified numerical lattice ${{\overline{{{\mathbf{x}}}}}}$ that can describe the expanded state, but is too coarse to describe the starting state at $t_s$.
- Underlined quantities such as ${{\underline{\Psi}}}({{\underline{{{\mathbf{x}}}}}})$ will be used to denote a sufficiently huge lattice that both encompasses the expanded state at $t_{\rm final}$ and resolves the starting state at $t_s$, when this lattice is different from the starting undecorated one.
- The position coordinate ${{\mathbf{r}}}$ is a continuum quantity, as opposed to ${{\mathbf{x}}}$ which are corresponding lattice positions. Similarly, $\bm{\kappa}$ is a continuum wavevector, while ${{\mathbf{k}}}$ is discretized.
Free flight into the far field {#FREEFLIGHT}
------------------------------
The remaining evolution after $t_s$, the “starting time”, is just free flight. Each particle of momentum $\hbar\bm{\kappa}$ flies a distance $\hbar\bm{\kappa}\,t_{\rm flight}/m$, where the flight time is $$t_{\rm flight} = t_{\rm final} - t_s.$$ Then with a far field assumption, i.e. that the initial starting position at $t_s$ is irrelevant because they have flown so far, the position of a particle is $${{\mathbf{r}}} = \frac{\hbar\bm{\kappa}\, t_{\rm flight}}{m}.$$ An estimate for the final density can then be obtained from ${{\widetilde{\Psi}}}(\bm{\kappa},t_s)$, the momentum-space wavefunction at the end of the conversion phase. It is: $$\rho_{\rm ff}({{\mathbf{r}}}) = |\Psi_{\rm ff}({{\mathbf{r}}})|^2 = \left(\frac{m}{\hbar t_{\rm flight}}\right)^{d}\left|{{\widetilde{\Psi}}}\!\left(\frac{m{{\mathbf{r}}}}{\hbar t_{\rm flight}},t_s\right)\right|^2.
\label{farfieldk}$$ The prefactor is for normalization purposes, so that $\int d^d\bm{\kappa}\, |{{\widetilde{\Psi}}}(\bm{\kappa})|^2 = \int d^d{{\mathbf{r}}}\, |\Psi_{\rm ff}({{\mathbf{r}}})|^2$. Notably, this discards any phase information. However, the usual imaging in ultracold atom experiments is insensitive to that.
The starting momentum wavefunction ${{\widetilde{\Psi}}}(\bm{\kappa})$ at $t_s$ is obtained with a norm preserving Fourier transform: $${{\widetilde{\Psi}}}(\bm{\kappa}) = \frac{1}{(2\pi)^{d/2}}\int d^d{{\mathbf{r}}}\; e^{-i\bm{\kappa}\cdot{{\mathbf{r}}}}\,\Psi({{\mathbf{r}}}).
\label{FT}$$ Numerically, the conversion is best made with a discrete Fourier transform (DFT). The DFT of a field $Q$ is $${{\widetilde{Q}}}_{{{\widetilde{{{\mathbf{m}}}}}}} = {\rm DFT}_{{{\widetilde{{{\mathbf{m}}}}}}}\left[Q\right] = \sum_{{{\mathbf{n}}}} Q_{{{\mathbf{n}}}}\,\exp\left[-i\sum_{j=1}^{d}\frac{2\pi\, n_j\, {{\widetilde{m}}}_j}{M_j}\right]
\label{DFT}$$ with indices ${{\widetilde{m}}}_j=0,1,\dots,(M_j-1)$ labeling the position on the k-space lattice which has spacing $\Delta k_j = 2\pi/L_j$. The sum is over the whole ${{\mathbf{n}}}$ range.
In what follows, we will always be using the physical free-space wavevectors ${{\mathbf{k}}}_{{{\widetilde{{{\mathbf{m}}}}}}}$ ordered as: $$k_j({{\widetilde{m}}}_j) = \Delta k_j\, {{\widetilde{l}}}_j = \Delta k_j \begin{cases} {{\widetilde{m}}}_j & {{\widetilde{m}}}_j<M_j/2\\ {{\widetilde{m}}}_j-M_j & {{\widetilde{m}}}_j\ge M_j/2. \end{cases}
\label{kj}$$ The integer multipliers can also be written as ${{\widetilde{l}}}_j = {\rm mod}\left[{{\widetilde{m}}}_j+\frac{1}{2}M_j\,,\, M_j\right]-\frac{1}{2}M_j$. For simple transformations such as (\[FT\]) and (\[Psiwtm\]), a set of monotonically ordered non-negative wavevectors ${{\widetilde{{{\mathbf{m}}}}}}\cdot\Delta{{\mathbf{k}}}$ is equivalent operationally to (\[kj\]) because $\Delta k_j M_j(x_j-a_j)$ is an integer multiple of $2\pi$. However, such equivalence no longer holds for calculating the kinetic energy or upon changing the lattice offset ${{\mathbf{a}}}$, both of which will be needed for expansion.
Using (\[FT\]) and the DFT (\[DFT\]), as well as taking care of a possible offset in (\[xj\]), the discrete momentum wavefunction at $t_s$ is $${{\widetilde{\Psi}}}_{{{\widetilde{{{\mathbf{m}}}}}}}(t_s) = \frac{\Delta V}{(2\pi)^{d/2}}\; e^{-i{{\mathbf{k}}}_{{{\widetilde{{{\mathbf{m}}}}}}}\cdot{{\mathbf{a}}}}\; {\rm DFT}_{{{\widetilde{{{\mathbf{m}}}}}}}\!\left[\Psi(t_s)\right]
\label{Psiwtm}$$ with lattice point volume $\Delta V=\prod_j\Delta x_j$. This is readily implemented using standard fast Fourier transform (FFT) libraries [@FFTW05; @FFTW98].
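As a concrete illustration, the following 1D sketch (in units $\hbar=m=1$) evaluates (\[Psiwtm\]) with `numpy`, whose `fftfreq` routine returns exactly the wavevector ordering of (\[kj\]), and then forms the far-field density estimate (\[farfieldk\]). The Gaussian starting state and all numerical values are placeholders chosen only for illustration.

``` python
import numpy as np

# Starting lattice (eq. xj) and a placeholder converted state at t_s
M, dx, a = 256, 0.1, -12.8
n = np.arange(M)
x = a + dx * n
psi = np.exp(-x**2 / 2) * np.exp(2j * x)      # toy wavefunction, hbar = m = 1

# Wavevectors in the free-space ordering of eq. (kj)
k = 2 * np.pi * np.fft.fftfreq(M, d=dx)

# Discrete momentum wavefunction, eq. (Psiwtm), with d = 1
psik = dx / np.sqrt(2 * np.pi) * np.exp(-1j * k * a) * np.fft.fft(psi)

# Far-field estimate, eq. (farfieldk): a particle with wavevector k is found
# near r = k * t_flight, with density |psik|^2 / t_flight in these units
t_flight = 50.0
r = k * t_flight
rho_ff = np.abs(psik)**2 / t_flight
order = np.argsort(r)                         # reorder for plotting against r
r, rho_ff = r[order], rho_ff[order]
```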
In most cases in the literature, the short initial expansion to $t_s$ and conversion (\[farfieldk\]) to obtain the detected density is all that is done. This is fine provided that we are only interested in momentum differences $\delta{{\mathbf{k}}}$ much larger than those corresponding to the width $W_s$ of the converted cloud at $t_s$. That is, when $|\delta {{\mathbf{k}}}| \gg
m W_s/\hbar t_{\rm flight}$. Or, alternatively, that we are only interested in spatial resolutions $\gg W_s$ in the final expanded cloud.
Free flight without a far field assumption {#KINETIC}
------------------------------------------
To avoid making the rather uncontrolled far field assumption (\[farfieldk\]), and get results for a well defined final time $t_{\rm final}$, consider first that in principle, the free flight evolution of the wavefunction in k-space is straightforward: $${{\widetilde{\Psi}}}({{\mathbf{k}}},t_{\rm final}) = {{\widetilde{\Psi}}}({{\mathbf{k}}},t_s)\,\exp\left[-\frac{i\hbar |{{\mathbf{k}}}|^2\, t_{\rm flight}}{2m}\right].
\label{free0}$$ In principle, all one then needs to obtain $\rho({{\mathbf{r}}},t_{\rm final})$ is to inverse Fourier transform ${{\widetilde{\Psi}}}({{\mathbf{k}}},t_{\rm final})$ back into x-space. Generally: $$\Psi({{\mathbf{r}}},t_{\rm final}) = \frac{1}{(2\pi)^{d/2}}\int d^d{{\mathbf{k}}}\; e^{i{{\mathbf{k}}}\cdot{{\mathbf{r}}}}\, {{\widetilde{\Psi}}}({{\mathbf{k}}},t_{\rm final}).
\label{iFT}$$ The discrete implementation like in (\[DFT\]) and (\[Psiwtm\]) is $$Q_{{{\mathbf{n}}}} = {\rm DFT}^{-1}_{{{\mathbf{n}}}}\left[{{\widetilde{Q}}}\right] = \frac{1}{M}\sum_{{{\widetilde{{{\mathbf{m}}}}}}} {{\widetilde{Q}}}_{{{\widetilde{{{\mathbf{m}}}}}}}\,\exp\left[i\sum_{j=1}^{d}\frac{2\pi\, n_j\, {{\widetilde{m}}}_j}{M_j}\right].
\label{iDFT}$$ $M=\prod_jM_j$ is the overall lattice size. With volume $V=\prod_jL_j$, $$\Psi_{{{\mathbf{n}}}}(t_{\rm final}) = \frac{(2\pi)^{d/2}\,M}{V}\; {\rm DFT}^{-1}_{{{\mathbf{n}}}}\!\left[e^{i{{\mathbf{k}}}_{{{\widetilde{{{\mathbf{m}}}}}}}\cdot{{\mathbf{a}}}}\,{{\widetilde{\Psi}}}_{{{\widetilde{{{\mathbf{m}}}}}}}(t_{\rm final})\right].
\label{Psin}$$ This step can, however, be hard on computational resources, even with an FFT because a very large lattice $M$ is often needed to fully describe the final time state ${{\widetilde{\Psi}}}(t_{\rm final})$.
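In low dimensions the brute-force route is easily written down. The 1D sketch below (again $\hbar=m=1$, with a toy starting state) pads the cloud with vacuum onto a lattice large enough for the expanded state and applies (\[Psiwtm\]), (\[free0\]) and (\[Psin\]) directly; since the offset phase and normalization prefactors cancel in the forward-inverse round trip when only the density is needed, they are omitted. It is exactly this kind of enlarged lattice that becomes unmanageable in 3D.

``` python
import numpy as np

# Starting state on the small lattice (eq. xj)
M, dx, a = 512, 0.05, -12.8
x = a + dx * np.arange(M)
psi = np.exp(-x**2) * np.exp(3j * x)            # placeholder state at t_s

# Pad with vacuum onto a lattice big enough for the expanded cloud
pad = 8                                          # box magnification factor
Mbig = pad * M
psibig = np.zeros(Mbig, dtype=complex)
i0 = (Mbig - M) // 2
psibig[i0:i0 + M] = psi                          # cloud embedded in vacuum

# Free flight: to k-space, phase rotation (eq. free0), back to x-space
k = 2 * np.pi * np.fft.fftfreq(Mbig, d=dx)       # ordering of eq. (kj), big lattice
t_flight = 10.0
psik = np.fft.fft(psibig) * np.exp(-0.5j * k**2 * t_flight)
rho_final = np.abs(np.fft.ifft(psik))**2         # detected density, as in eq. (Psin)
```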
Continued defect dynamics during free-field expansion {#DEFECTS}
-----------------------------------------------------
Expansions of clouds containing narrow mobile defects are a popular experimental topic in recent years [@Gring12; @Chomaz15; @Serafini15; @Lamporesi13; @Donadello14; @Sadler06; @Weiler08]. These are systems for which the transition from the starting $t_s$ state to the far-field is nontrivial.
Let the overall width of the cloud at $t_s$ in a chosen direction be $W_s$, and consider defects of width ${\rm w}_{\rm def}\ll W_s$ and typical speed $u$. Speed differences between different defects are then also $\approx u$. The velocity distribution in the gas, however, is dominated by the shortest length scale in the system. This is usually given by the half-width of individual defects, giving a typical velocity $\sigma_v\approx 2\hbar/m{\rm w}_{\rm def}$. After significant expansion, the width of the cloud will be $W_{\rm final}\approx 2t_{\rm flight}\sigma_v = 4\hbar t_{\rm flight}/m{\rm w}_{\rm def}$.
It takes a time $t_v=W_s/\sigma_v$ for a rough semblance of the velocity/momentum distribution to form in real space due to expansion (this is the time for a typical particle to move across the initial cloud). Remnants of defects can continue to rearrange until a time when their relative speed would allow them to move as far as $\approx W_s$, which is a typical initial spacing between them: $$t_{\rm arrange} \approx \frac{W_s}{u} = \frac{\sigma_v}{u}\; t_{v}.
\label{tarrange}$$ For clearly recognizable defects to be present, one should have defects slower than particles: $u\ll \sigma_v$. Due to this slowness, there is a period $t_v\ll t_{\rm flight} \ll t_{\rm arrange}$ during which complicated rearrangement of defect remnants can take place even though the gross shape of the cloud already resembles the far-field velocity distribution. The simple far field expression (\[farfieldk\]) is not appropriate during this time.
This is not an uncommon situation in ultracold atom experiments, and has relevance for interpretation of experimental data. For an initially trapped thermal gas in a classical field regime where its dynamics is quite well described by the Gross-Pitaevskii equation [@Sinatra02; @Mora03; @Castin04; @Brewczyk07; @Karpiuk12; @Pietraszewicz15], typical defects are solitons in 1D and vortices in 2D. In this regime, with $g$ the interaction coupling constant (proportional to the s-wave scattering length) and $\rho$ the typical density, the chemical potential is $\mu\approx g \rho$, giving defect width ${\rm w}_{\rm def}\approx2\hbar/\sqrt{m\mu}$ and a speed of sound $c=\sqrt{\mu/m}$. Major defects are much slower, i.e. $u=\epsilon c$ with $\epsilon\ll1$. With a trap frequency of $\omega$, the initial cloud width is $W_s\approx (2/\omega)\sqrt{2\mu/m}$. This lets us estimate $t_v=\sqrt{8}/\omega$ and $t_{\rm arrange}$, and one finds that the times $t_{\rm flight}$ during which rearrangement is still taking place in a cloud that looks to be already far-field in its gross features is $$t_v \lesssim t_{\rm flight} \lesssim t_{\rm arrange}, \qquad {\rm i.e.}\quad 1 < \frac{\omega\, t_{\rm flight}}{\sqrt{8}} < \frac{1}{\epsilon}.$$ This can be a significant period.
Example: soliton dynamics during expansion {#1DEX}
==========================================
Let us consider an example of complicated free evolution even at times that would naively be considered to be in the far-field: the expansion of an elongated 1D ultracold gas in the quasi-1D regime. We take physical parameters like in a series of recent experiments [@Gring12; @AduSmith13], where clouds in the classical field regime were prepared. An initial c-field state of the 1d system is generated at temperature $T=80$nK = $260\hbar\omega/k_B$ using the stochastic Gross-Pitaevskii equation = -i(1-i)(x,) + (x,)\[SGPE\] by taking a sample of the field $\Psi_{\rm ic}(x) = \Psi(x,\tau)$ at $\tau = 10/\omega$, once the ergodic ensemble is reached. The simulation grows the field from the vacuum $\psi(x,0)=0$. Here, $g_1=0.54\hbar\omega a_{\rm ho}$ is the 1D s-wave scattering length for ${}^{87}$Rb in terms of the harmonic oscillator length $a_{\rm ho}=\sqrt{\hbar/m\omega}$. The bath coupling $\gamma=0.02$ has a typical value, $\mu=90\hbar\omega$ is a chemical potential chosen to give $N=3000$ particles on average in the stationary ensemble, and $\eta(x,\tau)$ is a Gaussian complex white noise field with variance $\langle\eta(x,\tau)^*\eta(x',\tau')\rangle = \delta(x-x')\delta(\tau-\tau')$. The lattice cutoff in a plane-wave basis is chosen as $\hbar k_{\rm max}= 0.65\sqrt{2\pi mk_B T}$, according to the optimum values obtained in [@Pietraszewicz15]. The generation of $\Psi_{\rm ic}(x)$ was carried out using (\[SGPE\]) on an initial lattice with $M=2^{11}$ points and $L=60a_{\rm ho}$.
![Evolution of the density $\rho(x)=|\Psi(x)|^2$ (in units of $a_{\rm ho}^{-1}$) after release, calculated directly according to (\[freex\]), (\[Psiwtm\]), and (\[Psin\]). The vertical scale is narrowed down compared to the computational lattice, with ${{\underline{L}}}=2400a_{\rm ho}$, to show the most interesting region. \[FIG-1\]](fig1.eps){width="\columnwidth"}
A proper treatment of the conversion phase will be done in the 3D example \[3DEX\]. Here, let us just do an immediate free-field expansion of $\Psi_{\rm ic}(x)$ from the moment the trap is switched off at $t_s=t_0=0$. The 1D density $\rho(x)=|\Psi(x)|^2$ approximates the marginal density of the 3D cloud when integrated over transverse directions. The fully free expansion can be quite a good approximation for a very tight initial trap that has the initial gas in a quasi-1d regime (tight transverse trap frequency $\omega_{\perp})$. Release causes a very rapid expansion in the transverse directions on a timescale of $1/\omega_{\perp}$, with width $\propto (1/\omega_{\perp})\sqrt{1+\omega_{\perp}^2t^2}$ [@Castin96]. Accordingly, the density drops as $\sim 1/(1+\omega_{\perp}^2t^2)$, and so does the relative strength of interparticle interactions. After a time of several $1/\omega_{\perp}$ (short compared to $t_v$), the gas is effectively non-interacting.
The evolution of the field is shown in Fig. \[FIG-1\]. Here in 1D, it is easily done directly using the equation $$\frac{\partial\Psi(x,t)}{\partial t} = \frac{i\hbar}{2m}\,\frac{\partial^2\Psi(x,t)}{\partial x^2}
\label{freex}$$ and the DFTs (\[Psiwtm\],\[Psin\]). The initial state $\Psi_{\rm ic}(x)$ was padded with vacuum and evolved on a lattice of ${{\underline{M}}}=81920$ points on a simulation region of length ${{\underline{L}}}=2400a_{\rm ho}$, with ${{\underline{a}}}=-{{\underline{L}}}/2$. The purpose is to see defect evolution during expansion. Indeed, we see that appreciable defect evolution occurs until times of about $30/\omega$. This can be compared to the values of the crude estimates of (\[tarrange\]) obtained for this system when taking $\epsilon=0.1$: $t_v=2.8/\omega$ for significant expansion and $t_{\rm arrange}=28/\omega$ for the end of rearrangement. A very good match.
![True density (blue) and its far-field estimate (green) given by (\[farfieldk\]) after long times of flight. \[FIG-2\]](fig2.eps){width="\columnwidth"}
In Fig. \[FIG-2\], the central section of the expanding cloud is shown for two long times $t=10/\omega$ and $t=40/\omega$. The figure also compares to the far-field estimate (\[farfieldk\]) given by a magnified momentum distribution. The far field estimate is found wanting even at the otherwise very long time $t=10/\omega$, and becomes only passable at $t=40/\omega$. For comparison, note that the detection time in the reference experiment [@Gring12] was $t_{\rm flight}=0.65/\omega\ll t_v$, which places it even well before any significant expansion.
A closer look at the numerical difficulty {#NUM}
=========================================
Consider now calculations, e.g. in 3D, for which a lattice that properly describes both the small starting and large expanded state is extremely large.
Computational effort {#EFFORT}
--------------------
Let the energy per particle in the trapped state be $\varepsilon d$, i.e. it will be $\varepsilon$ per degree of freedom in free space. This is all converted to kinetic energy by the end of the conversion phase at $t_s$ so that a typical wavevector is $k_{\rm typical} = \sqrt{2m\varepsilon}/\hbar$. For good measure, and particularly to allow for energy fluctuations above the mean, one needs to include higher values $k_{\rm max} \approx r_Kk_{\rm typical}$ with $r_K\sim2$. The spacing on the lattice needed to resolve the resulting wavelengths (Nyquist-Shannon theorem) is going to be $\Delta x_{\rm min} \le \pi/k_{\rm max} = \pi\hbar/(r_K\sqrt{2m\varepsilon})$. Now, when the widths of the starting cloud in the $j$th direction are $W_j$, the widths after free flight expansion will be approximately $${{\underline{W}}}_j=W_j + 2r_K\,t_{\rm flight}\sqrt{\frac{2\varepsilon}{m}},
\label{ulW}$$ allowing again for wavevectors of up to $k_{\rm max}$. We use the underlined notation for final quantities in anticipation of a large lattice. The minimum number of lattice points needed in each direction for the expanded cloud is ${{\underline{M}}}_j^{\rm min} = {{\underline{W}}}_j/\Delta x_{\rm min} = (r_K/\pi\hbar)[W_j\sqrt{2m\varepsilon} + 4r_Kt_{\rm flight}\varepsilon]$. To have an accurate calculation extra padding and resolution usually has to be included, giving ${{\underline{M}}}_j\approx r_A{{\underline{M}}}_j^{\rm min}$ with another factor $r_A\sim2$.
After significant expansion, when the $W_j$ have become negligible, the required lattice size approaches ${{\underline{M}}}_j\approx 4r_K^2r_At_{\rm flight}\varepsilon/\pi\hbar$. Thus the overall required size ${{\underline{M}}} = \prod_j {{\underline{M}}}_j$ will be $${{\underline{M}}} \approx C\left(\frac{\varepsilon\, t_{\rm flight}}{\hbar}\right)^{d},
\label{Mnaive}$$ where $C= (4r_K^2r_A/\pi)^d\sim{{\mathcal{O}}}(10^d)$, which is about a thousand in 3D.
Memory requirements for double precision arithmetic will be $16\times{{\underline{M}}}$ in bytes, while the time to carry out an FFT will scale as ${{\underline{M}}}\log{{\underline{M}}}$, and the time to do the evolution (\[free0\]) is $\sim{{\underline{M}}}$. Defect experiments tend to have $\varepsilon t_{\rm flight}/\hbar\sim{{\mathcal{O}}}(100)$. For example, in the [@Gring12] experiment considered as an example here, $\varepsilon t_{\rm flight}/\hbar\approx 60$. While the time for carrying out such an FFT on desktop computers is manageable (of the order of an hour for ${{\underline{M}}}=5\times10^9$ on one processor core), the real problem is the memory requirement. For ${{\underline{M}}}=5\times10^9$, having sufficient RAM (75 GB) on a desktop becomes troublesome.
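As a rough illustration of these scalings, the snippet below evaluates (\[Mnaive\]) and the corresponding complex double-precision storage for representative values; $r_K=r_A=2$ are illustrative safety factors, and somewhat larger choices push ${{\underline{M}}}$ toward the $5\times10^9$ quoted above.

``` python
import numpy as np

d, r_K, r_A = 3, 2.0, 2.0                      # illustrative safety factors
C = (4 * r_K**2 * r_A / np.pi) ** d            # prefactor of eq. (Mnaive), ~1e3 in 3D
for et in (60.0, 100.0):                       # epsilon * t_flight / hbar
    M_big = C * et**d
    print(f"eps*t_flight/hbar = {et:5.0f}:  M ~ {M_big:.1e},"
          f"  storage ~ {16 * M_big / 1e9:.0f} GB")
```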
Information in the wavefunction {#INFO}
-------------------------------
What can be done? One can see that the effort involved in (\[Mnaive\]) is almost all wasted because there is no new information about the cloud in the final state ${{\underline{\Psi}}}({{\underline{{{\mathbf{x}}}}}})$ that was not in the initial ${{\widetilde{\Psi}}}({{\mathbf{k}}})$. The evolution (\[free0\]) and DFTs between x-space and k-space (\[FT\]), (\[iFT\]) are deterministic and reversible. Also, we know that the final not-quite-far-field density ${{\underline{\rho}}}({{\underline{{{\mathbf{x}}}}}})$ is going to be at least qualitatively similar to (\[farfieldk\]) which is obtained via a simple magnification of the starting momentum wavefunction (see Fig. \[FIG-2\]). This suggests that visible structures will be much broader than in the initial state. The trouble of course is that while the density gets magnified during the free flight by factors $$\Lambda_j = \frac{{{\underline{W}}}_j}{W_j} = 1 + \frac{2r_K\, t_{\rm flight}}{W_j}\sqrt{\frac{2\varepsilon}{m}},
\label{Lambdaj}$$ the velocity remains encoded in a wavelength that remains constant. As long as velocity information is kept, the size of the computational lattice must grow by these same factors $\Lambda_j$ to keep resolving the largely unchanged phase oscillation. The wastefulness amounts to at least a factor of $\Lambda=\prod_j\Lambda_j$.
Clearly the thing one must do is avoid storing the entire fine lattice of size ${{\underline{M}}}$, and abandon knowledge of the properly sampled phase profile at $t_{\rm final}$, leaving only density information on a coarser lattice. The initial converted state $\Psi({{\mathbf{x}}},t_s)$ can be fully defined on a smaller lattice with $$M = \prod_j M_j = \left(\frac{r_A\, r_K\sqrt{2m\varepsilon}}{\pi\hbar}\right)^{d}\,\prod_j W_j$$ points and the usual spacings $\Delta{{\mathbf{x}}}$, which comes from (\[Mnaive\]) and (\[Lambdaj\]). The right hand expression assumes $\Lambda_j\gg1$.
Let us first consider an overly simple approach that tries to do this but fails in an instructive way:
Naive DFT {#NAIVE}
---------
Since the positions appear explicitly in (\[iFT\]), it is tempting to proceed as follows:
1. Obtain the k-space wavefunction at $t_s$, represented as ${{\widetilde{\Psi}}}_{{{\widetilde{{{\mathbf{m}}}}}}}$ on the small $M$ lattice, via (\[Psiwtm\]).
2. Apply evolution (\[free0\]) to obtain ${{\widetilde{\Psi}}}(t_{\rm final})_{{{\widetilde{{{\mathbf{m}}}}}}}$ after whatever time $t_{\rm flight}$ is necessary.
3. Carry out the sum in the return transformation (\[iFT\]) using magnified lattice values of ${{\overline{{{\mathbf{x}}}}}}$ and automatically keeping the same relatively small number of points, ${{\overline{M}}}=M$.
An appropriate magnified lattice would have the same number of points as the starting state: ${{\overline{M}}}_j = M_j$, but larger spacing $\Delta{{\overline{x}}}_j = \Lambda_j\Delta x_j$, as well as appropriately shifted zero points ${{\overline{a}}}_j$. The new positions would be $${{\overline{{{\mathbf{x}}}}}} = {{\overline{{{\mathbf{a}}}}}}+\bm{\Lambda}\cdot({{\mathbf{x}}}-{{\mathbf{a}}});\qquad {{\overline{x}}}_j = {{\overline{a}}}_j+\Lambda_j\,\Delta x_j\,{{\overline{n}}}_j
\label{wbxnaive}$$ indexed by ${{\overline{n}}}_j=0,\dots,(M_j-1)$. For step 3, the discrete exponent in (\[iFT\]) is $$i{{\mathbf{k}}}\cdot{{\overline{{{\mathbf{x}}}}}} = i{{\mathbf{k}}}\cdot{{\overline{{{\mathbf{a}}}}}} + i\sum_{j=1}^d\frac{2\pi\,{{\widetilde{m}}}_j\,\Lambda_j\, {{\overline{n}}}_j}{M_j}.
\label{exponent}$$ To use the convenient DFT form (\[iDFT\]), the factors $\Lambda_j {{\overline{n}}}_j$ need to be integers. Hence, the scale factor $\Lambda_j$ needs to be an integer $\lambda_j$ for all points on the final ${{\overline{{{\mathbf{x}}}}}}$ lattice to be calculated this way. Since phases that are a multiple of $2\pi$ are equivalent, a value of $\lambda_j{{\overline{n}}}_j > M_j$ will lead to the same ${{\overline{\Psi}}}$ as one below $M_j$. This makes any value of ${{\overline{n}}}_j$ correspond to an element of a DFT that sums over ${{\widetilde{m}}}_j$. Let us define an auxiliary index $$n^{\prime\prime}_j = {\rm mod}\left[\lambda_j\,{{\overline{n}}}_j\,,\,M_j\right]
\label{nii}$$ dependent on ${{\overline{n}}}_j$, which indicates the element of the final inverse DFT that should be used to obtain a particular point on the final ${{\overline{x}}}$ lattice. One obtains the following: $${{\overline{\Psi}}}^{\rm (naive)}_{{{\overline{{{\mathbf{n}}}}}}}(t_{\rm final}) = \Lambda\;\frac{(2\pi)^{d/2}}{\prod_j\Delta{{\overline{x}}}_j}\; {\rm DFT}^{-1}_{{{\mathbf{n}}}^{\prime\prime}}\!\left[e^{i{{\mathbf{k}}}_{{{\widetilde{{{\mathbf{m}}}}}}}\cdot{{\overline{{{\mathbf{a}}}}}}}\,{{\widetilde{\Psi}}}_{{{\widetilde{{{\mathbf{m}}}}}}}(t_{\rm final})\right],
\label{Psinaive}$$ which is very similar in form to (\[Psin\]), except for the indexing by ${{\mathbf{n}}}^{\prime\prime}$, shift ${{\overline{{{\mathbf{a}}}}}}$ and $\Lambda$ prefactor. The last is put in by hand, to keep the amplitude of the original cloud unchanged at $t_s$ upon magnification of the lattice. The numerical effort required by (\[Psinaive\]) is small, with the largest matrix to be stored only of size $M$, i.e. $\prod_j\Lambda_j$ times less than the brute force case of (\[Mnaive\]).
![Initial (blue) and final (magenta) densities after the naive attempt at expansion in 1D with prescription (\[Psinaive\]). The initial state is $\Psi(x) = exp(-x^2/2)\cos k_0 x$ with $k_0=5$, defined on the range $x\in[-3,3]$ indicated with gray bars, with $M=600$. Expansion times are given on the plots. The true final state obtained using (\[Psin\]) on a larger lattice defined on $x\in[-6,6]$ is shown in green. \[FIG-3\]](fig3.eps){width="\columnwidth"}
The results for a 1D test case can be seen in Fig. \[FIG-3\]. They are not good. This is of course because DFTs correspond to the correct Fourier transform only for the specific relationship between x- and k-space lattices that is described in Sec. \[FREEFLIGHT\], and not the wishful one that step 3 implies. What is actually being carried out by (\[Psinaive\]) instead, is the free evolution for a time $t_{\rm flight}$ of an infinite number of copies of the state $\Psi({{\mathbf{x}}},t_s)$, repeated in a tiling pattern because of the periodic boundary conditions assumed by the DFT. As soon as the cloud begins to expand, it overlaps around its own edges and everything gets scrambled.
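This failure is easy to reproduce. The sketch below (1D, $\hbar=m=1$) uses the toy state of Fig. \[FIG-3\] and an illustrative flight time: up to the constant prefactor and shift phase of (\[Psinaive\]), the naive prescription is nothing but the periodic evolution of the starting box, read out at the reindexed points $n^{\prime\prime}_j$, which is why the density comes out scrambled.

``` python
import numpy as np

# Toy state of Fig. 3 on its original lattice
M, L = 600, 6.0
dx = L / M
x = -3.0 + dx * np.arange(M)
psi = np.exp(-x**2 / 2) * np.cos(5 * x)

lam = 4                                        # integer magnification factor
t_flight = 3.0                                 # illustrative flight time
k = 2 * np.pi * np.fft.fftfreq(M, d=dx)        # eq. (kj)

# Evolution on the *periodic* starting box, eq. (free0)
psik_t = np.fft.fft(psi) * np.exp(-0.5j * k**2 * t_flight)

# Naive read-out on the "magnified" lattice: just a relabeling via n'' (eq. nii)
n2 = (lam * np.arange(M)) % M
rho_naive = np.abs(np.fft.ifft(psik_t)[n2])**2   # scrambled, as in Fig. 3
```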
In k-space, the picture is that the flight time is so long that the phase winding caused by (\[free0\]) advances so much that aliasing occurs. The phase difference between neighboring high-k points is $2\pi\hbar t_{\rm flight} k_{\rm max}/mL_j$ which is $$\delta\theta_{\rm max}=\frac{\pi\left({{\underline{W}}}_j-W_j\right)}{L_j}
\label{deltatheta}$$ after substituting for $k_{\rm max}$. As a result, the phase variation with ${{\mathbf{k}}}$ is not resolved and the state is scrambled by (\[Psinaive\]) whenever the starting box $L_j$ is appreciably smaller than the expanded width ${{\underline{W}}}_j$.
The moral from this naive approach is that the data in the starting $\Psi_{{{\mathbf{n}}}}$ lacks the crucial physical information that the area beyond the edges of the starting lattice is supposed to be vacuum.
Solving the memory problem {#SOL}
==========================
Derivation {#DERIV}
----------
To utilize the physical information about the system that the background over which the cloud expands is vacuum, let us define a buffered starting field $\Phi({{\mathbf{r}}},t_s)$: $$\Phi({{\mathbf{r}}},t_s) = \begin{cases} \Psi({{\mathbf{r}}},t_s) & \forall j:\; a_j \le r_j < (a_j+L_j)\\ 0 & {\rm otherwise.} \end{cases}
\label{vacuum}$$ Also, to take advantage of the highly optimized FFT algorithms, the exponent in (\[iDFT\]) should contain only integer multiples of $2\pi i/M_j$ in each direction $j$. We will henceforth assume the magnification factors $\Lambda_j$ to all be positive integers $\lambda_j$, as in Sec. \[NAIVE\], which suffices to obtain this condition. The actual cloud need not expand by an integer value, only the numerical lattice. It may also be possible, in principle, to translate cases of fractional $\Lambda_j$ into an algorithm containing FFTs, when the $M_j$ and $\Lambda_j$ are appropriately matched. However, this would lead to combinatorial complications in the algorithm. We will refrain from considering this as it does not appear to offer any significant computational advantage.
As usual, the general procedure to obtain a final state is to implement the time evolution in k-space as in (\[free0\]), then use a DFT to obtain the final x-space wavefunction. This final state is to be on the magnified lattice whose points are $${{\overline{x}}}_j = {{\overline{a}}}_j + {{\overline{n}}}_j\,\lambda_j\,\Delta x_j,
\label{wbx}$$ lying $\Delta{{\overline{x}}}_j=\lambda_j\Delta x_j$ apart. ${{\overline{n}}}_j=0,\dots (M_j-1)$ is the index for the final “small coarse” lattice, like in (\[wbxnaive\]). Its volume is ${{\overline{V}}}=\prod_j{{\overline{L}}}_j$, a factor of $\lambda=\prod_j\lambda_j$ greater than the initial volume $V$.
This last immediately presents a problem, because to obtain a final state on an x-space lattice of length ${{\overline{L}}}_j$ with an FFT requires transforming a k-space wavefunction that has resolution $2\pi/{{\overline{L}}}_j = \Delta k_j/\lambda_j$. This is $\lambda_j$ times finer than what is available in the starting state $\Psi({{\mathbf{x}}})$. Fortunately, the vacuum assumption (\[vacuum\]) provides sufficient information to reconstruct the fine scale structure in k-space, as we will see below.
Time evolution must also occur on this fine lattice, and to remain exact, it must not cut off high momenta, so the huge lattice ${{\underline{M}}}$ will be required, at least formally. The required k-space wavefunction of the initial state is, generally, obtained with the transform (\[FT\]). Discretizing it onto the ${{\underline{M}}}$ lattice gives \[ex1\] $$\widetilde{\Psi}({{\underline{{{\mathbf{k}}}}}},t_s) = \sum_{{{\underline{{{\mathbf{x}}}}}}} \Psi({{\underline{{{\mathbf{x}}}}}},t_s)\, e^{-i{{\underline{{{\mathbf{k}}}}}}\cdot{{\underline{{{\mathbf{x}}}}}}}.$$ The initial points in x-space are the ${{\underline{{{\mathbf{x}}}}}}$, while the k-space lattice has fine spacing $\Delta{{\underline{k}}}_j = \Delta k_j/\lambda_j$ and values ${{\underline{k}}}_j = {{\underline{{{\widetilde{l}}}}}}_j \Delta{{\underline{k}}}_j$ indexed by ${{\underline{{{\widetilde{m}}}}}}_j=0,\dots,(\lambda_jM_j-1)$ with ${{\underline{{{\widetilde{l}}}}}}_j={\rm mod}\left[{{\underline{{{\widetilde{m}}}}}}_j+\frac{1}{2}{{\underline{M}}}_j\,,\,{{\underline{M}}}_j\right]-\frac{1}{2}{{\underline{M}}}_j$. One in every $\lambda_j$ values of ${{\underline{k}}}_j$ will fall on a $k_j$ value that is also present in the small lattice of the starting state. In particular, instead of using the large index ${{\underline{{{\widetilde{{{\mathbf{m}}}}}}}}}$, its values can be alternatively enumerated by a pair of integers in the following way: $${{\underline{{{\widetilde{m}}}}}}_j = {{\widetilde{m}}}_j\lambda_j + q_j,$$ where the coarse index ${{\widetilde{m}}}_j=0,\dots,(M_j-1)$ runs over the same set of momenta as in the starting state on the small lattice $M$, while a fine structure index $q_j = 0,\dots,(\lambda_j-1)$ counts the small $\Delta{{\underline{k}}}_j$ steps within the larger momentum step $\Delta k_j$. The k values themselves are \[ulk\] $${{\underline{k}}}_j = \Delta k_j\left({{\widetilde{l}}}_j + \frac{q_j}{\lambda_j}\right)$$ when the small lattice size $M_j$ is even (as is usual). Odd $M_j$ introduces a minor but distracting complication, so from here until (\[iDFTbar\]) we will assume even $M_j$ and return to this matter at the end of the section. It is convenient to define a vector of fractional momentum steps $$\alpha_j(q_j) = \frac{q_j}{\lambda_j}\,\Delta k_j \ \in\ [0,\Delta k_j)$$ so that the fine-grained momenta can be written in a concise vector notation: \[ulkj\] $${{\underline{k}}}_j = k_j + \alpha_j(q_j)\;;\qquad {{\underline{{{\mathbf{k}}}}}} = {{\mathbf{k}}}_{{{\widetilde{{{\mathbf{m}}}}}}}+\bm{\alpha}_{{{\mathbf{q}}}}$$ in terms of the coarse starting momenta ${{\mathbf{k}}}$ and the fine grained shift $\bm{\alpha}$.
Now luckily, the majority of the elements in the sum over points ${{\underline{{{\mathbf{x}}}}}}$ in (\[ex1\]) can be discarded because they are in vacuum and contribute zero. Provided we make the commonsense assumption that the large lattice includes the entire lattice ${{\mathbf{x}}}$ for the small starting cloud, then this leaves just the sum over the usual starting state indices ${{\mathbf{n}}}$ defined in (\[xj\]). With this, and substituting (\[xj\]), (\[vacuum\]) and (\[ulk\]) into (\[ex1\]), one obtains: \[ex2\] (,t\_s) = \_ \_(t\_s) e\^[-i]{} which is just a sum over the small lattice. In fact, any element of the k-space wavefunction is given by an appropriate DFT on the small lattice: \[ex3\] (,t\_s) = e\^[-i]{} [DFT]{}\_ with the help of the coarse ${{\widetilde{{{\mathbf{m}}}}}}$ and fine ${{\mathbf{q}}}$ indices. To get the entire wavefunction, a separate FFT is required for each differing value of ${{\mathbf{q}}}$.
The time evolution between the DFTs is just \[time\] $$\widetilde{\Psi}({{\underline{{{\mathbf{k}}}}}},t_{\rm final}) = \widetilde{\Psi}_{{{\widetilde{{{\mathbf{m}}}}}},{{\mathbf{q}}}}(t_{\rm final}) = \widetilde{\Psi}({{\underline{{{\mathbf{k}}}}}},t_s)\, \exp\left[-\frac{i\hbar\, t_{\rm flight}\, |{{\underline{{{\mathbf{k}}}}}}|^2}{2m}\right].$$
To obtain the final state in x-space, one discretizes (\[iFT\]) and obtains the following expression on the fine lattice: \[ex4\] (,t\_[final]{}) = \_[,]{} e\^[i]{} \_[,]{}(t\_[final]{}). We don’t need the entire huge lattice ${{\underline{{{\mathbf{x}}}}}}$, only the coarsened version with selected sparse points given by (\[wbx\]). Taking only the subset ${{\overline{{{\mathbf{x}}}}}}$ of points from the ${{\underline{{{\mathbf{x}}}}}}$ lattice and substituting according to (\[wbx\]) and (\[ulk\]) gives \[ex5\] (,t\_[final]{}) = \_[,]{} \_[,]{}(t\_[final]{}) . This can almost be written as a sum of DFTs, except for one detail that was seen already in Sec. \[NAIVE\]: For a DFT over the small k-space lattice ${{\widetilde{{{\mathbf{m}}}}}}$ of size $M$, normally the “x-space” indices should run over the range $0,\dots (M_j-1)$. Here instead, in the relevant part of the exponent $i\sum_{j=1}^d\frac{2\pi{{\widetilde{m}}}_j {{\overline{n}}}_j\lambda_j}{M_j}$, we have the quantity $n^{\prime}_j=({{\overline{n}}}_j\lambda_j)$ which increments in jumps: $n^{\prime}_j=0,\lambda_j,2\lambda_j,\dots,[\lambda_j(M_j-1)]$. Fortunately, the whole exponent is unchanged upon adding $M_j$ to $n^{\prime}_j$. Hence we can define the auxiliary numbers $n^{\prime\prime}_j$ like in (\[nii\]), which will index the output of the DFT. Then, the final result for the coarse-grained wavefunction after flight can be written as a sum of inverse DFTs on the small lattice $M$: \[iDFTbar\] \_(t\_[final]{}) = \_ e\^[i\_]{} [DFT]{}\^[-1]{}\_[”]{}. Note how the proper expression (\[iDFTbar\]) differs from the naive (\[Psinaive\]) by having a sum of similar DFTs indexed by ${{\mathbf{q}}}$, that account for the fine-scale fractional k-space shifts $\bm{\alpha}_{{{\mathbf{q}}}}$. To calculate these, only FFTs *on the small lattice $M$* are required. There is never a need to store the huge ${{\underline{M}}}$ lattice.
Finally, returning to the unusual case of odd $M_j$, (\[ulk\]) and (\[ulkj\]) also apply to all points except for a few with ${{\widetilde{m}}}_j=(M_j-1)/2$ that end up with ${{\underline{k}}}_j>k^{\rm max}_j=\pi M_j/L_j$. For these, one should substitute ${{\widetilde{l}}}_j\to{{\widetilde{l}}}'_j=({{\widetilde{l}}}_j-M_j)$ in (\[ulk\]) and $k_j\to (k_j-M_j\Delta k_j)$ in (\[ulkj\]) to carry out the umklapp flipping to negative wavevectors on the fine momentum grid. It turns out that the only change in the intervening expressions above is that one should replace $k_j\to (k_j-M_j\Delta k_j)$ in the ${{\mathbf{k}}}_{{{\widetilde{{{\mathbf{m}}}}}}}$ of (\[ex5\]) and (\[iDFTbar\]) for the several points when ${{\widetilde{m}}}_j=(M_j-1)/2$ and $q_j\ge\lambda_j/2$. This actually makes very little difference in practice provided that all $\lambda_j\ll M_j$.
The prescription {#RESULT}
----------------
This is a summary of the main result, gathering the above together. One starts from the initial wavefunction $\Psi({{\mathbf{x}}},t_s)\equiv\Psi_{{{\mathbf{n}}}}(t_s)$ described in $d$ dimensions on an x-space lattice with $M_j$ points in each dimension $j=1,\dots,d$ of length $L_j$. The region outside this lattice is initially vacuum. The positions of the points are $$x_j = a_j + n_j\,\frac{L_j}{M_j}$$ with indices $n_j=0,\dots,(M_j-1)$. The momentum spacing is $\Delta k_j=2\pi/L_j$. Free flight occurs for a time interval $t_{\rm flight}=t_{\rm final}-t_s$. Subsequently the lattice spacing on which the wavefunction is described in x-space is magnified by integer factors $\lambda_j$, giving lattice points $${{\overline{x}}}_j = {{\overline{a}}}_j + {{\overline{n}}}_j\,\lambda_j\frac{L_j}{M_j}$$ with indices ${{\overline{n}}}_j=0,\dots,(M_j-1)$. The volume of the initial lattice is $V=\prod_jL_j$, the number of points $M=\prod_jM_j$, the volume magnification $\lambda=\prod_j\lambda_j$. The fractional momentum steps $$\alpha_j= \frac{q_j}{\lambda_j}\,\Delta k_j \ \in\ [0,1)\,\Delta k_j$$ form a vector $\bm{\alpha}_{{{\mathbf{q}}}}$ that is enumerated by the indices $q_j=0,1,\dots,(\lambda_j-1)$. Bold quantities are vectors in $d$-dimensional space. The final x-space wavefunction on the magnified lattice is given by:
\[FF\] \[FFpsi\] \_(t\_[final]{}) &=& \_ f\^[()]{}\_ B\^[()]{}\_[”]{}\
\[presf\] f\^[()]{}\_ &=&\
\[presB\] B\^[()]{}\_ &=& [FFT]{}\^[-1]{}\_\
\[presA\] \^[()]{}\_ &=& [FFT]{} \_ when all $M_j$ are even, with the elements of the auxilliary index in (\[FFpsi\]) being \[FFaux\] n\^\_j = [mod]{}, and FFT indicating fast Fourier transforms. The wavevectors ${{\mathbf{k}}}_{{{\widetilde{{{\mathbf{m}}}}}}}$ are given by (\[kj\]).
If any $M_j$ are odd, then in the ${{\mathbf{k}}}_{{{\widetilde{{{\mathbf{m}}}}}}}$ of (\[presB\]) one should further umklapp the maximum k value in each of those dimensions $j$ when $q_j\ge\lambda_j/2$. That is, for ${{\widetilde{m}}}_j=(M_j-1)/2$ and $q_j\ge\lambda_j/2$, replace $k_j({{\widetilde{m}}}_j)$ by $-\Delta k_j (M_j+1)/2$.
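For concreteness, here is a minimal 1D NumPy sketch of the above prescription. It is not taken from any released code: it assumes standard Fourier conventions, even $M$, the free-particle spectrum with explicit $\hbar$ and $m$, and a default output offset ${{\overline{a}}}$ that keeps the magnified lattice centred on the initial one; the function and argument names are purely illustrative.

```python
import numpy as np

def free_flight_expand(psi, a, L, lam, t_flight, a_bar=None, hbar=1.0, mass=1.0):
    """Free flight of a 1D wavefunction psi (M points, offset a, length L, vacuum
    outside) for a time t_flight, returned on a lattice magnified by the integer
    factor lam. Only FFTs of size M are used; even M is assumed."""
    M = psi.size
    dx = L / M
    x = a + dx * np.arange(M)                      # initial lattice points x_n
    if a_bar is None:                              # centre the magnified lattice on the old one
        a_bar = a - 0.5 * (lam - 1) * L
    xbar = a_bar + lam * dx * np.arange(M)         # magnified output lattice, spacing lam*dx
    k = 2.0 * np.pi * np.fft.fftfreq(M, d=dx)      # coarse momenta k_m (FFT ordering)
    dk = 2.0 * np.pi / L
    n2 = (np.arange(M) * lam) % M                  # auxiliary index n''
    out = np.zeros(M, dtype=complex)
    for q in range(lam):                           # one pair of size-M FFTs per fractional shift
        alpha = q * dk / lam                       # alpha_q in [0, dk)
        A = np.fft.fft(psi * np.exp(-1j * alpha * x))
        phase = np.exp(-1j * hbar * t_flight * (k + alpha)**2 / (2.0 * mass))
        B = np.fft.ifft(A * phase * np.exp(1j * k * (a_bar - a)))
        out += np.exp(1j * alpha * xbar) * B[n2]   # accumulate the q-th term
    return out / lam, xbar
```

With the default offset, such a routine can be validated for small $M$ and $\lambda$ against brute-force evolution on a zero-padded lattice of $\lambda M$ points, with which it should agree to machine precision.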
3D Example {#3DEX}
==========
As an example of a calculation that cannot be done by brute force on a PC, consider a full simulation of the relative phase measurements in the Vienna experiment [@Gring12]. Here, two neighbouring ultracold atomic clouds of ${}^{87}$Rb, $\Psi^{(\pm)}$, are initially populated in quasi-one-dimensional harmonic traps that are elongated in the $x$ direction, as seen in Fig. \[FIG-4\](a). Trap frequencies are $\omega=2\pi\times 6.4$ Hz in the $x$ direction, and $\omega_{\perp}=2\pi\times1400$ Hz in the transverse directions. The clouds are initially separated by a small gap of $D=2.75\,\mu$m $= 0.645a_{\rm ho}$ in the $y$ direction. The traps are released at $t=0$, and rapid expansion takes place in the $y$ and $z$ directions. The clouds soon interfere, forming a fringe pattern, as shown in Figs. \[FIG-4\](b,c). The local displacement $\delta y(x)$ of the fringes in the $y$ direction from the middle position $y=0$ corresponds to the phase difference that existed locally between the two initial clouds at $t=0$: \[dphase\] $$\delta y(x) \propto \delta\phi_0(x) = \phi_0^{(+)}(x) - \phi_0^{(-)}(x).$$ The fringe pattern is detected after 16ms of free flight ($t_{\rm final}=0.65/\omega$) at a detector, the image being essentially an integral of the final density over the $z$ direction. In this way, a local phase difference measurement on the initial clouds can be made.
There is of course some distortion during flight, as was seen in Sec. \[1DEX\], and a simulation of the expansion can be necessary to see quantitatively how the pattern at the detector corresponds to the initial phase profile. Here the case where the two clouds are populated by independent thermal gases will be considered. The one-dimensional c-field wavefunctions $\Psi_{\rm ic}^{(\pm)}(x)$ are generated using the same SGPE method as in Sec. \[1DEX\] and the same parameters. Eq. (\[SGPE\]) is run separately with independent noises for each of the two clouds. As these are quasi-1D traps, the wavefunction in the $y$ and $z$ directions is well approximated by just the harmonic oscillator ground state. The initial state (see Fig. \[FIG-4\](a)) consists of two terms: \[3dic\] \_0() = \_ \_[ic]{}\^[()]{}(x).
![ Density patterns during the free flight described in Sec. \[3DEX\]. Shown is the $x,y$ density $\rho(x,y) = \int dz |\Psi({{\mathbf{r}}})|^2$. Panel (a): initial state released from the traps at $t=0$. Panel (b): The “starting” state for the free expansion, at $t=t_s=1.2$ms, after the conversion phase. Panel (c): The state at the detector at $t=t_{\rm final}=16$ms. \[FIG-4\]](fig4.eps){width="\columnwidth"}
The conversion phase is simulated with the 3D Gross-Pitaevskii equation (GPE) \[GPE\] $$\frac{\partial\Psi({{\mathbf{r}}},t)}{\partial t} = -\frac{i}{\hbar}\left[-\frac{\hbar^2\nabla^2}{2m} + g\,|\Psi({{\mathbf{r}}},t)|^2\right]\Psi({{\mathbf{r}}},t)$$ using a semi-implicit split-step algorithm [@Drummond90]. It is run until 1.2ms, which is $t_s=0.05/\omega$. The s-wave scattering length is 5.24nm, giving a value of $g=0.01544\hbar\omega a_{\rm ho}^3$ for the interaction strength in 3D in terms of $a_{\rm ho}=\sqrt{\hbar/m\omega}$. The lattice used has dimensions $L_x = 67.5a_{\rm ho}$, $L_{y,z} = 8.26a_{\rm ho}$, and $M=2304\times256\times256$ points. The calculation took 1 hour 20 min on an Intel 2.4 GHz CPU using the FFTW library [@FFTW05], and used 8% of the PC's 96GB of RAM. The situation at $t_s$ is shown in Fig. \[FIG-4\](b).
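As an aside, the conversion-phase propagation can be illustrated with a standard second-order (Strang) split-step Fourier integrator; the sketch below is a generic 1D version for orientation only, not the 3D semi-implicit scheme of [@Drummond90] that was actually used.

```python
import numpy as np

def gpe_split_step(psi, dx, g, dt, nsteps, hbar=1.0, mass=1.0):
    """Propagate a 1D GPE field with Strang split-step Fourier steps (illustrative)."""
    psi = np.asarray(psi, dtype=complex).copy()
    k = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    kin = np.exp(-0.5j * hbar * k**2 * dt / mass)               # full kinetic step in k-space
    for _ in range(nsteps):
        psi *= np.exp(-0.5j * g * np.abs(psi)**2 * dt / hbar)   # half nonlinear step
        psi = np.fft.ifft(kin * np.fft.fft(psi))                # kinetic step
        psi *= np.exp(-0.5j * g * np.abs(psi)**2 * dt / hbar)   # half nonlinear step
    return psi
```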
Subsequently, the $\Psi({{\mathbf{x}}},t_s)$ were fed into the free flight prescription (\[FF\]) developed here, for expansion out to the detector at 16ms. The lattice expansion factors were $\lambda_y=\lambda_z=8$ and $\lambda_x=1$, i.e. no expansion in the $x$ direction. However, the initial ${{\overline{x}}}$ lattice was slightly buffered with vacuum at the edges with respect to the one used for the conversion phase, having a length ${{\overline{L}}}_x=94.92a_{\rm ho}$. This was to allow some natural spreading, which was too small to make a lattice expansion by $\lambda_x=2$ worthwhile. The final coarse lattice had ${{\overline{M}}} = 3240\times256\times256$ points. The calculation took 64 minutes on the same PC and used 14% of RAM. The resulting predicted detector image is shown in Fig. \[FIG-4\](c).
For comparison, a brute-force calculation using the plain (\[Psin\]) was not able to reach the detection time. The best that could be obtained on the aforementioned PC without going into swap space was expansion out to $t=0.40/\omega$, corresponding to 10ms, or 62% of the flight. This took 3 hours on a ${{\underline{M}}}=2880 \times 1350 \times 1350$ lattice and used 88% of RAM, as well as requiring special additional work with the code to pass 64-bit pointers into the FFTW library, which become necessary once ${{\underline{M}}}\ge2^{31}$.
![ Local phase difference $\delta\phi_0(x)$ between the two clouds: the true initial value, and the values that would be inferred from the fringe positions at $t=t_s$ and at the detector time, respectively, using (\[dphase2\]). The yellow diamonds show what would be inferred from the very late density image of Fig. \[FIG-6\] at 62ms. \[FIG-5\]](fig5.eps){width="0.5\columnwidth"}
Fig. \[FIG-5\] shows the apparent phase differences that can be inferred from the free-expansion at different times, and the true initial phase difference. The fringe shift $\delta y(x)$ was estimated from the $y$ position of the maximum density peak for a given $x$. The proportionality constant in (\[dphase\]) can be estimated by considering the free flight evolution of (\[3dic\]) in the $y$ direction only, ignoring other effects. One finds (y,t) = \_ A\_. where $A_{\pm} = \left(\frac{m\omega_{\perp}}{\pi\hbar}\right)^{1/4}\Psi_0^{(\pm)}(x)$. We are most interested in the limit $\omega_{\perp}t\gg1$ and $y\gg D$. We expand the exponents in the density $|\psi(y)|^2$ to lowest nontrivial orders in the small quantities $\eta_t=1/\omega_{\perp}t$ and $\eta_y=D/y$: that is, ${{\mathcal{O}}}(\eta_t^2,\eta_y^{-2})$ for the amplitude and ${{\mathcal{O}}}(\eta_t,\eta_y^{-1})$ for the phases. This gives |(y,t)|\^2 { |A\_+|\^2+|A\_-|\^2+2|A\_+|A\_-|}. Hence, the phase difference estimate (modulo $2\pi$) is \[dphase2\] \_0(x) y\_[peak]{}(x) . where $y_{\rm peak}$ is the location of the peak nearest to $y=0$. What we see in Fig. \[FIG-5\] is that the global long-wavelength behaviour of the phase difference is generally predicted well by the fringes in the expanded cloud. This is apart from some remnant localized shifts of modulo $2\pi$ that move an entire segment by $2\pi$ without affecting the long-wavelength phase trend. These shifts are at 0$\mu$m and 28$\mu$m for the early cloud at $t_s$, and near 10$\mu$m for the cloud at the detector. However, one can also see that true to the behaviour seen in the 1D case of Sec. \[1DEX\], the prediction of local details in the phase difference is largely scrambled during the time of flight.
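The fringe-reading step described above is straightforward to script. The sketch below finds the density peak nearest $y=0$ for each $x$ column and converts its offset to a phase with the factor $mD/\hbar t_{\rm flight}$; that conversion factor is the standard far-field double-slit relation and is stated here as an assumption, to be checked against the prefactor of (\[dphase2\]) for the conventions at hand.

```python
import numpy as np

def fringe_phase_estimate(rho_xy, y, D, t_flight, hbar=1.0, mass=1.0):
    """Local relative phase (mod 2*pi) from a column-density image rho_xy[ix, iy]."""
    half_fringe = np.pi * hbar * t_flight / (mass * D)    # half of the expected fringe spacing
    mask = np.abs(y) <= half_fringe                       # search window around y = 0
    y_win = y[mask]
    y_peak = y_win[np.argmax(rho_xy[:, mask], axis=1)]    # peak position for each x column
    return (mass * D / (hbar * t_flight)) * y_peak        # assumed far-field conversion
```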
![ The density pattern that would be seen after $t=62$ms of free flight, details as in Fig. \[FIG-4\], except for a 1:4 aspect ratio between the axes. \[FIG-6\]](fig6.eps){width="0.5\columnwidth"}
The effort required by (\[FF\]) for even very long expansions scales relatively gracefully. For example, continuing the expansion out to $t=62$ms$ = 2.5/\omega$ is also possible. This is shown in Fig. \[FIG-6\] and the yellow plot in Fig. \[FIG-5\], and one sees continuing change in the fringe profile. In particular, the phase difference estimate is starting to become bad, with large long-scale discrepancies. This is because the expansion has now lasted long enough that a lot of movement of the defects in the $x$ direction has occurred. This is expected, since the estimate for the formation time of a momentum distribution from Sec. \[DEFECTS\] is $t_v=\sqrt{8}/\omega=70$ms here. The final cloud is now a relatively huge $0.3\times 0.6\times0.6$ mm in size, having expanded by a factor of about $10^4$ in the $y$ and $z$ directions since release from the trap. The final lattice is scaled by $\lambda_x=2$ and $\lambda_y=\lambda_z=30$ from that at $t_s$. This calculation took 18 hours on the reference PC, and used 7GB of memory. The direct approach with the ${{\underline{M}}}$ lattice would have needed 4000GB.
Discussion {#DISCUSSION}
==========
Efficiency {#EFF}
----------
To take advantage of the memory savings in (\[FF\]), one should evaluate each term in the sum (\[FFpsi\]) labeled by ${{\mathbf{q}}}$ sequentially, and accumulate its contribution to ${{\overline{\Psi}}}_{{{\overline{{{\mathbf{n}}}}}}}(t_{\rm final})$. This way, memory requirements will be $\approx2M$ complex numbers \[32$\times M$ bytes for the usual double precision\] – one array of size $M$ for carrying out the FFTs, and one to store the accumulated sum ${{\overline{\Psi}}}_{{{\overline{{{\mathbf{n}}}}}}}(t_{\rm final})$. Some time efficiency can be gained by using a third array of size $M$ to store the starting state $\Psi_{{{\mathbf{n}}}}(t_s)$, but using 48$\times M$ bytes in total.
The computational load in terms of operations scales as \[cpuload\] $$\sim \lambda M\log M,$$ which is slightly faster than the brute force approach that would use (\[Psin\]) directly on a vacuum padded lattice ${{\underline{M}}}$ and take $\sim\lambda M\log(\lambda M)$ operations. The speed-up is mostly marginal – an improvement by a factor of $(1+\log\lambda/\log M)$. The somewhat surprising result that there is any speed-up at all compared to the highly optimized FFT on ${{\underline{M}}}$ is due to the fact that so much of the initial system is vacuum and does not contribute. For this reason, there is no advantage to be gained by trying to use the maximum starting lattice size $M$ that will fit in memory (perhaps after padding the nonzero part of the field that comes out of the conversion phase with vacuum) and the minimum magnification $\lambda$.
The memory needed is of course strongly reduced – by a factor of at least $\lambda/2$ compared to the most memory-efficient in-place FFT on the huge ${{\underline{M}}}$ lattice.
An expansion in $d$ directions requires summing a number of terms that grows as $(t_{\rm flight})^d$. This can eventually become fairly time-intensive, as was seen for the calculation of Fig. \[FIG-6\]. Memory use, however, never budges above the baseline no matter how long the flight takes.
The time needed can be alleviated by an extremely basic parallelization. Namely, distributing the evaluation of the $B^{({{\mathbf{q}}})}$ on many processing cores. Up to $\lambda$ cores could be used to obtain the result in a time $\sim M\log M$. However, a significantly smaller number will be optimal since $\lambda$ FFTs in parallel will require $\sim\lambda M$ numbers stored in memory again, which is what one is trying to avoid.
Relationship to fast Fourier transforms {#FFT}
---------------------------------------
The algorithm presented in (\[FF\]) bears some rough resemblance to a Cooley-Tukey FFT algorithm [@Cooley65] with radix-$\lambda_j$. The similarity is that the end results of the FFTs on the smaller $M$ lattice are multiplied by twiddle factors in (\[presf\]). These involve $e^{i\alpha_j{{\overline{x}}}_j}$, which introduces fractional phase shifts compared to those available on the FFT lattice. However, the overall procedure is quite different from Cooley-Tukey and relies heavily on the vacuum padding assumption (\[vacuum\]). This is what allows it to, e.g., perform the two sequential FFTs on each ${{\mathbf{q}}}$-th term in the final sum (\[FFpsi\]). It is also what allows the effort to scale as $\lambda$ instead of the $\lambda^2$ that would be expected from a manual summation of smaller $M$-size FFTs.
Looking at (\[FFaux\]), one can see that when $\lambda$ and $M$ have common factors, this $n''$ index only accesses a part of the field $B^{(q)}_{{{\mathbf{p}}}}$. One can try to gain some computational advantage from this by using a pruned FFT [@Sorensen93] for the (\[presB\]) step, to calculate only the required ${{\mathbf{p}}}$ values. The advantage of pruned FFTs is not huge though. This would reduce the overall computational effort at most from $2M\log M$ to $2M\log M-M\log\lambda$ .
Loss of phase information {#COMP}
-------------------------
![Loss of momentum information in the final wavefunction generated by (\[FF\]). The plot shows: In blue: the true k-space density ${{\widetilde{\rho}}}(k)=|{{\widetilde{\Psi}}}(k)|^2$ in the cloud on the initial ($M$) lattice. Physically, this is preserved during free evolution; In green and red: The apparent k-space density ${{\widetilde{\rho}}}({{\overline{k}}})=|{{\widetilde{{{\overline{\Psi}}}}}}({{\overline{k}}})|^2$ , inferred by a DFT (\[badDFT\]) of the coarse grained final wavefunction ${{\overline{\Psi}}}({{\overline{x}}},t_{\rm final})$ in x-space. Green is for a magnification of $\lambda=8$ at $t_{\rm flight}=10/\omega$, while red is for a longer time of $t_{\rm flight}=40/\omega$ with $\lambda=40$. Other parameters as in Sec. \[1DEX\]. Panel (b) is a magnification of part of Panel (a). \[FIG-7\]](fig7.eps){width="\columnwidth"}
An important feature of the algorithm (\[FF\]) to be aware of is that while the density in x-space at $t_{\rm final}$ is calculated precisely, the ${{\overline{\Psi}}}_{{{\overline{{{\mathbf{n}}}}}}}(t_{\rm final})$ is not generally viable for further evolution, and does not store the correct momentum distribution. This is because of the phase aliasing (\[deltatheta\]) discussed in Sec. \[NAIVE\].
The wavefunction that can be reconstructed from ${{\overline{\Psi}}}_{{{\overline{{{\mathbf{n}}}}}}}(t_{\rm final})$ is: \[badDFT\] \_ = e\^[-i\_]{} [DFT]{}\_. This sits on a fine k-space lattice ${{\overline{k}}}_j({{\widetilde{{{\overline{m}}}}}}_j) = {{\widetilde{{{\overline{l}}}}}}_j\,(2\pi/{{\overline{L}}}_j)$ with ${{\widetilde{{{\overline{l}}}}}}_j={\rm mod}\left[{{\widetilde{{{\overline{m}}}}}}_j+\frac{1}{2} M_j\,,\,M_j\right]-\frac{1}{2}M_j$. The resulting momentum distribution is shown in Fig. \[FIG-7\], for the same 1D system that was studied in Sec. \[1DEX\]. This time, the initial wavefunction $\Psi_{\rm ic}(x)$ was evolved to times $t_{\rm flight}$ using the prescription (\[FF\]) on the initial $M=2048$ lattice rather than the standard step-by-step evolution (\[freex\]) on ${{\underline{M}}}=81920$ that was used in Sec. \[1DEX\]. The green case at $t_{\rm flight}=10/\omega$ might still be passable for some purposes, though the high momenta are already lost. The red longer-time case is completely scrambled.
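A diagnostic like Fig. \[FIG-7\] can be produced directly from the output of the 1D sketch given at the end of Sec. \[RESULT\]; since only the modulus squared is plotted, any overall phase factors of the kind appearing in (\[badDFT\]) drop out, up to normalization.

```python
import numpy as np

def apparent_k_density(psi_bar, dx_bar):
    """Apparent k-space density of the coarse final wavefunction (shape only)."""
    kbar = 2.0 * np.pi * np.fft.fftfreq(psi_bar.size, d=dx_bar)
    rho_k = np.abs(np.fft.fft(psi_bar))**2
    return np.fft.fftshift(kbar), np.fft.fftshift(rho_k)
```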
The fact that the phase structure in x-space remains small-scale despite a magnification of the density stymies several superficially promising ideas on how to increase the efficiency of the expansion calculation:
First, one could be tempted to try to reduce the processing load to only $\sim \log\lambda$ FFTs on $M$ points instead of the present $\lambda$ FFTs, by implementing several sequential expansions (\[FF\]) by small factors, say $\lambda_j=2$. However, at each such step we are left with a discretized wavefunction that has its momentum-space tails truncated. This will soon come to resemble the red line in Fig. \[FIG-7\] and become useless for further evolution.
Another approach that has been discussed in the field would try to introduce a time-dependent lattice spacing $\Delta{{\mathbf{x}}}(t) = F(t)$ that would track the expected density structure while keeping the lattice size $M$ constant. The hope is that it would allow one to keep the full nonlinear equation (\[GPE\]) at the cost of some additional correction terms dependent on $F(t)$. However, one can see that this will be unsuccessful as soon as the phase structure becomes too fine for the growing $\Delta{{\mathbf{x}}}(t)$.
Generalizations
---------------
The algorithm is readily adapted to cases where several complex-valued fields are present. One such case that may be aided by the algorithm presented here is positive-P simulations of supersonic BEC collisions [@Deuar07; @Deuar11; @Kheruntsyan12; @Lewis-Swan15]. Here, two independent complex-valued fields $\psi({{\mathbf{x}}})$ and $\psi^+({{\mathbf{x}}})$ that correspond to the ${\widehat{\Psi}}({{\mathbf{x}}})$ and ${\widehat{\Psi}^{\dagger}}({{\mathbf{x}}})$ Bose fields are used, and allow for the exact treatment of quantum fluctuations. The comparison of calculated and experimental pair velocity correlation widths has been problematic in these systems, because of the narrowness of the correlation peak in velocity [@Kheruntsyan12]. The detected peak is distorted in comparison with its k-space prediction due to not yet being in the far-field regime. There is no hope of a direct calculation of the free flight because the quantity $\varepsilon t_{\rm flight}/\hbar$ of (\[Mnaive\]) is very high (up to $\sim10^4$) in BEC collision experiments.
The approach can also be trivially adapted to cases of other spectra than the free particle one. This simply requires a modification of (\[free0\]) to ${{\widetilde{\Psi}}}({{\mathbf{k}}},t_{\rm final}) = {{\widetilde{\Psi}}}({{\mathbf{k}}},t_s) \exp\left[-it_{\rm flight}\omega_{{\mathbf{k}}}\right]$, with appropriate tweaks in (\[presf\]) and (\[presB\]). The crucial element is the presence of the vacuum assumption (\[vacuum\]).
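In the 1D sketch given at the end of Sec. \[RESULT\], this amounts to swapping the quadratic phase for the new dispersion; the Bogoliubov-like form below is shown purely as an illustration and is not taken from the text.

```python
def omega(k, c_s=1.0, hbar=1.0, mass=1.0):
    # example dispersion (Bogoliubov-like form, for illustration only)
    return np.sqrt((c_s * k)**2 + (hbar * k**2 / (2.0 * mass))**2)

# inside the q-loop of free_flight_expand, replace the free-particle phase by
#     phase = np.exp(-1j * t_flight * omega(k + alpha))
```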
Conclusions {#CONC}
-----------
To conclude, an algorithm (\[FF\]) has been presented that allows the exact calculation of the density of a wavefunction freely expanding into vacuum for practically arbitrary flight times without filling up the computer memory. The memory requirements do not depend on flight time and are the same size as the initial input state. Computation time is slightly faster than using an FFT on a large vacuum padded lattice. It is implemented using standard FFT libraries and some summing of terms. The approach relies crucially on two physical inputs: (1) That the initially compact wavefunction expands into vacuum, and (2) that the density length scale of the expanded cloud grows approximately linearly with time. The approach makes no assumptions about symmetries of the system or about the input wavefunction, so that it is a black box tool that can be immediately applied to general cases. This makes it well suited to the study of wavefunctions containing defects or samples of a thermal ensemble, a topic of many recent experiments [@Gring12; @Chomaz15; @Serafini15; @Lamporesi13; @Donadello14; @Sadler06; @Weiler08]. The flight times over which nontrivial defect evolution occurs during free flight are estimated in Sec. \[KINETIC\].
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank Ray Ng and Mariusz Gajda for helpful discussions on this matter, and Julius de Hond whose feedback identified an omission in the manuscript. The work was supported by the National Science Centre (Poland) grant No. 2012/07/E/ST2/01389.
References {#references .unnumbered}
==========
---
author:
- 'Y.W. Yu'
- 'X.P. Zheng'
date: 'Received 9 September 2005 / Accepted 14 December 2005'
title: Cooling of a rotating strange star in the color superconducting phase with a crust
---
[We investigate the thermal evolution of strange stars in the 2-flavor color superconductivity and color-flavor locked phases under the influence of deconfinement heating.]{} [Due to the spin-down of strange stars, the nuclear matter at the base of the thin crusts dissolves into quarks, releasing energy that heats the stars. On the other hand, the neutrino emissivities and specific heat involving pairing quarks are suppressed by the large pairing gap in the color superconducting phases. The thermal evolution equation of strange stars is then solved.]{} [Deconfinement heating delays the cooling of strange stars considerably. The presence of color superconductivity with a large gap enhances this effect. In particular, in the color-flavor locked phase, the stars cannot be very cold at an early age, but they cool slowly. For stars with strong magnetic fields, a significant heating period could exist during the first several tens or hundreds of years. In addition, we argue that a possible theoretical limit line, determined by the competition between deconfinement heating and surface photon cooling, may indicate an upper limit temperature that isolated compact stars should not exceed. ]{} [Deconfinement heating is important for the thermal evolution of strange stars and is especially decisive for stars in the color-flavor locked phase, which can show characteristic cooling behavior under this heating effect.]{}
1. Introduction {#introduction .unnumbered}
===============
Cooling simulations based on interior physics are of significant interest for the study of compact stars. According to nuclear physics, a quark matter core could be produced in the interior of a compact star (a hybrid star), and even stars made of strange quark matter (strange stars, SSs) may exist. Phenomenological and microscopic studies have confirmed that quark matter at a sufficiently high density, as in compact stars, undergoes a phase transition into a color superconducting state, typical cases being the 2-flavor color superconducting (2SC) and color-flavor locked (CFL) phases [@Shovkovy(2004); @Alford(2004)]. Theoretical approaches also concur that the superconducting order parameter, which determines the gap $\Delta$ in the quark spectrum, lies between 10 and 100MeV for the baryon densities existing in the interiors of compact stars. Recently, the cooling of hybrid stars with color superconducting quark cores has been investigated. Stars with CFL cores behave similarly to ordinary neutron stars [@Shovkovy(2002)], and with a suitably designed 2SC+X phase, hybrid stars can also explain the cooling data properly [@Grigorian; @et; @al(2005)]. In these cases, the thermal properties of the quark cores are suppressed by the large gap, and the hadronic matter parts of the stars play an important role in their cooling history.
The thermal evolution of SSs has also been extensively discussed. In early works, it was generally accepted that the surface temperature of SSs should be lower than neutron stars at the same age due to the quark direct Urca (QDU) processes [@Alcock(1988); @Pizzochero(1991); @Page(1992); @Schaab(1996)]. However, since the electron fraction could be small or even vanish, the QDU processes may be switched off. The cooling of SSs dominated by the quark modified Urca (QMU) and quark bremsstrahlung (QB) processes can be slower than neutron stars with standard cooling [@Schaab(1997a); @Schaab(1997b)]. Of course, a color superconducting phase could occur in SSs, and its effect on the cooling of the stars is a significant issue. @Blaschke(2000) show that the cooling of the stars in the 2SC phase (2SS hereafter) is compatible with existing X-ray data but that the stars in CFL phase (CSS hereafter) cool down too rapidly, which disagrees with the data. However, in those calculations an important factor, as described below, is ignored.
An SS, both in the normal phase and in a color superconducting phase, can sustain a tiny nuclear crust with a maximum density below neutron drip ($\sim10^{11}\rm g\hspace{0.1cm} cm^{-3}$) and mass $M_{c}\leq10^{-5}M_{\odot}$ due to the existence of a strong electric field on the quark surface [@Alcock(1986); @Usov(2004); @Zheng(2006)]. The spin-down of the star compresses the matter at the bottom of the crust. As soon as the density exceeds neutron drip, the surplus matter in the crust falls into the quark core in the form of neutrons. The engulfed neutrons dissolve into quarks, and the energy released in this process leads to so-called deconfinement heating (DH). @Yuan(1999) claim that DH delays the cooling of SSs in the normal phase (NSSs) and may even lead to a slight increase in the temperature at the early ages of a star under specific conditions.
We argue that the heating effect on the thermal evolution of stars in a color superconducting phase is huge compared to NSSs because of the suppression of the specific heat and neutrino emission involving pairing quarks. Therefore, we focus on the effects of DH on the cooling of 2SSs and CSSs in this paper. Our paper is arranged as follows. We recall the neutrino emissivities and specific heat, color superconductivity, and the DH mechanism in Sects. 2, 3, and 4, respectively. The cooling curves and the corresponding explanations are presented in Sect. 5. Section 6 contains our conclusion and discussions.
2. Neutrino emissivities and specific heat {#neutrino-emissivities-and-specific-heat .unnumbered}
==========================================
The emissivity associated with the QDU processes $
d{\rightarrow}ue\bar{\nu}$ and $ue{\rightarrow}d{\nu}$ of quarks is [@Iwamoto(1982)] $${\epsilon}^{(D)}{\simeq}8.8{\times}10^{26}
{\alpha}_c\left({\frac{{\rho}_b}{{\rho}_0}}\right)Y_e^{1/3}T_9^6
{\rm \hspace{0.1cm} erg \hspace{0.1cm} cm^{-3} \hspace{0.1cm}
sec^{-1}},$$ where ${\alpha}_c$ is the strong coupling constant, ${\rho}_b$ is the baryon density and ${\rho}_0=0.17{\rm \hspace{0.1cm} fm^{-3}}$ the nuclear saturation density, and $T_9$ is the temperature in units of $10^9\hspace{0.1cm} {\rm K}$. The electron fraction $Y_e={\rho}_e/{\rho}_b$ is small and even vanishes for a certain set of parameters ${\rho}_b$, ${\alpha}_c$ and s-quark mass $m_s$ [@Duncan(1983)]. We have not included the contribution to the emissivity from the *s-u* reaction, which is suppressed by an extra factor $\rm sin^{2}\theta_{c}\sim10^{-3}$ compared to the *d-u* reaction [@Duncan(1983)], where $\theta_{c}$ is the Cabibbo angle. When the QDU processes are switched off due to a small electron fraction ($Y_{e}<Y_{ec}=(3/\pi)^{1/2}m_{e}^{3}\alpha_{c}^{-3/2}/64$), the contributions to the emissivities from the QMU processes $dq{\rightarrow}uqe\bar{\nu}$ and QB processes $q_1q_2{\rightarrow}q_1q_2{\nu}\bar{\nu}$ dominate. These emissivities were also estimated as [@Iwamoto(1982)] $${\epsilon}^{(M)}{\simeq}2.83{\times}10^{19}
{\alpha}_c^2\left({\frac{{\rho}_b}{{\rho}_0}}\right)T_9^8{\rm
\hspace{0.1cm} erg \hspace{0.1cm} cm^{-3} \hspace{0.1cm}
sec^{-1}},$$ $${\epsilon}^{(B)}{\simeq}2.98{\times}10^{19}
\left({\frac{{\rho}_b}{{\rho}_0}}\right)T_9^8{\rm \hspace{0.1cm}
erg \hspace{0.1cm} cm^{-3} \hspace{0.1cm} sec^{-1}}.$$ In order to compute the thermal evolution of the stars, we also need the specific heat involving quarks and electrons written as [@Iwamoto(1982); @Blaschke(2000)] $$c_q{\simeq}2.5{\times}10^{20}\left({\frac{{\rho}_b}{{\rho}_0}}\right)^{2/3}T_9{\rm
\hspace{0.1cm} erg \hspace{0.1cm} cm^{-3}\hspace{0.1cm} K^{-1}},$$ $$c_e{\simeq}0.6{\times}10^{20}\left({\frac{{Y_e{\rho}_b}}{{\rho}_0}}\right)^{2/3}T_9{\rm
\hspace{0.1cm} erg \hspace{0.1cm} cm^{-3}\hspace{0.1cm} K^{-1}}.$$
Since the mass of the crust is very small, $M_{\rm
c}{\leq}10^{-5}M_{\odot}$, compared with the total mass of the star, its contribution to neutrino emissivity and specific heat can be neglected [@Gudmundsson; @et; @al(1983); @Lattimer(1994)]. Here we also ignore the neutrino emissivity and specific heat due to the photon-gluon excitation, because this excitation is only important for a temperature higher than $70{\rm MeV}$ [@Blaschke(2000)], which is much higher than the typical temperature in our calculation.
3. Color superconductivity {#color-superconductivity .unnumbered}
==========================
It is widely accepted that the color superconducting phase is the real ground state of quantized chromodynamics at asymptotically large densities. At a certain range of the quark chemical potential the quark-quark interaction is attractive, driving the pairing between the quarks [@Alford(1998); @Alford(1999); @Alford(2003); @Rapp(1998); @Shovkovy(2004)]. Because of the pairing, QDU processes are suppressed by a factor ${\rm exp}(-{\Delta}/k_{B}T)$, and QMU & QB processes are suppressed by a factor ${\rm exp}(-2{\Delta}/k_{B}T)$ for $T<T_c{\simeq}0.4\Delta/k_{B}$ [@Blaschke(2000)]. In the 2SC phase, two color states of *u* and *d* quarks pair, whereas the *s* quark is unpaired. To be specific, we suppose that blue-green and green-blue *u-d* quarks are paired , whereas red *u* and *d* quarks ($u_{r},d_{r}$) remain unpaired. As a consequence, the QDU processes on the red (unpaired) quarks, as $d_{r}\rightarrow u_{r}e \bar{\nu}$, as well as QMU, $d_{r}q_{r}\rightarrow u_{r}q_{r}e \bar{\nu}$, and QB, $q_{1r}q_{2r}\rightarrow q_{1r}q_{2r}\bar{\nu}\nu $, are not blocked, whereas other processes involving paired quarks are blocked out by a large pairing gap. Therefore, it can be estimated that the neutrino emissivities in 2SC phase are reduced by about one magnitude [@Blaschke(2000)]. On the other hand, the neutrino processes involving all flavors are suppressed in the CFL phase by the exponential factors. For both phases where the specific heat contributed by the paired quarks is also changed, we apply the formula [@Blaschke(2000)] $$\begin{array}{cc}
c_{sq}=3.2c_q\left({\frac{T_c}{T}}\right)\times\hspace{5cm}\\
\left[2.5-1.7\left({\frac{T}{T_c}}\right)+3.6\left({\frac{T}{T_c}}\right)^2\right]{\rm
exp}\left(-\frac{\Delta}{k_{B}T}\right).
\end{array}$$
4. Deconfinement heating {#deconfinement-heating .unnumbered}
========================
The effect of DH is determined by the number of neutrons engulfed by the quark core, in other words, the variation in the mass of the crust. The total heat released per time unit as a function of $t$ is $$H_{\rm dec}(t)=-q_n\frac{1}{m_{b}}\frac{d M_{\rm c}}{d
{\nu}}\dot{\nu},$$ where $q_n$, the heat release per absorbed neutron, is expected to be in the range $q_n{\sim}10-30{\rm MeV}$ [@Haensel(1991)], and $m_{b}$ is the mass of baryon. Assuming the spin-down is induced by the magnetic dipole radiation, the evolution of the rotation frequency $\nu$ is given by $$\dot{\nu}=-\frac{8\pi^{2}}{3Ic^3}{\mu}^2{\nu}^3{\rm sin}^2{\theta},$$ where $I$ is the stellar moment of inertia, ${\mu}=\frac{1}{2}BR^3$ is the magnetic dipole moment, and $\theta$ is the inclination angle between magnetic and rotational axes. The mass of the crust $M_{\rm
c}$ is calculated by a quadratic function of $\nu$ by @Glendenning(1992), whose result describes the cases with intermediate frequencies ($\leq500\rm Hz$) very well. @Zdunik2001 improve the calculation nearly up to the Keplerian frequency using a polynomial including terms of higher order in $\nu$. The mass of the crust reads [@Zdunik2001] $$M_{\rm c}=M^{0}_{\rm c}(1+0.24\nu^{2}_{3}+0.16\nu^{8}_{3}),$$ where $\nu_{3}=\nu/10^{3}$Hz, and $M^{0}_{\rm
c}\approx10^{-5}M_{\odot}$ is the mass of the crust in the static case.
5. Cooling curves {#cooling-curves .unnumbered}
=================
The thermal evolution with DH of a star is determined by the equation $$C\frac{d T}{d t}=-L_{\nu}-L_{\gamma}+H_{\rm dec},$$ where $C$ is the total specific heat, $L_{\nu}$ the neutrino luminosity, and $L_{\gamma}$ the surface photon luminosity given by $$L_{\gamma}=4{\pi}R^2{\sigma}T_s^4,$$ where ${\sigma}$ is the Stefan-Boltzmann constant and $T_s$ the surface temperature. The internal structure of SSs can be regarded as temperature-independent [@Glen(1980)], and the surface temperature is related to internal temperature by a coefficient determined by the scattering processes occurring in the crust. In the work of @Blaschke(2000), this relation is given by a simple expression $T_{s}=5\times10^{-2}T$, but following @Yuan(1999), we apply an accurate formula that is demonstrated by @Gudmundsson [@et; @al(1983)], $$T_{s}=3.08\times10^6g_{s,14}^{1/4}T_{9}^{0.5495},$$ where $g_{s,14}$ is the proper surface gravity of the star in units of $10^{14}\rm cm\hspace{0.1cm}s^{-2}$; or for a recent version see the result of @Potekhin(1997).
In our calculations, to be specific, we consider a model of canonical SS of $1.4M_{\odot}$ at a constant density, which is a very good approximation for SSs of mass $M\leq1.4M_{\odot}$ [@Alcock(1986)]. As used by [@Blaschke(2000)], we take $Y_{e}=10^{-5}$, $\alpha_{c}=0.25$, $\rho=3\rho_{0}$ for $Y_{e}>Y_{ec}$, which is a representative set of parameters for which the QDU processes contribute to the cooling, whereas $Y_{e}=0$, $\alpha_{c}=0.15$, $\rho=5\rho_{0}$ for $Y_{e}<Y_{ec}$. And we also choose $\Delta=100$MeV and $q_n=20{\rm MeV}$, the initial temperature $T_0=10^9{\rm K}$, initial period $P_0=0.78{\rm
ms}$, and the magnetic tilt angle $\theta=45^{\circ}$. The gravitational red-shift is also taken into account. Then the effective surface temperature detected by a distant observer is $T_{s}^{\infty}=T_{s}\sqrt{1-R_{g}/R}$, where $R_{g}$ is the gravitational stellar radius.
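To make the procedure concrete, the following is a purely illustrative NumPy sketch of how Eq. (10) can be stepped forward with the ingredients of Sects. 2–4. It is not the code used for the figures: it keeps only the QDU emissivity and the unpaired specific heat (no superconducting suppression factors and no QMU/QB terms), assumes a uniform-density star, neglects the gravitational redshift, and the numerical values of the radius, moment of inertia and surface gravity are assumptions rather than quantities quoted in the text.

```python
import numpy as np

MSUN, MEV, MB = 1.989e33, 1.602e-6, 1.66e-24      # g, erg, g
SIGMA, YEAR, CLIGHT = 5.67e-5, 3.156e7, 3.0e10    # cgs
R, I_MOM, GS14 = 1.06e6, 1.0e45, 1.0              # assumed radius, moment of inertia, g_s/1e14
VOL = 4.0 / 3.0 * np.pi * R**3

def cooling_curve(B=1e12, theta=np.pi/4, alpha_c=0.25, rho_ratio=3.0, Ye=1e-5,
                  qn_mev=20.0, Mc0=1e-5 * MSUN, T0=1e9, P0=0.78e-3,
                  t_end=1e6 * YEAR, steps=200000):
    mu = 0.5 * B * R**3                            # magnetic dipole moment
    T, nu = T0, 1.0 / P0
    tgrid = np.logspace(0.0, np.log10(t_end), steps)
    Tsurf_out = np.empty(steps)
    for i in range(steps):
        dt = tgrid[i] - (tgrid[i - 1] if i else 0.0)
        eps_D = 8.8e26 * alpha_c * rho_ratio * Ye**(1 / 3) * (T / 1e9)**6         # Eq. (1)
        cV = (2.5e20 * rho_ratio**(2 / 3)
              + 0.6e20 * (Ye * rho_ratio)**(2 / 3)) * (T / 1e9)                   # Eqs. (4)-(5)
        Tsurf = 3.08e6 * GS14**0.25 * (T / 1e9)**0.5495                           # Eq. (12)
        Lnu = eps_D * VOL
        Lgam = 4.0 * np.pi * R**2 * SIGMA * Tsurf**4                              # Eq. (11)
        nud = (-8.0 * np.pi**2 / (3.0 * I_MOM * CLIGHT**3)
               * mu**2 * nu**3 * np.sin(theta)**2)                                # Eq. (8)
        dMc_dnu = Mc0 * (0.48 * (nu / 1e3) + 1.28 * (nu / 1e3)**7) * 1e-3         # from Eq. (9)
        Hdec = -(qn_mev * MEV / MB) * dMc_dnu * nud                               # Eq. (7)
        T += dt * (-Lnu - Lgam + Hdec) / (cV * VOL)                               # Eq. (10)
        nu += dt * nud
        Tsurf_out[i] = Tsurf
    return tgrid / YEAR, Tsurf_out
```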
We plot the cooling curves without DH of NSSs (solid curves), 2SSs (dotted curves), and CSSs (dashed curves) in Fig.1. And the observational data, which are taken from Tables 1 and 2 in @Page [@et; @al(2004)], are also shown in order to give the readers a feeling of the position of the illustrative curves in the logarithm $T_s^{\infty}-t$ plane. But we will not try to fit the data carefully in this paper. It shows that the cooling history of 2SSs are similar to NSSs, whereas CSSs become very cold at an early age since the specific heat is very small, cooler than $10^{4.5}$K after 1000yr for $Y_{e}=10^{-5}$ or several hours for $Y_{e}=0$ (this curve is not shown in Fig.1). We can see the curves of CSSs are very far from the data. These conclusions are also indicated by @Blaschke(2000) using the relation of $T_{s}=5\times10^{-2}T$.
Figure 2 shows the cooling behaviors of 2SSs with DH for various magnetic fields ($10^{8}-10^{12}$G). And the analogs for NSSs can be seen in @Yuan(1999). We can see DH delays the stellar cooling considerably. As discussed by @Yuan(1999), the stronger the magnetic field the more rapid the spin-down, and most of the nuclear matter in the crust dissolves during an earlier and shorter time. For a 2SS with a strong ($B=10^{12}$G) field and small electron fraction (dotted curve a), a distinct heating period exists in the first several ten years. And in the cases of weak fields ($B<10^{10}$G), stars could maintain high temperatures even at older ages ($>10^{6}$yrs). In the following paragraph, we discuss in detail how DH induces temperature rise and delays cooling.
We here pay more attention to the situation of CSSs because we argue the existence of a marked heating effect relative to the reduced emissions. To be clear, Fig. 3 shows the cooling curves of CSSs with DH for both a strong ($B=10^{11}$G) and a weak ($B=10^{9}$G) magnetic field. It is obvious that the cooling curves are changed dramatically by DH. The strong magnetic field induces a rapid spin-down of the star at the earliest ages, which could enhance the effect of DH to make it greater than the cooling effect at the beginning. As a result, the temperature should rise due to the surplus heat until the increasing luminosity equals the heating effect, $L_{\gamma}=H_{\rm dec}$, so a net heating period appears at the earliest ages. On the other hand, the temperature of the star with a weak field decreases but does not rise, due to the relative greater cooling effect at the start until the thermal release is compensated for by DH entirely: $L_{\gamma}=H_{\rm dec}$. In this case, since the confinement energy deposited in the crust is released slowly to heat the star, the star with a weak field can maintain a high temperature even at older ages ($> 10^{6}{\rm
yrs}$). To conclude, both the stars with strong and weak fields, after several hundred years (the specific value of the time is determined by the specific condition of the star), could arrive at an equilibrium between the cooling and heating effects. From then on, the temperature could only be reduced in order to rebuild the equilibrium when $H_{\rm dec}$ deceases with time, so the cooling of the stars is delayed. Due to this delay, the curves of CSSs cannot be in conflict with observational data as shown in Fig.3. Since the neutrino emission involving all quarks in CSSs is suppressed, the equilibrium discussed above is only determined by DH and photon emission, and has hardly anything to do with the interior thermal properties of the star. Therefore, the dependence of the cooling on the electron fraction is eliminated after the first several hundred years. Going back to Fig.2, we see that the mechanism described above also influences the cooling of 2SSs. However, since the equilibrium ($L_{\nu}+L_{\gamma}=H_{\rm dec}$) should involve neutrino luminosity, which is larger at high temperature whereas smaller at low temperature than photon luminosity, the evolution of 2SSs may be more complicated than it is for CSSs (see Fig.4 for detail), i.e., the cooling history can be roughly divided into a neutrino cooling stage ($L_{\nu}\gg L_{\gamma}$) and a photon cooling stage ($L_{\nu}\ll L_{\gamma}$), and the cooling is sensitive to the electron fraction; and the moment when the equilibrium is achieved could be very different for different magnetic fields.
Figure 4 shows the cooling curves of CSSs with different magnetic fields. We can see the cooling of the stars with any field is delayed, just as we find in Fig.2. However, in comparison with Fig.2, there is a question of why the temperature rise of CSSs can be more significant than the one of 2SSs. For both CSSs and 2SSs, as discussed in the previous paragraph, the reason for the temperature rise is that the heating effect is greater than the cooling effect at the beginning: $H_{\rm dec}>L_{\gamma}$ for CSSs and $H_{\rm
dec}>L_{\nu}+L_{\gamma}$ for 2SSs. Since the neutrino term is absent for CSSs, the initial difference between the heating term and luminosity of CSSs is much larger than the one of 2SSs with the same magnetic field and initial temperature. On the other hand, with the rise in temperature, the increase in the luminosity is proportional to $T^{2.2}$ for CSSs (see Eqs.(11, 12)) but to $T^{8}$ for 2SSs with a small electron fraction (see Eqs.(2, 3), where the term of photon luminosity is ignored since $L_{\nu}\gg L_{\gamma}$ at high temperature). Therefore, 2SSs can achieve the equilibrium easily after a comparatively small temperature rise, but the magnitude of the rise for CSSs needs to be much larger. In addition, the needed magnetic field intensity to induce the temperature rise of CSSs could be smaller than 2SSs since the initial cooling effect of CSSs is smaller.
We connect the points where the cooling curves turn down into a line (dash-dotted curve) in the logarithmic $T_{s}^{\infty}-t$ plane in Fig.4. The temperature indicated by the line follows the power law $T^{\infty}_{s,\rm lim}=6.4{\times}10^{7}t^{-1/4}{\rm K}$. It appears well-founded that any other cooling curve of an isolated star (to our knowledge), regardless of the stellar model, will lie below this line, due to the strong heating effect and weak cooling effect in CSSs. Hence we argue that the indicated temperature may be the upper limit to what compact stars can reach at a given age. We must emphasize that this line, found from the cooling curves of CSSs, is determined only by the equilibrium between DH and surface photon cooling.
Finally, we present the cooling curves of CSSs for different gaps in Fig.5. It can be seen that the cooling curves are almost independent of the gap on a very large parameter scale (${\Delta{\sim}10-100{\rm
MeV}}$).
6. Conclusion and discussions {#conclusion-and-discussions .unnumbered}
=============================
We have studied the cooling behaviors of rotating SSs in the presence of color superconductivity by considering the effect of DH. The thermal evolution of SSs is now quite different from previous results, because DH can delay the cooling, and color superconductivity enhances this effect significantly, especially in the CFL phase. For CSSs, the previous discussions point out that the specific heat is determined by electrons since the contribution of quarks has been suppressed. This reduction leads to a very rapid cooling that disagrees with observational data [@Blaschke(2000)]. However, when we consider the effect of DH, the results should imply that the cooling curves could not be in serious conflict with the data. We even find it is possible that CSSs reach a higher temperature than other kinds of compact stars in their cooling history. The limit temperature line should illustrate this conjecture.
To be specific, as pointed out by @Yuan(1999), a temperature rising period could exist at the early ages due to the DH with a strong magnetic field. And we argue that the presence of color superconductivity may lead to a significant rise. @Yuan(1999) propose that this phenomena may be a signature of the existence of SS. In our opinion, if the theory of the color superconductivity is reliable, observing a young and quite hot source may be possible, although we also note that an important so-called brightness constraint has been suggested recently by @Grigorian(2005), who argues that it is unlikely that objects with a given age are hotter than those already observed. Statistically, this constraint is a good finding, but we think that in theory the possibility of the existence of young hotter stars still cannot be rejected absolutely. Of course, for our model, the early evolution also may be changed to a certain extent if we consider the formation of the crust of the star. On the other hand, for those stars with weak fields ($<10^{10}$G), our results show that they can maintain a high temperature at older ages ($>10^{6}$yrs). Unfortunately, these older sources with weak fields also have been not detected up to now (see, for example, @Popov [@et; @al(2003)] for the list of close-by cooling pulsars). To summarize, at the present point of observations, there is no evidence of the existence of extra hot sources. It may imply that the model needs some further improvements. However, it still should be emphasized that the various heating mechanisms in compact stars need to be given more importance when we talk about the star’s cooling when using the so-called standard scenario.
We would like to thank Prof. D. F. Hou for the useful discussion. We are especially indebted to the anonymous referee for his/her useful comments that helped us to improve the paper. This work was supported by the NFSC under Grant Nos. 10373007 and 90303007.
[18]{} Alcock, C., Farhi, E., & Olinto, A. 1986, ApJ, 310, 261 Alcock, C., & Olinto, A. 1988, Ann. Rev. Nucl. Sci. 38, 161 Alford, M. 2004, J. Phys. G, 30, 441 Alford, M., Rajagopal, K., & Wilczek, F. 1998, Phys. Lett. B, 422, 247 Alford, M., Rajagopal, K., & Wilczek, F. 1999, Nucl. Phys. B, 537, 443 Alford, M., & Reddy, S. 2003, Phys. Rev. D, 67, 074024 Blaschke, D., & Klähn, T., & Voskresensky, D. N. 2000, ApJ, 533, 406 Duncan, R. C., Shapiro, S. L., & Wasserman, I. 1983, ApJ, 267, 358 Glen, G., & Sutherland, P. 1980, ApJ, 239, 671 Glendenning, N. K., & Weber, F. 1992, ApJ, 400, 647 Grigorian, H., Blaschke, D., & Voskresensky, D. 2005, Phys. Rev. C71, 045801 Grigorian, H. 2005, \[arXiv: astro-ph/0507052\] Gudmundsson, E. H., Pethick, C. J., & Epstein, R. I. 1983, ApJ, 272, 286 Haensel, P., & Zdunik, J. 1991, In: Madsen, J., Haensel, P., (eds.) Strange Quark Matter in Physics and Astrophysics. (Nucl. Phys. B\[Proc. Suppl.\], 24), 139 Iwamoto, N. 1982, Ann. Phys., 141, 1 Lattimer, J. M., Van Riper, K. A., Prakash, M., & Prakash, M. 1994, ApJ, 425, 802 Page, D. 1992, In:P[é]{}r[é]{}z, M., Huerta, R., (eds.) Proceedings of the work-shop on High Energy Phenomenology. World Scientific, Singapore, 347 Page, D., Lattimer, J. M., Prakash, M., & Steiner, A.W. 2004, ApJS, 155, 623 Pizzochero, P. M. 1991, Phys. Rev. Lett., 66, 2425 Potekhin, A. Y., Chabrier, G., & Yakovlev, D. G. 1997, A&A, 323, 415 Popov, S. B., Colpi, M., Prokhorov, M.E., Treves, A., & Turolla, R. 2003, A&A, 406, 111 Rapp, R., Schäfer, T., Shuryak, E., & Velkovsky, M. 1998, Phys. Rev. Lett., 81, 53 Schaab, C., Hermann, B., Weber, F., & Weigel, M. K. 1997a, ApJ, 480, L111 Schaab, C., Hermann, B., Weber, F., & Weigel, M. K. 1997b, J. Phys. G, 23, 2029 Schaab, C., Weber, F., Weigel, M. K., & Glendenning, N. K. 1996, Nucl. Phys. A, 605, 531 Shovkovy, I. A. 2004, lectures delivered at the IARD 2004 conference, Saas Fee, Switzerland, June 12-19, and at the Helmholtz International Summer School and Workshop on Hot points in Astrophysics and Cosmology, JINR, Dubna, Russia, Aug. 2-13, \[arXiv: nucl-th/0410091\] Shovkovy, I. A. & Ellis, P. J. 2002, talk presented at workshop “Continuous Advances in QCD 2002/Arkadyfest,” Minneapolis, USA, May, 17-23, \[arXiv: astro-ph/0207346\] Usov, V.V. 2004, Phys. Rev. D, 70, 067301 Yuan, Y. F., & Zhang, J. L. 1999, A&A, 344, 371 Zdunik, J. L., Haensel, P., & Gourgoulhon, E. 2001, A&A, 372, 535 Zheng, X. P., & Yu, Y. W. 2006, A&A, 445, 627
---
abstract: 'We describe a primal-dual framework for the design and analysis of online convex optimization algorithms for [*drifting regret*]{}. The existing literature shows (nearly) optimal drifting regret bounds only for the $\ell_2$ and $\ell_1$ norms. Our work provides a connection between these algorithms and Online Mirror Descent ($\omd$) updates; one key insight resulting from our work is that in order for these algorithms to succeed, it suffices for the gradient of the regularizer to be bounded (in an appropriate norm). For situations (such as the $\ell_1$ norm) where the vanilla regularizer does not have this property, we have to [*shift*]{} the regularizer to ensure this. This helps explain the various updates presented in [@bansal10; @buchbinder12]. We also consider the online variant of the problem with $1$-lookahead, and with movement costs in the $\ell_2$-norm. Our primal-dual approach yields nearly optimal competitive ratios for this problem.'
author:
- 'Suman K Bera[^1], Anamitra R Choudhury[^2], Syamantak Das[^3], Sambuddha Roy[^4] and Jayram S. Thatchachar[^5]'
bibliography:
- 'main.bib'
title: Fenchel Duals for Drifting Adversaries
---
[^1]: Indian Institute of Technology Delhi. Email: .
[^2]: IBM Research, Delhi. Email:
[^3]: Indian Institute of Technology Delhi. Email: .
[^4]: IBM Research, Delhi. Email:
[^5]: IBM Research, Almaden. Email: .
---
abstract: 'We show that the BRST cohomology of the massless sector of the Type IIB superstring on $AdS_5\times S^5$ can be described as the relative cohomology of an infinite-dimensional Lie superalgebra. We explain how the vertex operators of ghost number 1, which correspond to conserved currents, are described in this language. We also give some algebraic description of the ghost number 2 vertices, which appears to be new. We use this algebraic description to clarify the structure of the zero mode sector of the ghost number two states in flat space, and initiate the study of the vertices of the higher ghost number.'
---
[**$ $\
$ $\
Pure spinors in AdS and Lie algebra cohomology** ]{}\
\
Introduction
============
Pure spinor formalism [@Berkovits:2000fe] is a generalization of the BRST formalism with the ghost fields constrained to satisfy a nonlinear (quadratic) equation: $$\label{IntroPSConstraint}
\lambda^{\alpha}\Gamma_{\alpha\beta}^m\lambda^{\beta} = 0$$ where $\Gamma^m_{\alpha\beta}$ are the Dirac Gamma-matrices. A natural question arises: what kind of nonlinear constraints can ghost fields satisfy in a physical theory? What if we replace (\[IntroPSConstraint\]) by an arbitrary set of equations: $$\lambda^{\alpha}C_{\alpha\beta}^i\lambda^{\beta} = 0\;,\quad
i\in I\quad \mbox{?}$$ Of course, this would generally speaking have nothing to do with the string theory. But the question is, besides [*coming from*]{} superstring theory, what special properties of $C_{\alpha\beta}^m = \Gamma_{\alpha\beta}^m$ are important for physics? This would be useful to know, for example when thinking about possible generalizations of the pure spinor formalism.
It turns out that there is some special property of (\[IntroPSConstraint\]) which plays an important role in the string worldsheet theory. This is the so-called Koszulity — see [@Gorodentsev:2006fa] and references therein. The formalism of Koszul duality was extensively used in the study of the algebraic properties of the supersymmetric Yang-Mills theories in [@Movshev:2003ib; @Movshev:2004aw], and in the classification of the possible deformations of these theories in [@Movshev:2009ba].
In this paper we will study the BRST cohomology of the massless sector of the Type IIB superstring in $AdS_5\times S^5$. We will use the formalism of Koszul duality to gain better understanding of the massless BRST cohomology.
The BRST cohomology counts infinitesimal deformations of the background $AdS_5\times S^5$, also called “linearized excitations” or “gravitational waves”. From the point of view of the string worldsheet theory, they are identified with the [*massless vertex operators*]{}. Understanding the properties of these vertex operators is important already because of their role in the scattering theory. Indeed, the correlation function of vertex operators is the main ingredient in the string theory computation of the S-matrix.
#### Main results
1. We show that the cohomology of the BRST complex of the Type IIB SUGRA on $AdS_5\times S^5$ is equivalent to some relative Lie algebra cohomology.
2. We classify the vertex operators of ghost number 1, which correspond to the densities of the local conserved charges.
3. We give a general Lie-algebraic description of the vertex operators of ghost number $\geq 2$ and use this description to study the properties of the zero momentum states (“discrete states”).
#### Previous results for ghost number 1
The classification of the vertex operators in the ghost number 1 was done, at least partially, in the Appendix of our previous paper [@Mikhailov:2009rx]; the method which we develop here appears more elegant.
#### Zero momentum states
In a typical string theory computation one considers the scattering of physical excitations (vertex operators) which depend on the space-time coordinates exponentially: $$V(x)\simeq e^{ikx}$$ But we find it interesting to also consider vertex operators depending on $x$ polynomially. We will call them “zero momentum vertices” because their wavefunction in momentum space is supported at $k=0$. It turns out that this “zero momentum sector” carries one potentially unpleasant surprise: there are some well-defined vertex operators which do not correspond to any physical states [@Bedoya:2010qz; @Mikhailov:2012id]. This means that the requirement of BRST invariance alone does not yet provide a complete characterization of the physically relevant sigma-models. (But the picture becomes complete if one imposes, in addition to the BRST invariance, the condition of the sigma-model being finite at the one-loop level.) In this paper we use Koszul duality to obtain a dual description of such unphysical states in terms of fields satisfying unusual equations of motion, similar to this one: $$\partial_m A_n + \partial_n A_m = 0$$ Such equations imply that higher derivatives of $A$ vanish.
#### Plan of the paper
We will start in Sections \[sec:Maxwell\], \[sec:MaxwellLieCohomology\] with the application of Koszul duality to the ten-dimensional supersymmetric Maxwell theory. In Section \[sec:SUGRA\] we apply a similar method to the study of linearized Type IIB SUGRA in $AdS_5\times S^5$. We introduce in Section \[sec:LieAlgebraOfCovariantDerivatives\] some infinite-dimensional super-Lie algebra, and show in Section \[sec:ReductionToIdeal\] that the BRST cohomology is equal to the Lie-algebraic cohomology of some ideal $I$ of this super-algebra. In Section \[sec:FlatSpaceLimit\] we consider the flat space limit and in particular study the zero momentum states. One unusual finding is the existence of nontrivial cohomology at the ghost number three.
#### Note added in the revised version
The approach developed in this paper is useful for clarifying the construction of the [*integrated*]{} vertex operator [@Chandia:2013kja].
Pure spinor formulation of the SUSY Maxwell theory {#sec:Maxwell}
==================================================
Supersymmetric space-time and basic constraints
-----------------------------------------------
Here we will review the superspace description of the classical supersymmetric Maxwell theory in 10 dimensions. The superspace is formed by 10 bosonic coordinates $x^m$ and 16 fermionic coordinates $\theta^{\alpha}$. This is the supersymmetric space-time; we will call it $M$: $$M = {\bf R}^{10|16}$$ The basic superfield is the vector potential $A_{\alpha}(x,\theta)$. For every $\alpha\in \{1,\ldots,16\}$, the corresponding $A_{\alpha}$ is a scalar function: $$A_{\alpha}\;:\; M \to {\bf R}$$ The equations of motion of the theory are encoded in the following construction. Let us consider the “covariant derivatives”: $$\label{CovariantDerivative}
\nabla_{\alpha} = {\partial\over\partial\theta^{\alpha}} +
\Gamma_{\alpha\beta}^m \theta^{\beta} {\partial\over\partial x^m} +
A_{\alpha}(x,\theta)$$ It turns out [@Nilsson:1981bn; @Witten:1985nt] that the equations of motion of SUSY Maxwell theory are equivalent to the [*constraint*]{}:
- There exists a differential operator $\nabla_m = {\partial\over\partial x^m} + A_m(x,\theta)$ such that: $$\label{BasicConstraint}
\{\nabla_{\alpha},\nabla_{\beta}\} = \Gamma_{\alpha\beta}^m \nabla_m$$
The nontrivial requirement of the constraint is that the LHS of (\[BasicConstraint\]) is proportional to $\Gamma^m_{\alpha\beta}$, because the most general structure would be: $$\Gamma^m_{\alpha\beta}\nabla_m + \Gamma^{m_1m_2m_3m_4m_5}_{\alpha\beta}X_{m_1m_2m_3m_4m_5}$$ where $X_{m_1\ldots m_5} = X(x,\theta)_{m_1\ldots m_5}$ is some function on the superspace. Equivalently, the constraint (\[BasicConstraint\]) can be written as: $$\label{ContractionWithGammaFiveIsZero}
\Gamma^{\alpha\beta}_{m_1m_2m_3m_4m_5}\;\{\nabla_{\alpha},\nabla_{\beta}\} = 0$$ With the constraint (\[ContractionWithGammaFiveIsZero\]) satisfied, we consider (\[BasicConstraint\]) as the definition of $\nabla_m$. The pure spinor interpretation of (\[ContractionWithGammaFiveIsZero\]) is due to [@Howe:1991mf].
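It may help to recall the standard group-theoretic counting behind this constraint (a well-known fact, stated here only for orientation): the symmetric square of the chiral 16-component spinor representation of $so(10)$ decomposes as $$\mbox{Sym}^2({\bf 16}) = {\bf 10}\oplus {\bf 126}\;,\qquad {16\cdot 17\over 2} = 136 = 10 + 126$$ so the symmetric object $\{\nabla_{\alpha},\nabla_{\beta}\}$ can only contain a vector part $\Gamma^m_{\alpha\beta}\nabla_m$ and a five-form part $\Gamma^{m_1\ldots m_5}_{\alpha\beta}X_{m_1\ldots m_5}$; the constraint (\[ContractionWithGammaFiveIsZero\]) is precisely the statement that the ${\bf 126}$ (five-form) part vanishes.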
Definition of the Lie superalgebra $\cal L$. {#sec:LYangMills}
--------------------------------------------
Now let us forget Eq. (\[CovariantDerivative\]) and consider the Lie superalgebra $\cal L$ generated by the letters $\nabla_{\alpha}$ with the relation (\[BasicConstraint\]). This is an infinite-dimensional Lie superalgebra. It turns out that some properties of the SUSY Maxwell theory can be described in terms of this algebra $\cal L$. In the next Section we will describe an application of the cohomology of ${\cal L}$.
Lie algebra cohomology and solutions of the SUSY Maxwell theory {#sec:MaxwellLieCohomology}
===============================================================
Vacuum solution
---------------
Let us consider the vacuum solution $A_{\alpha}(x,\theta) = 0$. In this case $\nabla_{\alpha} = \nabla_{\alpha}^{(0)} = {\partial\over\partial\theta^{\alpha}} +
\Gamma_{\alpha\beta}^m \theta^{\beta} {\partial\over\partial x^m}$. The vacuum solution is invariant under the supersymmetry algebra $\bf susy$ generated by the operators $S_{\alpha}$: $$S_{\alpha} = {\partial\over\partial\theta^{\alpha}} -
\Gamma_{\alpha\beta}^m \theta^{\beta} {\partial\over\partial x^m}$$ We observe that $\{S_{\alpha},\nabla_{\beta}^{(0)}\} = 0$, and in this sense the vacuum solution is $\bf susy$-invariant. It turns out that the operators $\nabla^{(0)}$ themselves generate the same (isomorphic) algebra $\bf susy$ as do $S_{\alpha}$. This can be explained using the interpretation of $M$ as the coset space of $\bf susy$. Let us consider the abstract algebra $\bf susy$ generated by ${t^{\rm\scriptscriptstyle odd}}_{\alpha}$ and ${t^{\rm\scriptscriptstyle even}}_m$ with the commutation relations: $$\{{t^{\rm\scriptscriptstyle odd}}_{\alpha},{t^{\rm\scriptscriptstyle odd}}_{\beta}\} = \Gamma^m_{\alpha\beta} {t^{\rm\scriptscriptstyle even}}_m$$ and other commutators all zero. Let us interpret $x^m$ and $\theta^{\alpha}$ as coordinates on the group manifold of the corresponding Lie group: $$g = \exp(\theta^{\alpha} {t^{\rm\scriptscriptstyle odd}}_{\alpha} + x^m{t^{\rm\scriptscriptstyle even}}_m)$$ Then $\nabla_{\alpha}$ acts as the multiplication by ${t^{\rm\scriptscriptstyle odd}}_{\alpha}$ on the left, and $S_{\alpha}$ as the multiplication by ${t^{\rm\scriptscriptstyle odd}}_{\alpha}$ on the right. We can consider the universal enveloping algebra $U{\bf susy}$ as a representation of $\bf susy$, by the left multiplication. Then the regular representation can be considered as its dual, which will be denoted $(U{\bf susy})'$.
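As a quick check of this statement (a one-line standard computation), the cross terms in the anticommutator of $S$ and $\nabla^{(0)}$ cancel by the symmetry of the Gamma-matrices: $$\{S_{\alpha},\nabla^{(0)}_{\beta}\} = \Gamma^m_{\beta\alpha}{\partial\over\partial x^m} - \Gamma^m_{\alpha\beta}{\partial\over\partial x^m} = 0$$ while $\{\nabla^{(0)}_{\alpha},\nabla^{(0)}_{\beta}\}$ and $\{S_{\alpha},S_{\beta}\}$ are both proportional to $\Gamma^m_{\alpha\beta}{\partial\over\partial x^m}$, with opposite signs; this realizes the left and the right translations on the supergroup, which anticommute with each other.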
#### Relation between $\cal L$ and $\bf susy$.
There is an ideal $I\subset {\cal L}$ such that the factor algebra over this ideal is $\bf susy$: $${\cal L} / I = {\bf susy}$$ The basic constraint (\[BasicConstraint\]) actually implies the existence of $W^{\alpha}$ such that[^1]: $$\label{DefW}
[\nabla_{\alpha},\nabla_m] = \Gamma^m_{\alpha\beta}W^{\beta}$$ This $W^{\alpha}$ is an element of $I$, because if $\nabla_{\alpha}$ were the generators of the 10-dimensional supersymmetry algebra, then $W^{\alpha}$ would be zero.
Deformations of solutions and cohomology {#sec:DeformationsAndCohomology}
----------------------------------------
The deformation of the given solution $A_{\alpha}(x,\theta)$ is: $$A_{\alpha} \mapsto A_{\alpha} + \delta A_{\alpha}$$ where $\delta A_{\alpha}$ should satisfy: $$\label{ConstraintOnDeformation}
\{\nabla_{\alpha}, \delta A_{\beta}\} = \Gamma_{\alpha\beta}^m \delta A_m$$ The fact that the LHS is proportional to $\Gamma_{\alpha\beta}^m$ is a nontrivial constraint on $\delta A_{\beta}$, and if it is satisfied, then (\[ConstraintOnDeformation\]) becomes the definition of $\delta A_m$.
Let us introduce [*pure spinors*]{} $\lambda^{\alpha}$ satisfying: $$\lambda^{\alpha}\Gamma_{\alpha\beta}^m\lambda^{\beta} = 0$$ Using these pure spinors, Eq. (\[ConstraintOnDeformation\]) can be written: $$\begin{aligned}
Q v =\;& 0
\\
\mbox{\tt\small where } Q =\;& \lambda^{\alpha}\nabla_{\alpha}
\\
\mbox{\tt\small and } v =\;& \lambda^{\alpha}\delta A_{\alpha}\end{aligned}$$ Therefore the problem of classifying the infinitesimal deformations of the vacuum solution is reduced to the computation of the cohomology of $Q$.
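Although we will not need the component expansion in what follows, it is perhaps useful to recall (a standard result of the pure spinor formalism, quoted here only for illustration) what this cohomology contains at ghost number one: expanding $\delta A_{\alpha}(x,\theta)$ in $\theta$, the condition $Qv=0$ reproduces the linearized super-Maxwell equations for a gauge field $a_m(x)$ and a photino $\chi^{\alpha}(x)$, $$\partial^m(\partial_m a_n - \partial_n a_m) = 0\;,\qquad \Gamma^m_{\alpha\beta}\partial_m\chi^{\beta} = 0$$ while the linearized gauge transformations $\delta A_{\alpha} = \nabla_{\alpha}\omega$ correspond to $Q$-exact vertices $v = Q\omega$.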
Koszul duality and its application to deformations {#sec:KoszulAndDef}
--------------------------------------------------
Let us consider a representation $V$ of the Lie algebra $\bf susy$, and the following version of the BRST complex: $$\label{BRSTComplexMaxwellCoeffV}
\ldots \stackrel{Q_{\rm BRST}}{\longrightarrow}
V\otimes_{\bf C}{\cal P}^n
\stackrel{Q_{\rm BRST}}{\longrightarrow}
V\otimes_{\bf C}{\cal P}^{n+1}
\stackrel{Q_{\rm BRST}}{\longrightarrow} \ldots$$ where ${\cal P}^n$ is the space of polynomial functions of degree $n$ on the pure spinors $\lambda^{\alpha}$. A representation $V$ of $\bf susy$ is also a representation of $\cal L$, because ${\bf susy} = {\cal L}/I$.
Koszul duality[^2] implies that the cohomology of (\[BRSTComplexMaxwellCoeffV\]) coincides with the Lie algebra cohomology of $\cal L$: $$\label{KoszulImpliesBRSTEqualsLie}
H^n(Q_{\rm BRST}\;;\;V) = H^n({\cal L}\;;\;V)$$ Notice that $\bigoplus\limits_{n=0}^{\infty}{\cal P}^n$ is [*a commutative algebra with quadratic relations*]{}. This algebra is Koszul dual to [*the universal enveloping algebra of a Lie algebra*]{}, $U{\cal L}$.
#### Brief review of (\[KoszulImpliesBRSTEqualsLie\])
The Koszul duality implies that the following sequence: $$\begin{aligned}
\label{KoszulSequence}
\ldots & \longrightarrow \mbox{Hom}_{\bf C}({\cal P}^2,\;U{\cal L})
\longrightarrow \mbox{Hom}_{\bf C}({\cal P}^1,\;U{\cal L})
\longrightarrow U{\cal L} \longrightarrow {\bf C} \longrightarrow 0\end{aligned}$$ is exact, and therefore provides a free resolution of the $U{\cal L}$-module $\bf C$. This fact depends on special properties of the quadratic constraint (\[IntroPSConstraint\]).
In (\[KoszulSequence\]) the action of $U{\cal L}$ on $U{\cal L}$ is by the left multiplication, and the action of the differential involves the right multiplication by the $\nabla_{\alpha}$: $$d \phi(p) = \phi(\lambda^{\alpha}p)\nabla_{\alpha}$$ Here on the right hand side we have the product of $\nabla_{\alpha}\in U{\cal L}$ with $\phi(\lambda^{\alpha}p)\in U{\cal L}$. In other words, for $\phi\in\mbox{Hom}_{\bf C}({\cal P}^n,U{\cal L})$ we have: $$\label{DifferentialInKoszulResolution}
d\phi = \mu^{\rm \tiny right}_{U{\cal L}}(\nabla_{\alpha})\circ\phi\circ\mu_{{\cal P}}(\lambda^{\alpha})$$ where $\mu_{\cal P}(\lambda^{\alpha}):{\cal P}^n\to {\cal P}^{n+1}$ is multiplication of a polynomial by $\lambda^{\alpha}\in {\cal P}^1$, and $\mu^{\rm\tiny right}_{U{\cal L}}(\nabla_{\alpha})$ is the right multiplication by $\nabla_{\alpha}$ in $U{\cal L}$. (The composition $\phi\circ\mu(\lambda^{\alpha})$ is of the type ${\cal P}^n\to U{\cal L}$; we then multiply by $\nabla_{\alpha}\in U{\cal L}$.)
Since we have a projective resolution of ${\bf C}$, we can now use it to compute the Lie algebra cohomology of ${\cal L}$ with coefficients in $V$, [*i.e.*]{} $\mbox{Ext}_{U{\cal L}}({\bf C},V)$. It is the cohomology of the following sequence: $$\begin{aligned}
0 & \longrightarrow \mbox{Hom}_{U{\cal L}}(U{\cal L},V)
\longrightarrow
\mbox{Hom}_{U{\cal L}}(\mbox{Hom}_{\bf C}({\cal P}^1,U{\cal L}),V)
\longrightarrow\ldots
\label{HomHom}\\
\ldots & \longrightarrow
\mbox{Hom}_{U{\cal L}}(\mbox{Hom}_{\bf C}({\cal P}^n,U{\cal L}),V)
\longrightarrow
\mbox{Hom}_{U{\cal L}}(\mbox{Hom}_{\bf C}({\cal P}^{n+1},U{\cal L}),V)
\longrightarrow \ldots
\nonumber\end{aligned}$$ where the differential is induced by (\[DifferentialInKoszulResolution\]) and acts as follows. For $f\in \mbox{Hom}_{U{\cal L}}(\mbox{Hom}_{\bf C}({\cal P}^n,U{\cal L}),V)$, the $df\in\mbox{Hom}_{U{\cal L}}(\mbox{Hom}_{\bf C}({\cal P}^{n+1},U{\cal L}),V)$ is evaluated on $\phi\in\mbox{Hom}_{\bf C}({\cal P}^{n+1},U{\cal L})$ as follows: $$(df)(\phi: {\cal P}^{n+1} \to U{\cal L}) =
f(\mu^{\rm\tiny right}_{U{\cal L}}(\nabla_{\alpha})\circ\phi\circ\mu_{\cal P}(\lambda^{\alpha}))$$ There is an isomorphism: $$\begin{aligned}
{\cal P}^n\otimes_{\bf C}V
\simeq \;&
\mbox{Hom}_{U{\cal L}}(\mbox{Hom}_{\bf C}({\cal P}^n,U{\cal L}),V)
\\
p\otimes v \mapsto
\;& [\phi\mapsto \phi(p)v]\end{aligned}$$ Here “$\phi(p)v$” means the action of $\phi(p)\in U{\cal L}$ on the element $v$ of the representation $V$ of $U{\cal L}$. This isomorphism relates (\[HomHom\]) to (\[BRSTComplexMaxwellCoeffV\]).
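As the simplest illustration of this isomorphism, at $n=0$ the complex (\[BRSTComplexMaxwellCoeffV\]) reads $$V\;\longrightarrow\; V\otimes_{\bf C}{\cal P}^1\;,\qquad v\mapsto \lambda^{\alpha}(\nabla_{\alpha}v)$$ so $H^0(Q_{\rm BRST}\;;\;V)$ consists of the vectors annihilated by all the $\nabla_{\alpha}$, [*i.e.*]{} the ${\cal L}$-invariants of $V$, in agreement with the general fact that the zeroth Lie algebra cohomology $H^0({\cal L}\;;\;V)$ is the space of invariants.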
#### Special case
The cohomology problem described in Section \[sec:DeformationsAndCohomology\] corresponds to the particular case of $V= (U{\bf susy})'$. As we have just explained, this is equivalent to the computation of the Lie algebra cohomology: $$H^{\bullet}({\cal L}, (U{\bf susy})')$$ Notice that ${\bf susy} = {\cal L}/I$ and therefore $(U{\bf susy})'$ is naturally a representation of $\cal L$, by the left multiplication. To calculate this cohomology, we notice that the following complex: $$\ldots \longrightarrow U{\cal L}\otimes_{\bf C} \Lambda^2 I \longrightarrow
U{\cal L}\otimes_{\bf C} I \longrightarrow U{\cal L} \longrightarrow
U{\bf susy}
\longrightarrow 0$$ is a free resolution of $U{\bf susy}$ as a $U{\cal L}$-module. This means that: $$H^n({\cal L}, (U{\bf susy})') = H^n(I,{\bf C})$$ More specifically, the ghost number one vertex operator $\lambda^{\alpha}\delta A_{\alpha}$ corresponds to the first cohomology: $$\label{H1ViaAbelianization}
H^1(I,{\bf C}) = \left( {I\over [I,I]}\right)'$$ This has the following physical interpretation. The space $I\over [I,I]$ can be identified with the space of field strengths. Then (\[H1ViaAbelianization\]) tells us that the classical solutions are linear functionals on the space of field strengths. Indeed, given a classical solution, we can compute the value of the field strength on this classical solution. Therefore, the space of classical solutions is expected to be dual to the space of field strengths, as we indeed observe in (\[H1ViaAbelianization\]).
#### Explicit description of $I\over [I,I]$
Elements $W^{\alpha}$ of $I$ were introduced in Eq. (\[DefW\]). Consider the projection of $W^{\alpha}$ to $I/[I,I]$, [*i.e.*]{} $W^{\alpha} \mbox{ mod } [I,I]$. We conjecture that all the other elements of $I/[I,I]$ can be obtained from $W^{\alpha}$ by commuting with $\nabla_{\alpha}$, [*i.e.*]{} acting with $\bf susy$. This means that all the gauge invariant operators at the linearized level are $W^{\alpha}$ and its derivatives.
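For concreteness, we recall the standard linearized superspace relation (up to a normalization convention, which we do not fix here): $$\nabla_{\alpha}W^{\beta}\;\propto\;(\Gamma^{mn})_{\alpha}{}^{\beta}\,F_{mn}$$ so acting on $W^{\alpha}$ with the $\nabla_{\alpha}$ produces the field strength $F_{mn}$, and further commutators with $\nabla_m$ produce its space-time derivatives; this is what the conjecture asserts at the level of $I/[I,I]$.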
Type IIB SUGRA in $AdS_5\times S^5$ {#sec:SUGRA}
===================================
#### Note in the revised version
The constructions of this paragraph can be illustrated by explicit examples of vertex operators corresponding to the $\beta$-deformation. These examples are constructed in [@Chandia:2013kja].
BRST complex {#sec:BRSTComplexTypeIIB}
------------
The BRST complex of Type IIB SUGRA in $AdS_5\times S^5$ [@Berkovits:2001ue; @Berkovits:2000yr; @Berkovits:2004xu] is based on the coset space $G/G_0$ where $G$ is the Lie supergroup corresponding to the Lie superalgebra ${\bf g} = {\bf psu}(2,2|4)$ and $G_0$ is the subgroup corresponding to ${\bf g}_{\bar{0}} = so(1,4)\oplus so(5)$. A ${\bf Z}_4$-grading of $\bf g$ plays an important role. The generators of $\bf g$ are denoted: $$\begin{aligned}
t^3_{\alpha} & \mbox{ \tt\small of degree 3},\; \alpha\in \{1,\ldots,16\}
\nonumber \\
t^1_{\dot{\alpha}} & \mbox{ \tt\small of degree 1},\; \dot{\alpha}\in\{1,\ldots,16\}
\nonumber \\
t^2_n & \mbox{ \tt\small of degree 2}, \; n \in \{0,\ldots,9\}
\label{BasisOfPSU}
\\
t^0_{[mn]} & \mbox{ \tt\small of degree 0}
\nonumber\end{aligned}$$ The subalgebra ${\bf g}_{\bar{0}}$ is generated by $t^0_{[mn]}$, ${\bf g}_{\bar{3}}$ by $t^3_{\alpha}$, ${\bf g}_{\bar{1}}$ by $t^1_{\dot{\alpha}}$, and ${\bf g}_{\bar{2}}$ by $t^2_m$. The index $[mn]$ of $t^0_{[mn]}$ runs over a union of two sets: the set of choices of 2 different elements $m,n$ from $\{0,\ldots 4\}$, and the set of choices of 2 different elements $m,n$ from $\{5,\ldots,9\}$. This corresponds to the split of ${\bf g}_{\bar{0}}$ into the direct sum of $so(1,4)$ and $so(5)$. Both $t_{\alpha}^3$ and $t_{\dot{\alpha}}^1$ transform as spinors of both $so(1,4)$ and $so(5)$ under the adjoint action of ${\bf g}_{\bar{0}}$, and $t^2_m$ transform as vectors.
The BRST complex computing supergravity excitations on the background $AdS_5\times S^5$ is: $$\begin{aligned}
\label{StandardBRSTComplex}
\ldots \stackrel{Q_{\rm BRST}}{\longrightarrow} \mbox{Hom}_{{\bf g}_{\bar{0}}}\left(
U{\bf g}\;,\;{\cal P}^n
\right) \stackrel{Q_{\rm BRST}}{\longrightarrow} \mbox{Hom}_{{\bf g}_{\bar{0}}}\left(
U{\bf g}\;,\;{\cal P}^{n+1}
\right) \stackrel{Q_{\rm BRST}}{\longrightarrow} \ldots\end{aligned}$$ where ${\cal P}^n$ is the space of polynomial functions of order $n$ in two independent pure spinors $\lambda_L$ and $\lambda_R$: $$\lambda_L^{\alpha}f_{\alpha\beta}{}^m\lambda_L^{\beta} = 0\;,\quad
\lambda_R^{\dot{\alpha}}f_{\dot{\alpha}\dot{\beta}}{}^m\lambda_R^{\dot{\beta}} = 0
\quad \mbox{ for } m\in \{0,\ldots,9\}$$ where $f_{\bullet\bullet}{}^{\bullet}$ are the structure constants of ${\bf g}$, and $Q_{\rm BRST}$ is given by: $$\begin{aligned}
Q_{\rm BRST} = \;& Q_{\rm BRST}^L + Q_{\rm BRST}^R
\\
\mbox{\tt\small where } Q_{\rm BRST}^L =\;&
\lambda_L^{\alpha}L(t^3_{\alpha})
\\
\mbox{\tt\small and } Q_{\rm BRST}^R =\;&
\lambda_R^{\dot{\alpha}}L(t^1_{\dot{\alpha}})\end{aligned}$$ Here $L(t)$ is the left multiplication by $t$. We will use the notation ${\cal P}^{p,q}$ for the space of polynomials of the order $p$ in $\lambda_L$ and $q$ in $\lambda_R$. Therefore ${\cal P}^n =\bigoplus_{p+q=n}{\cal P}^{p,q}$.
More generally, we can consider the cohomology with coefficients in an arbitrary representation $V$ of ${\bf g}$: $$\label{BRSTComplexGeneralRepresentation}
\ldots \stackrel{Q_{\rm BRST}}{\longrightarrow}
V\otimes_{{\bf g}_{\bar{0}}}{\cal P}^n
\stackrel{Q_{\rm BRST}}{\longrightarrow}
V\otimes_{{\bf g}_{\bar{0}}}{\cal P}^{n+1}
\stackrel{Q_{\rm BRST}}{\longrightarrow} \ldots$$ The cohomology of this complex[^3] will be denoted $H^n(Q_{\rm BRST}\;;\;V)$. With this notation, the cohomology of the “standard” BRST complex (\[StandardBRSTComplex\]) is $H^n(Q_{\rm BRST}\;;\; (U{\bf g})')$. These complexes were studied in [@Berkovits:2000yr; @Mikhailov:2009rx; @Mikhailov:2011af].
It is useful to consider a [*filtration*]{} $F^p$ on the space of vertex operators, corresponding to the powers of $\lambda_R$. We will consider an element of $\mbox{Hom}_{{\bf g}_{\bar{0}}}(U{\bf g}\;,\;{\cal P}^n)$ to be of order $p$ if it goes like $O(\lambda_R^p)$ when $\lambda_R \to 0$. The space of such operators will be denoted $F^p \mbox{Hom}_{{\bf g}_{\bar{0}}}(U{\bf g}\;,\;{\cal P}^n)$. This is a decreasing filtration, [*i.e.*]{} $\ldots \supset F^p\supset F^{p+1} \supset F^{p+2}\supset\ldots$. This allows us to calculate the cohomology of $Q_{\rm BRST}$ using some approximation scheme, starting from the cohomology of $Q^L_{\rm BRST}$ and considering $Q^R_{\rm BRST}$ as a small correction. The first approximation is: $$E_2^{p,q} = H^p(Q^R_{\rm BRST}\;;\; H^q(Q^L_{\rm BRST}\;;\;V))$$
Lie algebra formed by the covariant derivatives {#sec:LieAlgebraOfCovariantDerivatives}
-----------------------------------------------
Now we will introduce some infinite-dimensional Lie algebra, which we will use later to study the cohomology of the complexes (\[StandardBRSTComplex\]) and (\[BRSTComplexGeneralRepresentation\]).
#### Definition of the Lie algebra ${\cal L}^{\rm tot}$.
We will consider the infinite-dimensional super Lie algebra generated by the following letters: $$\nabla^L_{\alpha} \;,\; \nabla^R_{\dot{\alpha}} \;,\; t^0_{[mn]}$$ where the indices $\alpha$, $\dot{\alpha}$ and $[mn]$ run over the same sets as in (\[BasisOfPSU\]), and with the following relations: $$\begin{aligned}
\{\nabla_{\alpha}^L\;,\;\nabla_{\beta}^L\} = \;& f_{\alpha\beta}{}^m\nabla^L_m
\label{DefNablaML}\\
\{\nabla_{\dot{\alpha}}^R\;,\;\nabla_{\dot{\beta}}^R\} = \;&
f_{\dot{\alpha}\dot{\beta}}{}^m\nabla^R_m
\label{DefNablaMR}\\
\{\nabla_{\alpha}^L\;,\;\nabla_{\dot{\beta}}^R\} = \;&
f_{\alpha\dot{\beta}}{}^{[mn]} t^0_{[mn]}
\label{CollapsToT0}\\
[t^0_{[mn]}\;,\;\nabla_{\alpha}^L] =\;&
f_{[mn]\alpha}{}^{\beta}\nabla_{\beta}^L
\label{ActionOfT0OnLeft}\\
[t^0_{[mn]}\;,\;\nabla_{\dot{\alpha}}^R] =\;&
f_{[mn]\dot{\alpha}}{}^{\dot{\beta}}\nabla_{\dot{\beta}}^R
\label{ActionOfT0OnRight}\\
[t^0_{[kl]}\;,\;t^0_{[mn]}] =\;& f_{[kl][mn]}{}^{[pq]}t^0_{[pq]}\end{aligned}$$ where Eqs. (\[DefNablaML\]) and (\[DefNablaMR\]) are the definitions of $\nabla^L_m$ and $\nabla^R_m$. The coefficients $f_{\bullet\bullet}{}^{\bullet}$ are the structure constants of $psu(2,2|4)$ in the basis (\[BasisOfPSU\]). We will introduce the following notation for this Lie algebra: $${\cal L}^{\rm tot} = {\cal L}^L + {\cal L}^R + {\bf g}_{\bar{0}}$$ where the sum is as linear spaces. More details are in [@Mikhailov:2013vja].
#### Grading.
We will introduce on ${\cal L}^{\rm tot}$ a ${\bf Z}$-grading as follows: $$\begin{aligned}
\mbox{deg}(\nabla_{\alpha}^L) = \;& 1
\nonumber \\
\mbox{deg}(\nabla_{\dot{\alpha}}^R) = \;& -1
\label{ZGrading}\end{aligned}$$
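With these assignments the degrees of the remaining generators are fixed by the defining relations: $$\mbox{deg}(\nabla^L_m) = 2\;,\qquad \mbox{deg}(\nabla^R_m) = -2\;,\qquad \mbox{deg}(t^0_{[mn]}) = 0$$ since $\nabla^L_m$ and $\nabla^R_m$ are defined in (\[DefNablaML\]), (\[DefNablaMR\]) as anticommutators of two odd generators of degree $+1$, respectively $-1$, and (\[CollapsToT0\]) pairs a generator of degree $+1$ with one of degree $-1$.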
#### Definition of the ideal $I\subset {\cal L}^{\rm tot}$.
There is an ideal $I\subset {\cal L}^{\rm tot}$ such that ${\cal L}^{\rm tot}/I = {\bf g}$. The structure of $\bf g$ is explained in Eq. (\[BasisOfPSU\]). Modulo $I$ the generators $t^0_{[mn]}$ become the generators $t^0_{[mn]}$ of ${\bf g}_{\bar{0}}\subset {\bf g}$, $\nabla^L_{\alpha}$ becomes $t^3_{\alpha}$, $\nabla^R_{\dot{\alpha}}$ becomes $t^1_{\dot{\alpha}}$, and both $\nabla^L_m$ and $\nabla^R_m$ become $t^2_m$. The ideal $I$ is not invariant under the $U(1)$ which defines the ${\bf Z}$-grading (\[ZGrading\]), but only under ${\bf Z}_4\subset U(1)$.
Lie algebra cohomology {#sec:LieAlgebraCohomology}
----------------------
Let us consider the relative Lie algebra cohomology[^4]: $$\label{RelativeLieCohomology}
H^{\bullet}\left( {\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\; V\right)$$ We claim that this cohomology coincides with the BRST cohomology: $$\label{RelativeCohomologyCoincidesWithBRST}
H^{\bullet}\left( {\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\; V\right)
= H^{\bullet}(Q_{\rm BRST}\;;\;V)$$ We will prove a stronger statement. Let us introduce a decreasing filtration of the Lie algebra cochain complex in the following way. We say that a cochain $c$ belongs to $F^pC^q\left( {\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\; V\right)$ if $c(\xi_1,\ldots,\xi_q)$ is zero whenever there are fewer than $p$ letters $\nabla_{\dot{\alpha}}^R$ among $\xi_1,\ldots,\xi_q$. For example, for $c\in F^3C^2$ it should be true that $c(\nabla^R_{\dot{\alpha}},\nabla^R_{\dot{\beta}}) = 0$, but $c(\nabla^R_m,\nabla^R_{\dot{\beta}})$ does not have to be zero (because $\nabla^R_m$ is defined in (\[DefNablaMR\]) as the commutator of two $\nabla_{\dot{\alpha}}^R$, [*i.e.*]{} has degree 2).
In other words, the ghost dual to $\nabla_{\dot{\alpha}}^R$ is considered “small of the order $\varepsilon$”; the ghost dual to $\nabla_m^R$ is considered “small of the order $\varepsilon^2$”, [*etc.*]{} But all the “left” ghosts are of the order 1. The $F^pC$ consists of cochains which are of the order $\varepsilon^p$ and higher.
Similarly, the BRST complex has a filtration by the powers of $\lambda_R$.
We will construct a [*filtered*]{} quasi-isomorphism between the relative Lie algebra complex $C^{\bullet}({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)$ and the BRST complex. A filtered quasi-isomorphism of two filtered complexes $C_1^{\bullet}$ and $C_2^{\bullet}$ is a map of complexes which induces a quasi-isomorphism ${\bf gr}^pC_1^{\bullet} \to {\bf gr}^pC_2^{\bullet}$ for every $p$. A filtered quasi-isomorphism is a quasi-isomorphism of complexes in the usual sense, if one forgets the filtration [@stacks-project Lemma 05S3]. This can be understood from the point of view of spectral sequences; a filtered quasi-isomorphism becomes an isomorphism at $E_1^{\bullet,\bullet}$.
In particular, it follows that the relative Lie algebra cohomology (\[RelativeLieCohomology\]) coincides with the BRST cohomology (\[BRSTComplexGeneralRepresentation\]).
#### Construction of filtered quasi-isomorphism.
Let $C^{\bullet}({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)$ denote the space of cochains in the relative Lie algebra cohomology complex (\[RelativeLieCohomology\]). Let us introduce the operation of restriction from the space of relative cochains to the BRST complex: $$R\;:\;\;C^{\bullet}({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)
\longrightarrow V\otimes \mbox{Fun}(\lambda_L,\lambda_R)$$ which is defined as follows. Given the cochain $c\in C^q({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)$, we have to define $Rc\in V\otimes \mbox{Fun}(\lambda_L,\lambda_R)$. By definition $c$ is a polylinear function of $q$ elements of ${\cal L}$: $$\xi_1\wedge\xi_2\wedge\cdots\wedge\xi_q \mapsto
c(\xi_1\wedge\xi_2\wedge\ldots\wedge\xi_q)$$ Elements of the linear space ${\cal L}^{\rm tot}/{\bf g}_{\bar{0}}$ are, by definition in Section \[sec:LieAlgebraOfCovariantDerivatives\], nested commutators of $\nabla^L$s plus nested commutators of $\nabla^R$s. We define $Rc$ as the following function of $\lambda_L$ and $\lambda_R$: $$\begin{aligned}
Rc(\lambda_L,\lambda_R) =\;& c\left(
\left(
\lambda^{\alpha}_L\nabla^L_{\alpha} +
\lambda^{\dot{\alpha}}_R\nabla^R_{\dot{\alpha}}
\right)^{\otimes q}
\right)
\\
& \mbox{\tt\small for } c \in
C^{q}({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)
\nonumber\end{aligned}$$ We used the following notation: $\xi^{\otimes q}$ means $\underbrace{\xi \otimes \xi \otimes \cdots \otimes \xi}_{q \text{ times}}$. We observe: $$RQ_{\rm Lie} = Q_{\rm BRST} R$$
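For example, for a $1$-cochain the restriction map simply reads off the values of $c$ on the odd generators: $$Rc(\lambda_L,\lambda_R) = \lambda_L^{\alpha}\,c(\nabla^L_{\alpha}) + \lambda_R^{\dot{\alpha}}\,c(\nabla^R_{\dot{\alpha}})\;,\qquad c\in C^1({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)$$ and the intertwining property $RQ_{\rm Lie} = Q_{\rm BRST}R$ can be verified directly on such cochains.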
#### Lemma:
$R$ is a filtered quasi-isomorphism.
To prove this, we consider the action of $Q_{\rm Lie}$ on the following space: $$\begin{aligned}
{\bf gr}^pC^{p+q}({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)
= \;&
{F^pC^{p+q}({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)
\over
F^{p+1}C^{p+q}({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)} =
\nonumber \\
= \;& \bigoplus^p_{r=0}
C^{q+r}({\cal L}^L;V)\otimes_{{\bf g}_0}
{\bf gr}^p C^{p-r}({\cal L}^R;{\bf C})\end{aligned}$$ We observe that:
1. The action of $Q_{\rm Lie}$ on ${\bf gr}^pC^{p+q}({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)$ coincides with the action of the operator $Q_{\rm Lie}^{[H^{\bullet}({\cal L}^L,V)]} + Q_{\rm Lie}^{[H^{\bullet}({\cal L}^R,{\bf C})]}$ on $\bigoplus_{r=0}^p C^{q+r}({\cal L}^L;V)\otimes_{{\bf g}_0} {\bf gr}^p C^{p-r}({\cal L}^R;{\bf C})$
2. The restriction map $R$ is only nonzero on the $r=0$ term. It intertwines this complex with the left BRST complex, which has the BRST operator $Q_L = \lambda_L^{\alpha} t^3_{\alpha}$. In other words, it is a morphism of complexes: $${\bf gr}^pC^{p+q}({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)
\longrightarrow {\bf gr}^pC^{p+q}_{\rm BRST}$$
With these two observations, the Koszul isomorphisms: $$\begin{aligned}
H^q({\cal L}^L;\; V) \simeq \;& H^q(Q^L_{\rm BRST};\;V)
\\
H^p({\cal L}^R;\; {\bf C}) \simeq \;& H^p(Q^R_{\rm BRST};\;{\bf C})
=\mbox{Fun}(\lambda_R^{\otimes p})\end{aligned}$$ imply that ${\bf gr}^pR:\;{\bf gr}^pC^{p+q}({\cal L}^{\rm tot}\;;\; {\bf g}_{\bar{0}}\;;\;V)\longrightarrow {\bf gr}^pC^{p+q}_{\rm BRST}$ is a quasi-isomorphism, [*i.e.*]{} $R$ is a filtered quasi-isomorphism.
An analogue of the Koszul resolution
------------------------------------
In fact, it is possible to glue two Koszul resolutions (one for ${\cal L}^L$ and another for ${\cal L}^R$) along ${\bf g}_{\bar{0}}$, as we will now explain[^5]. Similarly to (\[KoszulSequence\]), consider the following BRST-type complex: $$\begin{aligned}
0\longrightarrow {\bf C}
\longrightarrow
\mbox{Hom}_{{\bf g}_{\bar{0}}}(U{\cal L}^{\rm tot},{\bf C})
\longrightarrow
\mbox{Hom}_{{\bf g}_{\bar{0}}}(U{\cal L}^{\rm tot},{\cal P}^1)
\longrightarrow \ldots
\label{RelativeInjectiveResolution} \\
\ldots \longrightarrow
\mbox{Hom}_{{\bf g}_{\bar{0}}}(U{\cal L}^{\rm tot},{\cal P}^n)
\longrightarrow
\mbox{Hom}_{{\bf g}_{\bar{0}}}(U{\cal L}^{\rm tot},{\cal P}^{n+1})
\longrightarrow \ldots
\nonumber\end{aligned}$$ where the differential acts as follows: $$\begin{aligned}
\label{DifferentialInGluedComplex}
d\phi = \;&
\mu_{\cal P}(\lambda_L^{\alpha})
\circ \phi \circ
\mu^{\rm \tiny right}_{U{\cal L}^{\rm tot}}(\nabla^L_{\alpha})
+
\mu_{\cal P}(\lambda_R^{\dot{\alpha}})
\circ \phi \circ
\mu^{\rm \tiny right}_{U{\cal L}^{\rm tot}}(\nabla^R_{\dot{\alpha}})\end{aligned}$$ (notations as in (\[KoszulSequence\])), and $\mbox{Hom}_{{\bf g}_{\bar{0}}}$ means linear maps invariant under the following action of ${\bf g}_{\bar{0}}$: $$\begin{aligned}
(\eta . \phi)(x) = \phi(x\eta) + \eta^{[mn]}t_{[mn]}^0\phi\end{aligned}$$ We will call the two terms on the right hand side of (\[DifferentialInGluedComplex\]) $d_L\phi$ and $d_R\phi$. We will introduce the abbreviated notation for the terms of (\[RelativeInjectiveResolution\]):
$$0\longrightarrow {\bf C}\longrightarrow X^0 \longrightarrow X^1\longrightarrow
\ldots$$
There is a bigrading: $X^n = \bigoplus_{p+q = n} X^{p,q}$ where $X^{p,q} = \mbox{Hom}_{{\bf g}_{\bar{0}}}(U{\cal L}^{\rm tot},{\cal P}^{p,q})$; notice that $d_L:X^{p,q}\to X^{p+1,q}$ and $d_R:X^{p,q}\to X^{p,q+1}$.
We will now prove that (\[RelativeInjectiveResolution\]) is a $(U{\cal L}^{\rm tot},U{\bf g}_{\bar{0}})$-injective and $(U{\cal L}^{\rm tot},U{\bf g}_{\bar{0}})$-exact resolution of ${\bf C}$ in the sense of [@MR0080654].
#### Proof
Being $(U{\cal L}^{\rm tot},U{\bf g}_{\bar{0}})$-injective follows from Section 1 of [@MR0080654] (Lemma 1). Note that every term of (\[RelativeInjectiveResolution\]) is a direct sum of finite-dimensional representations of ${\bf g}_{\bar{0}}$. This implies that the kernel and the image of every differential is a direct ${\bf g}_{\bar{0}}$-submodule as required in [@MR0080654]. It remains to prove the exactness. We will prove the equivalent statement, that the cohomology of the truncated complex: $$0\longrightarrow X^0\longrightarrow X^1\longrightarrow \ldots$$ is only nonzero in the zeroth term: $H^0 = {\bf C}$. We will use the spectral sequence of the bicomplex $d = d_L + d_R$. Let us first calculate the cohomology of $d_L$. We will “normal order” the elements of $U{\cal L}^{\rm tot}$ by putting elements of $U{\cal L}^R$ to the left and elements of $U{\cal L}^L$ to the right. This gives an isomorphism of linear spaces: $$\label{GradedAsLinearSpaces}
\mbox{Hom}_{{\bf g}_{\bar{0}}} \left(U{\cal L}^{\rm tot}\;,\;{\cal P}^{n-p,\;p}\right)
\;=\;
\mbox{Hom}_{\bf C}\left(U{\cal L}^L\;,\;{\cal P}_L^{n-p}\right)
\otimes
\mbox{Hom}_{\bf C}\left(U{\cal L}^R\;,\;{\cal P}_R^p\right)$$ The differential $d_L$ only acts on the $\mbox{Hom}_{\bf C}\left(U{\cal L}^L\;,\;{\cal P}_L^{n-p}\right)$, while $\mbox{Hom}_{\bf C}\left(U{\cal L}^R\;,\;{\cal P}_R^p\right)$ is “inert”. The action of the differential on $\mbox{Hom}_{\bf C}\left(U{\cal L}^L\;,\;{\cal P}_L^{n-p}\right)$ is the same as in the Koszul complex of $U{\cal L}^L$. Therefore the cohomology of $d_L$ is $\mbox{Hom}_{\bf C}\left(U{\cal L}^R\;,\;{\cal P}_R^p\right)$. The action of $d_R$ on the cohomology of $d_L$ is the same as the action of the differential in the Koszul complex of $U{\cal L}^R$. Therefore $H(d_R,H(d_L)) = {\bf C}$, corresponding to constant $\phi$. This completes the proof.
#### Corollary
This means that for any $U{\cal L}^{\rm tot}$-module $W$, the $\mbox{Ext}_{(U{\cal L}^{\rm tot},U{\bf g}_{\bar{0}})}(W,{\bf C})$ can be computed as the cohomology of the following complex: $$\begin{aligned}
\ldots & \longrightarrow
\mbox{Hom}_{U{\cal L}^{\rm tot}}\left(
W\;,\;
\mbox{Hom}_{{\bf g}_{\bar{0}}}(U{\cal L}^{\rm tot},{\cal P}^n)
\right)
\longrightarrow
\nonumber \\
& \longrightarrow
\mbox{Hom}_{U{\cal L}^{\rm tot}}\left(
W\;,\;
\mbox{Hom}_{{\bf g}_{\bar{0}}}(U{\cal L}^{\rm tot},{\cal P}^{n+1})
\right)
\longrightarrow\ldots
\label{ExtFromGluedComplex}\end{aligned}$$ As in Section \[sec:KoszulAndDef\], there is an isomorphism of complexes (\[ExtFromGluedComplex\]) and (\[BRSTComplexGeneralRepresentation\]): $$\begin{aligned}
\mbox{Hom}_{U{\cal L}^{\rm tot}}
\left(
W\;,\;
\mbox{Hom}_{{\bf g}_{\bar{0}}}(U{\cal L}^{\rm tot},{\cal P}^n)
\right)
\simeq\;&
\mbox{Hom}_{{\bf g}_{\bar{0}}}(W, {\cal P}^n)
\\
f \mapsto
\;& [w\mapsto f(w)({\bf 1})]\end{aligned}$$ If $W$ is semisimple as a representation of ${\bf g}_{\bar{0}}$, then this shows that $\mbox{Ext}_{(U{\cal L}^{\rm tot},U{\bf g}_{\bar{0}})}(W,{\bf C})$ can be identified with the cohomology of (\[BRSTComplexGeneralRepresentation\]) for $V=W'$.
#### Variation
Similarly, we can consider the following projective resolution: $$\begin{aligned}
\ldots\longrightarrow
({\cal P}^{n+1})'\otimes_{{\bf g}_{\bar{0}}} U{\cal L}^{\rm tot}
\longrightarrow
({\cal P}^n)'\otimes_{{\bf g}_{\bar{0}}}U{\cal L}^{\rm tot}
\longrightarrow\ldots
\label{RelativeProjectiveResolution} \\
\ldots\longrightarrow
({\cal P}^1)'\otimes_{{\bf g}_{\bar{0}}}U{\cal L}^{\rm tot}
\longrightarrow
{\bf C}\otimes_{{\bf g}_{\bar{0}}} U{\cal L}^{\rm tot}
\longrightarrow {\bf C}\longrightarrow 0
\nonumber\end{aligned}$$ where the differential acts as follows: $$\begin{aligned}
\label{DifferentialInGluedComplex}
\partial (s\otimes \xi) = \;&
(s\circ\mu_{\cal P}(\lambda_L^{\alpha}))\otimes\xi\nabla^L_{\alpha} +
(s\circ\mu_{\cal P}(\lambda_R^{\dot{\alpha}}))\otimes\xi\nabla^R_{\dot{\alpha}}\end{aligned}$$ This means that $\mbox{Ext}_{(U{\cal L}_{\rm tot}, U{\bf g}_{\bar{0}})}({\bf C},V)$ can be computed as the cohomology of the following complex: $$\begin{aligned}
\ldots & \longrightarrow
\mbox{Hom}_{U{\cal L}^{\rm tot}}\left(
({\cal P}^n)'\otimes_{{\bf g}_{\bar{0}}}U{\cal L}^{\rm tot}\;,\;V
\right)
\longrightarrow
\nonumber \\
& \longrightarrow
\mbox{Hom}_{U{\cal L}^{\rm tot}}\left(
({\cal P}^{n+1})'\otimes_{{\bf g}_{\bar{0}}}U{\cal L}^{\rm tot}\;,\;V
\right)
\longrightarrow\ldots
\label{ExtFromGluedComplexProjective}\end{aligned}$$ As in Section \[sec:KoszulAndDef\], there is an isomorphism of complexes (\[ExtFromGluedComplexProjective\]) and (\[BRSTComplexGeneralRepresentation\]): $$\begin{aligned}
\mbox{Hom}_{U{\cal L}^{\rm tot}}(({\cal P}^n)'\otimes_{{\bf g}_{\bar{0}}}U{\cal L}^{\rm tot}\;,\;V) \simeq \;&
{\cal P}^n\otimes_{{\bf g}_{\bar{0}}}V
\label{IsomorphismToPV}
\\
f \mapsto \;& [\lambda\mapsto f(\lambda\otimes {\bf 1})]
\label{ExplicitIsomorphism}\end{aligned}$$ The expression $[\lambda\mapsto f(\lambda\otimes {\bf 1})]$ on the right hand side of (\[ExplicitIsomorphism\]) denotes an element of ${\cal P}^n\otimes_{{\bf g}_{\bar{0}}}V$, understood as a ${\bf g}_{\bar{0}}$-invariant polynomial function of pure spinors of the order $n$, whose value on a pair of pure spinors $\lambda = (\lambda_L,\lambda_R)$ is defined as follows. Since $\lambda$ can be interpreted as an element of $({\cal P}^n)'$, we can consider $\lambda\otimes{\bf 1}$ an element of $({\cal P}^n)'\otimes_{{\bf g}_{\bar{0}}}U{\cal L}^{\rm tot}$; then we can act on it by $f\in \mbox{Hom}_{U{\cal L}^{\rm tot}}(({\cal P}^n)'\otimes_{{\bf g}_{\bar{0}}}U{\cal L}^{\rm tot}\;,\;V)$.
Eq. (\[IsomorphismToPV\]) is another proof of (\[RelativeCohomologyCoincidesWithBRST\]).
Reduction to the cohomology of the ideal $I\subset {\cal L}^{\rm tot}$
----------------------------------------------------------------------
The following construction works for an arbitrary completely reducible representation $A$ of ${\bf g}_{\bar{0}}$. Given such an $A$, let us consider $H^n(Q_{\rm BRST}\;;\;V)$ in the special case: $$\label{DefV}
V = \mbox{Hom}_{\bf C}\left(
U{\bf g}\otimes_{{\bf g}_{\bar 0}} A \;,\; {\bf C}
\right)$$ According to Section \[sec:LieAlgebraCohomology\] $H^n(Q_{\rm BRST}\;;\;V)$ is equivalent to $H^n({\cal L}^{\rm tot}\;;\;{\bf g}_{\bar{0}}\;;\;V)$, which in the case (\[DefV\]) is the same as $\mbox{Ext}^n_{(U{\cal L}^{\rm tot},\;U{\bf g}_{\bar{0}})}(U{\bf g}\otimes_{{\bf g}_{\bar{0}}} A\;;\;{\bf C})$ [@MR0080654]. Consider the following complex of $U{\cal L}^{\rm tot}$-modules: $$\begin{aligned}
\ldots \longrightarrow U{\cal L}^{\rm tot}\otimes_{{\bf g}_0}
(\Lambda^2 I \otimes_{\bf C} A)
\longrightarrow U{\cal L}^{\rm tot}\otimes_{{\bf g}_0} (I \otimes_{\bf C} A)
\longrightarrow \;&
\nonumber \\
\longrightarrow U{\cal L}^{\rm tot}\otimes_{{\bf g}_{\bar{0}}} A
\longrightarrow U{\bf g}\otimes_{{\bf g}_{\bar{0}}} A \longrightarrow \;& 0
\label{RelativeResolution}\end{aligned}$$ Here the action of ${\bf g}_{\bar{0}}$ on $\Lambda^p I \otimes_{\bf C} A$ is the sum of the adjoint action on $I$ and the action on $A$. The complex (\[RelativeResolution\]) is a $(U{\cal L}^{\rm tot},\;U{\bf g}_{\bar{0}})$-projective and $(U{\cal L}^{\rm tot},\;U{\bf g}_{\bar{0}})$-exact resolution of $U{\bf g}\otimes_{{\bf g}_{\bar{0}}}A$ as a $U{\cal L}^{\rm tot}$-module, in the sense of [@MR0080654]; see Appendix \[sec:Exactness\]. Therefore: $$\label{CohomologyWithA}
H^n\left({\cal L}^{\rm tot}\;;\;{\bf g}_{\bar{0}}\;;\;
\mbox{Hom}_{\bf C}\left(
U{\bf g}\otimes_{{\bf g}_{\bar 0}} A \;,\; {\bf C}
\right)
\right) = \mbox{Hom}_{{\bf g}_{\bar{0}}}\left(A\;, H^n(I) \right)$$ \[sec:ReductionToIdeal\]
#### Geometrical interpretation
Consider the case when $A$ is a finite-dimensional representation. With $V$ defined by (\[DefV\]) the BRST complex of (\[BRSTComplexGeneralRepresentation\]) is: $$\mbox{Hom}_{{\bf g}_{\bar{0}}} \left(
U{\bf g}\otimes_{{\bf g}_{\bar 0}} A \;,\; {\cal P}^{\bullet}
\right)$$ Geometrically, this is the space of $A'$-valued functions $f_a(g,\lambda_3,\lambda_1)$ where the index $a$ enumerates a basis of $A'$, such that for $h\in G_{\bar{0}}$: $$\begin{aligned}
f_a(hg,\; h\lambda_3h^{-1},\; h\lambda_1h^{-1}) =\;&
f_a(g,\;\lambda_3,\lambda_1)
\label{CovarianceOfF}
\\
f_a(gh,\;\lambda_3,\lambda_1) =\; & f_b(g,\;\lambda_3,\lambda_1)\rho^b_a(h)
\label{RotationsOfF}\end{aligned}$$ More precisely, this is the space of [*Taylor series*]{} of sections of the pure spinor bundle over $AdS_5\times S^5$; the universal enveloping algebra is the space of [*finite*]{} linear combinations, [*i.e.*]{} we do not care about the convergence of the Taylor series $f$. Equation (\[CovarianceOfF\]) says that $f$ is a section of a bundle over the homogeneous space. On the other hand, Eq. (\[RotationsOfF\]) requires that $f$ transform in a fixed representation $A'$ under the group $G_{\bar{0}}$ of [*global rotations*]{} around $g = {\bf 1}$.
The space of Taylor series, as a representation of the global rotations $G_{\bar{0}}$, is the direct sum of infinitely many finite-dimensional representations: $$\mbox{Hom}_{{\bf g}_{\bar{0}}} \left(
U{\bf g} \;,\; {\cal P}^{\bullet}
\right) = \bigoplus_{A} A\otimes
\mbox{Hom}_{{\bf g}_{\bar{0}}} \left(
U{\bf g}\otimes_{{\bf g}_{\bar 0}} A \;,\; {\cal P}^{\bullet}
\right)$$ Therefore (\[CohomologyWithA\]) implies that: $$H^n\left(\;
Q_{\rm BRST}\;,\;
(U{\bf g})' \;
\right) = H^n(I)$$
#### Action of the global symmetries
Notice that $\bf g$ naturally acts on $H^m(I)$. This corresponds to the right action of $\bf g$ on the BRST complex (\[StandardBRSTComplex\]), [*i.e.*]{} to the global symmetries of the $AdS_5\times S^5$ sigma-model.
Ghost number 1: global symmetry currents
----------------------------------------
The elements of $H^1(Q_{\rm BRST}\;;\;(U{\bf g})') = H^1(I)$ correspond to the global symmetry currents of the $\sigma$-model [@Berkovits:2004jw; @Berkovits:2004xu; @Bedoya:2010qz]. There are finitely many global symmetries. We have: $$H^1(I) = \left({I\over [I,I]}\right)'$$ We will now show that $I\over [I,I]$ is a [*finite-dimensional*]{} representation of ${\bf g}$, actually the adjoint representation of $\bf g$.
#### Special notations for summation over repeating indices.
As already introduced in (\[BasisOfPSU\]), the index $m$ enumerates the basis of the vector representation of ${\bf g}_{\bar{0}} = so(1,4)\oplus so(5)$, and runs from $0$ to $9$; more precisely, $m\in\{0,\ldots,4\}$ enumerates vectors of $so(1,4)$, and $m\in \{5,\ldots,9\}$ vectors of $so(5)$. For a vector $v^m$ we denote: $$v^{\overline{m}} =
\left\{
\begin{array}{c}
v^m \mbox{ \small\tt if } m\in\{0,\ldots,4\} \cr
-v^m \mbox{ \small\tt if } m\in\{5,\ldots,9\} \end{array}\right.$$ For two vectors $v^m$ and $w^m$ we denote: $$\begin{aligned}
v^mw^m = \;& v^0w^0 - \sum_{i=1}^9v^iw^i
\nonumber\\
v^mw^{\overline{m}} = \;& v^0w^0 - \sum_{i=1}^4v^iw^i + \sum_{i=5}^9 v^iw^i\end{aligned}$$
#### Proposition.
As a representation of $\bf g$, $I\over [I,I]$ is generated by the following objects[^6] : $$\begin{aligned}
T^2_m = \;& \nabla_m^L - \nabla_m^R
\label{T2}\\
T^0_{[mn]} =\;& [\nabla^L_m,\nabla^L_n] - [\nabla^R_m,\nabla^R_n]
\label{T0}\\
Z^L_{\alpha} =\;& \nabla^L_{\alpha}
- {1\over 10}
[\;\nabla^L_{\overline{m}}\;,\;[\nabla^L_m\;,\; \nabla^L_{\alpha}]\;]
\label{ZL}\\
Z^R_{\dot{\alpha}} =\;& \nabla^R_{\dot{\alpha}}
- {1\over 10}
[\;\nabla^R_{\overline{m}}\;,\;[\nabla^R_m\;,\; \nabla^R_{\dot{\alpha}}]\;]
\label{ZR}\end{aligned}$$ Notice that $[(\nabla^L_m - \nabla^R_m)\;,\;(\nabla^L_n - \nabla^R_n)] \in [I,I]$ implies that: $$[\nabla^L_m,\nabla^L_n] + [\nabla^R_m,\nabla^R_n] - 2t_{[mn]}^{0} = 0\;\; \mbox{ mod } [I,I]$$ Similarly, $[(\nabla_m^L - \nabla_m^R),[(\nabla^L_{\overline{m}} - \nabla^R_{\overline{m}}),
\nabla_{\alpha}^L]] \in [I,I]$ implies that: $$\begin{aligned}
\nabla_{\alpha}^L -
{1\over 10} f_{\alpha}{}^{m\dot{\alpha}}
[\nabla_{\overline{m}}^R,\nabla_{\dot{\alpha}}^R] = - Z^L_{\alpha} \mbox{ mod } [I,I]\end{aligned}$$ We will write “$\equiv 0$” instead of “$=0\mbox{ mod } [I,I]$”.
The $(30|32)$-dimensional linear space generated by $T_m^2,T_{[mn]}^0,Z_{\alpha}^L,Z_{\dot{\alpha}}^R$ is closed under the action of ${\bf g}$. It must be the adjoint representation of $\bf g$. For example, let us consider $\{\nabla^L_{\alpha}\;,\;Z^L_{\beta}\}$. Modulo $[I,I]$ this is the same as $\{[\nabla^R_m\;,\;\nabla^R_{\dot{\alpha}}]\;,\;Z^L_{\beta}\}$, and using (\[CollapsToT0\]), (\[ActionOfT0OnLeft\]) and (\[ActionOfT0OnRight\]) this is proportional to $T^2_m$.
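The counting behind this dimension is straightforward: $T^2_m$ carries $10$ components, $T^0_{[mn]}$ carries $10+10=20$ (pairs within $\{0,\ldots,4\}$ and within $\{5,\ldots,9\}$), and $Z^L_{\alpha}$, $Z^R_{\dot{\alpha}}$ carry $16$ each, so the total is $(30|32)$, matching the dimension of the adjoint representation of ${\bf psu}(2,2|4)$.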
#### Proof of the proposition.
Let $J$ denote the subspace of $I/[I,I]$ generated by the action of $\bf g$ on (\[T2\]), (\[T0\]), (\[ZL\]) and (\[ZR\]). We have to prove that $J=I$. Let us consider some linear combination of commutators of $\nabla_{\alpha}^L$, for example: $$\begin{aligned}
\label{SumOfNestedCommutators}
\sum_{\vec{\alpha}}C^{\alpha_1\ldots\alpha_q}
[\nabla^L_{\alpha_1},\{\nabla^L_{\alpha_2},\ldots
[\nabla^L_{\alpha_{q-2}},\{\nabla^L_{\alpha_{q-1}},\nabla^L_{\alpha_q}\}]\ldots\}]\end{aligned}$$ Suppose that the coefficients $C$ are such that this expression belongs to $I$. We will prove that it also belongs to $J$, using the induction in $q$ — the number of commutators. Suppose that for $q<n$, all such expressions lie in $J$. We will prove that for $q=n$, (\[SumOfNestedCommutators\]) is also in $J$.
Notice that: $$\sum_{\vec{\alpha}}C^{\alpha_1\ldots\alpha_q}
[\nabla^L_{\alpha_1},\{\nabla^L_{\alpha_2},\ldots
\{\nabla^L_{\alpha_{q-1}},
\left(\nabla^L_{\alpha_q} -
{1\over 10}f_{\alpha_q}{}^{\overline{m}\beta}[\nabla_m^R,\nabla^R_{\dot{\beta}}]
\right)
\}]\ldots\}] \in J$$ because $\nabla^L_{\alpha} - {1\over 10}f_{\alpha}{}^{\overline{m}\beta}[\nabla_m^R,\nabla^R_{\dot{\beta}}]\in J$. Therefore, it remains to prove that the following expression belongs to $J$: $$\sum_{\vec{\alpha}}C^{\alpha_1\ldots\alpha_q}
[\nabla^L_{\alpha_1},\{\nabla^L_{\alpha_2},\ldots
\{\nabla^L_{\alpha_{q-1}},
f_{\alpha_q}{}^{\overline{m}\beta}[\nabla_m^R,\nabla^R_{\dot{\beta}}]
\}]\}]$$ (notice that it automatically belongs to $I$). When we commute $\nabla^R$ with $\nabla^L$, the number of commutators drops and we are left with $q-4$ commutators. This provides the step of the induction.
#### Calculation of $\{\nabla_{\alpha}^L\;,\; Z^R_{\dot{\alpha}}\}$ and $\{\nabla_{\dot{\alpha}}^R\;,\;Z^L_{\alpha}\}$.
Here we will prove that both $\{\nabla^L_{\alpha}\;,\; Z^R_{\dot{\alpha}}\}$ and $\{\nabla^R_{\dot{\alpha}}\;,\;Z^L_{\alpha}\}$ are proportional to $f_{\alpha\dot{\alpha}}{}^{[mn]}T^0_{[mn]}$, and $[\nabla_m,T^2_n]$ is proportional to $f_{mn}{}^{[pq]} T^0_{[pq]}$. Let us define $\nabla^R_{\alpha}$ and $\nabla^L_{\dot{\alpha}}$ so that: $$\begin{aligned}
[\nabla_m^L,\nabla_{\alpha}^L] =\;&
f_{m\alpha}{}^{\dot{\alpha}}\nabla^L_{\dot{\alpha}}
\label{DefNablaLDot}
\\
[\nabla_m^R,\nabla_{\dot{\alpha}}^R] = \;&
f_{m\dot{\alpha}}{}^{\alpha}\nabla^R_{\alpha}
\label{DefNablaR}\end{aligned}$$ That the RHS of (\[DefNablaLDot\]) is proportional to $f_{m\alpha}{}^{\dot{\alpha}}$ and the RHS of (\[DefNablaR\]) is proportional to $f_{m\dot{\alpha}}{}^{\alpha}$ follows from (\[DefNablaML\]) and (\[DefNablaMR\]).
To calculate $\{\nabla^L_{\alpha}\;,\; Z^R_{\dot{\alpha}}\}$, $\{\nabla^R_{\dot{\alpha}}\;,\;Z^L_{\alpha}\}$ and $[\nabla_m,T^2_n]$ we start with the following observation: $$\label{NablaLZRPlusNablaRZL}
\{\nabla_{\alpha}^L\;,\;Z^R_{\dot{\alpha}}\}
+ \{\nabla_{\dot{\alpha}}^R\;,\;Z_{\alpha}^L\} \equiv 0$$ This follows from: $$0 \equiv \{\nabla^L_{\alpha} - \nabla^R_{\alpha}\;,\;
\nabla^L_{\dot{\alpha}} - \nabla^R_{\dot{\alpha}}\} =
\{\nabla^L_{\alpha}\;,\;\nabla^L_{\dot{\alpha}}\} +
\{\nabla^R_{\alpha}\;,\;\nabla^R_{\dot{\alpha}}\} - 2t^0_{\alpha\dot{\alpha}}$$ Also notice: $$\begin{aligned}
& \{[\nabla_m^L,\nabla_{\dot{\beta}}^L]\;,\; \nabla^R_{\dot{\alpha}} - \nabla^L_{\dot{\alpha}}\} \;\equiv
f_{m\dot{\beta}}{}^{\beta}\{\nabla^L_{\beta}\;,\; \nabla^R_{\dot{\alpha}} -\nabla^L_{\dot{\alpha}}\} \;=
\nonumber \\
=\;& [\nabla_m^L\;,\;\{\nabla_{\dot{\beta}}^L\;,\; \nabla^R_{\dot{\alpha}} - \nabla^L_{\dot{\alpha}}\}] \;-\;
\{\nabla_{\dot{\beta}}^L\;,\; [\nabla_m^L\;,\;\nabla^R_{\dot{\alpha}} - \nabla^L_{\dot{\alpha}}]\} \; \equiv
\nonumber \\
\equiv \;&
- f_{\dot{\alpha}\dot{\beta}}{}^n[\nabla_m^L\;,\;\nabla_n^L-\nabla_n^R] \;-\;
f_{m\dot{\alpha}}{}^{\gamma}\{\nabla^L_{\dot{\beta}}\;,\;
\nabla^R_{\gamma} - \nabla^L_{\gamma}\}\end{aligned}$$ This implies: $$\begin{aligned}
f_{m\dot{\beta}}{}^{\beta}\{\nabla^L_{\beta}\;,\;Z^R_{\dot{\alpha}}\} -
f_{m\dot{\alpha}}{}^{\gamma}\{\nabla_{\dot{\beta}}^R\;,\;Z^L_{\gamma}\} =
- f_{\dot{\alpha}\dot{\beta}}{}^n[\nabla_m^L\;,\;\nabla_n^L-\nabla_n^R] \end{aligned}$$ Similarly: $$f_{m\beta}{}^{\dot{\beta}}\{\nabla_{\dot{\beta}}^R\;,\;Z_{\alpha}^L\}
- f_{m\alpha}{}^{\dot{\gamma}}\{\nabla^L_{\beta}\;,\;Z_{\dot{\gamma}}^R\} =
- f_{\alpha\beta}{}^n[\nabla_m^R\;,\;\nabla_n^R-\nabla_n^L]$$ Taking into account (\[NablaLZRPlusNablaRZL\]), we get the following system of equations for $X_{\alpha\dot{\alpha}} = \{\nabla^L_{\alpha}\;,\;Z^R_{\dot{\alpha}}\}$ and $X_{mn} = [\nabla_m^L\;,\;\nabla_n^L - \nabla_n^R]$: $$\begin{aligned}
2f_{m(\dot{\alpha}|}{}^{\gamma}X_{\gamma|\dot{\beta})} \;+\;
f_{\dot{\alpha}\dot{\beta}}{}^n X_{mn} \;=\;0
\label{Cocycle1}
\\
2f_{m(\alpha}{}^{\dot{\gamma}}X_{\beta)\dot{\gamma}} \;+\;
f_{\alpha\beta}{}^n X_{mn}\;=\; 0
\label{Cocycle2}\end{aligned}$$ This system of equations has the following solution, which defines $T^0_{[pq]}$: $$\begin{aligned}
X_{\alpha\dot{\alpha}} =\;& f_{\alpha\dot{\alpha}}{}^{[pq]}\;T^0_{[pq]}
\label{DefT0}
\\
X_{mn} = \;& - f_{mn}{}^{[pq]} \;T^0_{[pq]}\end{aligned}$$ We have to prove that there are no other solutions. Let us use the identity: $$f_m{}^{\alpha\beta}f_{\alpha\beta}{}^n = 16\; \delta_m^{\overline{n}}$$ Contracting (\[Cocycle1\]) and (\[Cocycle2\]) with $f^{\dot{\alpha}\dot{\beta}}{}_{\overline{k}}$ and $f^{\alpha\beta}{}_{\overline{k}}$ we get: $$\begin{aligned}
2f_{m\dot{\alpha}}{}^{\gamma}f_{\overline{k}}{}^{\dot{\alpha}\dot{\beta}}X_{\gamma\dot{\beta}} \;+\;
16\; X_{mk} \;=\;& 0
\label{XgreekVsXlat1}
\\
2f_{m\alpha}{}^{\dot{\gamma}}f_{\overline{k}}{}^{\alpha\beta} X_{\beta\dot{\gamma}} \;+\;
16\; X_{mk} \;=\;& 0\end{aligned}$$ This implies: $$\label{ffX}
f_{m\dot{\alpha}}{}^{\gamma}f_{\overline{k}}{}^{\dot{\alpha}\dot{\beta}}X_{\gamma\dot{\beta}}
+
f_{\overline{k}\dot{\alpha}}{}^{\gamma}f_{m}{}^{\dot{\alpha}\dot{\beta}}X_{\gamma\dot{\beta}}
=0$$ Let us assume that the pair $(m,k)$ is such that either $m\in \{0,\ldots,4\}$ and $k\in \{5,\ldots,9\}$, or $m\in \{5,\ldots,9\}$ and $k\in \{0,\ldots,4\}$;
then (\[ffX\]) implies that for such pairs $(m,k)$ the expression $f_{m\dot{\alpha}}{}^{\gamma}f_{\overline{k}}{}^{\dot{\alpha}\dot{\beta}}X_{\gamma\dot{\beta}}$ is symmetric under the exchange $m\leftrightarrow k$. But $X_{mk}$ is always antisymmetric under such an exchange. Therefore Eq. (\[XgreekVsXlat1\]) implies that $X_{mk}$ is only nonzero when either both $m$ and $k$ belong to $\{0,\ldots,4\}$, or both $m$ and $k$ belong to $\{5,\ldots,9\}$. This means that $X_{mk}$ is proportional to $f_{mk}{}^{\bullet}$, and we can define $T^0_{[pq]}$ from (\[DefT0\]). Then (\[Cocycle1\]) gives: $$2f_{m(\dot{\alpha}|}{}^{\gamma}\left(
X_{\gamma|\dot{\beta})} \;-\;f_{\gamma|\dot{\beta})}{}^{[pq]}Y_{[pq]}
\right) = 0$$ which implies that $X_{\alpha\dot{\alpha}} \;=\;f_{\alpha\dot{\alpha}}{}^{[pq]}Y_{[pq]}$.
#### To summarize,
$I\over [I,I]$ is a finite-dimensional space, the adjoint representation of ${\bf g}$.
Ghost number 2: vertex operators
--------------------------------
The cohomology group $H^n(I)$ is a linear space dual[^7] to the homology $H_n(I)$. The vertex operators correspond to $H^2(I) = (H_2(I))'$. The linear space $H_2(I)$ consists of the expressions of the form: $$\begin{aligned}
a = \;& \sum_i x_i\wedge y_i
\\
\;& \sum_i [x_i,y_i]=0
\label{ZeroInternalCommutator}\end{aligned}$$ where $x_i$ and $y_i$ are elements of $I$, with the equivalence relations: $$a\;\; \simeq\;\; a + [x,y]\wedge z + [y,z]\wedge x + [z,x]\wedge y$$ We do not have a complete analysis at ghost number two. It must be true that $H_2(I)$ corresponds to the space of gauge-invariant[^8] operators at a marked point in $AdS_5\times S^5$. This is an infinite-dimensional representation of $\bf g$. The simplest element of $H_2(I)$ is: $$\begin{aligned}
\label{DilatonAtZero}
{\cal O} = C^{\alpha\dot{\alpha}}(\nabla_{\alpha}^L - W^R_{\alpha})\wedge
(\nabla_{\dot{\alpha}}^R - W^L_{\dot{\alpha}})\end{aligned}$$ This probably corresponds to the value of the dilaton[^9]. It should be possible to obtain other fields by acting on (\[DilatonAtZero\]) with $\nabla_{\alpha}^L$ and $\nabla_{\dot{\alpha}}^R$.
Flat space limit {#sec:FlatSpaceLimit}
================
In this section we will study the cohomology of the BRST operator in flat space.
In flat space ${\cal L}^{\rm tot} = {\cal L}^L \oplus {\cal L}^R$. The limit of the BRST complex (\[StandardBRSTComplex\]) is: $$\label{Qflat}
Q_{\rm SUGRA} =
\lambda^{\alpha}_L \left(
{\partial\over\partial \theta_L^{\alpha}} +
\Gamma^m_{\alpha\beta}\theta_L^{\beta}{\partial\over\partial x^m}
\right)
+
\lambda^{\hat{\alpha}}_R \left(
{\partial\over\partial \theta_R^{\hat{\alpha}}} +
\Gamma^m_{\hat{\alpha}\hat{\beta}}
\theta_R^{\hat{\beta}}{\partial\over\partial x^m}
\right)$$ acting on functions of $\theta_L,\theta_R,x,\lambda_L,\lambda_R$.
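As a consistency check (a standard computation in the flat space pure spinor formalism), the operator (\[Qflat\]) is nilpotent precisely because of the two pure spinor constraints: $$Q_{\rm SUGRA}^2 = \lambda_L^{\alpha}\lambda_L^{\beta}\,\Gamma^m_{\alpha\beta}\,{\partial\over\partial x^m} + \lambda_R^{\hat{\alpha}}\lambda_R^{\hat{\beta}}\,\Gamma^m_{\hat{\alpha}\hat{\beta}}\,{\partial\over\partial x^m} = 0$$ while the mixed terms are absent because the left and the right covariant spinor derivatives anticommute with each other.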
Ghost number 1.
---------------
The space $I\over [I,I]$ is generated by $\nabla^L_m - \nabla^R_m$, $[\nabla_m^L\;,\;\nabla_n^L]$, $W^{\alpha}_L$ and $W^{\dot{\alpha}}_R$. We observe: $$\begin{aligned}
[\nabla_m^L\;,\;\nabla_n^L] = \;& - [\nabla_m^R\;,\;\nabla_n^R] \;\mbox{ mod }\; [I,I]\end{aligned}$$ As a representation of $\bf susy$, this space should be the dual to ${\bf susy}+{\bf Lorentz}$. We observe: $$\begin{aligned}
\{\nabla_{(\alpha}\;,\; \Gamma_{\beta)\gamma}^m W_L^{\gamma}\} \equiv \;&
{1\over 2}\Gamma^n_{\alpha\beta}[\nabla_n^L\;,\;\nabla_m^L]
\label{FlatSpaceNablaW}\end{aligned}$$ As explained in [@Mafra:2009wq], Eq. (\[FlatSpaceNablaW\]) implies that $\nabla_{\alpha}W_L^{\gamma}$ is proportional to $(\Gamma_{mn})^{\gamma}_{\alpha}[\nabla_n^L\;,\;\nabla_m^L]$.
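A quick count of these generators (with the identification above taken into account) gives $10$ components $\nabla^L_m - \nabla^R_m$ and $45$ components $[\nabla^L_m,\nabla^L_n]$ on the bosonic side, and $16+16$ components $W^{\alpha}_L$, $W^{\dot{\alpha}}_R$ on the fermionic side, which is consistent with the expected duality to ${\bf susy}+{\bf Lorentz}$: $10$ translations and $45$ Lorentz generators, plus $16+16$ supercharges.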
Ghost number 2.
---------------
We do not have a complete analysis at ghost number two. The RR field strength should correspond to $W_L^{\alpha}\wedge W_R^{\dot{\alpha}}$. The NSNS 3-form field strength $H=dB$ should correspond to: $$\label{HCorrespondsTo}
H_{klm} = (\nabla_{[k}^L-\nabla_{[k}^R)\wedge [\nabla^L_l,\nabla^L_{m]}]$$ The following expression $$\label{Curvature}
R'_{klmn} = [\nabla_k^L,\nabla_l^L]\wedge [\nabla_m^R,\nabla_n^R]
+ [\nabla_m^L,\nabla_n^L]\wedge [\nabla_k^R,\nabla_l^R]$$ should correspond to a linear combination of the curvature tensor $R_{klmn}$ and the second derivatives of the dilaton — see Eq. (\[IdentificationOfRPrime\]). It satisfies the relations: $$\begin{aligned}
R'_{klmn} = \;& R'_{mnkl} = - R'_{lkmn}
\label{RiemannSymmetries}
\\
R'_{[klmn]} =\;& 0
\label{RiemannJacobi} \\
\nabla_{[j} R'_{kl]mn} = \;& 0
\label{RiemannSecondJacobi}\end{aligned}$$ Notice that $R'_{k[lmn]}=0$ follows from (\[RiemannSymmetries\]) and (\[RiemannJacobi\]). Eq. (\[RiemannSymmetries\]) follows immediately from (\[Curvature\]). Here is the proof of (\[RiemannJacobi\]): $$\begin{aligned}
R'_{klmn} = \;& [\nabla^L_k\;,\; \nabla^L_l]\wedge
[\nabla^R_m -\nabla^L_m\;,\;\nabla^R_n - \nabla^L_n]
+ ((kl)\leftrightarrow (mn))
\nonumber
\\
\Rightarrow
R'_{[klmn]} = \;&2\; [\nabla^L_{[k}\;,\; \nabla^L_l]\wedge
[\nabla^R_m -\nabla^L_m\;,\;\nabla^R_{n]} - \nabla^L_{n]}] \; \equiv
\nonumber
\\
\equiv\;& 4\; [[\nabla^L_{[k}\;,\; \nabla^L_l]\;,\;\nabla^R_m -\nabla^L_m]\wedge
(\nabla^R_{n]} - \nabla^L_{n]}) \;=\;0\end{aligned}$$ — here $[[\nabla^L_{[k},\nabla^L_l],\nabla^L_{m]}]=0$ because of the Jacobi identity. To prove (\[RiemannSecondJacobi\]) we observe that when calculating $\nabla_j\phi$ for any element $\phi$ of $H_2(I)$, we can use either $\nabla^L_j\phi$ or $\nabla^R_j\phi$. Since both terms on the right hand side of (\[Curvature\]) are in $H_2(I)$, we are free to use $\nabla^L_j$ when calculating $\nabla_j([\nabla_k^L,\nabla_l^L]\wedge [\nabla_m^R,\nabla_n^R])$ and $\nabla_j^R$ when calculating $\nabla_j([\nabla_m^L,\nabla_n^L]\wedge [\nabla_k^R,\nabla_l^R])$. Those are both zero because of the Jacobi identity.
#### Mismatch.
It turns out that the linearized SUGRA equations of motion are not satisfied, because $\nabla^k H_{klm}\neq 0$. Using the identities from Appendix B of [@Mafra:2009wq], we derive from (\[HCorrespondsTo\]): $$\begin{aligned}
\nabla^kH_{klm} = \;&
-{2\over 3}[\nabla_k^L,\nabla_l^L]\wedge [\nabla_k^L,\nabla_m^L]
+ {1\over 3}(\nabla_k^L-\nabla_k^R)\wedge
[\nabla_k^L,[\nabla_l^L,\nabla_m^L]]\;+
\nonumber \\
& + {1\over 3}(\nabla_{[l}^L - \nabla_{[l}^R)\wedge
\Gamma_{m]\alpha\beta}\{W^{\alpha}_L,W^{\beta}_L\}\end{aligned}$$ However, the derivatives of $\nabla^k H_{klm}$ are all zero[^10]: $$\label{DerivativesOfDivHAreZero}
\nabla_n\nabla^kH_{klm} =0$$ therefore this is a “zero mode effect”. Moreover, we have: $$\begin{aligned}
\nabla^kH_{klm} =\;& \nabla_{[l}A_{m]}^L = \nabla_{[l}A_{m]}^R
\label{DivHVsDA}\\
\mbox{\tt\small where }\;&
A^L_m = {2\over 3}(\nabla_n^L-\nabla_n^R)
\wedge [\nabla_n^L,\nabla_m^L]
+ {1\over 3}\Gamma_{\alpha\beta m}W_L^{\alpha}\wedge W_L^{\beta}
\nonumber\\
\;& A^R_m = {2\over 3}(\nabla_n^R-\nabla_n^L)
\wedge [\nabla_n^R,\nabla_m^R]
+ {1\over 3}\Gamma_{\alpha\beta m}W_R^{\alpha}\wedge W_R^{\beta}\end{aligned}$$ Notice that $A^L_m$ and $A^R_m$ are both in $H_2(I)$.
#### The dilaton
The difference $A_m^L - A_m^R$ should be identified with the first derivative of the dilaton $\partial_m\phi$. Notice that: $$\nabla_n(A_m^L - A_m^R) =
{4\over 3} [\nabla_k^L,\nabla_{(m}^L]\wedge [\nabla_{n)}^R,\nabla_k^R]$$ This is in agreement with the statement that (\[Curvature\]) is a linear combination of the Riemann-Christoffel tensor $R_{klmn}$ and the derivatives of the dilaton $\partial_{[l}g_{k][m}\partial_{n]} \phi$. Indeed, we have: $$\begin{aligned}
g^{lm}\left(
[\nabla_k^L,\nabla_l^L]\wedge [\nabla_m^R,\nabla_n^R]
+ [\nabla_m^L,\nabla_n^L]\wedge [\nabla_k^R,\nabla_l^R]
\right) - {3\over 4} \nabla_n (A_k^L - A_k^R) = 0\end{aligned}$$ which is the Einstein’s equation $R_{kn} = 0$ for the Ricci tensor $R_{kn}=g^{lm}R_{klmn}$, if we identify: $$\begin{aligned}
\label{IdentificationOfRPrime}
& [\nabla_k^L,\nabla_l^L]\wedge [\nabla_m^R,\nabla_n^R]
+ [\nabla_m^L,\nabla_n^L]\wedge [\nabla_k^R,\nabla_l^R] =
\nonumber \\
= \;& R_{klmn} +
\partial_{[l}g_{k][m}\partial_{n]}\phi\end{aligned}$$ \
where $R_{klmn}$ is the Riemann-Christoffel tensor in the Einstein frame, and $\partial_n\phi = {3\over 8} (A^L_n - A^R_n)$. Also observe that $\nabla_n(A_n^L - A_n^R) =0$, which is the Klein-Gordon equation for the dilaton. Indeed: $$\begin{aligned}
& [\nabla_k^L,\nabla_l^L]\wedge [\nabla_k^R,\nabla_l^R] =
\nonumber \\
=\;&
[(\nabla_k^L-\nabla_k^R),(\nabla_l^L-\nabla_l^R)]\wedge [\nabla_k^R,\nabla_l^R]
\simeq
\nonumber \\
\simeq \;&
2(\nabla_k^L - \nabla_k^R)\wedge [\nabla_l^R,[\nabla_l^R,\nabla_k^R]] =
- (\nabla_k^L - \nabla_k^R)\wedge \Gamma_{k\alpha\beta} \{W^{\alpha}_R, W^{\beta}_R\}
\simeq
\nonumber \\
\simeq \;&
\Gamma_{k\alpha\beta} [\nabla_k^R , W^{\alpha}_R ]\;\wedge
W^{\beta}_R = 0\end{aligned}$$ (We used the Dirac equation $ \Gamma_{k\alpha\beta} [\nabla_k^R , W^{\alpha}_R ] = 0$.)
#### Unphysical operator
We have seen that the difference $A_m^L - A_m^R$ corresponds to the derivative of the dilaton: $\partial_m\phi$. But the sum $A_m^L + A_m^R$ presents a problem. Observe that: $$\begin{aligned}
\nabla_l(A_m^L + A_m^R) =\;& \nabla_{[l}(A_{m]}^L + A_{m]}^R)
\\
\nabla_k\nabla_l(A_m^L + A_m^R) =\;& 0\end{aligned}$$ This means that the first derivative of $(A_m^L + A_m^R)$ is [*a constant*]{}.
#### Relation to the results of [@Bedoya:2010qz; @Mikhailov:2012id]
This mismatch is not surprising. We know from [@Bedoya:2010qz] that the zero momentum states are not correctly reproduced as the cohomology of the “naive” BRST complex (\[StandardBRSTComplex\]). Therefore we do expect a mismatch in the zero mode sector of the space of local operators.
A state on which $A_m^L + A_m^R$ is nonzero is described in [@Mikhailov:2012id]. It is obtained as the flat space limit of the nonphysical AdS vertex of [@Bedoya:2010qz] with the internal commutator taking values in ${\bf g}_{\bar{2}}$ (using the notations of Section \[sec:BRSTComplexTypeIIB\]). In this case $A_m^L + A_m^R$ is constant — the gradient of the “asymmetric dilaton”.
Besides being constant, $A_m^L + A_m^R$ can also depend on $x$ linearly. To obtain the state with $A_m^L + A_m^R$ depending linearly on $x$, we have to consider the flat space limit of the nonphysical vertex ${\cal B}_{ab}j^a\wedge j^b$ with the internal commutator $f^{ab}{}_c {\cal B}_{ab}$ taking values in ${\bf g}_{\bar{0}}$ [@Bedoya:2010qz; @Mikhailov:2012id]. It depends on a constant antisymmetric tensor $B_{mn}$. The leading term in the flat space limit is a trivial constant NSNS $B$-field $B_{mn}dx^m\wedge dx^n$, which can be gauged away. Discarding the terms with $\theta$’s, the leading nontrivial term is: $$B_{mn}dx^m\wedge
\left(x^n\sum_{k=0}^4(dx_kx^k) - dx^n\sum_{k=0}^4(x_k x^k)\right)$$ This does not solve the SUGRA equations $\partial^nH_{nml}=0$, instead $\partial^nH_{nml}$ is proportional to $B_{mn}dx^m\wedge dx^n$ — a constant 2-form.
In terms of the unintegrated vertex, the observable $A_m^L + A_m^R$ should be identified as follows. It is proportional to $\partial^nB_{mn}$ in the gauge where the vertex has ghost number $(1,1)$, [*i.e.*]{} only $\lambda_L\lambda_R$ terms, no $\lambda_L\lambda_L$ and $\lambda_R\lambda_R$ terms[^11].
#### Nonphysical operator: summary
Let us denote: $$\begin{aligned}
[\nabla_k^L,\nabla_l^L]\wedge [\nabla_m^R,\nabla_n^R]
+ [\nabla_m^L,\nabla_n^L]\wedge [\nabla_k^R,\nabla_l^R]
\;& = {\cal R}_{klmn}
\\
(\nabla_{[k}^L-\nabla_{[k}^R)\wedge [\nabla^L_l,\nabla^L_{m]}]
\;& = H_{klm} =\partial_{[k}B_{lm]}
\\
A_m^{\pm}\;& = A_m^L \pm A_m^R
\label{DefApm}\end{aligned}$$ We get the following equations of motion: $$\begin{aligned}
g^{lm}{\cal R}_{klmn} \;& = {3\over 4} \nabla_{(k}A^-_{n)}
\\
0 \;& = \nabla_{[k}A^-_{n]}
\\
\nabla^k H_{klm} \;& = \nabla_{[l} A^+_{m]}
\\
0 \;& = \nabla_{(l}A^+_{m)}\end{aligned}$$ The gradient of the dilaton corresponds to $A^-_n$, while $A_n^+$ does not have a clear interpretation in the Type IIB supergravity. The “observable” $A_n^+$ is dual to the unphysical vertex of [@Mikhailov:2012id]. The unphysical vertex is not BRST trivial. However, as we explained in [@Mikhailov:2012id], it should be thrown away because it leads to a quantum anomaly in the worldsheet sigma-model at the 1-loop level.
#### Generic element of $H_2(I)$
The “generic” element is: $${\cal O} = x_L\wedge x_R$$ where $x_L\in I\cap {\cal L}^L$ and $x_R\in I\cap {\cal L}^R$. Notice that the following expression: $$(\nabla_m x_L)\wedge x_R - x_L\wedge (\nabla_m x_R)$$ is zero in homology, [*i.e.*]{} exact: $$\begin{aligned}
(\nabla_m x_L)\wedge x_R - x_L\wedge (\nabla_m x_R) =
\delta( (\nabla_m^L - \nabla_m^R)\wedge x_L \wedge x_R )\end{aligned}$$ Indeed, the generic gauge-invariant SUGRA operator can be understood as the product of two gauge-invariant Maxwell operators ${\cal O}_L$ and ${\cal O}_R$, with the condition that ${\cal O}_L \;\stackrel{\leftrightarrow}{\partial\over{\partial x^m}} {\cal O}_R = 0$. The zero momentum special operators of the form (\[DivHVsDA\]) are not of this form.
Higher ghost numbers
--------------------
This section was [**added in the revised version**]{} of the paper. We have previously claimed that the cohomology at the ghost number higher than 2 vanishes. We are grateful to the referee for insisting that we present a proof of this statement. Upon careful examination, it turns out that the statement is wrong. There is some nontrivial cohomology at least at the ghost number 3. Here we will only do a preliminary analysis:
- We prove that the cohomology at the ghost number $>4$ vanishes.
- We give an example of the nontrivial cohomology class at the ghost number 3.
We suspect that the cohomology at the ghost numbers 3 and 4 is a finite-dimensional space, and is in some way related to the unphysical states of [@Bedoya:2010qz; @Mikhailov:2012id].
We will start by proving the vanishing theorem for the super-Maxwell cohomology at the ghost number higher than 1. We will then point out that the SUGRA BRST complex is [*almost*]{} the tensor product of two super-Maxwell complexes (the “left sector” and the “right sector”). If it were, literally, the tensor product, that would indeed imply the vanishing theorem at the ghost number $>2$. But in fact, even in flat space there is some “interaction” between the left and the right sector, and this leads to a nontrivial cohomology at least at the ghost number 3.
### Super-Maxwell BRST complex
The cohomology of the super-Maxwell BRST complex: $$Q_{\rm SMaxw} = \lambda^{\alpha}\left(
{\partial\over\partial \theta^{\alpha}} +
\Gamma^m_{\alpha\beta}\theta^{\beta}{\partial\over\partial x^m}
\right)$$ is only nontrivial at the ghost numbers 0 and 1.
#### Sketch of the proof
This fact is well-known in the pure spinor formalism. At the ghost number 0, the cohomology is formed by the constants (no dependence on $\lambda$, $x$ and $\theta$). At the ghost number 1, the cohomology consists of the solutions of the free Maxwell equation and the free Dirac equation. The vanishing of the cohomology at the ghost number 2 is equivalent to the following two statements: 1) for any current $j_m$ such that $\partial_mj_m=0$ there always exists a gauge field $F_{mn}$ satisfying $\partial_{[k}F_{lm]}=0$ and $\partial_mF_{mn} = j_n$, and 2) for any spinor $\psi$ there exists a spinor $\phi$ such that $\Gamma^m\partial_m\phi = \psi$. The vanishing of the cohomology at the ghost number 3 is equivalent to the statement that for any $\rho$ there exists $j_m$ such that $\partial_m j_m = \rho$. All these facts are proven in any graduate course of classical electrodynamics.
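As an illustration (our own spelling-out, not part of the original argument), all three existence statements can be checked schematically in momentum space, assuming the fields decay so that $\Box$ is invertible, i.e. away from the zero-momentum (polynomial in $x$) sector which is discussed separately in this paper: $$\begin{aligned}
F_{mn} = \;& \Box^{-1}\left(\partial_m j_n - \partial_n j_m\right),
\qquad \partial_m F_{mn} = \Box^{-1}\left(\Box j_n - \partial_n\partial_m j_m\right) = j_n,
\\
\phi = \;& \Box^{-1}\Gamma^m\partial_m\psi,
\qquad\qquad\quad\;\; \Gamma^n\partial_n\phi = \Box^{-1}\Gamma^n\Gamma^m\partial_n\partial_m\psi = \psi,
\\
j_m = \;& \partial_m\,\Box^{-1}\rho,
\qquad\qquad\qquad \partial_m j_m = \rho,\end{aligned}$$ while $\partial_{[k}F_{lm]}=0$ holds automatically because partial derivatives commute.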
### Type IIB BRST complex
The BRST complex of Type IIB in flat space is [*almost*]{} the tensor product of two SMaxwell complexes: $$\label{BRSTSMaxwTimesSMaxw}
Q_{{\rm SMaxw}\otimes {\rm SMaxw}} =
\lambda^{\alpha}_L \left(
{\partial\over\partial \theta_L^{\alpha}} +
\Gamma^m_{\alpha\beta}\theta_L^{\beta}{\partial\over\partial x_L^m}
\right)
+
\lambda^{\hat{\alpha}}_R \left(
{\partial\over\partial \theta_R^{\hat{\alpha}}} +
\Gamma^m_{\hat{\alpha}\hat{\beta}}
\theta_R^{\hat{\beta}}{\partial\over\partial x_R^m}
\right)$$ The cohomology of (\[BRSTSMaxwTimesSMaxw\]) is the tensor product of the cohomologies of two super-Maxwell complexes. Therefore it is only nontrivial at the ghost numbers 0,1 and 2. However, in the Type IIB BRST complex there is no separation of $x$ into $x_L$ and $x_R$. The actual BRST complex is therefore different from (\[BRSTSMaxwTimesSMaxw\]): $$\label{QSUGRA}
Q_{\rm SUGRA} =
\lambda^{\alpha}_L \left(
{\partial\over\partial \theta_L^{\alpha}} +
\Gamma^m_{\alpha\beta}\theta_L^{\beta}{\partial\over\partial x^m}
\right)
+
\lambda^{\hat{\alpha}}_R \left(
{\partial\over\partial \theta_R^{\hat{\alpha}}} +
\Gamma^m_{\hat{\alpha}\hat{\beta}}
\theta_R^{\hat{\beta}}{\partial\over\partial x^m}
\right)$$ The difference is that the left and the right sector have a common $x$ instead of separate $x_L$ and $x_R$. We also write: $$Q_{\rm SUGRA} = Q_L + Q_R$$ where $Q_L$ and $Q_R$ are the first and second terms on the right hand side of (\[QSUGRA\]).
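Let us also recall a standard fact of the pure spinor formalism (stated here only as a reminder, not as a new result): each of the two terms is nilpotent thanks to the pure spinor constraints, $$Q_L^2 \;=\; \lambda_L^{\alpha}\lambda_L^{\beta}\,\Gamma^m_{\alpha\beta}\,{\partial\over\partial x^m} \;=\; 0
\quad\mbox{\tt because}\quad
\lambda_L\Gamma^m\lambda_L = 0\,,$$ and similarly $Q_R^2=0$, while $\{Q_L,Q_R\}=0$ since the two terms involve independent $\theta_L$ and $\theta_R$; therefore $Q_{\rm SUGRA}^2=0$.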
#### Vanishing theorem:
$H^n_{Q_{\rm SUGRA}} = 0$ for $n>4$. Let us consider, for example, a vertex of the ghost number $5$.
#### Lemma
Given a vertex at the ghost number 5, we can always modify it by adding $Q$-exact terms so that the new vertex has only terms of the type $\lambda_L^1\lambda_R^4$.
We have to prove that the terms with $\lambda_R^5$, $\lambda_L^2\lambda_R^3$, $\lambda_L^3\lambda_R^2$, $\lambda_L^4\lambda_R$ and $\lambda_L^5$ can be gauged away. The term with $\lambda_R^5$ is $Q_R$-closed. Suppose that the term with the lowest power of $\theta_R$ is proportional to $\lambda_R^5\theta_R^p$. We observe that this term is closed under $\lambda_R{\partial\over\partial\theta_R}$ and therefore is equal to $\lambda_R{\partial\over\partial\theta_R}$ of some expression proportional to $\lambda_R^4\theta_R^{p+1}$. This means that we can add $Q$-exact terms so that the new vertex has terms of the order $\lambda_R^5$ starting with $\lambda_R^5\theta_R^{p+2}$. An induction by $p$ implies that the terms containing $\lambda_R^5$ can be all gauged away. Similarly, we can gauge away terms proportional to $\lambda_L^5$, then terms proportional to $\lambda_L^4\lambda_R$, then $\lambda_L^3\lambda_R^2$, then $\lambda_L^2\lambda_R^3$. This [**proves the Lemma**]{}.
Now we are left with the terms proportional to $\lambda_L^1\lambda_R^4$. In this gauge the vertex operator is both $Q_R$-closed and $Q_L$-closed. Let us look at the expansion in powers of $\theta_R$. Schematically: $$V = \lambda_R^4\left(
\theta_R^k\phi_k(\lambda_L,\theta_L,x) +
\theta_R^{k+1}\phi_{k+1}(\lambda_L,\theta_L,x) + \ldots
\right)$$ where every $\phi_j$ is linear in $\lambda_L$. We observe that all these $\phi_j$’s are annihilated by $Q_L$ (because $Q_LV=0$ and $Q_L$ does not act on $\theta_R$): $$Q_L\phi_j =0$$ We also observe that in the leading term, the coefficient of $\phi_k$ is annihilated by $\lambda_R{\partial\over\partial\theta_R}$. This implies: $$\begin{aligned}
V =\;& Q_{\rm SUGRA}\left(
\lambda_R^3\theta_R^{k+1}\phi_k(\lambda_L,\theta_L,x)
\right) \; +
\nonumber \\
\;& + \lambda_R^4\left(
\theta_R^{k+1}\phi_{k+1}(\lambda_L,\theta_L,x) +
\theta_R^{k+2}\tilde{\phi}_{k+2}(\lambda_L,\theta_L,x) + \ldots
\right)\end{aligned}$$ This means that we are able to increase the order of the leading term by adding a $Q_{\rm SUGRA}$-exact expression. The induction in $k$ [**proves the Theorem**]{}.
But is it true that $H^n_{Q_{\rm SUGRA}} =0 $ for $n=3$ and $n=4$? It turns out that at least for $n=3$ the cohomology is nontrivial. The fact that the cohomology at the ghost number higher than 2 is nontrivial is (for us) unexpected. We will leave this for future research, giving here only an example.
#### Example of a vertex at the ghost number 3
For any constant 5-form $F$, let us denote $\hat{F} = F_{klmnp}\Gamma^{klmnp}$. Consider the following coboundary of $Q_{{\rm SMaxw}\otimes {\rm SMaxw}}$: $$\begin{aligned}
\label{PhiIsQOfPsi}
\Phi[F] = \;& Q_{{\rm SMaxw}\otimes {\rm SMaxw}}\Psi[F]\end{aligned}$$ where $$\begin{aligned}
\Psi[F] =\;&
(\theta_L\Gamma^p\lambda_L) \left(\theta_L\Gamma_p\;
(x_L^m\Gamma_mx_R^n\Gamma_n + 5||x_L||^2)\hat{F}\;
\Gamma_q\theta_R
\right)(\lambda_R\Gamma^q\theta_R)\;+
\nonumber \\
\;& + (\theta_L\Gamma^p\lambda_L) \left(\theta_L\Gamma_p\;
x_L^m\Gamma_m f[\lambda_R\theta_R^4]\right)
+ \left(
g_n[\lambda_L\theta_L^4]x_R^n\hat{F}\Gamma_q\theta_R
\right)(\lambda_R\Gamma^q\theta_R)\end{aligned}$$ where $f[\lambda_R\theta_R^4]$ is chosen so that: $$\left(\lambda_R{\partial\over\partial\theta_R} + (\theta_R\Gamma^l\lambda_R)
{\partial\over\partial x_R^l}\right)
\left(
x_R^n\Gamma_n \hat{F} \Gamma_q\theta_R(\lambda_R\Gamma^q\theta_R)
+ f[\lambda_R\theta_R^4]
\right)= 0$$ and $g[\lambda_L\theta_L^4]$ is chosen so that: $$\begin{aligned}
\left(\lambda_L{\partial\over\partial\theta_L} + (\theta_L\Gamma^l\lambda_L)
{\partial\over\partial x_L^l}\right)\left(
(\theta_L\Gamma^p\lambda_L)
\theta_L\Gamma_p \left(x_L^m\Gamma_m \Gamma_n - 10 x_L^n\right)
+ g^n[\lambda_L\theta_L^4]
\right) = 0\end{aligned}$$ Such $f[\lambda_R\theta_R^4]$ and $g^n[\lambda_L\theta_L^4]$ exist because the expression $x_R^n\Gamma_n \hat{F}$ satisfies the “right” Dirac equation: $${\partial\over\partial x_R^k}\left(x_R^n\Gamma_n \hat{F}\right)\Gamma_k = 0$$ and the expression $\left(x_L^m\Gamma_m \Gamma_n - 10 x_L^n\right)$ satisfies the “left” Dirac equation: $${\partial\over\partial x^k_L}\Gamma_k \left(x_L^m\Gamma_m \Gamma_n - 10 x_L^n\right) = 0$$ We will now prove that $\Phi[F]$ depends on $x_L$ and $x_R$ only in the combination $x_L + x_R$. Indeed, for a constant $c^m$ let us introduce $\Xi[c,F]$ as follows: $$\begin{aligned}
\Xi[c,F]=\;&
c^m\left(
{\partial\over\partial x_L^m} - {\partial\over\partial x_R^m}
\right) \Psi[F] =
\nonumber \\
=\;&
(\theta_L\Gamma^p\lambda_L)\left(
\theta_L\Gamma_p \;c^m\Gamma_m \; x_R^n\Gamma_n \hat{F}\Gamma_q\theta_R
\right)(\lambda_R\Gamma^q\theta_R) \;+
\nonumber \\
\;&
+ (\theta_L\Gamma^p\lambda_L) \left(\theta_L\Gamma_p\;
c^m\Gamma_m f[\lambda_R\theta_R^4]\right) \; -
\nonumber \\
\;&
- (\theta_L\Gamma^p\lambda_L) \left(\theta_L\Gamma_p\;
(x_L^m\Gamma_mc^n\Gamma_n - 10(x_Lc))\hat{F}\;
\Gamma_q\theta_R
\right)(\lambda_R\Gamma^q\theta_R) \; -
\nonumber \\
\;&
- \left(g^n[\lambda_L\theta^4_L] c_n\hat{F}\;
\Gamma_q\theta_R
\right)(\lambda_R\Gamma^p\theta_R)
\label{XicF}\end{aligned}$$ and we observe that: $$\label{QdIsZero}
Q_{{\rm SMaxw}\otimes {\rm SMaxw}}\;
c^m\left(
{\partial\over\partial x_L^m} - {\partial\over\partial x_R^m}
\right) \Psi[F] = 0$$ Since $Q_{{\rm SMaxw}\otimes {\rm SMaxw}}$ commutes with $c^m\left(
{\partial\over\partial x_L^m} - {\partial\over\partial x_R^m}
\right)$, Eq. (\[QdIsZero\]) implies that $\Phi[F]$ depends on $x_L$ and $x_R$ only in the combination $x_L + x_R$, and is therefore a cocycle of $Q_{\rm SUGRA}$. We will now prove that $\Phi[F]$ is not a coboundary of $Q_{\rm SUGRA}$. We know that $\Phi[F]$ [*is*]{} a coboundary of $Q_{{\rm SMaxw}\otimes {\rm SMaxw}}$, [*i.e.*]{} once we introduce separate $x_L$ and $x_R$ we have (\[PhiIsQOfPsi\]). The question is:
$$\label{Question}
\mbox{\tt Can } \Psi[F] \mbox{ \tt be chosen to depend on } x_L \mbox{ \tt and } x_R \mbox{ \tt only through } x_L + x_R\,?$$ In order to answer this question, it is useful to consider $c$ as a ghost and interpret $\Xi[c,F]$ as a cocycle of the nilpotent operator $c^m\left(
{\partial\over\partial x_L^m} - {\partial\over\partial x_R^m}
\right)$ acting [*on the cohomology*]{} of $Q_{{\rm SMaxw}\otimes {\rm SMaxw}}$. The answer to the question (\[Question\]) is positive only if $\Xi[c,F]$ is a coboundary in this complex. The cohomology of $Q_{{\rm SMaxw}\otimes {\rm SMaxw}}$ is the tensor product of two super-Maxwell solutions. We will now prove that $\Xi[c,F]$ represents a nonzero element of: $$\label{GroupH1}
H^1\left(\; c^m\left(
{\partial\over\partial x_L^m} - {\partial\over\partial x_R^m}
\right)\;,\;\; {\rm SMaxw}_{(x_L)}\otimes {\rm SMaxw}_{(x_R)}
\right)$$ Remember that super-Maxwell is a direct sum of a solution of the free Maxwell equations and a solution of the free Dirac equation. Looking at (\[XicF\]), the corresponding cocycle corresponds to the tensor product of two solutions of the free Dirac equation. Such an element of $ {\rm SMaxw}_{(x_L)}\otimes {\rm SMaxw}_{(x_R)}$ can be represented as a bispinor field $\psi^{\alpha\hat{\beta}}(x_L,x_R)$ satisfying: $$\begin{aligned}
\Gamma^m_{\alpha\alpha'}{\partial\over\partial x^m_L}
\psi^{\alpha'\dot{\beta}}(x_L,x_R) = \;& 0
\label{LeftDiracEqn}\\
{\partial\over\partial x_R^m}\psi^{\alpha\dot{\beta}'}(x_L,x_R)
\Gamma^m_{\dot{\beta}'\dot{\beta}} = \;& 0
\label{RightDiracEqn}\end{aligned}$$ The element of (\[GroupH1\]) corresponding to $\Xi[c,F]$ is: $$\begin{aligned}
\label{Cocycle}
\psi(c;x_L,x_R)^{\alpha\dot{\beta}} = \;&
\left(
\hat{c} \hat{x}_R \hat{F} - (\hat{x}_L\hat{c} - 10 (x_L\cdot c))\hat{F}
\right)^{\alpha\dot{\beta}}\end{aligned}$$ where a hat over a letter stands for the contraction with the gamma-matrices, [*e.g.*]{} $\hat{x}_R = \Gamma_m x^m_R$. Let us analyze the possibility of (\[Cocycle\]) being in the image of $c^m\left(
{\partial\over\partial x_L^m} - {\partial\over\partial x_R^m}
\right)$: $$\begin{aligned}
\;& \left(
\hat{c} \hat{x}_R \hat{F} - (\hat{x}_L\hat{c} - 10 (x_L\cdot c))\hat{F}
\right)^{\alpha\dot{\beta}}\; \stackrel{?}{=}
\nonumber \\
\stackrel{?}{=}\;&
c^m\left(
{\partial\over\partial x_L^m} - {\partial\over\partial x_R^m}
\right)
\left(
\phi^{\alpha\dot{\beta}}_{mn}x_L^mx_L^n +
\chi^{\alpha\dot{\beta}}_{mn}x_L^mx_R^n +
\sigma^{\alpha\dot{\beta}}_{mn}x_R^mx_R^n
\right)\end{aligned}$$ with all three $\phi^{\alpha\dot{\beta}}_{mn}x_L^mx_L^n$, $\chi^{\alpha\dot{\beta}}_{mn}x_L^mx_R^n$ and $\sigma^{\alpha\dot{\beta}}_{mn}x_R^mx_R^n$ satisfying both (\[LeftDiracEqn\]) and (\[RightDiracEqn\]). Looking at the part linear in $x_R$, this implies: $$\begin{aligned}
\left(\Gamma_m \hat{x}_R\hat{F}\right)^{\alpha\dot{\beta}} \;=\;
- \;2 \sigma^{\alpha\dot{\beta}}_{mn}x_R^n
+ \chi_{mn}^{\alpha\dot{\beta}}x_R^n \end{aligned}$$ The left Dirac equation on $\chi$ implies $\Gamma^m_{\alpha\alpha'}\chi^{\alpha'\dot{\beta}}_{mn} = 0$, therefore: $$10\left(\hat{x}_R\hat{F}\right)_{\alpha}^{\dot{\beta}} = -2 \Gamma^m_{\alpha\alpha'}\sigma_{mn}^{\alpha'\dot{\beta}}x_R^n$$ This implies that $\sigma$ is of the form: $$\begin{aligned}
\sigma^{\alpha\dot{\beta}}_{mn} =\;& - 5\delta_{mn}\hat{F}^{\alpha\dot{\beta}} +
s_{mn}^{\alpha\dot{\beta}}
\label{DefS}
\\
\mbox{ \tt where } \; & \Gamma^m_{\alpha\alpha'}s^{\alpha'\dot{\beta}}_{mn} = 0
\label{LeftDiracOnS}\end{aligned}$$ for some $s_{mn}^{\alpha\dot{\beta}}$ symmetric in $m\leftrightarrow n$. As we have already mentioned, $\sigma$ should satisfy the right Dirac equation: $$\label{RightDiracOnSigma}
\sigma_{mn}^{\alpha\dot{\beta}'}\Gamma^n_{\dot{\beta}'\dot{\beta}} = 0$$ Equations (\[LeftDiracOnS\]) and (\[RightDiracOnSigma\]) imply that the traces of $\sigma$ and $s$ are zero: $$\sigma_{mm}^{\alpha\dot{\beta}} = s_{mm}^{\alpha\dot{\beta}} = 0$$ but this contradicts (\[DefS\]) because the trace of $\delta_{mn} \hat{F}$ is not zero. This shows that (\[Cocycle\]) is not in the image of $c^m\left(
{\partial\over\partial x_L^m} - {\partial\over\partial x_R^m}
\right)$, and therefore it represents a nonzero element of the cohomology group (\[GroupH1\]). This implies that $\Phi[F]$ is a BRST-nontrivial vertex operator at the ghost number three.
#### Generalization
The cohomology of $Q_{{\rm SMaxw}\otimes {\rm SMaxw}}$ at the ghost number 3 is trivial, [*i.e.*]{} any cocycle with three $\lambda$’s can be represented as $Q_{{\rm SMaxw}\otimes {\rm SMaxw}}\Psi$. But sometimes $\Psi$ cannot be chosen to depend on $x_L$ and $x_R$ through $x_L + x_R$ only. The obstacle for that is in $H^1({\bf R}^{10},\;{{\rm SMaxw}\otimes {\rm SMaxw}})$ where ${\bf R}^{10}$ is the abelian group of translations, the Lie cohomology differential is $Q_{\rm Lie} = c^m\left({\partial\over\partial x_L^m} - {\partial\over\partial x_R^m}\right)$. Notice that ${{\rm SMaxw}\otimes {\rm SMaxw}}$ splits into components: $$\begin{aligned}
& {{\rm SMaxw}\otimes {\rm SMaxw}} =
\\
= \;&({{\rm Maxw}\otimes {\rm Maxw}})
\oplus
({{\rm Maxw}\otimes {\rm Dirac}})
\oplus
({{\rm Dirac}\otimes {\rm Maxw}})
\oplus
({{\rm Dirac}\otimes {\rm Dirac}})
\nonumber\end{aligned}$$ Consider the cohomology in the sector ${{\rm Dirac}\otimes {\rm Dirac}}$, and more specifically those elements of it which have linear $x$-dependence. It turns out that this cohomology is identified with the quadratic in $x$ solutions $f$ of the “double Dirac equation” modulo solutions presentable as a sum of a solution of the left Dirac equation and a solution of the right Dirac equation: $$\begin{aligned}
\;&
{\partial\over\partial x^m}\Gamma^m_{\alpha\alpha'}
{\partial\over\partial x^n}\Gamma^n_{\dot{\alpha}\dot{\alpha}'}
f^{\alpha'\dot{\alpha}'}(x) = 0
\nonumber \\[5pt]
\mbox{\tt but }\nexists \; s \mbox{ \tt and }\sigma \mbox{ \tt such that: }&
f^{\alpha\dot{\alpha}} = s^{\alpha\dot{\alpha}} + \sigma^{\alpha\dot{\alpha}}
\label{DoNotExistSAndSigma}\\
\;& {\partial\over\partial x^m}\Gamma^m_{\alpha\alpha'}s^{\alpha'\dot{\alpha}} =0
\;\mbox{ \tt and }
{\partial\over\partial x^n}
\sigma^{\alpha\dot{\alpha}'}\Gamma^n_{\dot{\alpha}'\dot{\alpha}} =0
\nonumber\end{aligned}$$ Indeed, given such an $f^{\alpha\dot{\alpha}}$ with the quadratic $x$-dependence, we construct $\psi(c)$ in the following way: $$\psi(c) = \hat{c}\Gamma^n{\partial\over\partial x_R^n} f(x_R) +
\xi(x_L,c)$$ where $\xi$ is some solution of the left Dirac equation, chosen so that $Q_{\rm Lie}\psi = 0$; such a solution always exists because $H^2({\bf R}^{10},\;{\rm Dirac}) = 0$. Suppose that $\psi$ is in the image of $Q_{\rm Lie}$ acting on the quadratic (in $x_{L|R}$) elements of ${\rm Dirac}\otimes {\rm Dirac}$, [*i.e.*]{}: $$\psi(c) \stackrel{?}{=} c^m\left(
{\partial\over\partial x_L^m} - {\partial\over\partial x_R^m}
\right)\left(
\sigma\langle x_R\otimes x_R\rangle +
\chi\langle x_R\otimes x_L\rangle +
\phi\langle x_L\otimes x_L\rangle
\right)$$ The part of $\psi(c)$ linear in $x_R$ would be: $$- c^m{\partial\over\partial x_R^m}\sigma\langle x_R^{\otimes 2}\rangle
+ c^m{\partial\over\partial x_L^m}\chi\langle x_R\otimes x_L\rangle$$ This implies: $$\Gamma^m{\partial\over\partial c^m}\psi(c)\langle x_R\rangle =
10 \Gamma^n{\partial\over\partial x_R^n} f(x_R) = - \Gamma^m{\partial\over\partial x_R^m}\sigma\langle x_R^{\otimes 2}\rangle$$ in other words $f = s + \sigma$ where $\sigma$ satisfies the right Dirac equation and $s$ the left Dirac equation. This contradicts (\[DoNotExistSAndSigma\]).
Eq. (\[DefS\]) has $f^{\alpha\dot{\alpha}} = ||x||^2 \hat{F}^{\alpha\dot{\alpha}}$ with a 5-form $\hat{F}$; there are also solutions corresponding to a 3-form or 7-form $\hat{G}$: $$f = \hat{G} ||x||^2 -
{1\over 52} \hat{x}\Gamma_p\hat{G}\Gamma^p\hat{x}$$ and a 1-form or 9-form $\hat{A}$: $$f = \hat{A} ||x||^2 -
{1\over 28} \hat{x}\Gamma_p\hat{A}\Gamma^p\hat{x}$$ This means that the cohomology at the ghost number 3 at least includes states with the quantum number of a bispinor.
### Dual picture
We conjecture that the dual element of $H_3(I)$ is of the form: $$\begin{aligned}
{\cal O}^{\alpha\dot{\beta}} = \;&\phantom{-}
[\nabla_m^L,W_L^{\alpha}] \wedge W_R^{\dot{\beta}} \wedge
(\nabla^L_m - \nabla^R_m)\;-
\nonumber \\
\;& - W_L^{\alpha}\wedge [\nabla_m^R,W_R^{\dot{\beta}}] \wedge
(\nabla^L_m - \nabla^R_m)\;+
\nonumber \\
\;& +{1\over 2}
W_L^{\alpha}\wedge W_R^{\dot{\beta}'} (\Gamma^{mn})^{\dot{\beta}}_{\dot{\beta}'}
\wedge [\nabla^R_m,\nabla^R_n]\; +
\nonumber \\
\;& +{1\over 2}
W_L^{\alpha'}(\Gamma^{mn})^{\alpha}_{\alpha'}\wedge W_R^{\dot{\beta}}
\wedge [\nabla^L_m,\nabla^L_n]\end{aligned}$$
### Conjecture about the vertices at the ghost number 3
Generally speaking, the physical interpretation of vertex operators is:
- Ghost number 1: global symmetries of the space-time
- Ghost number 2: infinitesimal deformations of the space-time
- Ghost number 3: obstructions to continuing the infinitesimal deformations of the space-time to the second order in the deformation parameter
It is natural to conjecture that the vertices at the ghost number 3 obstruct those and only those infinitesimal deformations which are unphysical in the sense of [@Mikhailov:2012id].
The cohomology at the ghost numbers 3 and 4 deserves a systematic investigation. We hope to return to this subject in future work.
Conclusion
==========
In this paper we presented a relation between the cohomology of the pure spinor BRST complex in AdS space and the relative Lie algebra cohomology.
We used this relation to develop a “dual” point of view on the vertex operators in Type IIB. In this approach, instead of looking at the vertex operators, we look at the dual linear space which is identified with the gauge-invariant local operators of the Type IIB SUGRA. This works both in flat space and in AdS. We observe that some elements of the BRST cohomology do not correspond to any physical states, [*e.g.*]{} the $A^+$ of (\[DefApm\]). It turns out that there are also vertex operators at the ghost number three. They correspond to the obstructions to continuing infinitesimal deformations of the action to the second order. Physically, these obstructions should not be present.
Such “unphysical” elements should go away if we restrict the BRST complex to the operators annihilated by the Virasoro constraints. We do not know what this restriction means from the point of view of the Lie algebra cohomology.
We conclude that the BRST complex (\[StandardBRSTComplex\]) in $AdS_5\times S^5$ and its flat space limit (\[Qflat\]) both have rich mathematical structure. But at the same time the cohomology does not give a complete description of the supergravity excitations. The difference is in some unphysical states. These unphysical states have polynomial $x$-dependence, as opposed to the usually considered exponential $x$-dependence. This polynomial (or “zero-momentum”) sector could be important in the calculation of the scattering amplitude, because the momentum conservation implies that the product of the scattering vertices has zero total momentum.
Exactness of (\[RelativeResolution\]) {#sec:Exactness}
=====================================
This is similar to the proof of the exactness of the standard Koszul resolution of a Lie algebra in [@Knapp]. For any Lie algebra $L$, the universal enveloping algebra $UL$ is filtered so that ${\bf gr}^p UL = F^pUL/F^{p-1}UL = S^pL$. The differential in our complex acts in such a way that we can consistently define: $$\begin{aligned}
\ldots \longrightarrow
F^{p-2}U{\cal L}^{\rm tot}\otimes_{{\bf g}_0} (\Lambda^2 I \otimes_{\bf C} A)
\longrightarrow
F^{p-1}U{\cal L}^{\rm tot}\otimes_{{\bf g}_0} (I \otimes_{\bf C} A)
\longrightarrow \;&
\nonumber \\
\longrightarrow
F^p U{\cal L}^{\rm tot}\otimes_{{\bf g}_{\bar{0}}} A
\longrightarrow
F^p U{\bf g}\otimes_{{\bf g}_{\bar{0}}} A \longrightarrow \;& 0\end{aligned}$$ This defines a series of complexes $d: X_n^p \to X_{n-1}^p$ parametrized by an integer $p$, where $X_{-1}^p = F^p U{\bf g}\otimes_{{\bf g}_{\bar{0}}} A$, $X_0^p=F^p U{\cal L}^{\rm tot}\otimes_{{\bf g}_{\bar{0}}} A$, and $X_n^p = F^{p-n}U{\cal L}^{\rm tot}\otimes_{{\bf g}_0} (\Lambda^n I \otimes_{\bf C} A)$ for $n>0$. At $p=0$ we get the exact sequence: $$0\longrightarrow A \longrightarrow A \longrightarrow 0$$ On the other hand, the factor-complex $X^p/X^{p-1}$ is: $$\begin{aligned}
\ldots \longrightarrow
S^{p-2}\left({\cal L}^{\rm tot}/{\bf g}_{\bar{0}}\right)
\otimes_{{\bf C}} \Lambda^2 I \otimes_{\bf C} A
\longrightarrow
S^{p-1}\left({\cal L}^{\rm tot}/{\bf g}_{\bar{0}}\right)\otimes_{\bf C}
I \otimes_{\bf C} A
\longrightarrow \;&
\nonumber \\
\longrightarrow
S^p \left({\cal L}^{\rm tot}/{\bf g}_{\bar{0}}\right)\otimes_{\bf C} A
\longrightarrow
S^p \left({\bf g}/{\bf g}_{\bar{0}}\right)
\otimes_{\bf C} A \longrightarrow \;& 0\end{aligned}$$ This is exact, being the de Rham complex of the linear space $I$ times functions of additional “inert” variables corresponding to a complement to ${\bf g}_{\bar{0}} + I$ in ${\cal L}^{\rm tot}$. By induction, the complexes $X^p$ are exact for all values of $p$, and therefore the complex (\[RelativeResolution\]) is exact.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank Nathan Berkovits for discussions and the anonymous referee for useful suggestions. This work was supported in part by the Ministry of Education and Science of the Russian Federation under the project 14.740.11.0347 “Integrable and algebro-geometric structures in string theory and quantum field theory”, and in part by the RFFI grant 10-02-01315 “String theory and integrable systems”.
Oscar A. Bedoya, L. Ibiapina Bevilaqua, Andrei Mikhailov, and Victor O. Rivelles, *Notes on beta-deformations of the pure spinor superstring in AdS(5) x S(5)*, Nucl. Phys. **B848** (2011), 155–215, arXiv:1005.0049.
Nathan Berkovits and Osvaldo Chandia, *Superstring vertex operators in an AdS(5) x S(5) background*, Nucl. Phys. **B596** (2001), 185–196, hep-th/0009168.
Nathan Berkovits, *Super-Poincare covariant quantization of the superstring*, JHEP **04** (2000), 018, hep-th/0001035.
, *BRST cohomology and nonlocal conserved charges*, JHEP **02** (2005), 060, hep-th/0409159.
, *Quantum consistency of the superstring in AdS(5) x S(5) background*, JHEP **03** (2005), 041, hep-th/0411170.
Nathan Berkovits and Paul S. Howe, *Ten-dimensional supergravity constraints from the pure spinor formalism for the superstring*, Nucl. Phys. **B635** (2002), 75–105, hep-th/0112160.
Osvaldo Chandia, Andrei Mikhailov, and Brenno C. Vallilo, *A construction of integrated vertex operator in the pure spinor sigma-model in AdS5 x S5*, arXiv:1306.0145.
B. L. Feigin and D. B. Fuchs, *Cohomology of Lie groups and algebras (in Russian)*, VINITI t. 21, 1988.
Alexey L. Gorodentsev, A. S. Khoroshkin, and Alexei N. Rudakov, *On syzygies of highest weight orbits*, arXiv:math/0602316.
G. Hochschild, *Relative homological algebra*, Trans. Amer. Math. Soc. **82** (1956), 246–269.
Paul S. Howe, *Pure spinors lines in superspace and ten-dimensional supersymmetric theories*, Phys. Lett. **B258** (1991), 141–144.
Anthony W. Knapp, *Lie groups, Lie algebras, and cohomology*, Princeton University Press, 1988.
Carlos R. Mafra, *Superstring scattering amplitudes with the pure spinor formalism*, arXiv:0902.1552.
Andrei Mikhailov, *Finite dimensional vertex*, JHEP **1112** (2011), 5, arXiv:1105.2231.
, *Symmetries of massless vertex operators in AdS(5) x S^5*, Journal of Geometry and Physics (2011), arXiv:0903.5022.
, *Cornering the unphysical vertex*, JHEP **082** (2012), arXiv:1203.0677.
, *A generalization of the Lax pair for the pure spinor superstring in AdS5 x S5*, arXiv:1303.2090.
M. Movshev and Albert S. Schwarz, *Algebraic structure of Yang-Mills theory*, arXiv:hep-th/0404183.
, *On maximally supersymmetric Yang-Mills theories*, Nucl. Phys. **B681** (2004), 324–350, arXiv:hep-th/0311132.
, *Supersymmetric deformations of maximally supersymmetric gauge theories*, arXiv:0910.0620.
Bengt E. W. Nilsson, *Simple ten-dimensional supergravity in superspace*, Nucl. Phys. **B188** (1981), 176.
The Stacks Project Authors, *Stacks Project*, <http://math.columbia.edu/algebraic_geometry/stacks-git>.
Edward Witten, *Twistor-like transform in ten-dimensions*, Nucl. Phys. **B266** (1986), 245.
[^1]: A thorough investigation of the consequences of the basic constraint (\[BasicConstraint\]) can be found in [@Mafra:2009wq]
[^2]: A nice review can be found in the introductory part of [@Gorodentsev:2006fa]; cohomology with coefficients in a representation was not considered in [@Gorodentsev:2006fa], but it was discussed in [@Movshev:2009ba]
[^3]: Frobenius reciprocity implies a relation between (\[StandardBRSTComplex\]) and (\[BRSTComplexGeneralRepresentation\]), see [@Mikhailov:2009rx].
[^4]: For introduction into the Lie algebra cohomology, see [@Knapp; @FeiginFuchs]
[^5]: Note in the revised version: we are grateful to the referee of [@Chandia:2013kja] for pointing out an error in the original version of this subsection.
[^6]: The coefficient $1\over 10$ depends on the choice of normalization for $\nabla_{\alpha}$; in our conventions $f_{\alpha\beta}{}^m = \Gamma^m_{\alpha\beta}$, and the projection $\mbox{pr}(\nabla_m)$ of $\nabla_m$ to $\bf g$ satisfies: $(\mbox{ad}_{{\rm pr}(\nabla_m)})^2|_{{\bf g}_{\bar{3}}}=1$ — no summation over $m$.
[^7]: This is the Poincaré duality, Section VI.3 of [@Knapp].
[^8]: Gauge invariance here means diffeomorphism invariance plus the various gauge symmetries of the Type IIB SUGRA.
[^9]: We did not prove that (\[DilatonAtZero\]) is not exact. One can compute its value on some vertex operator and show that it is nonzero; but this is technically a nontrivial computation, and we did not do it.
[^10]: Since the homology of $I$ is $I$-invariant, we can calculate either $\nabla_n^L\nabla^kH_{klm}$ or $\nabla_n^R\nabla^kH_{klm}$; it is easier to calculate $\nabla_n^R\nabla^kH_{klm}$
[^11]: If we try to change the gauge $B_{mn}\rightarrow B_{mn} + \partial_{[m}\Lambda_{n]}$ to get rid of $\partial^nB_{mn}$, this would generate some $\lambda_L\lambda_L$ and $\lambda_R\lambda_R$ terms [@Bedoya:2010qz].
On the front panel you will find several 2mm banana sockets with different colors. Their functions are briefly explained below.\
1. 5V OUT - This is a regulated 5V power supply that can be used for powering external circuits. It can deliver only up to 100 mA of current, which is derived from the 9V unregulated DC supply from the adapter.\
2. Digital outputs - four RED sockets at the lower left corner. The socket marked D0\* is buffered with a transistor; it can be used to drive 5V relay coils. The logic HIGH output on D0 will be about 4.57V whereas on D1, D2, D3 it will be about 5.0V. D0 should not be used in applications involving precise timing of less than a few milliseconds.\
3. Digital inputs - four GREEN sockets at the lower left corner. It might sometimes be necessary to connect analog outputs swinging between -5V and +5V to the digital inputs. In this case, you MUST use a 1K resistor in series between your analog output and the digital input pin.\
4. ADC inputs - four GREEN sockets marked CH0 to CH3\
5. PWG - Programmable Waveform Generator\
6. DAC - 8 bit Digital to Analog Converter output\
7. CMP - Analog Comparator negative input, the positive input is tied to the internal 1.23 V reference.\
8. CNTR - Digital frequency counter (only for 0 to 5V pulses)\
9. 1 mA CCS - Constant Current Source, BLUE socket, mainly for Resistance Temperature Detectors (RTDs).\
10. Two variable gain inverting amplifiers, GREEN sockets marked IN and BLUE sockets marked OUT with YELLOW sockets in between to insert resistors. The amplifiers are built using TL084 Op-Amps and have a certain offset which has to be measured by grounding the input and accounted for when making precise measurements.\
11. One variable gain non-inverting amplifier. This is located on the bottom right corner of the front panel. The gain can be programmed by connecting appropriate resistors from the Yellow socket to ground.\
12. Two offset amplifiers to convert -5V to +5V signals to 0 to 5V signals. This is required since our ADC can only accept a 0 to 5V input range. For digitizing signals swinging between -5V and +5V we need to convert them first to the 0 to 5V range. Input is GREEN and output is BLUE.\
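The conversions implied by items 9 and 12 are simple enough to do in software. The sketch below is only an illustration (the function names are ours, not part of this instrument's documentation, and an ideal, exactly linear offset stage is assumed): any stage that maps -5V..+5V onto 0..5V must implement v_out = v_in/2 + 2.5V, and with the 1 mA constant current source the RTD resistance is numerically 1000 times the voltage across it.

```python
def offset_to_bipolar(v_measured: float) -> float:
    """Undo the ideal offset-amplifier mapping v_out = v_in/2 + 2.5 (item 12)."""
    # v_measured is the 0..5 V value seen by the ADC; the return value is the
    # original -5..+5 V signal at the amplifier input.
    return 2.0 * (v_measured - 2.5)

def rtd_resistance_ohms(v_across_rtd: float) -> float:
    """Resistance of an RTD driven by the 1 mA constant current source (item 9)."""
    # Ohm's law with I = 1 mA: R = V / 0.001 = 1000 * V.
    return v_across_rtd / 1.0e-3

# Example: an ADC reading of 1.25 V corresponds to a -2.5 V input signal,
# and 0.1 V across an RTD corresponds to 100 ohms (a Pt100 near 0 degrees C).
print(offset_to_bipolar(1.25))      # -2.5
print(rtd_resistance_ohms(0.1))     # 100.0
```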
DFTUZ 95-01
HUPD 9505
January, 1995
[**One loop renormalization of the four-dimensional theory for quantum dilaton gravity**]{}
[Ilya L. Shapiro]{}\
Departamento de Fisica Teorica, Universidad de Zaragoza, 50009, Zaragoza, Spain. [^1]\
Department of Physics, Hiroshima University, Higashi-Hiroshima 724, Japan. [^2]\
[**Abstract**]{}
We study the one-loop renormalization of the most general metric-dilaton theory containing only second derivative terms. The classical action includes three arbitrary functions of the dilaton. The general theory can be divided into two classes: the models of the first class are equivalent to a scalar field conformally coupled to gravity, and also to general relativity with a cosmological term. The models of the second class have one extra degree of freedom, which corresponds to the dilaton. We calculate the one-loop divergences for the models of the second class and find that the theory is not renormalizable off mass shell. At the same time, the arbitrary functions of the dilaton in the starting action can be fine-tuned in such a manner that all the higher derivative counterterms disappear on shell. The only structures in both the classical action and the counterterms which survive on shell are the potential (cosmological) ones. They can be removed by a renormalization of the dilaton field, which acquires a nontrivial anomalous dimension; this leads to an effective running of the cosmological constant. For some of the renormalizable solutions of the theory the observable low energy value of the cosmological constant is small compared with the Newtonian constant. We also discuss another application of our result. In particular, our calculations in the general dilaton model in the original variables make it possible to estimate quantum effects in the $\Lambda+\alpha R+\beta R^2$ theory.
Introduction
============
Recently there has been considerable interest in metric-scalar gravity in four dimensions. The active research in this field is motivated by several reasons (see [@DEF] for an interesting discussion of the subject). In particular, the effective action of the (super)string depends on both the metric and the dilaton (see, for example, [@GSW]). Such an effective action arises as a power series in the string loop parameter $\alpha'$, and the standard point of view is that the higher orders in this expansion correspond to higher energies. From this point of view, at a lower energy scale the action for gravity has the form of the lower derivative dilaton action. On the other hand, the presence of the dilaton in a low derivative gravity action leads to inflationary cosmological solutions, which enables one to solve some specific problems in cosmology. The problem of classical solutions and the cosmological phase transitions in a dilaton theory has been extensively studied (see, for example, [@bar2; @maeda; @wein; @CO; @DN]). Moreover, it turned out that a special version of the dilaton gravity is classically equivalent to the restricted higher derivative gravity theory which includes only the square of the scalar curvature in addition to the Hilbert-Einstein action [@whit; @bar2; @maso] (see also the last paper for more complete references). This theory is also of considerable cosmological interest because it admits inflationary solutions [@star; @baot; @MMS; @hans; @CO].
Perhaps a completely consistent theory of quantum gravity can be constructed within the string model, and gravity will then be described by an effective action within this framework. However, string theory is expected to be valid at the Planck energies and above, and if one wishes to deal with energies below the Planck scale, it is natural to suppose that the quantum effects of gravity will be related to some low energy action. One can, for instance, apply higher derivative gravity for this purpose. Higher derivative gravity is renormalizable [@9; @vortyu] and allows a renormalization group study of some physical effects like asymptotic freedom [@11; @12; @bksvw] and phase transitions [@bosh; @odsh1], but it is not unitary (at least within the usual perturbation scheme; see [@book] for an introduction and more complete references). Thus at the moment we do not have any consistent theory which is applicable below the Planck scale, and any research in this field is based on the choice of some model which allows us to explore some quantum gravity effects. In the present paper we consider the four dimensional metric-dilaton model including the second derivative terms only. We choose the most general action including arbitrary functions $A, B, C$ of the dilaton $\phi$ $$S=\int d^4x \sqrt{-g}\,\left\{\,A(\phi)\,g^{\mu\nu}\,\partial_\mu\phi\,\partial_\nu\phi + B(\phi)\,R + C(\phi)\,\right\} \label{0.1}$$ that covers all special cases, including the string inspired action, the special (relevant from the cosmological viewpoint) case of higher derivative gravity, and also admits some other interesting applications. Such a model is non-renormalizable, which can already be seen from power counting. Indeed, one can suppose that all the necessary counterterms are introduced from the very beginning, but then the finite parts of the amplitudes and also the “beta functions” for the generalized couplings will be ill defined because of the gauge fixing and parametrization dependence, and therefore any analysis becomes inconsistent. However, there are a few possibilities to obtain some sound results for the theory (\[0.1\]) on the quantum level. First of all, there is some interest in exploring the one-loop renormalization of the theory and comparing the results with the ones for General Relativity [@hove]. In the latter case all the one-loop counterterms vanish on mass shell (if the cosmological term is lacking) and hence the one-loop $S$-matrix is finite. The theory with a cosmological constant is renormalizable [@cosm]; however, if one introduces matter fields, the one-loop renormalizability is lacking even on mass shell. It is interesting to know whether this is so for the dilaton model (\[0.1\]). In this case the situation is much more complicated, because the number of possible counterterms is much larger than in the pure metric theory. It turns out, however, that it is possible to reduce the counterterms to a few structures which survive on mass shell.
Furthermore, if the consideration is restricted to the one-loop on shell case, then the theory with the cosmological term $C(\phi)$ can be renormalizable, which leads to some general conjectures about the high energy behavior of quantum gravity [@11]. Next, we can restrict ourselves to some special backgrounds where the theory is renormalizable. For example, the cosmological inflationary background provides the renormalizability of the special higher derivative model which is a particular case of the above model [@OV]. On the other hand, one can introduce an additional constraint on the background dilaton and regard it as constant. This way is also of some cosmological interest, because the renormalizability in the potential sector enables one to evaluate the significance of quantum gravity for the cosmological phase transitions.
The action (\[0.1\]) may be viewed as the second derivative part of the general (fourth derivative) model of dilaton gravity, which has been recently investigated in [@ejos]. In [@ejos] we restricted ourselves to the case when only the scalar field is the quantum variable. Although the general case is very interesting, the explicit calculations are too cumbersome because of the presence of higher derivatives. Here we perform the one-loop calculations in the theory (\[0.1\]), considering both fields $\phi$ and $g_{\mu\nu}$ as quantum ones. We start with the general model (\[0.1\]) and then turn to the analysis of special cases.
The paper is organized as follows. In section 2 we discuss the different conformally equivalent forms of the theory (\[0.1\]), and show that all of them can be divided into two sets. Models of one set are classically equivalent to the conformal scalar-metric theory and, simultaneously, to General Relativity. The models of the second set include a physical degree of freedom corresponding to the dilaton (or conformal factor), and in the forthcoming sections we restrict the consideration to the models of this class. In section 3 the general structure of the renormalization of (\[0.1\]) is explored both off and on mass shell. In section 4 we calculate the one-loop counterterms. To do this we apply the method which was developed in [@odsh] within the two dimensional dilaton gravity. It turns out that it is useful in $d=4$ as well, and not only in the model (\[0.1\]) but also in the higher derivative dilaton gravity formulated recently in [@ejos] (see also the discussion in [@sh94]). In section 5 the concrete analysis of the on shell renormalization of the model is performed. Here we fine-tune the functions $A$ and $B$ to provide one-loop finiteness of the theory without the $C$ term. If the potential term is included, then the one-loop on shell renormalizability requires the vanishing of the Einstein counterterm. As a result we are left with the cosmological type divergences only, and it turns out that they can be removed by a renormalization of the scalar dilaton field. In section 6 the renormalization of the dilaton theory (\[0.1\]) interacting with matter fields is discussed. It turns out that qualitatively the structure of the counterterms is the same as in Einstein gravity, and the dilaton-metric theory with matter is non-renormalizable even on mass shell. In section 7 we give a qualitative discussion of the renormalization in two special cases, one of which is rather interesting and has to be analyzed separately. The last section contains a discussion of the results.
General notes on the dilaton gravity
====================================
If we are interested in understanding the parametrization dependence of the dilaton action, it is useful to start with a simple particular case of the general action (\[0.1\]) $$S=\int d^4x \sqrt{-g'}\,\left\{\,\Phi\, R'+ V(\Phi)\,\right\} \label{1.1}$$ Here the curvature $R'$ corresponds to the metric $g'_{\mu\nu}$ and $g'= \det (g'_{\mu\nu})$. Let us now transform this action to new variables $g_{\mu\nu}$ and $\phi$ according to $$g'_{\mu\nu}=g_{\mu\nu}\,e^{2\sigma(\phi)}, \qquad \Phi=\Phi(\phi) \label{1.2}$$ where $\sigma(\phi)$ and $\Phi(\phi)$ are arbitrary functions of $\phi$. In the new variables the action becomes: $$S=\int d^4x \sqrt{-g}\,\left\{\,\Phi(\phi)\,R\,e^{2\sigma(\phi)}+ 6\,(\nabla\phi)^2\,e^{2\sigma(\phi)}\left[\Phi\sigma'+\Phi'\right]\sigma' + V(\Phi(\phi))\,e^{4\sigma(\phi)}\,\right\} \label{1.3}$$ Therefore we are able to transform the particular action (\[1.1\]) to the general form (\[0.1\]) with $$A(\phi)=6\,e^{2\sigma(\phi)}\left[\Phi\sigma'+\Phi'\right]\sigma', \qquad B(\phi)=\Phi(\phi)\, e^{2\sigma} \label{1.4}$$ It is quite reasonable to explore the inverse problem, that is, to find the form of $\sigma(\phi)$ and $\Phi(\phi)$ that corresponds to the given $A(\phi)$ and $B(\phi)$. One can find that in this case $\sigma(\phi)$ and $\Phi(\phi)$ are defined from the equations $$A=6B_1\sigma_1-6B(\sigma_1)^2, \qquad \Phi = B\, e^{-2\sigma} \label{1.7}$$ Substituting (\[1.7\]) into (\[1.3\]) we find that in the new variables the action has the form $$S=\int d^4x \sqrt{-g}\,\left\{\,A(\phi)\,g^{\mu\nu}\,\partial_\mu\phi\,\partial_\nu\phi+ B(\phi)\,R + \left(e^{2\sigma(\phi)}\right)^2 V(\Phi(\phi))\,\right\} \label{1.8}$$ where the last term is nothing but $C(\phi)$ from (\[0.1\]).
It is easy to see that the above transformations lead to some restrictions on the functions $A(\phi)$ and $B(\phi)$. Let us consider the special case of Einstein gravity, that is, put $\Phi=const$. One can rewrite this condition in terms of $A(\phi)$ and $B(\phi)$. Note that $$2AB -3(B_1)^2 = -3\left(\frac{d\Phi}{d\phi}\right)^2 e^{4\sigma(\phi)} \label{1.5}$$ Here and below the lower index shows the order of the derivative with respect to $\phi$. For instance, $$B_1 = \frac{dB}{d\phi}, \qquad A_2 = \frac{d^2A}{d\phi^2}, \qquad \Phi_1 = \frac{d\Phi}{d\phi}, \qquad etc. \label{1.6}$$ Hence it is clear that the case $\;2AB -3(B_1)^2 =0\;$ qualitatively differs from the other ones. Let us now comment on this amusing case. We start with the simplest example $A=1, B=\xi \phi^2$ where $\xi=\frac{1}{6}$. Then the equation (\[1.7\]) can be easily solved and we obtain $\sigma(\phi) = \sigma_0+\ln|\phi|$ and $\Phi=\frac{1}{6}e^{2\sigma_0}=const$. Next, substituting these expressions into (\[1.8\]) we find that in the new variables the transformed action has the form $$S=\int d^4x \sqrt{-g}\,\left\{\, g^{\mu\nu}\,\partial_\mu\phi\,\partial_\nu\phi+ \frac{1}{6}\,R\,\phi^2 + \lambda\,\phi^4\,\right\} \label{1.10}$$ where $\lambda=e^{-4\sigma_0}\;V(\frac{1}{6}\;e^{2\sigma_0})$. Thus we see that in the new variables the starting Hilbert-Einstein action (\[1.1\]) (recall that $\Phi$ is constant here) corresponds to the conformally coupled scalar field $\phi$. The extra scalar degree of freedom in (\[1.10\]) is compensated by an extra symmetry - local conformal invariance. Both theories are equivalent on the classical level. On the quantum level the conformal invariance of the theory (\[1.10\]) will probably be broken because of the non-invariance of the measure of the path integral over the metric (see [@shja] for a discussion of this point in Weyl gravity; this argument is not completely sufficient by itself, and the conformal version should be investigated separately). Thus the new anomalous degree of freedom starts to propagate and the equivalence of the two theories can be violated. Let us notice that the same frame for Einstein gravity has been recently used in [@25] for the investigation of $2+\varepsilon$ quantum gravity.
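As a quick cross-check of the last statement (our own verification), the conformal value $\xi=\frac{1}{6}$ is precisely the one for which the combination (\[1.5\]) vanishes: $$2AB - 3(B_1)^2 \;=\; 2\cdot 1\cdot \xi\phi^2 - 3\,(2\xi\phi)^2 \;=\; 2\xi\phi^2\,(1-6\xi) \;=\; 0
\qquad \mbox{for} \quad \xi=\frac{1}{6}.$$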
And so, all the theories (\[0.1\]) can be divided into two sets. The first set is characterized by $2AB -3(B_1)^2 =0$; it is conformally equivalent to General Relativity with a cosmological constant. For the second set $2AB -3(B_1)^2 \neq 0$. Such models are conformally equivalent to (\[1.1\]) with non-constant $\Phi$. Below we shall deal only with the theories of the second type. On the classical level the change of dynamical variables can be compensated by a change of the functions $A(\phi), B(\phi), C(\phi)$. However, as was recently discussed by Magnano and Sokolowski [@maso], the natural choice of the frame is preferable from the physical point of view already on the classical level. One can face the same situation in the quantum theory as well. To see this, let us consider one interesting particular case [@OV]. If one chooses the potential term in (\[1.1\]) in a special way and makes the shift of the field $\Phi=\phi-\phi_0$, where $\phi_0=const$, then the resulting theory $$S= \int d^4x \sqrt{-g}\,\left\{\,\lambda\,\phi^2+ R\,(\phi-\phi_0)+\Lambda\,\right\} \label{1.12}$$ is equivalent to the special version of higher derivative quantum gravity $$S=\int d^4x\,\sqrt{-g}\,\left\{-\frac{1}{4\lambda}\,R^2-\phi_0\, R+\Lambda\right\} \label{1.11}$$
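To make the classical equivalence between (\[1.12\]) and (\[1.11\]) explicit (a short check in the notation written above, where $\lambda$ denotes the coefficient of the quadratic potential and $\Lambda$ the cosmological term), note that $\phi$ enters (\[1.12\]) without derivatives and can be eliminated through its algebraic equation of motion: $$2\lambda\phi + R = 0 \;\;\Rightarrow\;\; \phi = -\frac{R}{2\lambda},
\qquad
\lambda\phi^2 + R\,(\phi-\phi_0) + \Lambda \;\longrightarrow\;
-\frac{1}{4\lambda}\,R^2 - \phi_0\, R + \Lambda\,,$$ which is exactly (\[1.11\]), i.e. a theory of the $\Lambda+\alpha R+\beta R^2$ type mentioned in the abstract.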
However, on the quantum level this is so only if we do not introduce into the generating functional of the Green functions an external source for the auxiliary field $\phi$. If one makes some nonlinear change of variables, like the conformal transformation described above, the auxiliary field and the conformal factor of the metric get mixed, and we likely lose the simple relation between (\[1.11\]) and (\[1.12\]).
The structure of the counterterms off and on mass shell
=======================================================
The main purpose of the present paper is to investigate the theory (\[0.1\]) on the quantum level within the one-loop approximation. A simple consideration based on power counting shows that the theory is non-renormalizable, just as General Relativity. At the same time, at one-loop order the latter theory is renormalizable on mass shell [@hove]. This property holds even if the cosmological constant is included in the action [@cosm], which enables one to apply some kind of renormalization group approach to its study [@11]. That is why it looks interesting to consider the renormalization of our theory on mass shell. The next reason to do this is the gauge and parametrization independence of the effective action on mass shell.
In this section we write down the classical equations of motion and the possible divergent structures, taking into account only the one loop order. Then we find some simple relations between counterterms and consider the divergent structures which are possible on shell. The equations of motion in the theory (\[0.1\]) have the form $$B R^{\mu\nu}+ g^{\mu\nu}
\left[\left( B_2 -\frac{A}{2} \right)(\nabla \phi)^2
-\frac{BR+C}{2}+B_1(\Box \phi)\right]
+ (A -B_2)(\nabla^{\mu} \phi)(\nabla^{\nu}\phi)
-B_1 (\nabla^{\mu} \nabla^{\nu} \phi) =0$$ $$B_1 R + C_1 -A_1 (\nabla \phi)^2 -2A (\Box \phi) =0 \label{2.1}$$
Before going on to discuss the renormalization of the theory, one has to define the classical dimension of the field $\phi$. The form of the starting Lagrangian shows that there is some dimensional parameter $M$ from the very beginning. One can introduce such a dimensional parameter in different ways, which correspond to different classical dimensions of the scalar field $\phi$. For instance, in the case of dimensionless $\phi$ the arbitrary functions $A, B, C$ include the dimensional parameter $M$ in a trivial way, $A, B \sim M^2$ and $C \sim M^4$. On the contrary, if the dimension of $\phi$ is chosen as unity, then (if we want to consider arbitrary functions $A, B, C$) they depend on the ratio $\frac{\phi}{M}$. Of course the results of the explicit (one-loop in our case) calculations do not depend on this choice, and thus we can choose the dimension of $\phi$ according to our convenience. At this stage it is better to consider dimensionless $\phi$. Then the arbitrary functions $A,B,C$ do not contain the dimensional parameter $M$ in a nontrivial way. Another advantage of this choice is that $\phi$ and the metric have equal dimensions, and therefore the power counting in the dilaton theory is essentially the same as in General Relativity.
If one is interested only in the one-loop divergences, then the counterterms contain the covariant terms of fourth order in derivatives. The most general action of this type has the form [@ejos]: $$\Gamma_{div}^{1-loop}= - \frac{1}{16 \pi^2 (n-4)}
\int d^4x\sqrt{-g}
[ c_w C^2 + c_r R^2 + c_4 R(\nabla \phi)^2 + c_5 R(\Box \phi )+
c_6 R^{\mu \nu}(\nabla_{\mu} \phi)(\nabla_\nu \phi) + c_7 R + c_8 (\nabla\phi)^4 + c_9 (\nabla\phi)^2(\Box\phi) + c_{10} (\Box\phi)^2 + c_{11} (\nabla\phi)^2 + c_{12}\, ] + (s.t.) \label{2.2}$$ where $C^2=C_{\mu\nu\alpha\beta }C^{\mu\nu\alpha\beta}$ is the square of the Weyl tensor and $(\nabla\phi)^2=g^{\mu\nu}\;\nabla_\mu\phi\;\nabla_\nu\phi$. $"s.t."$ means “surface terms”. All $c_{w,r,4,...,12}$ are some functions of $A(\phi), B(\phi), C(\phi)$ and their derivatives. One can easily check the following reduction formulas, which express the other possible structures in terms of the ones above, up to surface terms [@ejos]. $$c_{13}(\nabla^\mu R)(\nabla_\mu\phi)=-c'_{13} R(\nabla_\mu\phi)^2-c_{13}R
(\Box\phi) + (s.t.)$$ $$c_{14}R_{\mu\nu}(\nabla^\mu\nabla^\nu\phi)=-c'_{14}R_{\mu\nu}
(\nabla^\mu\phi)(\nabla^\nu\phi)+{1\over 2}
c'_{14}R(\nabla\phi)^2+{1\over 2}c_{14}R(\Box\phi) + (s.t.)$$ $$c_{15}(\nabla^\nu\phi)(\Box\nabla_\nu\phi)=-c'_{15}(\nabla\phi)^2
(\Box\phi)-c_{15}(\Box\phi)^2+c_{15}R_{\mu\nu}(\nabla^\mu\phi)
(\nabla^\nu\phi) + (s.t.)$$ $$c_{16}(\nabla_\nu\nabla_\mu\phi)^2={1\over 2}c''_{16}(\nabla\phi)^4+
{3\over 2}c'_{16}(\nabla\phi)^2(\Box\phi)+
c_{16}(\Box\phi)^2-c_{16}
R_{\mu\nu}(\nabla^\mu\phi)(\nabla^\nu\phi) + (s.t.)$$ $$c_{17}(\nabla^\nu\Box\nabla_\nu\phi)=
c''_{17}(\nabla\phi)^2(\Box\phi)+
c'_{17}(\Box\phi)^2-c'_{17}R_{\mu\nu}(\nabla^\mu\phi)(\nabla^\nu\phi)
+ (s.t.)$$ $$c_{18}(\Box^2\phi)=c''_{18}(\nabla\phi)^2(\Box\phi)+
c'_{18}(\Box\phi)^2 + (s.t.)$$ $$c_{19}(\Box R)=c''_{19}R(\nabla\phi)^2+c'_{19}R(\Box\phi) + (s.t.)$$ $$c_{20}(\nabla_\nu\phi)(\nabla_\mu\phi)(\nabla^\nu\nabla^\mu\phi)=
-{1\over 2} c'_{20}(\nabla\phi)^4
-{1\over 2}c_{20}(\nabla\phi)^2(\Box\phi) + (s.t.)$$ $$c_{21}(\nabla^\nu\phi)(\nabla_\nu\Box\phi)=-c'_{21}(\nabla\phi)^2 (\Box\phi)-c_{21}(\Box\phi)^2 + (s.t.) \label{2.3}$$ Here $c_{13,..,21} = c_{13,..,21}(\phi)$ are some (arbitrary) functions.
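To illustrate how these identities arise (our own spelling-out; it is not needed for what follows), the first of them is a single integration by parts, $$\int d^4x \sqrt{-g}\; c_{13}\,(\nabla^\mu R)(\nabla_\mu\phi)
= -\int d^4x \sqrt{-g}\;\nabla^\mu\!\left(c_{13}\nabla_\mu\phi\right) R
= -\int d^4x \sqrt{-g}\,\left[\,c'_{13}\, R\,(\nabla\phi)^2 + c_{13}\, R\,(\Box\phi)\,\right],$$ the discarded total derivative being the surface term denoted by $(s.t.)$; the remaining formulas follow in the same way, using in addition the contracted Bianchi identity $\nabla^\mu R_{\mu\nu}=\frac{1}{2}\nabla_\nu R$ and the commutator $[\Box,\nabla_\nu]\phi = R_{\nu\mu}\nabla^\mu\phi$.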
Thus the power counting consideration and the account of symmetries show that the possible counterterms have a complicated form and differ from the classical action. Therefore the theory is expected to be non-renormalizable off shell. Let us now discuss the renormalization on mass shell. For this purpose we shall apply the equations of motion (\[2.1\]) and the reduction formulas (\[2.3\]), and rewrite the counterterms (\[2.2\]) and the classical action (\[0.1\]) in a maximally simple form. In particular, from the equations of motion (\[2.1\]) one can get the following relations $$(\nabla \phi)^2=x R + y,
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
(\Box \phi)=z R + w$$ $$(\nabla^{\mu} \nabla^{\nu} \phi) = r\, (\nabla^{\mu} \phi)(\nabla^{\nu}\phi) + s\, R^{\mu\nu} + t\, g^{\mu\nu} R + u\, g^{\mu\nu} \label{2.4}$$ where $$x(\phi)=
{{2\,A\,B - 3\,{{B_{1}}^2}}\over {-2\,{A^2} - 3\,A_{1}\,B_{1} + 6\,A\,B_{2}}}$$$$y(\phi)=
{{4\,A\,{\rm C} - 3\,B_{1}\,{\rm C}_{1}}\over
{-2\,{A^2} - 3\,A_{1}\,B_{1} + 6\,A\,B_{2}}}$$$$z(\phi)=
{{-\left( B\,A_{1} \right) - A\,B_{1} + 3\,B_{1}\,B_{2}}\over
{-2\,{A^2} - 3\,A_{1}\,B_{1} + 6\,A\,B_{2}}}$$$$w(\phi)=
{{2\,{\rm C}\,A_{1} + A\,{\rm C}_{1} - 3\,B_{2}\,{\rm C}_{1}}\over
{2\,{A^2} + 3\,A_{1}\,B_{1} - 6\,A\,B_{2}}}$$$$r(\phi)=
{{A - B_{2}}\over {B_{1}}}, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
s(\phi)=
{B\over {B_{1}}}$$ $$t(\phi)=
{{B\,A_{1}\,B_{1} + A\,{{B_{1}}^2} - 2\,A\,B\,B_{2}}\over
{2\,B_{1}\,\left( -2\,{A^2} - 3\,A_{1}\,B_{1} + 6\,A\,B_{2} \right) }}$$ $$u(\phi)=
{{2\,{A^2}\,{\rm C} + {\rm C}\,A_{1}\,B_{1} - 2\,A\,{\rm C}\,B_{2} - A\,B_{1}\,{\rm C}_{1}}\over
{2\,B_{1}\,\left( 2\,{A^2} + 3\,A_{1}\,B_{1} - 6\,A\,B_{2} \right) }} \label{2.5}$$ Next, combining the equations of motion (\[2.1\]) and the reduction formula for $c_{14}$ (\[2.3\]) we find $$\int d^4 x \sqrt{-g}\;c_6 R_{\mu \nu} (\nabla^{\mu}\phi)(\nabla^{\nu}\phi)
= \int d^4 x \sqrt{-g}\; [\;
\frac{1}{2}c_6 x R^2 + \frac{1}{2}c_6 R+$$ +f() { -s R\_\^2 + ( r x + z -t)R\^2 +(ry+w-u) R} \] \[2.6\] where $f(\phi)$ is solution of the following differential equation f\_1() + f() r() - c\_6() = 0 \[2.7\] Since the last equation have solution for any $r(\phi)$ and $c_6(\phi)$, we find that the on shell one loop divergences for our dilaton model (\[0.1\]) can be reduced to the form of higher derivative terms without explicit kinetic terms for the dilaton. Of course one can choose another basis and express everything in terms of dilaton structures only, removing the terms with higher powers of curvature.
One-loop calculations
=====================
In this section we shall present the details of the calculation of the one-loop counterterms of the theory (\[0.1\]). For the purpose of calculating the divergences we apply the background field method and the Schwinger-DeWitt technique. The features of the metric-dilaton theory do not require any modification of the calculational scheme, which was basically developed in the similar two-dimensional theory [@odsh].
Let us start with the usual splitting of the fields into background $g_{\mu\nu}, \phi$ and quantum $h_{\mu\nu}, \varphi$ ones $$\phi \rightarrow \phi' = \phi + \varphi\,, \qquad g_{\mu\nu} \rightarrow g'_{\mu\nu} = g_{\mu\nu} + h_{\mu\nu}$$ \[2.8\] The one-loop effective action is given by the standard general expression $$\Gamma^{(1)}={i\over 2}\,\Tr\ln\hat{H}-i\,\Tr\ln\hat{H}_{ghost}\,,$$ \[2.9\] where $\hat{H}$ is the bilinear form of the action (\[0.1\]) with the added gauge fixing term and $\hat{H}_{ghost}$ is the bilinear form of the gauge ghost action. To perform the calculations in the simplest way one needs to introduce a special form of the gauge fixing term: $$S_{gf} = \int d^4 x \sqrt{-g}\;\alpha\;\chi_{\mu}\,\chi^{\mu}$$ \[2.10\] where $\chi_{\mu} = \nabla_{\alpha} \bar{h}_{\mu}^{\,\alpha}+
\beta\nabla_{\mu}h+\gamma \nabla_{\mu} \varphi$, $h=h_{\mu}^{\mu},\;
\bar{h}_{\mu\nu}=h_{\mu\nu}-\frac{1}{4}\;hg_{\mu\nu}$ and $\alpha, \beta,
\gamma$ are some functions of the background dilaton, which can be tuned for our purposes. For instance, if one chooses these functions as follows $$\alpha=-B\,,\qquad \beta=-\ldots\,,\qquad \gamma=-\ldots$$ \[2.11\] then the bilinear part of the action $S+S_{gf}$ and the operator $\hat{H}$ have an especially simple (minimal) structure $$\left(S + S_{gf}\right)^{(2)}
=\int d^4 x \sqrt{-g}\; {\omega} \hat{H} {\omega}^T$$ $$\hat{H}=\hat{K}\,\Box+\hat{L}^{\lambda}\,\nabla_{\lambda}+\hat{M}$$ \[2.12\] Here $\omega=\left(\bar{h}_{\mu\nu},\;h,\; \varphi\right)$ and $T$ means transposition. The components of $\hat{H}$ have the form $$\hat{K}=\left(
\begin{array}{ccc}
\frac{B}{4} \delta^{\mu \nu \alpha \beta} & 0 & 0\\
0 & -\frac{B}{16} & -\frac{B_1}{4} \\
0 & -\frac{B_1}{4} & \frac{B_1^2}{2B} -A
\end{array}
\right)$$ $$\hat{L}^{\lambda}=\left(
\begin{array}{ccc}
\frac{B_1}{4} \left(\delta^{\mu \nu \alpha \beta}
g^{\tau \lambda} +
2 g^{\nu \beta}\left(g^{\mu \tau } g^{\alpha \lambda }
- g^{\alpha \tau } g^{\mu \lambda }\right)\right)
& - \frac{B_1}{4} g^{\mu \tau} g^{\nu \lambda}
& \left( \frac{B_2}{2}-A \right) g^{\mu \tau} g^{\nu \lambda}\\
\frac{B_1}{4} g^{\alpha \tau} g^{\beta \lambda}
& -\frac{B_1}{16} g^{\tau \lambda}
& \left(\frac{A}{4} -\frac{5}{8} B_2 \right) g^{\tau \lambda}\\
\left( A - \frac{B_2}{2}\right) g^{\alpha \tau} g^{\beta \lambda}
& \left( \frac{B_2}{8}-\frac{A}{4}\right) g^{\tau \lambda}
& \left(\frac{B_1^2}{2B} -A\right)_1 g^{\tau \lambda}
\end{array}
\right) (\nabla_{\tau}\phi)$$ [$$\hat{M}=
\left(
\begin{array}{ccc}
\begin{array}{l}
\delta^{\mu \nu \alpha \beta}\left( \frac{B_1}{2}
(\Box \phi) + \left( \frac{B_2}{2}-\frac{A}{4} \right) (\nabla \phi)^2
-\frac{C}{4} \right)
\\ + g^{\nu \beta}\left(
-B_1\left( \nabla^{\mu} \nabla^\alpha \phi \right)
+\left( A-B_2 \right)(\nabla^\mu \phi)(\nabla^\alpha \phi)\right)
\\ +\frac{B}{4}\left( -\delta^{\mu \nu \alpha \beta} R
+ 2 g^{\nu \beta} R^{\mu \alpha}+2R^{\mu \alpha \nu \beta} \right)
\end{array}
\!\!\!\! & \!\!\!\! 0
\!\!\!\! & \!\!\!\! \begin{array}{l}
\frac{B_2}{2}\left( \nabla^{\mu} \nabla^{\nu} \phi \right)
\\
+ \left( \frac{B_3}{2} - \frac{A_1}{2} \right)
(\nabla^\mu \phi)(\nabla^\nu \phi)
\\
- \frac{B_1}{2}R^{\mu \nu}
\end{array}
\\
\!\!\!\! & \!\!\!\!
\!\!\!\! & \!\!\!\!
\\
\frac{B_1}{4}\left( \nabla^{\alpha} \nabla^{\beta} \phi \right)
+ \frac{B_2}{4} (\nabla^\alpha \phi)(\nabla^\beta \phi)
& \!\! \frac{C}{16}
& \!\! \begin{array}{l}
-\frac{3}{8} B_2 (\Box \phi)
\\
+ \left( \frac{A_1}{8} - \frac{3}{8}B_3 \right)(\nabla \phi)^2
\\
+ \frac{B_1}{8} R + \frac{C_1}{4}
\end{array}
\\
\!\!\!\! & \!\!\!\!
\!\!\!\! & \!\!\!\!
\\
A\left( \nabla^{\alpha} \nabla^{\beta} \phi \right)
+ \frac{A_1}{2} (\nabla^\alpha \phi)(\nabla^\beta \phi)
- \frac{B_1}{2}R^{\alpha \beta}
& \begin{array}{l}
-\frac{A}{4} (\Box \phi)
\\
+ \frac{A_1}{8} (\nabla \phi)^2
\\
+ \frac{B_1}{8} R + \frac{C_1}{4}
\end{array}
&
\begin{array}{l}
-A_1(\Box \phi)
\\
-\frac{A_2}{2}(\nabla \phi)^2
\\
+ \frac{B_2}{2} R + \frac{C_2}{2}
\end{array}
\end{array}
\right)$$ ]{} The next problem is to separate the divergent part of $\Tr\ln\hat{H}$. To do this we rewrite the trace in the following way: $$\Tr\ln\hat{H}=\Tr\ln\hat{K}+ \Tr\ln\left(\Box+ \hat{K}^{-1}\,\hat{L}^{\lambda}\,\nabla_{\lambda}+\hat{K}^{-1}\,\hat{M}\right)$$ \[2.15\] One can notice that the first term does not contribute to the divergences. Let us explore the second term, which has the standard minimal form and can be easily evaluated with the use of the standard Schwinger-DeWitt method [@DW; @hove] (see also [@book] for a technical introduction and more complete references).
The bilinear form of the ghost action also has the minimal structure $$\hat{H}_{ghost}= g^{\mu\nu}\,\Box+(\nabla^{\mu}\phi)\,\nabla^{\nu} + (\nabla^{\mu} \nabla^{\nu}\phi) + R^{\mu\nu}$$ \[2.16\] and its contribution to the divergences can be easily derived with the use of the standard methods.
Summing up both contributions we find that the one-loop divergences have the form (\[2.2\]), which is in full accord with the power counting considerations. The coefficient functions $c$ have the form $$c_w=
{{43}\over {120}} - {{{{B_{1}}^2}}\over X},\;\;\;\;\;\;\;\;
\;\;\;\;\;\;\;\;\;\;\;\;X=2A\,B\,-3B_{1}^{2}$$$$c_r=
{1 \over {{X^2}}}
\left[
{1 \over {72}}
(76\,{A^2}\,{B^2} - 132\,A\,B\,{{B_{1}}^2} + 171\,{{B_{1}}^4})
- {1 \over 6}
(B\,( 2\,A\,B + 9\,{{B_{1}}^2} ) \,B_{2})
+ {1 \over 2}
({B^2}\,{{B_{2}}^2})
\right]$$$$c_4=
{1 \over {{X^3}}}
\left[
{1 \over {24\,{B^2}}}
(128\,{A^4}\,{B^4}
+ 8\,A\,{B^5}\,{{A_{1}}^2}
- 144\,{A^2}\,{B^4}\,A_{1}\,B_{1}
- 1552\,{A^3}\,{B^3}\,{{B_{1}}^2}
\right.$$$$+ 36\,{B^4}\,{{A_{1}}^2}\,{{B_{1}}^2}
+ 312\,A\,{B^3}\,A_{1}\,{{B_{1}}^3}
+ 5208\,{A^2}\,{B^2}\,{{B_{1}}^4}
- 6786\,A\,B\,{{B_{1}}^6}
+ 3159\,{{B_{1}}^8})$$$$+ {{B_{2}} \over {6\,B}}
( -80\,{A^3}\,{B^3}
- 6\,{B^4}\,{{A_{1}}^2}
- 24\,A\,{B^3}\,A_{1}\,B_{1}
+ 402\,{A^2}\,{B^2}\,{{B_{1}}^2}
- 54\,{B^2}\,A_{1}\,{{B_{1}}^3}
- 810\,A\,B\,{{B_{1}}^4}$$$$\left.
+ 459\,{{B_{1}}^6} )
+ 2\,( 2\,{A^2}\,{B^2}
+ 3\,{B^2}\,A_{1}\,B_{1}
+ 12\,A\,B\,{{B_{1}}^2}
- 9\,{{B_{1}}^4} ) \,{{B_{2}}^2}
- 6\,A\,{B^2}\,{{B_{2}}^3}
\right]$$$$c_5=
{1 \over {{X^2}}}
\left[
{1 \over {12\,B}}
(4\,A\,{B^3}\,A_{1}
- 128\,{A^2}\,{B^2}\,B_{1}
+ 18\,{B^2}\,A_{1}\,{{B_{1}}^2}
+ 270\,A\,B\,{{B_{1}}^3}
- 225\,{{B_{1}}^5})
\right.$$$$\left.
+ {{B_{2}} \over 2}
( -2\,{B^2}\,A_{1}
+ 4\,A\,B\,B_{1}
+ 3\,{{B_{1}}^3} )
\right]$$$$c_6=
{1 \over {{X^2}}}
\left[
{{B_{1}} \over {{B^2}}}
( 8\,A\,{B^3}\,A_{1}
- 4\,{A^2}\,{B^2}\,B_{1}
- 6\,{B^2}\,A_{1}\,{{B_{1}}^2}
+ 22\,A\,B\,{{B_{1}}^3}
- 15\,{{B_{1}}^5} )
\right.$$$$\left.
+ {{B_{2}} \over B}
( -8\,{A^2}\,{B^2} + 2\,{B^2}\,A_{1}\,B_{1} + 4\,A\,B\,{{B_{1}}^2} -
3\,{{B_{1}}^4} )
- 4\,A\,B\,{{B_{2}}^2}
\right]$$$$c_7=
{1 \over {{X^2}}}
\left[
{2 \over {3\,B}}
( 26\,{A^2}\,{B^2}\,{\rm C}
- 85\,A\,B\,{\rm C}\,{{B_{1}}^2}
+ 63\,{\rm C}\,{{B_{1}}^4}
+ 3\,B\,{\rm C}\,{{B_{1}}^2}\,B_{2}
\right.$$$$\left.
+ 8\,A\,{B^2}\,B_{1}\,{\rm C}_{1}
- 6\,{B^2}\,B_{1}\,B_{2}\,{\rm C}_{1} )
+ {B\,{\rm C}_{2} \over 6}
( -2\,A\,B - 9\,{{B_{1}}^2} + 6\,B\,B_{2} )
\right]$$$$c_8=
{1 \over {32\,{B^4}\,{X^4}}}
\left[
2560\,{A^6}\,{B^6}
- 448\,{A^3}\,{B^7}\,{{A_{1}}^2}
+ 16\,{B^8}\,{{A_{1}}^4}
+ 256\,{A^4}\,{B^7}\,A_{2}
\right.$$$$+ 2432\,{A^4}\,{B^6}\,A_{1}\,B_{1}
- 22528\,{A^5}\,{B^5}\,{{B_{1}}^2}
+ 1280\,{A^2}\,{B^6}\,{{A_{1}}^2}\,{{B_{1}}^2}
- 1152\,{A^3}\,{B^6}\,A_{2}\,{{B_{1}}^2}$$$$- 17120\,{A^3}\,{B^5}\,A_{1}\,{{B_{1}}^3}
+ 96\,{B^6}\,{{A_{1}}^3}\,{{B_{1}}^3}
+ 81568\,{A^4}\,{B^4}\,{{B_{1}}^4}
- 880\,A\,{B^5}\,{{A_{1}}^2}\,{{B_{1}}^4}$$$$+ 1856\,{A^2}\,{B^5}\,A_{2}\,{{B_{1}}^4}
+ 42016\,{A^2}\,{B^4}\,A_{1}\,{{B_{1}}^5}
- 158592\,{A^3}\,{B^3}\,{{B_{1}}^6}
+ 168\,{B^4}\,{{A_{1}}^2}\,{{B_{1}}^6}$$$$- 1248\,A\,{B^4}\,A_{2}\,{{B_{1}}^6}
- 43512\,A\,{B^3}\,A_{1}\,{{B_{1}}^7}
+ 176976\,{A^2}\,{B^2}\,{{B_{1}}^8}
+ 288\,{B^3}\,A_{2}\,{{B_{1}}^8}$$$$+ 16416\,{B^2}\,A_{1}\,{{B_{1}}^9}
- 108144\,A\,B\,{{B_{1}}^{10}}
+ 28323\,{{B_{1}}^{12}}
- 2048\,{A^5}\,{B^6}\,B_{2}$$$$+ 224\,{A^2}\,{B^7}\,{{A_{1}}^2}\,B_{2}
+ 128\,{A^3}\,{B^7}\,A_{2}\,B_{2}
+ 4480\,{A^3}\,{B^6}\,A_{1}\,B_{1}\,B_{2}
- 15248\,{A^2}\,{B^5}\,A_{1}\,{{B_{1}}^3}\,B_{2}$$$$+ 14208\,{A^4}\,{B^5}\,{{B_{1}}^2}\,B_{2}
- 1408\,A\,{B^6}\,{{A_{1}}^2}\,{{B_{1}}^2}\,B_{2}
- 448\,{A^2}\,{B^6}\,A_{2}\,{{B_{1}}^2}\,B_{2}
- 192\,{B^7}\,{{A_{1}}^3}\,B_{1}\,B_{2}$$$$- 28416\,{A^3}\,{B^4}\,{{B_{1}}^4}\,B_{2}
+ 744\,{B^5}\,{{A_{1}}^2}\,{{B_{1}}^4}\,B_{2}
+ 480\,A\,{B^5}\,A_{2}\,{{B_{1}}^4}\,B_{2}
+ 13680\,A\,{B^4}\,A_{1}\,{{B_{1}}^5}\,B_{2}$$$$+ 9696\,{A^2}\,{B^3}\,{{B_{1}}^6}\,B_{2}
- 144\,{B^4}\,A_{2}\,{{B_{1}}^6}\,B_{2}
- 2628\,{B^3}\,A_{1}\,{{B_{1}}^7}\,B_{2}
+ 21672\,A\,{B^2}\,{{B_{1}}^8}\,B_{2}$$$$- 15444\,B\,{{B_{1}}^{10}}\,B_{2}
+ 1792\,{A^4}\,{B^6}\,{{B_{2}}^2}
+ 192\,A\,{B^7}\,{{A_{1}}^2}\,{{B_{2}}^2}
+ 60624\,{A^2}\,{B^4}\,{{B_{1}}^4}\,{{B_{2}}^2}$$$$- 19328\,{A^3}\,{B^5}\,{{B_{1}}^2}\,{{B_{2}}^2}
+ 576\,{B^6}\,{{A_{1}}^2}\,{{B_{1}}^2}\,{{B_{2}}^2}
+ 7776\,A\,{B^5}\,A_{1}\,{{B_{1}}^3}\,{{B_{2}}^2}
- 928\,{A^2}\,{B^6}\,A_{1}\,B_{1}\,{{B_{2}}^2}$$$$- 6984\,{B^4}\,A_{1}\,{{B_{1}}^5}\,{{B_{2}}^2}
- 69120\,A\,{B^3}\,{{B_{1}}^6}\,{{B_{2}}^2} +
25380\,{B^2}\,{{B_{1}}^8}\,{{B_{2}}^2}
+ 128\,{A^3}\,{B^6}\,{{B_{2}}^3}$$$$\left.
- 1152\,A\,{B^6}\,A_{1}\,B_{1}\,{{B_{2}}^3}
- 3072\,{A^2}\,{B^5}\,{{B_{1}}^2}\,{{B_{2}}^3}
+ 3888\,{B^3}\,{{B_{1}}^6}\,{{B_{2}}^3}
+ 576\,{A^2}\,{B^6}\,{{B_{2}}^4}
\right]$$$$+ {{B_{3}} \over {4\,{B^2}\,{X^2}}}
\left[
-10\,A\,{B^3}\,A_{1}
- 24\,{A^2}\,{B^2}\,B_{1}
+ 19\,{B^2}\,A_{1}\,{{B_{1}}^2}
+ 36\,A\,B\,{{B_{1}}^3}
\right.$$$$\left.
- 6\,{{B_{1}}^5}
+ 8\,A\,{B^2}\,B_{1}\,B_{2}
- 36\,B\,{{B_{1}}^3}\,B_{2}
\right]$$$$c_9=
{1 \over {8\,{B^3}\,{X^3}}}
\left[
-16\,{A^3}\,{B^5}\,A_{1}
+ 8\,{B^6}\,{{A_{1}}^3}
+ 576\,{A^4}\,{B^4}\,B_{1}
- 16\,A\,{B^5}\,{{A_{1}}^2}\,B_{1}
\right.$$$$+ 160\,{A^2}\,{B^4}\,A_{1}\,{{B_{1}}^2}
- 2960\,{A^3}\,{B^3}\,{{B_{1}}^3}
+ 12\,{B^4}\,{{A_{1}}^2}\,{{B_{1}}^3}
- 324\,A\,{B^3}\,A_{1}\,{{B_{1}}^4}$$$$+ 5400\,{A^2}\,{B^2}\,{{B_{1}}^5}
+ 90\,{B^2}\,A_{1}\,{{B_{1}}^6}
- 4176\,A\,B\,{{B_{1}}^7}
+ 1107\,{{B_{1}}^9}$$$$+ 56\,{A^2}\,{B^5}\,A_{1}\,B_{2}
+ 960\,{A^3}\,{B^4}\,B_{1}\,B_{2}
- 48\,{B^5}\,{{A_{1}}^2}\,B_{1}\,B_{2}
- 144\,A\,{B^4}\,A_{1}\,{{B_{1}}^2}\,B_{2}$$$$- 4032\,{A^2}\,{B^3}\,{{B_{1}}^3}\,B_{2}
+ 234\,{B^3}\,A_{1}\,{{B_{1}}^4}\,B_{2}
+ 6228\,A\,{B^2}\,{{B_{1}}^5}\,B_{2}
- 3186\,B\,{{B_{1}}^7}\,B_{2}$$$$\left.
+ 48\,A\,{B^5}\,A_{1}\,{{B_{2}}^2}
- 240\,{A^2}\,{B^4}\,B_{1}\,{{B_{2}}^2}
+ 216\,A\,{B^3}\,{{B_{1}}^3}\,{{B_{2}}^2}
- 108\,{B^2}\,{{B_{1}}^5}\,{{B_{2}}^2}
\right]
- {{B_{3}}\over B}$$$$c_{10}=
{1 \over {8{B^2}{X^2}}}
\left[
4B^3A_1(B A_1
- 4AB_1)
+ 12 B^2B_1^2 (18A^2
- A_1B_1)+B_1^4(387 B_1^2
- 528AB)
\right]
- {{B_{2}}\over B}$$$$c_{11}=
{1 \over {4\,{B^3}\,{X^3}}}
\left[
(64\,{A^4}\,{B^4}
+ 16\,{A^2}\,{B^4}\,A_{1}\,B_{1}
- 832\,{A^3}\,{B^3}\,{{B_{1}}^2}
- 40\,{B^4}\,{{A_{1}}^2}\,{{B_{1}}^2}
\right.$$$$+ 16\,A\,{B^4}\,A_{2}\,{{B_{1}}^2}
+ 40\,A\,{B^3}\,A_{1}\,{{B_{1}}^3}
+ 3176\,{A^2}\,{B^2}\,{{B_{1}}^4}
- 24\,{B^3}\,A_{2}\,{{B_{1}}^4}
- 24\,{B^2}\,A_{1}\,{{B_{1}}^5}$$$$- 4356\,A\,B\,{{B_{1}}^6}
+ 2070\,{{B_{1}}^8}
- 96\,{A^3}\,{B^4}\,B_{2}
+ 40\,A\,{B^4}\,A_{1}\,B_{1}\,B_{2}
+ 160\,{A^2}\,{B^3}\,{{B_{1}}^2}\,B_{2}$$$$+ 84\,{B^3}\,A_{1}\,{{B_{1}}^3}\,B_{2}
- 420\,A\,{B^2}\,{{B_{1}}^4}\,B_{2}
+ 234\,B\,{{B_{1}}^6}\,B_{2}
- 72\,A\,{B^3}\,{{B_{1}}^2}\,{{B_{2}}^2}
+ 36\,{B^2}\,{{B_{1}}^4}\,{{B_{2}}^2}
)\,{\rm C}$$$$+ ( - 16\,{A^2}\,{B^5}\,A_{1}
+ 96\,{A^3}\,{B^4}\,B_{1}
+ 80\,{B^5}\,{{A_{1}}^2}\,B_{1}
- 32\,A\,{B^5}\,A_{2}\,B_{1}
- 56\,A\,{B^4}\,A_{1}\,{{B_{1}}^2}$$$$- 808\,{A^2}\,{B^3}\,{{B_{1}}^3}
+ 48\,{B^4}\,A_{2}\,{{B_{1}}^3}
- 24\,{B^3}\,A_{1}\,{{B_{1}}^4}
+ 1140\,A\,{B^2}\,{{B_{1}}^5}$$$$- 612\,B\,{{B_{1}}^7}
- 40\,A\,{B^5}\,A_{1}\,B_{2}
+ 224\,{A^2}\,{B^4}\,B_{1}\,B_{2}
- 228\,{B^4}\,A_{1}\,{{B_{1}}^2}\,B_{2}$$$$+ 108\,A\,{B^3}\,{{B_{1}}^3}\,B_{2}
+ 54\,{B^2}\,{{B_{1}}^5}\,B_{2}
+ 120\,A\,{B^4}\,B_{1}\,{{B_{2}}^2}
- 36\,{B^3}\,{{B_{1}}^3}\,{{B_{2}}^2}
)\,{\rm C}_{1}$$$$+ ( 64\,{A^3}\,{B^5}
- 20\,{B^6}\,{{A_{1}}^2}
+ 8\,A\,{B^6}\,A_{2}
- 16\,A\,{B^5}\,A_{1}\,B_{1}
- 112\,{A^2}\,{B^4}\,{{B_{1}}^2}$$$$- 12\,{B^5}\,A_{2}\,{{B_{1}}^2}
+ 60\,{B^4}\,A_{1}\,{{B_{1}}^3}
+ 120\,A\,{B^3}\,{{B_{1}}^4}
- 45\,{B^2}\,{{B_{1}}^6}
- 16\,{A^2}\,{B^5}\,B_{2}$$$$\left.
+ 72\,{B^5}\,A_{1}\,B_{1}\,B_{2}
- 48\,A\,{B^4}\,{{B_{1}}^2}\,B_{2}
- 72\,{B^3}\,{{B_{1}}^4}\,B_{2}
- 24\,A\,{B^5}\,{{B_{2}}^2}
)\,{\rm C}_{2}
\right]$$$$+ { {{\rm C}_{3}} \over {2\,{X^2}}}
\left[
2\,{B^2}\,A_{1}
- 4\,A\,B\,B_{1}
- 3\,{{B_{1}}^3}
\right]$$$$c_{12}=
{1 \over {{X^2}}}
\left[
{1 \over {{B^2}}}
(20\,{A^2}\,{B^2}\,{{{\rm C}}^2}
- 56\,A\,B\,{{{\rm C}}^2}\,{{B_{1}}^2}
+ 41\,{{{\rm C}}^2}\,{{B_{1}}^4}
- 8\,A\,{B^2}\,{\rm C}\,B_{1}\,{\rm C}_{1}
+ { {{B^2}\,{{{\rm C}_{2}}^2}} \over 2}
\right.$$ $$\left. + 4\,A\,{B^3}\,{{{\rm C}_{1}}^2} + 2\,{B^2}\,{{B_{1}}^2}\,{{{\rm C}_{1}}^2}) + 2\,B_{1}\,( {\rm C}\,B_{1} - 2\,B\,{\rm C}_{1} ) \,{\rm C}_{2} + 4\,B\,{\rm C}\,{{B_{1}}^3}\,{\rm C}_{1} \right]$$ \[2.17\]
Let us now make some comments concerning the above result. The one-loop divergences (\[2.2\]), (\[2.17\]) essentially depend on the choice of the functions $A(\phi), B(\phi), C(\phi)$ in the starting action (\[0.1\]). In particular, this dependence concerns the $c_w$ and $c_r$ functions, which correspond to the terms of second power in the curvature tensor. The above expressions are valid only in the case $\;X=2A\,B\,-3B_{1}^{2}\neq 0$. For $X=0$ the calculational scheme must be modified because of an extra conformal symmetry. In this case one has to introduce an additional gauge fixing condition for the conformal symmetry. It is easy to see that if such a condition is taken in the form $h=0$ then the degeneracy of $\hat{K}$ is removed.
The curvature squared terms in (\[2.17\]) are in good accord with the same terms calculated earlier in [@hove]. The direct comparison of (\[2.17\]) with the results of other authors is difficult since they have used a different choice of quantum variables.
One-loop finiteness and renormalizability on shell
==================================================
And so we observe that the one-loop divergences in the theory under discussion have a rather complicated form, and include all the possible structures listed in (\[2.2\]). In this respect the theory is similar to General Relativity, where all possible counterterms also appear [@hove]. It is well known that in the latter case all the counterterms disappear on mass shell. Therefore it is interesting to consider the divergences (\[2.17\]) when the equations of motion (\[2.1\]) are taken into account. Then, with the use of (\[2.5\]) and (\[2.6\]) we obtain the one-loop divergences in the form $$\Gamma^{1-loop}_{\rm on-shell,\, div}= -\,{1\over \epsilon}\int d^4x\,\sqrt{-g}\;\left[\,k_w\,C^2 + k_{rr}\,R^2 + k_r\,R + k_l\,\right]$$ \[3.1\] where $\epsilon = (4\pi)^2 \;(n-4)$ and the values of the coefficients are $$k_l(\phi) =
{\it c_{12}} + {\it c_{10}}\,{w^2} + {\it c_{11}}\,y + {\it c_9}\,w\,y
+ {\it c_8}\,{y^2}$$$$k_r(\phi)=
{\it c_7} - f\,u + {\it c_5}\,w + {{f\,w}\over 2} + {\it c_{11}}\,x +
{\it c_9}\,w\,x +$$$$+ {\it c_4}\,y + {{{\it c_6}\,y}\over 2} - {{f\,r\,y}\over 2} + 2\,
{\it c_8}\,x\,y +
2\,{\it c_{10}}\,w\,z + {\it c_9}\,y\,z$$$$k_{rr}(\phi) =
{\it c}_r - {{f\,s}\over 3} - f\,t + {\it c_4}\,x + {{{\it c_6}\,x}\over 2} -
{{f\,r\,x}\over 2} + {\it c_8}\,{x^2} + {\it c_5}\,z + {{f\,z}\over 2} +
{\it c_9}\,x\,z + {\it c_{10}}\,{z^2}$$ $$k_w(\phi) = {\it c}_w - {{f\,s}\over 2}$$ \[3.2\] and $x,y,z,w,t,u,f$ have been defined in (\[2.5\]), (\[2.6\]). It is very important that the values of $k_{w,rr,r,l}$ do not depend on the choice of gauge fixing. From this it follows that the finite solutions which we find below are gauge independent and therefore well defined.
The divergences in the higher derivative dimensionless sector do not depend on the dimensional function $C(\phi)$ and can therefore be analyzed independently of the others. Let us start the search for finite solutions with the higher derivative structures and consider the equations $k_w=0$ and $k_{rr}=0$. In order to find the solutions we must solve these differential equations, which looks like an extremely difficult problem. However one can see that the divergent coefficients (\[2.17\]) possess some homogeneity. Taking this into account one can successfully find the finite solutions. In fact we have found three finite solutions of power type $$A(\phi)=a\,\phi^m,\qquad B(\phi)=b\,\phi^{m+2}$$ \[3.3\] with different real values of $m$ and of the ratio $\frac{a}{b}$. $$(m_1,\frac{a}{b})= (-23.4851... ,1413.45...),$$ \[3.41\] $$(m_2,\frac{a}{b})= (-0.526300...,2.72924...),$$ \[3.42\] $$(m_3,\frac{a}{b})= (-0.317820...,4.09345...)$$ \[3.43\] The conditions $k_w=0$ and $k_{rr}=0$ do not fix the values of the constants $a, b$ but only their ratio. Let us notice that the algebraic equation for $m$ is of fifth order and therefore the existence of a real solution is guaranteed independently of the numerical details in the expression (\[2.17\]). Thus we have found the form of the functions $A(\phi)$ and $B(\phi)$ for which our dilaton model without the cosmological term $C(\phi)$ is finite on shell. Note that the $C(\phi)$ term does not contribute to the higher derivative counterterms due to its dimension. However, if this term is included, the situation becomes a little bit more complicated.
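As a cross-check of the homogeneity property invoked above, the following sketch (an illustration added here, not part of the original derivation) substitutes the power-law ansatz (\[3.3\]) into the first coefficient of (\[2.17\]), $c_w = 43/120 - B_1^2/X$ with $X = 2AB - 3B_1^2$, and confirms that the $\phi$ dependence drops out, leaving a function of $m$ and $a/b$ only; the full conditions $k_w = k_{rr} = 0$ behave in the same way, which is what reduces them to algebraic equations for $m$ and $a/b$.

```python
import sympy as sp

# Homogeneity check: with A = a*phi**m and B = b*phi**(m+2), the coefficient
# c_w = 43/120 - B_1**2/X,  X = 2*A*B - 3*B_1**2   (first entry of (2.17))
# loses its phi dependence and depends only on m and the ratio a/b.
phi, a, b, m = sp.symbols('phi a b m', positive=True)
A = a * phi**m
B = b * phi**(m + 2)
B1 = B.diff(phi)
X = 2 * A * B - 3 * B1**2
c_w = sp.Rational(43, 120) - sp.cancel(B1**2 / X)
print(c_w)              # a phi-independent rational function of a, b and m
print(c_w.has(phi))     # -> False
```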
If we substitute the expressions (\[3.3\]) into the classical action (\[0.1\]), then the term linear in curvature is equal to zero. Thus the theory under consideration can be renormalizable on shell only if $k_r=0$. Therefore we must choose $C(\phi)$ in such a way that the counterterm linear in curvature is absent on shell.
If we choose $C(\phi)=c \phi^{2(m+2)}$ (which corresponds to the same power of $\phi$ in both $C(\phi)$ and the counterterm), then the classical action of the theory has the form $$S= \int d^4 x\,\sqrt{-g}\,\left\{\, a\,\phi^{m}\,(\nabla\phi)^2 + b\,\phi^{m+2}\,R + c\,\phi^{2(m+2)}\,\right\}$$ \[3.5\] and the on-shell divergences are $$\Gamma^{1-loop}_{\rm on-shell,\, div}=-\,{1\over\epsilon}\int d^4x\,\sqrt{-g}\;\Big[\, s_1\,(\ldots)\, R + s_2\,(\ldots)\,\Big]$$ \[3.6\] where $s_1, s_2$ have the approximate numerical values (here and below we omit dots) $$(s_1, s_2)=(1.45386, 0.05772),\;\;\;
(-0.55182, -3.95364),\;\;\;
(-0.105262, -3.06052)$$ for the solutions (\[3.41\])-(\[3.43\]), respectively. Thus for this choice of $C(\phi)$ the theory is not renormalizable even on mass shell. Hence we have to look for other values of the power.
Substituting (\[3.3\]) into the expression for $k_r$ we obtain an ordinary differential equation for $C(\phi)$, $$\lambda_0\,\phi^3\,C_3+\lambda_1\,\phi^2\,C_2+\lambda_2\,\phi\,C_1+ \lambda_3\, C=0$$ \[3.7\] Here the indices stand for the derivatives with respect to $\phi$ (\[1.6\]). For the three versions of $(m,\frac{a}{b})$ given in (\[3.41\]), (\[3.42\]), (\[3.43\]) respectively, the constants $\lambda_{0,1,2,3}$ have the following values: $$\lambda_0=-2.49128\,,\;\;\lambda_1= -813.35\,,\;\; \lambda_2= -6170.03\,,\;\;\lambda_3= 1310940$$ \[3.8\] $$\lambda_0= 1.25083\,,\;\;\lambda_1=-7.41961\,,\;\; \lambda_2=13.6904\,,\;\;\lambda_3=-4.71357$$ \[3.9\] $$\lambda_0= 2.94653\,,\;\;\lambda_1=-23.1234\,,\;\; \lambda_2=60.081\,,\;\;\lambda_3=-50.1804$$ \[3.10\]
The equations (\[3.7\]) can be easily solved in the form $$C_i(\phi)=L_{i1}\,\phi^{k_{i1}}+L_{i2}\,\phi^{k_{i2}}+L_{i3}\,\phi^{k_{i3}}$$ \[3.11\] where $C_i(\phi)$ corresponds to $m_i$, $L_{11,12,...,33}$ are arbitrary integration constants, and the following powers correspond to the coefficients (\[3.8\]), (\[3.9\]), (\[3.10\]): $$k_{11}=35.4101\,,\;\; k_{12}=-47.7636\,,\;\; k_{13}=-311.125$$ \[3.111\] $$k_{21}=5.77717\,,\;\; k_{22}=0.222461\,,\;\; k_{23}=2.93212$$ \[3.121\] $$k_{31}=3.36417\,,\;\; k_{32}=0.752027\,,\;\; k_{33}=6.73149$$ \[3.131\]
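The exponents quoted in (\[3.111\])-(\[3.131\]) can be checked directly: reading (\[3.7\]) as an equidimensional (Euler) equation, the substitution $C=\phi^k$ turns it into the cubic indicial equation $\lambda_0 k(k-1)(k-2)+\lambda_1 k(k-1)+\lambda_2 k+\lambda_3=0$. The short sketch below (added for illustration) solves this cubic for the three sets of $\lambda$'s in (\[3.8\])-(\[3.10\]) and recovers the powers listed above.

```python
import numpy as np

# Substituting C(phi) = phi**k into the Euler-type equation (3.7) gives the
# indicial equation  lam0*k*(k-1)*(k-2) + lam1*k*(k-1) + lam2*k + lam3 = 0,
# whose roots should reproduce the exponents k_{i1,i2,i3} of (3.111)-(3.131).
lambdas = {
    "(3.8)":  (-2.49128, -813.35, -6170.03, 1310940.0),
    "(3.9)":  (1.25083, -7.41961, 13.6904, -4.71357),
    "(3.10)": (2.94653, -23.1234, 60.081, -50.1804),
}
for label, (l0, l1, l2, l3) in lambdas.items():
    # expand in powers of k: l0*k**3 + (l1 - 3*l0)*k**2 + (2*l0 - l1 + l2)*k + l3
    roots = np.roots([l0, l1 - 3*l0, 2*l0 - l1 + l2, l3])
    print(label, np.sort(roots.real))
# e.g. (3.9) gives k ~ 0.2225, 2.932, 5.777, in agreement with (3.121)
```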
The above solutions give the form of the function $C(\phi)$ which ensures the absence of the non-renormalizable $k_r\, R$-type counterterm. The only structures which survive on shell, in both the classical action and the counterterms, are the potential ones. In this respect the theory under consideration shares the corresponding property of Einstein gravity, where only the cosmological term remains when one uses the equations of motion. In Einstein gravity with a cosmological term this leads to the renormalizability of the theory on shell and enables one to consider the renormalization group equation for the cosmological constant.
Let us do the same and consider the renormalization of the potential function $C(\phi)$. We consider only the simple case of $C(\phi)=L\phi^{k}$, that is, we take two of the $L$'s equal to zero and the third arbitrary. The renormalization of the potential function follows from (\[3.3\]), (\[3.2\]), (\[2.17\]) and has the form $$L^{(0)}\,\left(\phi^{(0)}\right)^{k} = \mu^{n-4}\,\left[\,L\,\phi^{k} + \frac{Q}{\epsilon}\,\frac{L^2}{b^2}\;\phi^{\,2k-2m-4}\,\right]$$ \[3.12\]
Here we have included the factors of dimensional parameter $\mu$ related with the use of dimensional regularization. $Q$ is some number, which depends on the ratio of $a$ and $b$. The values of $Q$ which correspond to the $k$’s from (\[3.111\]) - (\[3.131\]) and $m$’s from (\[3.41\]) are Q\_[11]{}=10.2275, Q\_[12]{}=-1.51342, Q\_[13]{}=1249.04 \[3.211\] Q\_[21]{}=19.7562, Q\_[22]{}=26.9177, Q\_[23]{}=-1.39468 \[3.221\] Q\_[31]{}=-1.42338, Q\_[32]{}=268.66, Q\_[33]{}=664.993 \[3.231\]
It is easy to see that the potential-type divergences cannot be removed by a transformation of $L$ but only by the following renormalization transformation of the scalar field $\phi$: $$\phi^{(0)}=\phi\,\left[\,1 + \frac{Q}{\epsilon}\,\frac{L}{b^2}\;\phi^{\,k-2m-4}\,\right]^{1/k}$$ \[3.13\]
The relation (\[3.12\]) does not enable us to find the dimensions of both $L$ and $\phi$. Since all three terms in (\[3.12\]) have (if $n=4$) dimension 4, we can easily obtain two equations for the dimensions $d_L$, $d_b$ (the dimension of the constant $b$) and $d_\phi$: $$d_L + k\;d_\phi = 4$$ $$d_L + (k-m-2)\, d_\phi - d_b = 2$$ \[3.14\] Another equation for these dimensions comes from the starting action, $$d_b + (m+2)\, d_\phi = 2$$ \[3.103\] together with $d_a = d_b$. However the last three equations are dependent and thus we can only express the dimensions of $a, b, L$ via $d_\phi$.
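The statement that the three dimension relations are dependent can be seen explicitly: subtracting the second equation of (\[3.14\]) from the first reproduces (\[3.103\]). A minimal check (added for illustration) is given below; it confirms that $d_L$ and $d_b$ are fixed in terms of $d_\phi$, which remains free.

```python
import sympy as sp

# The dimension constraints (3.14) and (3.103) form a consistent but dependent
# linear system, so d_L and d_b can only be expressed through d_phi.
d_L, d_b, d_phi, k, m = sp.symbols('d_L d_b d_phi k m')
eqs = [
    sp.Eq(d_L + k * d_phi, 4),                    # first of (3.14)
    sp.Eq(d_L + (k - m - 2) * d_phi - d_b, 2),    # second of (3.14)
    sp.Eq(d_b + (m + 2) * d_phi, 2),              # (3.103)
]
sol = sp.solve(eqs, [d_L, d_b], dict=True)
print(sol)   # d_L = 4 - k*d_phi,  d_b = 2 - (m+2)*d_phi;  d_phi stays free
```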
The renormalization relation (\[3.13\]) together with (\[3.14\]) and (\[3.103\]) enables us to explore the renormalization group equations for the effective charges. The renormalization group function for the scalar field $\phi$ is defined in the standard way and has the form $$\gamma_\phi = \frac{d\phi}{dt} = -\,(\ldots)\;\phi + (\ldots)\,(k-2m-4)\;\phi^{\,k-2m-3}$$ \[3.14x\] Since we consider the four dimensional theory, the first term in the $rhs$ can be omitted and thus we arrive at the following renormalization group equations for $L(t)$ and $\phi(t)$ $$(4\pi)^2\;\frac{d L}{dt} = (k\;d_\phi - 4)\;L$$ $$(4\pi)^2\;\frac{d\phi}{dt} = (\ldots)\,(k-2m-4)\;\phi^{\,k-2m-3} - d_\phi\;\phi$$ \[3.14a\] Indeed one can easily write similar equations for $a(t)$ and $b(t)$. Since these constants are not renormalized, their values depend on the scale only through the classical dimensions. Since all these dimensions are ill defined, the equations look rather artificial. The only way to extract some information is to consider quantities with a definite dimension.
One can see that in the theory under consideration all the on-shell divergences can be removed by the renormalization of the dilaton field. As a result there are nontrivial renormalization group equations for $\phi$, which acquires an anomalous dimension. The scale dependence of the parameters $a, b, L$ and of the field $\phi$ leads to an effective running of the cosmological constant. To see this let us consider the effective potential of the scalar field. Once the normalization conditions are introduced, the effective potential has the form $$V_{eff} = \left\{\, b\,\phi^{m+2}\, R + L\, \phi^{k}\,\right\} + \ldots$$ \[3.115\] where $M_i(\phi)$ are the effective masses, which are the eigenvalues of the operators (\[2.12\]) and (\[2.16\]), and the algebraic summation is performed according to the rule (\[2.9\]). $\mu$ is the dimensional parameter of renormalization, and $\gamma$ is the renormalization group function for the dilaton (\[3.14x\]). Since we are interested in the qualitative scaling behavior of the Newtonian and cosmological constants, the logarithmic corrections in (\[3.115\]) are not relevant and one can deal with the renormalization group improved classical potential of the theory, $$V_{imp} = b(t)\,\phi^{m+2}(t)\,R+L(t)\, \phi^{k}(t)$$ \[3.116\] One can easily see that for some of the solutions (\[3.111\]) - (\[3.131\]), (\[3.41\]) - (\[3.43\]) the potential (\[3.116\]) satisfies the criteria of a second order phase transition $$\frac{\partial V_{imp}}{\partial\phi} = 0\,,\qquad \frac{\partial^2 V_{imp}}{\partial\phi^2} > 0$$ \[3.117\] The first of (\[3.117\]) leads to the relation between the critical values of the scalar field and the curvature $$R_c = -\,\frac{k\,L}{(m+2)\,b}\;\phi_c^{\,k-m-2}$$ \[3.118\] and the second imposes the restriction $k - m > 2$. The last condition is satisfied for the models with $k_{11}, k_{21}, k_{23}, k_{31}, k_{33}$ (\[3.111\]) - (\[3.131\]) and the corresponding values of $m$ in (\[3.41\]) - (\[3.43\]). Note that the stability of the vacuum at the classical level requires positive $k, m+2, b$ and negative $L$. Thus the value of $k_{11}$ is not compatible with the physical requirements. However the other four models are, and thus our theory allows second order phase transitions. At the minimum $\phi_0$ the renormalization group improved classical potential has the form of the Hilbert-Einstein action $$S_{min}=\int d^4x\,\sqrt{-g}\,\left\{-\,\frac{1}{G_{ind}}\,R+\Lambda_{ind}\right\}$$ \[3.119\] where the induced values of the Newtonian $G_{ind}$ and cosmological $\Lambda_{ind}$ constants are defined as $G_{ind}^{-1}=b\;\phi_0^{m+2}$ and $\Lambda_{ind} = L\;\phi_0^k$. Since the parameters $b$ and $L$ are not renormalized, the scaling dependence of $G_{ind}$ and $\Lambda_{ind}$ is caused only by the renormalization of the dilaton (\[3.13\]) and by the classical dimensions of the constants, which are well defined in this case. Since the classical dimension will surely dominate in the renormalization group equations for $G_{ind}$ and $\Lambda_{ind}$, it is reasonable to rewrite the renormalization group equation (\[3.14a\]) in terms of the dimensionless parameter $\eta(t) = \Lambda_{ind}(t)\;G_{ind}^2(t)$. The equation for $\eta(t)$ has a remarkably simple form $$(4\pi)^2\,\frac{d\eta}{dt} = Q(a/b)\; \eta^2$$ \[3.120\] which indicates the standard asymptotic behavior of this parameter. And so the scaling behaviour of the induced cosmological constant is defined by the sign of the number $Q(a/b)$. One can see from (\[3.211\]) - (\[3.231\]) that for the physically relevant solutions with $k_{21}, k_{33}$ from (\[3.121\]), (\[3.131\]) the value of $Q$ is positive. In these models the effective cosmological constant decreases at low energies.
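Equation (\[3.120\]) integrates in closed form, which makes the claimed infrared behaviour easy to see. The short sketch below is an illustration only, assuming $t=\ln(\mu/\mu_0)$ and an arbitrary initial value $\eta_0$; it uses the $Q_{21}$ value from (\[3.221\]) and shows $\eta$ shrinking monotonically towards the IR.

```python
import numpy as np

# Closed-form solution of (3.120), (4*pi)**2 * d(eta)/dt = Q * eta**2:
#   eta(t) = eta0 / (1 - Q*eta0*t / (4*pi)**2),   with t = ln(mu/mu0).
# For Q > 0 (e.g. Q_21 = 19.7562 from (3.221)) eta decreases towards the IR
# (t -> -infinity), i.e. the induced cosmological constant runs to zero.
def eta(t, eta0=1.0, Q=19.7562):
    return eta0 / (1.0 - Q * eta0 * t / (4.0 * np.pi) ** 2)

for t in (0.0, -5.0, -20.0, -100.0):
    print(t, eta(t))   # monotonically decreasing for t < 0
```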
Thus we observe that the quantum effects in these versions of the theory under consideration describe the vanishing of the cosmological constant in the far IR. As a result the observable low energy value of the cosmological constant is small as compared with the high energy value. On the contrary, for $k_{23}, k_{31}$ (\[3.121\]), (\[3.131\]) the sign of $Q$ is negative and the effective cosmological constant decreases at high energy scales.
And so we have found that the model (\[0.1\]) leads to an effective running of the cosmological constant if the latter is measured in units of the Newtonian constant. Earlier, the running of these couplings has been studied in the framework of higher derivative models [@merc], in gauge models on an external classical background [@lam], and also in the higher derivative dilaton model [@ejos]. An interesting property of our model is that here we observe qualitatively different asymptotic regimes. In this respect the theory of one-loop quantum dilaton gravity shares the features of the higher derivative dilaton model [@ejos].
Interaction with matter fields
==============================
In the previous sections we have shown that there are some special versions of the general dilaton model (\[0.1\]) which are renormalizable on shell at the one-loop level. In spite of the fact that the expression for the counterterms (\[2.17\]) is very cumbersome (one can truly say terrible), all those counterterms vanish on shell if the starting functions are chosen in a special way. Thus the special cases (\[3.41\]) - (\[3.43\]), (\[3.111\]) - (\[3.131\]) share the same property as quantum Einstein gravity (with a cosmological constant), which is also renormalizable on shell [@hove; @cosm]. However in the latter case the interaction with matter fields leads to the violation of the one-loop on shell renormalizability [@dene]. In this section we show that for our dilaton model this is also the case. If the matter fields are included, the arbitrary functions $A(\phi), B(\phi)$ cannot be fine tuned in such a way that the on shell higher derivative counterterms vanish.
Below we consider this in some detail. One can suppose for simplicity that the matter action is composed of vector fields only, that the action of the matter fields has the standard form, and that the matter fields do not interact with the dilaton field $\phi$ directly, but only via the metric. Thus we consider the simplest case, which corresponds to the first work in [@dene], where gravity was described by the Hilbert-Einstein action. According to [@dene] the renormalizability of the Einstein-Maxwell system is violated by counterterms like $T_{\mu\nu}\;T^{\mu\nu}$ and $R_{\mu\nu}\;T^{\mu\nu}$. Here $T_{\mu\nu}$ is the Energy-Momentum Tensor of the matter $$T_{\mu\nu} = - \frac{2}{\sqrt{-g}}\;\frac{\delta S_m}{\delta g^{\mu\nu}}$$ which is traceless in four dimensions, $T_{\mu\nu}\;g^{\mu\nu}=0$, which reflects the conformal invariance of the matter fields action.
The contributions of the matter fields and the ones of the “mixed sector” to the one-loop counterterms in our metric-dilaton gravity (after the renormalization in the matter fields sector, which is however not necessary for vectors [@dene]) lead to the following change of the counterterms. The general form of the divergences (\[2.2\]) is changed because of the $T_{\mu\nu}\;T^{\mu\nu}$ and $R_{\mu\nu}\;T^{\mu\nu}$ terms, and moreover the coefficient $c_w$ acquires the addition $$c_w \rightarrow c_w + \ldots$$ \[3.15\] The contributions to $c_r$ are forbidden by conformal invariance (see, for example, [@book]) and the others are absent because the matter fields decouple from the dilaton.
Let us now discuss the on shell renormalization, which is a little bit more complicated. The first of the classical equations of motion (\[2.1\]) acquires the additional term $\frac{1}{2}\;T_{\mu\nu}$ on the $rhs$. However the second equation (\[2.1\]) remains unchanged, as well as the trace of the first one. The transition on shell is performed with the use of the formulas (\[2.4\]) - (\[2.7\]). Since (\[2.4\]) and (\[2.5\]) are based just on the second equation (\[2.1\]) and on the trace of the first one, we conclude that for the theory with matter (\[2.4\]) and (\[2.5\]) are the same as in pure metric-dilaton gravity. The detailed analysis shows that the equation (\[2.7\]) is also the same. However, the expression (\[2.6\]) changes according to $$s\,R_{\mu\nu}\,R^{\mu\nu}\;\rightarrow\; s\,R_{\mu\nu}\,R^{\mu\nu} - (\ldots)\; R^{\mu\nu}\,T_{\mu\nu}$$ \[3.16\] Therefore there is the following contribution of the matter fields sector to the on shell divergences $$-\,\frac{1}{\epsilon}\int d^4 x\,\sqrt{-g}\;(\ldots)\;R^{\mu\nu}\,T_{\mu\nu}$$ \[3.17\] which has to be added to (\[3.1\]) together with the original $T_{\mu\nu}\;T^{\mu\nu}$ and $R_{\mu\nu}\;T^{\mu\nu}$ terms. Moreover one must take into account the numerical change in $k_w$ which corresponds to (\[3.15\]) and (\[3.2\]).
Thus if we try to cancel the higher derivative on shell divergences by choosing the functions $A$ and $B$, we face a more difficult problem than the one we met in Section 5. As it was already pointed out there, the equations $k_w = k_{rr} = 0$ have real solutions and therefore the $R^2$-type counterterms can be removed. However the $T_{\mu\nu}\;T^{\mu\nu}$ and $R_{\mu\nu}\;T^{\mu\nu}$ structures survive, and we have no free parameters left to cancel them. Thus one can observe that the on shell renormalizability is lacking in our metric-dilaton theory just as in purely metric gravity [@dene]. Moreover the renormalizability is violated by exactly the same two structures, which are related to the traceless Energy-Momentum Tensor of the matter fields.
Two special cases
=================
In this section we briefly discuss two special cases of the theory (\[0.1\]) which are of special physical interest. One can consider this part of the paper as some kind of quantum gravity phenomenology.
i\) Let us consider the theory (\[1.12\]) that is classically equivalent to the special version of higher derivative quantum gravity (\[1.11\]). The transition to the quantum theory can be performed by introducing the generating functional of Green functions. If one introduces an external source for the auxiliary scalar field $\phi$, then the direct link between the two models (\[1.11\]) and (\[1.12\]) will be lost. Therefore if we want to keep such a link, the external source must be introduced for the metric only. If we consider the effective action in a background gauge, then the absence of an external source for the scalar corresponds to the absence of the background scalar field. In this case the only counterterms in the theory are the ones of the $c_w, c_r, c_7, c_{12}$ type in (\[2.2\]) and only one of them, namely $c_w$, violates the renormalizability. If we restrict ourselves to conformally flat backgrounds, then the theory is renormalizable and we can construct the renormalization group equations for the effective couplings $\alpha(t), G(t), \Lambda(t)$. The study of these equations shows that the theory possesses a cosmologically acceptable regime and, in particular, the quantum effects lead to an exponential decrease of $\alpha(t)$ and $\Lambda(t)$ at high energies [@OV]. Indeed, the additional restriction on the background metric is not completely consistent from a formal point of view. In this way we remove the divergent diagrams with massive spin-two particles by hand. On the other side, in higher derivative gravity the existence of these particles (which have the wrong sign of the kinetic term and thus are unphysical) leads to the well known unitarity problem, so their removal here is not much worse than their existence. The detailed analysis of the renormalization group equations in the model (\[1.11\]) and their cosmological consequences will be given in [@OV].
ii\) Another interesting particular case is related to an extra condition on the background. One can suppose that the background scalar field varies slowly as compared with the metric, and remove all the terms with derivatives of the scalar. Then we find that the only types of counterterms which survive are the same as in the previous case. The next natural step is to look for solutions of the equations $c_w = c_r = 0$ and thus construct a theory with a renormalizable potential. We have explored the form (\[3.3\]) of the functions $A(\phi)$ and $B(\phi)$. It turned out that the equations for $a, b, m$ following from the $c_w = c_r = 0$ condition do not have real solutions, and hence this idea does not work. At the same time the above equations for $A(\phi)$ and $B(\phi)$ are ill defined because the conditions $c_w = c_r = 0$ depend on the choice of gauge fixing parameters. A possible way to remove this dependence is related to the use of Vilkovisky's unique effective action [@vilk], which coincides with the conventional effective action on shell, but differs off shell. Within this scheme we would get equations for $A(\phi)$ and $B(\phi)$ which have a structure similar to the above but with different numerical coefficients. One can suppose that, taking into account the Vilkovisky corrections to the divergences, we could obtain real solutions and construct a dilaton theory with a renormalizable potential.
Discussion
==========
In this paper we have considered different aspects of the one-loop renormalization in the theory (\[0.1\]). We have shown that the models of this type can be divided into two classes: models of the first class are conformally equivalent to general relativity and also to a scalar field conformally coupled to gravity. The models of the second class are conformally equivalent to the model (\[1.1\]) with non-constant $\Phi$, and any model of this type can be related to another one by some change of variables together with some change of the potential function.
The one-loop calculations have been carried out for the general model (\[0.1\]) in the original variables, with the use of the background field method and some calculational improvements basically introduced in the similar $d=2$ theory. Our calculational method does not need a conformal transformation of the metric and is therefore applicable (with minor standard modifications) to the higher derivative dilaton model which has been recently formulated in [@ejos; @shja]. The theory under consideration leads to very cumbersome divergences and hence is non-renormalizable in the usual sense. At the same time, if the cosmological (or potential) term $C(\phi)$ is absent, then the theory with the fine-tuned functions $A(\phi)$ and $B(\phi)$ is finite on the classical equations of motion, that is, it possesses the same property as general relativity. If the potential term $C(\phi)$ is included, the theory is renormalizable on shell, and the divergences can be removed by the renormalization of the dilaton field. Summing up, we have constructed 9 versions of the model (\[0.1\]) with $$A(\phi)=a\phi^m,\; \;\;\;\;\;
B(\phi)=b\phi^{m+2} \; \;\;\;\;\;
C(\phi)=L \phi^k$$ with $\frac{a}{b}, m, k$ defined in (\[3.41\]) - (\[3.43\]), (\[3.111\]) - (\[3.131\]). All these versions have qualitatively the same renormalization properties as Einstein gravity with a cosmological term. They are non-renormalizable off shell and renormalizable on shell. If matter fields are included, then the on shell renormalizability is lost. Higher loops are expected to violate the on shell renormalizability because of the appearance of counterterms with third powers of the curvature. The one-loop renormalizability of the theory enables us to apply the renormalization group method to its study. It turns out that the effective potential of the theory indicates the possibility of second order phase transitions, and at the point of minimum the potential has the form of the Hilbert-Einstein action with both the Newtonian and cosmological constants depending on scale. It is important that the results of our analysis are independent of the choice of the gauge fixing condition because we consider the on shell renormalization (see [@VLT] for the most complete investigation of the gauge dependence in quantum field theory).
The one-loop calculations in the model (\[0.1\]) have recently been published in [@BKK]. In that paper, by the use of transformations like (\[1.2\]) (the special form of these transformations was originally introduced in [@hove] for this purpose), the general model is reduced to the special case with $A=-\frac{1}{2}, B=const$, and then the divergences are calculated in the special variables which correspond to this reduced model. We have performed the calculations in the original variables, and in this sense our result differs from the one of [@BKK]. In particular, the use of the original variables allows the direct application to the model (\[1.12\]). Next, if considered off shell, our counterterms differ from the ones derived in [@BKK] because of the different choice of quantum variables. This difference indicates the parametrization dependence of all the counterterms. As a consequence, the generalized beta functions which have been derived in [@BKK] are likely to be parametrization (and probably gauge) dependent. This fact is a direct consequence of the non-renormalizability of the theory in the standard sense.
The calculation in the original variables is especially important for the study of $R+R^2$-gravity. At the quantum level this model is equivalent to some version of (\[0.1\]), but only in the original field variables, since in this case one can avoid the introduction of an external source for the auxiliary scalar field. Introducing an extra constraint on the background metric, one can derive the renormalization group beta functions and explore the asymptotics of the effective charges [@OV].
[**Acknowledgments**]{} The authors are grateful to M. Asorey, I.L. Buchbinder, T. Morozumi, T. Muta and S. Odintsov for stimulating discussions. ILS especially appreciates the contribution of B. Ovrut, who drew his attention to the link between the model (\[0.1\]) and the special version of higher derivative gravity (\[1.12\]). ILS is also grateful to T. Morozumi, T. Muta and to the whole Department of Particle Physics for warm hospitality during his stay at Hiroshima University, and to the Department of Theoretical Physics at the University of Zaragoza for warm hospitality at the present time. The work of ILS has been supported in part by the RFFR (Russia), project no. 94-02-03234, and by ISF (Soros Foundation), grant RI1000.
[99]{}
T. Damour and G. Esposito-Farése, Class. Quantum Grav. [**9**]{}, 2093 (1992); T. Damour and A. M. Polykov, gr-qc/9411069; To appear in Gen.Rel.Grav. (1994).
M.B. Green, J.H. Schwarz and E. Witten, [*Superstring Theory*]{} (Cambridge University Press, Cambridge, 1987).
J.D. Barrow, Nucl.Phys., [**B296**]{}, (1988) 697; J.D. Barrow, S. Cotsakis, Phys.Lett. [**214B**]{}, (1988) 515; A.B. Burd, J.D. Barrow, Nucl.Phys., [**B308**]{}, (1988) 929.
K. Maeda, Phys.Rev. [**D37**]{}, (1988) 858; J.D. Barrow and K. Maeda, Nucl.Phys. [**B341**]{}, (1990) 294; J.D. Barrow, Phys.Rev. [**D47**]{}, (1993) 5329; Phys.Rev. [**D48**]{}, (1993) 3592.
E.J. Weinberg, Phys.Rev. [**D40**]{}, (1989) 3950.
G.L. Cardoso and B.A. Ovrut, [*Natural Supergravity Inflation*]{} CERN-TH.6685/92, UPR-0526T (1992); B.A. Ovrut, [*Talk given on International Conference on Gravity and Field Theory*]{}, Tomsk, August 1994.
T. Damour and K. Nordtvedt, Phys. Rev. [**D48**]{}, 3436 (1993).
B. Whitt, Phys.Lett. [**145B**]{}, (1984) 176.
G. Magnano and L.M. Sokolowski, Phys. Rev. [**50D**]{}, 5039 (1994).
A.A. Starobinskii, Phys.Lett. [**91B**]{}, (1980) 99; L.A. Kofman, A.D. Linde and A.A. Starobinskii, Phys.Lett. [**157B**]{}, (1985) 361.
Barrow J.D. and Ottewill A.C., J.Phys. [**A16**]{}, (1983) 2757; Barrow J.D., Phys.Lett., [**183B**]{}, (1987) 285.
M.B. Mijic, M.M. Morris and W.-M. Suen, Phys.Rev. [**D34**]{}, (1986) 2934.
A.A. Starobinskii and H.J. Schmidt, Class.Quant.Grav. [**4**]{}, (1987) 695.
K.S. Stelle, Phys.Rev. [**16D**]{}, 953 (1977).
B.L. Voronov and I.V. Tyutin, Sov. J. Nucl. Phys. [**23**]{}, 664 (1976).
E.S. Fradkin and A.A. Tseytlin, Nucl. Phys. [**201B**]{}, 469 (1982).
I.G. Avramidi, Yad. Fiz. (Sov. J. Nucl. Phys.) [**44**]{}, 255 (1986).
I.L. Buchbinder and I.L. Shapiro, Yad. Fiz. (Sov. J. Nucl. Phys.) [**44**]{}, 1033 (1986); I.L. Buchbinder, O.K. Kalashnikov, I.L. Shapiro, V.B. Vologodsky and Yu.Yu. Wolfengaut, Phys. Lett. [**B216**]{}, 127 (1989); I.L. Shapiro, Class. Quant. Grav. [**6**]{}, 1197 (1989).
I.L. Shapiro, Yad. Fiz. (Sov. J. Nucl. Phys.) (1994)
S. Christensen and M. Duff., Nucl. Phys. [**170B**]{} (1980) 480.
B.S. DeWitt, [*Dynamical Theory of Groups and Fields*]{}. (Gordon and Breach, NY, 1965).
I.L. Buchbinder, S.D. Odintsov and I.L. Shapiro, [*Effective Action in Quantum Gravity*]{} (IOP, Bristol, 1992).
G. t’Hooft and M. Veltman, Ann. Inst. H. Poincare. [**A20**]{}, 69 (1974).
Paper in preparation.
E. Elizalde, A.G. Jacksenaev, S.D. Odintsov and I.L. Shapiro, Phys. Lett. [**B328**]{}, 297 (1994); Preprint HUPD - 9413, Hiroshima University, 1994; E. Elizalde, S.D. Odintsov and I.L. Shapiro, Class. Quant. Grav. [**11**]{}, 1607 (1994).
I. Antoniadis and E. Mottola, Phys. Rev. [**45D**]{}, 2013 (1992)
S.D. Odintsov and I.L. Shapiro, Class. Quant. Grav. [**8**]{} L57 (1991).
I.L. Buchbinder and S.D. Odintsov, Class. Quant. Grav. [**2**]{}, 721 (1985).
S.D. Odintsov and I.L. Shapiro, Class. Quant. Grav. [**9**]{} 873 (1992).
I.L. Shapiro and A.G. Jacksenaev, Phys.Lett. [**324B**]{}, 284 (1994).
T. Aida, Y. Kitazawa, H. Kawai and M. Ninomiya, [*Nucl. Phys.*]{} [**B427**]{} (1994) 158.
T. Goldman, J. Pérez-Mercader, F. Cooper, M. Martin-Nieto, [*Phys. Lett.*]{} [**281B**]{} (1992) 219.
I.L. Shapiro, Phys.Lett. [**329B**]{}, 181 (1994).
S.Deser and P. van Nieuwenhuisen, Phys. Rev. [**10D**]{}, 401 (1974); [**10D**]{}, 411 (1974).
G.A. Vilkovisky, Nucl.Phys. [**234B**]{} 125 (1984).
Voronov B.L., Lavrov P.M., Tyutin I.V., Sov.J.Nucl.Phys. [**36**]{} 498 (1992).
A.O. Barvinsky, A. Kamenshchik, B. Karmazin, Phys. Rev. D [**48**]{}, 3677 (1993).
[^1]: On leave from Tomsk Pedagogical Institute, 634041 Tomsk, Russia.\
E-mail: [email protected]
[^2]: E-mail: [email protected]
---
abstract: 'On November 12, 2014, the ESA/Rosetta descent module Philae landed on the Abydos site of comet 67P/Churyumov-Gerasimenko. Aboard this module, the Ptolemy mass spectrometer measured a CO/CO$_2$ ratio of 0.07 $\pm$ 0.04 which differs substantially from the value obtained in the coma by the Rosetta/ROSINA instrument, suggesting a heterogeneity in the comet nucleus. To understand this difference, we investigated the physico-chemical properties of the Abydos subsurface leading to CO/CO$_2$ ratios close to that observed by Ptolemy at the surface of this region. We used a comet nucleus model that takes into account different water ice phase changes (amorphous ice, crystalline ice and clathrates), as well as diffusion of molecules throughout the pores of the matrix. The input parameters of the model were optimized for the Abydos site and the ROSINA CO/CO$_2$ measured ratio is assumed to correspond to the bulk value in the nucleus. We find that all considered structures of water ice are able to reproduce the Ptolemy observation with a time difference not exceeding $\sim$50 days, i.e. lower than $\sim$2% on 67P/Churyumov-Gerasimenko’s orbital period. The suspected heterogeneity of 67P/Churyumov-Gerasimenko’s nucleus is also found possible only if it is constituted of crystalline ices. If the icy phase is made of amorphous ice or clathrates, the difference between Ptolemy and ROSINA’s measurements would rather originate from the spatial variations in illumination on the nucleus surface. An eventual new measurement of the CO/CO$_2$ ratio at Abydos by Ptolemy could be decisive to distinguish between the three water ice structures.'
author:
- 'B. Brugger, O. Mousis, A. Morse, U. Marboeuf, L. Jorda, A. Guilbert-Lepoutre, D. Andrews, S. Barber, P. Lamy, A. Luspay-Kuti, K. Mandt, G. Morgan, S. Sheridan, P. Vernazza, and I.P. Wright'
title: 'Subsurface characterization of 67P/Churyumov-Gerasimenko’s Abydos Site'
---
Introduction
============
On November 12, 2014, the ESA/Rosetta descent module Philae landed at the Abydos site on the surface of comet 67P/Churyumov-Gerasimenko (hereafter 67P/C-G). As part of the scientific payload aboard Philae, the Ptolemy mass spectrometer (Wright et al. 2007) performed the analysis of several samples from the surface at Agilkia (Wright et al. 2015) and atmosphere at Abydos (Morse et al. 2015). The main molecules detected by Ptolemy on the Abydos site were H$_2$O, CO and CO$_2$, with a measured CO/CO$_2$ molar ratio of 0.07 $\pm$ 0.04. Meanwhile, the CO/CO$_2$ ratio has also been sampled in 67P/C-G’s coma between August and September 2014 by the ROSINA Double Focusing Mass Spectrometer (DFMS; Balsiger et al. 2007; H[ä]{}ssig et al. 2013) aboard the Rosetta spacecraft. Strong variations of the CO and CO$_2$ production rates, dominated by the diurnal changes on the comet, have been measured by ROSINA, giving CO/CO$_2$ ratios ranging between 0.50 $\pm$ 0.18 and 1.62 $\pm$ 1.34 over the August-September 2014 time period (H[ä]{}ssig et al. 2015). Large fluctuations correlated with the sampled latitudes have also been observed and explained either by seasonal variations or by a compositional heterogeneity in the nucleus (H[ä]{}ssig et al. 2015). Further investigation of the coma heterogeneity performed by Luspay-Kuti et al. (2015) in the southern hemisphere of 67P/C-G at a later time period led to conclusions in favor of compositional heterogeneity. This latter hypothesis is also reinforced by the Ptolemy measurement of the CO/CO$_2$ ratio at the Abydos site, which is found outside the range covered by the ROSINA measurements (Morse et al. 2015).
Here, we aim at investigating the physico-chemical properties of the Abydos subsurface which can reproduce the CO/CO$_2$ ratio observed by Ptolemy, assuming that the composition of the solid phase located beneath the landing site initially corresponds to the value in the coma. To investigate the possibility of a heterogeneous nucleus for 67P/C-G, we have employed a comet nucleus model with i) an updated set of thermodynamic parameters relevant for this comet and ii) an appropriate parameterization of the illumination at the Abydos site. This allows us to mimic the thermal evolution of the subsurface of this location. By searching for the matching conditions between the properties of the Abydos subsurface and the Ptolemy data, we provide several constraints on the structural properties and composition of Philae’s landing site in different cases.
The comet nucleus model
=======================
The one-dimensional comet nucleus model used in this work is described in Marboeuf et al. (2012). This model considers an initially homogeneous sphere composed of a predefined porous mixture of ices and dust in specified proportions. It describes heat transmission, gas diffusion, sublimation/recondensation of volatiles within the nucleus, water ice phase transition, dust release, and dust mantle formation. The model takes into account different phase changes of water ice, within: amorphous ice, crystalline ice and clathrates. The use of a 1D model is a good approximation for the study of a specific point at the surface of the comet nucleus, here the Abydos landing site. However, since 67P/C-G’s shape is far from being a sphere (Sierks et al. 2015), we have parameterized the model in a way that correctly reproduces the illumination conditions at Abydos. This has been made possible via the use of the 3D shape model developed by Jorda et al. (2014) which gives the coordinates of Abydos on the surface of 67P/C-G’s nucleus, as well as the radius corresponding to the Abydos landing site and the normal to the surface at this specific location. The Abydos landing site is located just outside the Hatmethit depression, at the very edge of the area illuminated during the mapping and close observation phases of the Rosetta mission, roughly until December 2014. It is also at the edge of a relatively flat region of the small lobe illuminated throughout the perihelion passage of the comet. Geomorphologically, Abydos is interpreted as being a rough deposit region composed of meter-size boulders (Lucchetti et al. 2016). Other geometric parameters specific to 67P/C-G, such as the obliquity and the argument of the subsolar meridian at perihelion, are calculated from the orientation of the spin axis computed during the shape reconstruction process (Jorda et al. 2014). Table 1 summarizes the main parameters used in this work. The porosity and dust/ice ratio of the cometary material are set in the range of measurement of 80 $\pm$ 5 % (Kofman et al. 2015) and 4 $\pm$ 2 (Rotundi et al. 2014), respectively. These two parameters are linked through the density of the cometary material, and are set to be compatible with the preliminary value determined by Jorda et al. (2014) (510 $\pm$ 20 kg/m$^3$). 67P/C-G’s thermal inertia is estimated to be in the 10–150 W K$^{-1}$ m$^{-2}$ s$^{1/2}$ range based on the measurement obtained by the Rosetta/VIRTIS instrument (Leyrat et al. 2015). According to the same study, regions surrounding Abydos are characterized by a thermal inertia in the lower half of this range. We have therefore chosen a low thermal inertia close to 50 W K$^{-1}$ m$^{-2}$ s$^{1/2}$.
In addition to water ice and dust, the solid phase of our model includes CO and CO$_2$ volatiles. Although coma abundances do not necessarily reflect those in the nucleus, they constitute the most relevant constraint available on its composition. We thus considered the CO/CO$_2$ ratio (1.62 $\pm$ 1.34) measured by ROSINA on August 7, 2014 at 18 hours, as representative of the bulk ice composition in the nucleus, and more specifically in the Abydos subsurface. This ratio is derived from the CO/H$_2$O and CO$_2$/H$_2$O ROSINA measurements performed at this date, which are equal to 0.13 $\pm$ 0.07 and 0.08 $\pm$ 0.05, respectively (H[ä]{}ssig et al. 2015). We selected the date of August 7 because i) the corresponding ROSINA measurements were performed at high northern latitudes where the Abydos site is located, and ii) the large CO/CO$_2$ range obtained at this moment covers all values measured by ROSINA at other dates (including the late value obtained by Le Roy et al. (2015) on October 20 for the northern hemisphere, namely CO/CO$_2$ = 1.08).
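The bulk CO/CO$_2$ value adopted here follows directly from the two ROSINA abundance ratios quoted above; the short calculation below (added for illustration, assuming standard uncorrelated error propagation for a ratio) reproduces the quoted 1.62 $\pm$ 1.34 figure.

```python
import math

# Reproduce the CO/CO2 = 1.62 +/- 1.34 bulk ratio derived from
# CO/H2O = 0.13 +/- 0.07 and CO2/H2O = 0.08 +/- 0.05 (Haessig et al. 2015),
# assuming uncorrelated uncertainties.
co_h2o, d_co = 0.13, 0.07
co2_h2o, d_co2 = 0.08, 0.05

ratio = co_h2o / co2_h2o
d_ratio = ratio * math.sqrt((d_co / co_h2o) ** 2 + (d_co2 / co2_h2o) ** 2)
print(f"CO/CO2 = {ratio:.2f} +/- {d_ratio:.2f}")   # -> 1.62 +/- 1.34
```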
The three main phases of ices identified in the literature, namely crystalline ice, amorphous ice and clathrate phase, are considered in this work. Outgassing of volatiles in 67P/C-G could then result from the sublimation of ices, amorphous-to-crystalline ice phase transition, or destabilization of clathrates in the crystalline ice, amorphous ice and clathrates cases, respectively. Because the properties of volatiles trapping in the nucleus matrix strongly depend on the considered icy phase, the following models have been considered:
#### Crystalline ice model
Water ice is fully crystalline, meaning that no volatile species are trapped in the water ice structure. Here, CO and CO$_2$ are condensed in the pores of the matrix made of water ice and dust;
#### Amorphous ice model
The matrix itself is made of amorphous water ice with a volatile trapping efficiency not exceeding $\sim$10%. In this case, the cumulative mole fraction of volatiles is higher than this value, implying that an extra amount of volatiles is crystallized in the pores. With this in mind, we consider different distributions of CO and CO$_2$ in both phases of this model;
#### Clathrate model
Water ice is exclusively used to form clathrates. Similarly to amorphous ice, clathrates have a maximum trapping capacity ($\sim$17%). The extra amount of volatiles, if any, also crystallizes in the pores. In our case, however, CO is fully trapped in clathrates and escapes only when water ice sublimates. In contrast, we assume that solid CO$_2$ exists in the form of crystalline CO$_2$ in the pores of the nucleus because this molecule is expected to condense in this form in the protosolar nebula (Mousis et al. 2008).
| Parameter | Value | Reference |
|---|---|---|
| Rotation period (hr) | 12.4 | Mottola et al. (2014) |
| Obliquity ($\degree$) | 52.25 | |
| Subsolar meridian $\Phi$ ($\degree$) [^1] | -111 | |
| Co-latitude ($\degree$) [^2] | -21 | |
| Initial radius (km) | 2.43 | |
| Bolometric albedo (%) | 1.5 | Fornasier et al. (2015) |
| Dust/ice mass ratio | 4 $\pm$ 2 | Rotundi et al. (2014) |
| Porosity (%) | 80 $\pm$ 5 | Kofman et al. (2015) |
| Density (kg/m$^3$) | 510 $\pm$ 20 | Jorda et al. (2014) |
| I (W K$^{-1}$ m$^{-2}$ s$^{1/2}$) [^3] | 50 | Leyrat et al. (2015) |
| CO/CO$_2$ initial ratio | 1.62 $\pm$ 1.34 | H[ä]{}ssig et al. (2015) |
Thermal evolution of the subsurface at Abydos
=============================================
Our results show that the illumination at the surface of the Abydos site is a critical parameter for the evolution of the nucleus, regardless of the considered ice structure. Consequently, all three models described in Section 2 present the same behavior up to a given point. We first describe the characteristics displayed in common by the three models by presenting the thermal evolution of the crystalline ice model, before discussing the variations resulting from the different assumptions on the nature of ices. Figure 1 shows the time evolution of the nucleus stratigraphy, which corresponds to the structural differentiation occurring in the subsurface of the Abydos site. This differentiation results from the sublimation of the different ices. After each perihelion passage, the sublimation interfaces of CO and CO$_2$ reach deeper layers beneath the nucleus surface, with a progression of $\sim$20 m per orbit. The CO sublimation interface always propagates deeper than its CO$_2$ counterpart because of the higher volatility of the former molecule. On the other hand, because surface ablation is significant, the progression of these interfaces is stopped by the propagation of the water sublimation front after perihelion. This allows the Abydos region to present a “fresh” surface after each perihelion.
{width="9cm"}
At the surface of the Abydos site, the outgassing rates of CO and CO$_2$ vary with the illumination conditions, reaching maxima at perihelion and minima at aphelion. Because the sublimation interface of CO$_2$ is closer to the surface, its production rate is more sensitive to the illumination conditions than that of CO. As a result, the outgassing rate of CO$_2$ shows significant variations with illumination while that of CO is less affected. This difference strongly impacts the evolution of the CO/CO$_2$ outgassing ratio at the surface of Abydos (see Figure 2). Close to perihelion, this ratio crosses the range of values measured by Ptolemy (0.07 $\pm$ 0.04) and reaches a minimum. Note that the CO/CO$_2$ outgassing ratio presents spikes during a certain period after perihelion. These spikes appear when the CO and CO$_2$ sublimation interfaces are dragged toward the surface by ablation, and they result from temperature variations induced by diurnal variations of the insolation.
We define $\Delta t$ as the time difference existing between the Ptolemy CO/CO$_2$ observations (here 0.07 measured on November 12, 2014) and the epoch at which our model reproduces these data (see Figure 2). In each case investigated, we vary the input parameters of the model to minimize the value of $\Delta t$ (see Table 2 for details). We have also defined the quantity $f_t = \Delta t / P_{orb}$, namely the fraction of 67P/C-G’s orbital period ($P_{orb} = 6.44$ yr) corresponding to $\Delta t$. The results of our simulations are indicated below.
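Since $P_{orb}$ = 6.44 yr, converting $\Delta t$ into $f_t$ is a one-line computation. The minimal sketch below (Python, with the three $\Delta t$ values taken from Table 2) reproduces the quoted percentages.

```python
# Convert the time offset Delta t (days) into the fraction f_t of 67P/C-G's
# orbital period P_orb = 6.44 yr, as defined in the text.
P_ORB_DAYS = 6.44 * 365.25

for model, dt_days in [("crystalline ice", 52), ("amorphous ice", 4), ("clathrate", 34)]:
    f_t = dt_days / P_ORB_DAYS
    print(f"{model:15s}  Delta t = {dt_days:2d} d  ->  f_t = {100 * f_t:.2f} %")
# prints about 2.2 %, 0.17 % and 1.4 %, i.e. the f_t entries of Table 2
```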
{width="9cm"}
Table 2: Input parameters minimizing $\Delta t$ and corresponding results for the three water ice structures.

| | Crystalline ice model | Amorphous ice model | Clathrate model |
|---|---|---|---|
| **Parameters** | | | |
| Dust/ice mass ratio | 6 | 4 | 4 |
| Porosity (%) | 78 | 76 | 76 |
| Density (kg/m$^3$) | 516 | 510 | 519 |
| I (W K$^{-1}$ m$^{-2}$ s$^{1/2}$) [^4] | $\sim$60 | 40–60 | $\sim$50 |
| CO/CO$_2$ initial ratio | 0.46 | 1.62 | 0.46 |
| CO/H$_2$O initial abundance | 6% | 13% [^5] | 6% |
| CO$_2$/H$_2$O initial abundance | 13% | 8% [^6] | 13% |
| **Results** | | | |
| $\Delta t$ (day) | 52 | 4 | 34 |
| $f_t$ (%) | 2.2 | 0.17 | 1.4 |
#### Crystalline ice model
In this case, we find that the initial CO/CO$_2$ ratio and the dust/ice ratio adopted in the nucleus have a strong influence on $\Delta t$. This quantity is minimized i) when the adopted dust/ice ratio becomes higher than the values found by Rotundi et al. (2014) and ii) if the selected initial CO/CO$_2$ ratio is lower than the nominal value found by ROSINA (H[ä]{}ssig et al. 2015). Figure 3 shows the evolution of $\Delta t$ as a function of the dust/ice ratio for two different values of the initial CO/CO$_2$ ratio, namely 1.62 (the central value) and 0.46 (close to the lower limit). These results confirm the aforementioned trend: with CO/CO$_2$ = 0.46 and a dust/ice ratio of 6 or higher, the Ptolemy measurement epoch is matched with $\Delta t$ = 52 $\pm$ 27 days or lower, which corresponds to $f_t$ lower than $\sim$2%. These results can be explained by the thermal conductivity of crystalline water ice, which lies in the 3–20 W m$^{-1}$ K$^{-1}$ range (Klinger 1980) at the temperatures relevant for the comet. Because dust has a conductivity of 4 W m$^{-1}$ K$^{-1}$ (Ellsworth & Schubert 1983), the global conductivity decreases as the dust/ice ratio in the nucleus increases. Since heating in the nucleus is mostly provided by surface illumination, a low conductivity increases the temperature gradient between the upper and deeper layers (where the CO$_2$ and the CO sublimation interfaces are located, respectively). This gradient enhances the sublimation rate of CO$_2$ ice more than that of CO ice, leading to smaller CO/CO$_2$ outgassing ratios at the surface and to a smaller $\Delta t$ (see Figure 2).
{width="9cm"}
#### Amorphous ice model
Here, the values of $\Delta t$ are significantly lower than those obtained with the crystalline ice model; a $\Delta t$ of 4 $\pm$ 9 days is obtained using the central values listed in Table 2 for the initial CO/CO$_2$ and dust/ice ratios, leading to $f_t \sim 0.2\%$. Higher values of the CO/CO$_2$ ratio never allow our model to match the Ptolemy data: the CO/CO$_2$ outgassing ratio increases so much that even its minimum lies above the range of measurements performed by Ptolemy. On the other hand, lower CO/CO$_2$ ratios increase $\Delta t$. Interestingly, the results of this model are only weakly affected by the adopted dust/ice ratio. Within the 0.12–1.35 W m$^{-1}$ K$^{-1}$ range (Klinger 1980), the conductivity of amorphous water ice lies below those of dust (4 W m$^{-1}$ K$^{-1}$) and crystalline ice (3–20 W m$^{-1}$ K$^{-1}$). Since amorphous ice dominates the volatile phase, the mean conductivity is never far from that of dust, irrespective of the dust/ice ratio considered within the observed range.
#### Clathrate model
In this case, low CO/CO$_2$ ratios (still within the range given in Table 1) are required to obtain values of $\Delta t$ below 50 days (i.e., $f_t$ below 2%). As in the case of the amorphous ice model, $\Delta t$ is only weakly sensitive to the dust/ice ratio because the conductivity of clathrates (0.5 W m$^{-1}$ K$^{-1}$; Krivchikov et al. 2005a, 2005b) is small compared to that of dust.
Discussion
==========
Our goal was to investigate the possibility of recovering the value of the CO/CO$_2$ outgassing ratio measured by the Ptolemy instrument at the surface of Abydos. Interestingly, all considered models match the Ptolemy value with $f_t$ lower than $\sim$2%, provided that an optimized set of parameters is adopted for the Abydos region. Although it is only weakly sensitive to the adopted dust/ice ratio, a nucleus model dominated by amorphous ice (and possibly including a smaller fraction of crystalline ices) gives the best results ($\Delta t$ $\leq$ 4 days, i.e. $f_t$ $\leq$ 0.2%) for a primordial CO/CO$_2$ ratio equal to the central value measured by the ROSINA instrument. On the other hand, the crystalline ice and clathrate models require a primordial CO/CO$_2$ ratio close to the lower limit sampled by ROSINA to obtain values of $\Delta t$ under 50 days (i.e. $f_t$ under 2%). We stress that this CO/CO$_2$ range is used under the assumption that the ROSINA measurements correspond to the bulk nucleus abundances, which is not necessarily true. A second requirement for minimizing $\Delta t$ in the crystalline ice model is to adopt a dust/ice ratio at least equal to the upper limit determined by Rotundi et al. (2014) for 67P/C-G. This is supported by the pictures taken at Abydos by the CIVA instrument (Bibring et al. 2007) aboard the Philae module. The very low reflectance (3–5%) of the Abydos region (Bibring et al. 2015) is in agreement with the OSIRIS and VIRTIS reflectance measurements in the visible (Sierks et al. 2015) and near-IR (Capaccioni et al. 2015), which are consistent with a low ice content in the upper surface layer (Capaccioni et al. 2015).
Surface illumination can also greatly influence the CO/CO$_2$ outgassing ratio on 67P/C-G. To quantify this effect, we have simulated a point on 67P/C-G’s nucleus that is more illuminated than Abydos. We have performed a set of simulations at a co-latitude of -52.25$\degree$, a point that receives permanent sunlight around perihelion. At the date when the CO/CO$_2$ outgassing ratio at Abydos is equal to 0.07 (the central value of Ptolemy’s range of measurement), we obtain a different value of the CO/CO$_2$ outgassing ratio at this new location, irrespective of the adopted model. For the crystalline ice model, the outgassing ratio at the illuminated site reaches a value of 0.11, which is still within Ptolemy’s range of measurement. This implies that a different illumination cannot explain a strong variation of the CO/CO$_2$ outgassing ratio if the nucleus has a homogeneous composition. In this case, the difference between the Ptolemy and ROSINA measurements is clearly due to a heterogeneity in the nucleus composition. On the other hand, the CO/CO$_2$ outgassing ratio at the illuminated site is equal to 0.74 and 0.76 in the cases of the amorphous ice and clathrate models, respectively. These values are within ROSINA’s range of measurements, implying that the difference of illumination is sufficient to explain the difference with the CO/CO$_2$ ratio sampled at Abydos, assuming a homogeneous nucleus.
In summary, all possible water ice structures are able to reproduce the observations made by Ptolemy, assuming that the primordial CO/CO$_2$ ratio is the one inferred by ROSINA. Each case requires a specific set of input parameters, taken from the ranges of values inferred by Rosetta, which describes the structure and composition of the material. According to our simulations, a heterogeneity in the composition of 67P/C-G’s nucleus is possible only if the nucleus is composed of crystalline ices. However, if we consider different ice phases such as amorphous ice or clathrates, the difference between the Ptolemy and ROSINA measurements could simply originate from the variation of illumination between different regions of the nucleus.
In the upcoming months, the Philae module may wake up and allow the Ptolemy mass spectrometer to perform additional measurements of the CO/CO$_2$ ratio. By comparing these new values with the different CO/CO$_2$ outgassing ratios predicted by our three models at the same date, we would be able to determine which model is the most reliable, and thus which water ice structure is dominant at the surface of 67P/C-G’s nucleus.
O.M. acknowledges support from CNES. This work has been partly carried out thanks to the support of the A\*MIDEX project (n^o^ ANR-11-IDEX-0001-02) funded by the “Investissements d’Avenir” French Government program, managed by the French National Research Agency (ANR). Funding and operation of the Ptolemy instrument was provided by the Science and Technology Facilities Council (Consolidated Grant ST/L000776/1) and UK Space Agency (Post-launch support ST/K001973/1). A.L.-K. acknowledges support from the NASA Jet Propulsion Laboratory (subcontract No. 1496541).
Balsiger, H., Altwegg, K., Bochsler, P., et al. 2007, , 128, 745
Bibring, J.-P., Lamy, P., Langevin, Y., et al. 2007, , 128, 397
Bibring, J.-P., Langevin, Y., Carter, J., et al. 2015, Science, 349, 020671
Capaccioni, F., Coradini, A., Filacchione, G., et al. 2015, Science, 347, aaa0628
Ellsworth, K., & Schubert, G. 1983, , 54, 490
Fornasier, S., Hasselmann, P. H., Barucci, M. A., et al. 2015, , 583, A30
H[ä]{}ssig, M., Altwegg, K., Balsiger, H., et al. 2013, , 84, 148
H[ä]{}ssig, M., Altwegg, K., Balsiger, H., et al. 2015, Science, 347, 276
Jorda, L., Gaskell, R. W., Hviid, S. F., et al. 2014, AGU Fall Meeting Abstracts, 3943
Klinger, J. 1980, Science, 209, 271
Kofman, W., Herique, A., Barbin, Y., et al. 2015, Science, 349, 020639
Krivchikov, A. I., Gorodilov, B. Y., Korolyuk, O. A., et al. 2005a, Journal of Low Temperature Physics, 139, 693
Krivchikov, A. I., Manzhelii, V. G., Korolyuk, O. A., Gorodilov, B. Y., & Romantsova, O. O. 2005b, Physical Chemistry Chemical Physics (Incorporating Faraday Transactions), 7, 728
Le Roy, L., Altwegg, K., Balsiger, H., et al. 2015, , 583, A1
Leyrat, C., Erard, S., Capaccioni, F., et al. 2015, EGU General Assembly Conference Abstracts, 17, 9767
Lucchetti, A., Cremonese, G., Jorda, L., et al. 2016, , 585, L1
Luspay-Kuti, A., H[ä]{}ssig, M., Fuselier, S. A., et al. 2015, , 583, A4
Marboeuf, U., Schmitt, B., Petit, J.-M., Mousis, O., & Fray, N. 2012, , 542, A82
Morse, A., Mousis, O., Sheridan, S., et al. 2015, , 583, A42
Mottola, S., Lowry, S., Snodgrass, C., et al. 2014, , 569, L2
Mousis, O., Alibert, Y., Hestroffer, D., et al. 2008, , 383, 1269
Rotundi, A., Rietmeijer, F. J. M., Ferrari, M., et al. 2014, Meteoritics and Planetary Science, 49, 550
Sierks, H., Barbieri, C., Lamy, P. L., et al. 2015, Science, 347, aaa1044
Wright, I. P., Barber, S. J., Morgan, G. H., et al. 2007, , 128, 363
Wright, I. P., Sheridan, S., Barber, S. J., et al. 2015, Science, 349, 020673
[^1]: Argument of subsolar meridian at perihelion.
[^2]: Angle between the normal to the surface and the equatorial plane.
[^3]: Thermal inertia.
[^4]: Thermal inertia resulting from the different water ice conductivities.
[^5]: 2.8% trapped in amorphous ice, 10.2% condensed in the pores (Marboeuf et al. 2012).
[^6]: 4.2% trapped in amorphous ice, 3.2% condensed in the pores (Marboeuf et al. 2012).
**NEW MODULI SPACES OF POINTED CURVES**
**AND PENCILS OF FLAT CONNECTIONS**
**A. Losev${}^1$, Yu. Manin${}^2$**
*${}^1$Institute of Theoretical and Experimental Physics, Moscow, Russia*
*${}^2$Max–Planck–Institut für Mathematik, Bonn, Germany*
[**Abstract.**]{} It is well known that formal solutions to the Associativity Equations are the same as cyclic algebras over the homology operad $(H_*(\overline{M}_{0,n+1}))$ of the moduli spaces of $n$–pointed stable curves of genus zero. In this paper we establish a similar relationship between the pencils of formal flat connections (or solutions to the Commutativity Equations) and homology of a new series $\overline{L}_n$ of pointed stable curves of genus zero. Whereas $\overline{M}_{0,n+1}$ parametrizes trees of $\bold{P}^1$’s with pairwise distinct nonsingular marked points, $\overline{L}_n$ parametrizes strings of $\bold{P}^1$’s stabilized by marked points of two types. The union of all $\overline{L}_n$’s forms a semigroup rather than an operad, and the role of operadic algebras is taken over by the representations of the appropriately twisted homology algebra of this union.
**0. Introduction and plan of the paper**
One of the remarkable basic results in the theory of the Associativity Equations (or Frobenius manifolds) is the fact that their formal solutions are the same as cyclic algebras over the homology operad $(H_*(\overline{M}_{0,n+1}))$ of the moduli spaces of $n$–pointed stable curves of genus zero. This connection was discovered by physicists, who observed that the data of both types come from models of topological string theories. Precise mathematical treatment was given in \[KM\] and \[KMK\].
In this paper we establish a similar relationship between the pencils of formal flat connections (or solutions to the Commutativity Equations: see 3.1–3.2 below) and homology of a new series $\overline{L}_n$ of pointed stable curves of genus zero. Whereas $\overline{M}_{0,n+1}$ parametrizes trees of $\bold{P}^1$’s with pairwise distinct nonsingular marked points, $\overline{L}_n$ parametrizes strings of $\bold{P}^1$’s, and all marked points with the exception of two are allowed to coincide (see the precise definitions in 1.1 and 2.1). Moreover, the union of all $\overline{L}_n$’s forms a semigroup rather than an operad, and the role of operadic algebras is taken over by the representations of the appropriately twisted homology algebra of this union: see precise definitions in 3.3.
This relationship was discovered on a physical level in \[Lo1\], \[Lo2\]. Here we give a mathematical treatment of some of the main issues raised in these papers.
This paper is structured as follows.
In §1 we introduce the notion of $(A,B)$–pointed curves, whose combinatorial structure generalizes that of the strings of projective lines described above. We then describe a construction of “adjoining a generic black point” which allows us to produce families of such curves and their moduli stacks inductively. This is a simple variation of one of the arguments due to F. Knudsen \[Kn1\].
In §2 we define and study the spaces $\overline{L}_n$ for which we give two complementary constructions. The first one identifies $\overline{L}_n$ with one of the moduli spaces of pointed curves. The second one exhibits $\overline{L}_n$ as a well–known toric manifold associated with the polytope called permutohedron in \[Ka2\]. These constructions put $\overline{L}_n$ into two quite different contexts and suggest generalizations in different directions.
As moduli spaces, $\overline{L}_n$ become components of the extended modular operad which we define and briefly discuss in §4. We expect that there exists an appropriate extension of the Gromov–Witten invariants producing algebras over extended operads involving gravitational descendants.
As toric varieties, $(\overline{L}_n)$ form one of the several series related to the generalized flag spaces of classical groups: see \[GeSe\]. It would be interesting to generalize our constructions to other series.
In this paper we use the toric description in order to prove for $\overline{L}_n$’s an analog of Keel’s theorem (Theorem 2.7.1) and its extension (Theorem 2.9), crucial for studying representations of the twisted homology algebra.
This twisted homology algebra $H_*T$ and its relationship with pencils of formal flat connections are discussed in §3, which contains the main result of this paper: Theorem 3.3.1.
[*Acknowledgement.*]{} Yu. Manin is grateful to M. Kapranov who, after having seen the formula $\chi (\overline{L}_n)=n!$, suggested that $\overline{L}_n$ must be the toric variety associated with the permutohedron.
**§1. $(A,B)$–pointed curves**
Let $A, B$ be two finite disjoint sets, $S$ a scheme, $g\ge 0$. An $(A,B)$–pointed curve of genus $g$ over $S$ consists of the data $$(\pi:\,C\to S;\,x_i:\,S\to C,\ i\in A; \,x_j:\,S\to C,\ j\in B)
\eqno (1.1)$$ where
\(i) $\pi$ is a flat proper morphism whose geometric fibres $C_s$ are reduced and connected curves, with at most ordinary double points as singularities, and $g=\roman{dim}\,H^1(C_s,\Cal{O}_{C_s}).$
\(ii) $x_i, i\in A\cup B,$ are sections of $\pi$ not containing singular points of geometric fibres.
\(iii) $x_i\cap x_j=\emptyset$ if $i\in A,\ j\in A\cup B,\ i\neq j.$
Such a curve $(1.1)$ is called stable, if the normalization of any irreducible component $C^{\prime}$ of a geometric fibre carries $\ge 3$ pairwise different special points if $C^{\prime}$ is of genus $0$ and $\ge 1$ special points if $C^{\prime}$ is of genus $1.$ Special points are inverse images of singular points and of the structure sections $x_i$.
[**1.2. Remarks.**]{} a) If we put in this definition $B=\emptyset$, we will get the usual notion of an $A$–pointed (pre)stable curve whose structure sections are not allowed to intersect pairwise. Now we divide the sections into two groups: “white” sections $x_i, i\in A$ are not allowed to intersect any other section, whereas “black” sections $x_j, j\in B$ cannot intersect white ones, but otherwise are free and can even pairwise coincide. (However, both types of sections are not allowed to intersect singularities of fibres).
If we take in this definition a one–element set $B=\{*\}$, we will get a natural bijection between $(A,\{*\})$–pointed curves and $(A\cup\{*\},\emptyset )$–pointed curves. If $\roman{card}\,B\ge 2,$ the two notions become essentially different.
b\) The dual modular graph of a geometric fibre is defined in the same way as in the usual case (for the conventions we use see \[Ma\], III.2). Tails now can be of two types, and we may refer to them and their marks as “black” and “white” ones as well. Combinatorial type of a geometric fiber is, by definition, the isomorphism class of the respective modular graph with $(A,B)$–marking of its tails.
c\) Let $T\to S$ be an arbitrary base change. It produces from any $(A,B)$–pointed (stable) curve (1.1) over $S$ another $(A,B)$–pointed (stable) curve over $T$: $(C_T;\,x_{i,T}).$
[**1.3. A construction.**]{} In this subsection, we start with an $(A,B)$–pointed curve (1.1) and produce from it another $(A,B^{\prime})$–pointed curve: $$(\pi^{\prime} :\,C^{\prime}\to S^{\prime};\,x_i^{\prime},\, i\in A\cup
B^{\prime}).
\eqno (1.2)$$ The base of the new curve will be $S^{\prime}:=C.$ There will be one extra black mark, say, $*$, so that $B^{\prime}=B\cup\{*\}$. The new curve and sections will be produced in two steps. At the first step we make the base change $C\to S$ as in 1.2 c), obtaining an $(A,B)$–pointed curve $X:=C\times_S C$, with sections $x_{i,C}.$ We then add the extra section $\Delta :\,C\to C\times_S C$ which is the relative diagonal, and mark it by $*$. We did not yet produce an $(A,B^{\prime})$–pointed curve over $S^{\prime}=C$, because the extra black section can (and generally will) intersect both singular points of the fibres and white sections as well.
At the second step of the construction, we remedy this by birationally modifying $C\times_S C\to C$ as in \[Kn1\], Definition 2.3. More precisely, we define $C^{\prime}:= \roman{Proj\,Sym}\,\Cal{K}$ as the relative projective spectrum of the symmetric algebra of the sheaf $\Cal{K}$ on $X=C\times_S C$ defined as the cokernel of the map $$\delta :\,\Cal{O}_X\to \Cal{J}_{\Delta}\check{}\,\oplus\,
\Cal{O}_X(\sum_{i\in A}x_{i,C}),\ \delta (t) = (t,t).
\eqno(1.3)$$ Here $\Cal{J}_{\Delta}$ is the $\Cal{O}_X$–ideal of $\Delta$, and $\Cal{J}_{\Delta}\check{}$ is its dual sheaf considered as a subsheaf of meromorphic functions, as in \[Kn1\], Lemma 2.2 and Appendix.
We claim now that we get an $(A,B^{\prime})$–pointed curve, because Knudsen’s treatment of his modification can be directly extended to our case. In fact, the modification we described is nontrivial only in a neighbourhood of those points, where $\Delta$ intersects either singular points of the fibres, or $A$–sections. The $B$–sections do not intersect these neighborhoods, if they are small enough, and do not influence the local analysis due to Knudsen (\[Kn1\], pp. 176–178).
[**1.3.1. Remark.**]{} We can try to modify this construction in order to be able to add an extra white point, instead of a black one. However, for $\roman{card}\,B\ge 2,$ we will then not be able to avoid the local analysis of the situation by referring to \[Kn1\]. In fact, points where $\Delta$ intersects at least two $B$–sections simultaneously will have to be treated anew.
**§2. Spaces $\overline{L}_n$**
[**2.1. Spaces $\overline{L}_n$.**]{} In this subsection we will inductively define for any $n\ge 1$ the $(\{0,\infty\}, \{1,\dots ,n\})$–pointed stable curve of genus zero $$(\pi_n: C_n\to \overline{L}_n;\ x_0^{(n)},x_{\infty}^{(n)};\ x_1^{(n)},\dots ,x_n^{(n)}).
\eqno(2.1)$$ Namely, put $$C_1:=\bold{P}^1,\ \overline{L}_1 = \roman{a\ point},$$ and choose for $x_0^{(1)},x_{\infty}^{(1)}, x_1^{(1)}$ arbitrary pairwise distinct points.
If (2.1) is already constructed, we define the next family $(C_{n+1}\to \overline{L}_{n+1}, \dots )$ as the result of the application of the construction 1.3 to $C_n/\overline{L}_n.$ In particular, we have a canonical isomorphism $C_n=\overline{L}_{n+1}.$
[**2.2. Theorem.**]{} a\) $\overline{L}_n$ is a smooth separated irreducible proper manifold of dimension $n-1.$ It represents the functor which associates with every scheme $T$ the set of the isomorphism classes of $(\{0,\infty\}, \{1,\dots ,n\})$–pointed stable curves of genus zero over $T$ whose geometric fibers have combinatorial types described below.
The symmetric group $\bold{S}_n$ renumbering the structure sections acts naturally and compatibly on $\overline{L}_n$ and the universal curve. In particular, we can define the spaces $\overline{L}_B, C_B$ for any finite set $B$, functorial with respect to the bijections of the sets.
b\) Combinatorial types of geometric fibres of $C_n\to\overline{L}_n$ are in a natural bijection with ordered partitions $$\{1,\dots ,n\}=\sigma_1\cup\,\dots\,\cup\, \sigma_l,\ 1\le l\le n,\ \sigma_i\ne\emptyset .
\eqno(2.2)$$ Partition (2.2) corresponds to the linear graph with vertices $(v_1,\dots ,v_l)$ of genus zero, edges joining $(v_i,v_{i+1}), 1\le i\le l-1$, $A$–tail $0$ at the vertex $v_1$, $A$–tail $\infty$ at the vertex $v_l$, and $B$–tails marked by the elements of $\sigma_i$ at the vertex $v_i$.
We will call $l=l(\sigma )$ the length of the partition $\sigma$ as in (2.2).
c\) Denote by $L_{\sigma}$ the set of all points of $\overline{L}_n$ corresponding to the curves of the combinatorial type $\sigma$, and by $\overline{L}_{\sigma}$ its Zariski closure. Then $L_{\sigma}$ are locally closed subsets, and we have $$\overline{L}_{\sigma} =\coprod_{\tau \le\sigma} L_{\tau}
\eqno(2.3)$$ where $\tau \le\sigma$ means that $\tau$ is obtained from $\sigma$ by replacing each $\sigma_i$ by an ordered partition of $\sigma_i$ into non–empty subsets.
d\) For every $\sigma$, there exists a natural isomorphism $$L_{|\sigma_1|}\times \dots \times L_{|\sigma_l|}
\to L_{\sigma}
\eqno(2.4)$$ such that the pointed curve induced by this isomorphism over $L_{|\sigma_1|}\times \dots \times L_{|\sigma_l|}$ can be obtained by clutching the curves $C_{|\sigma_i|}/L_{|\sigma_i|}$ in an obvious linear order ($\infty$–section of the $i$–th curve is identified with the $0$–section of the $(i+1)$–th curve, see \[Kn1\], Theorem 3.4), and subsequent remarking of the $B$–sections.
In particular, $L_{\sigma}$ is a smooth irreducible submanifold of codimension $l(\sigma )-1.$
The similar statements hold for the closed strata $\overline{L}_{\sigma}.$
[**Proof.**]{} Properness and smoothness follow by induction and Knudsen’s local analysis which we already invoked.
The statement about the combinatorial types is proved by induction as well. In fact, if everything is already proved for $C_n$, then we must look at a geometric fibre $C_{n,s}$ of $C_n$ and see what happens to it after the blow up described in 1.3. If $\Delta$ intersects a smooth point of $C_{n,s}$, not coinciding with $x_{0,s}, x_{\infty,s}$, nothing happens, except that we get a new black point on this fibre, and a new tail at the respective vertex of the dual graph. If $\Delta$ intersects an intersection point of two neighboring components of $C_{n,s}$, then after blowing up these two components become disjoint, and we get a new component intersecting both of them, with a new black point on it. The linear structure of the graph is preserved. Finally, if $\Delta$ intersects $C_{n,s}$ at $x_{0,s}$ or $x_{\infty,s}$, then after blowing up we will get a new end component, with $x_{0,s}$, resp. $x_{\infty,s}$ and the new black point on it. Thus the new combinatorial types will be linear and indexed by partitions of $(n+1)$. To check that all partitions are obtained in this way, it suffices to remark that $\Delta$, being the relative diagonal, can intersect the fibre of a given type at any point.
In order to check the statement about the functor represented by $\overline{L}_n$ we apply the following inductive reasoning. For $n=1$ the statement is almost obvious. In fact, let $\pi :\,C\to T$ be a $(\{0,\infty\}, \{1\})$–pointed stable curve of genus zero over $T$. From the stability it follows that all geometric fibres are projective lines. Since the three structure sections pairwise do not intersect, the family can be identified with $\bold{P}^1\times T$ endowed with three constant sections. This means that it is induced by the trivial morphism $T\to\overline{L}_1$.
Assume that the statement is true for $n$. In order to prove it for $n+1$, consider a $(\{0,\infty\}, \{1,\dots ,n+1\})$–pointed stable curve of genus zero $\pi :\,C\to T.$ First of all, one can produce from it a $(\{0,\infty\}, \{1,\dots ,n\})$–pointed stable curve of genus zero $\pi :\,C^{\prime}\to T$ obtained by forgetting $x_{n+1}$ and subsequent stabilization. The respective map $C^{\prime}\to C$ is given by the relative projective spectrum of the algebra $\sum_{k=0}^{\infty}
\pi_*(\Cal{K}^{\otimes k})$ where $\Cal{K}:=\omega_{C/T}(x_0+x_1 +
\dots +x_n+x_{\infty})$. By induction, $C^{\prime}$ is induced by a morphism $p:\,T\to\overline{L}_n$. Addition of an extra black section to $C^{\prime}$ and subsequent stabilization boils down exactly to the construction 1.3 applied to $C^{\prime}/T$ which allows us to lift $p$ to a unique morphism $q:\,T\to\overline{L}_{n+1}.$
Separatedness is checked by the standard deformation arguments.
The statement about renumbering follows from the description of the functor.
A similar adaptation of Knudsen’s arguments allows us to prove the remaining statements, and we leave them to the reader.
Notice that below we will give another direct description of the spaces $\overline{L}_B$ and all the structure morphisms connecting them in terms of toric geometry. This will provide easy alternate proofs of their properties. Except for §4, we can restrict ourselves to this alternate description.
[**2.2.1. Remark.**]{} Dual graphs of the degenerate fibers of $C_n$ over $\overline{L}_n$ come with a natural orientation from $x_0$ to $x_{\infty}$. We could have allowed ourselves not to distinguish between the two white points, interchanging them by isomorphisms, but this would produce several unpleasant consequences. First, our manifolds would become actual stacks, starting already with $\overline{L}_1$. Second, we would have lost the toric interpretation of these spaces. Third, and most important, we would meet an ambiguity in the definition of the multiplication between the homology spaces: see (3.5) below. With our choice, we can simply introduce the involution permuting $x_0$ and $x_{\infty}$ as a part of the structure and look how it interacts with other parts.
[**2.3. Theorem.**]{} $\overline{L}_n$ has no odd cohomology. Let $$p_n(q):=\sum_{i=0}^{n-1} \roman{dim}\,H^{2i}(\overline{L}_n)\,q^i
\eqno(2.5)$$ be the Poincaré polynomial of $\overline{L}_n$. Then we have $$1+\sum_{n=1}^{\infty} \frac{p_n(q)}{n!}y^n =
\frac{q-1}{q-e^{(q-1)y}} \in \bold{Q}[q][[y]].
\eqno(2.6)$$ Letting $q\to 1$ here, we get $\dfrac{1}{1-y}$, so that $\chi (\overline{L}_n)=n!.$
[**Proof.**]{} Since the $\overline{L}_n$ are defined over $\bold{Q}$, we can apply Weil’s classical technique of counting points over $\bold{F}_q$ (thus treating $q$ not as a formal variable but as a power of a prime). After the counting is done, we will see that $\roman{card}\,\overline{L}_n(\bold{F}_q)$ is a polynomial in $q$ with positive integer coefficients, so that we can right away identify it with $p_n$: $$p_n(q)=\roman{card}\,\overline{L}_n(\bold{F}_q)
\eqno(2.7)$$ The latter number can be calculated by directly applying (2.3) to the one–element partition $\sigma$, so that we get $$\frac{p_n(q)}{n!} =\sum_{l=1}^n \sum\Sb (s_1,\dots ,s_l)\\
s_1+\dots +s_l=n\\s_i\ge 1\endSb
\frac{(q-1)^{s_1-1}}{s_1!}\dots \frac{(q-1)^{s_l-1}}{s_l!}$$ $$=\sum_{l=1}^n\left[ \roman{coeff.\ of}\ x^{n-l}\ \roman{in}\
\left( \frac{e^x-1}{x} \right)^l \right]\cdot (q-1)^{n-l}.$$ Inserting this in the left hand side of (2.6) and summing over $n$ first, we obtain $$\sum_{n=1}^{\infty} \frac{p_n(q)}{n!} y^n=
\sum_{l=1}^{\infty}\sum_{n=l}^{\infty}
\left[ \roman{coeff.\ of}\ x^{n}\ \roman{in}\
({e^x-1})^l \right]\cdot (q-1)^{n-l}\,y^{n}$$ $$=\sum_{l=1}^{\infty}\frac{1}{(q-1)^l}\,(e^{(q-1)y}-1)^l$$ which gives (2.6).
[**2.3.1. Special cases.**]{} Here is a list of the Poincaré polynomials for small values of $n$: $$p_1=1,\ p_2=q+1,\ p_3=q^2+4q+1,\ p_4=q^3+11q^2+11q+1,$$ $$p_5=q^4+26q^3+66q^2+26q+1,\ p_6=q^5+57q^4+302q^3+302q^2+57q+1.$$ The rank of $H^2(\overline{L}_n)$ is $2^n-n-1$. Individual coefficients of $p_n(q)$ are well known in combinatorics. They are called Eulerian numbers: $$a_{n,i} = \roman{dim}\,H^{2i}(\overline{L}_n) \,.$$
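This list is easy to verify by brute force: by (2.3)–(2.4) and (2.7), $p_n(q)$ is obtained by summing $(q-1)^{n-l}$ over all ordered partitions of $\{1,\dots ,n\}$, i.e. over all compositions $(s_1,\dots ,s_l)$ of $n$ counted with multiplicity $n!/(s_1!\cdots s_l!)$. A minimal script (the function names are ours) reproducing the polynomials above:

```python
from math import factorial, comb

def compositions(n):
    """All ordered tuples of positive integers summing to n."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def poincare_poly(n):
    """Coefficients [a_{n,0}, ..., a_{n,n-1}] of p_n(q), computed by summing
    (q-1)^(n-l) over all ordered partitions of an n-element set."""
    coeffs = [0] * n
    for comp in compositions(n):
        l, mult = len(comp), factorial(n)
        for s in comp:
            mult //= factorial(s)          # ordered partitions with these block sizes
        m = n - l                          # expand mult * (q-1)^m into powers of q
        for k in range(m + 1):
            coeffs[k] += mult * comb(m, k) * (-1) ** (m - k)
    return coeffs

for n in range(1, 7):
    print(n, poincare_poly(n))
# 3 -> [1, 4, 1], 4 -> [1, 11, 11, 1], 5 -> [1, 26, 66, 26, 1], 6 -> [1, 57, 302, 302, 57, 1]
```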
[**2.4. $\overline{L}_n$ and toric actions.**]{} Let $\varepsilon$ be the trivial partition of $B$ of length one. The “big cell” $L_{\varepsilon}$ of $\overline{L}_B$ (see 2.2 c)) has a canonical structure of the torsor (principal homogeneous space) over the torus $T_B:=\bold{G}_m^B/\bold{G}_m$ (where the subgroup $\bold{G}_m$ is embedded diagonally). In fact, $\bold{P}^1\setminus \{x_0,x_{\infty}\}$ is a $\bold{G}_m$–torsor, and the respective action of $\bold{G}_m^B$ on $L_{\varepsilon}$, moving $x_i,\,i\in B$ via the $i$–th factor, produces an isomorphic marked curve exactly via the action of the diagonal.
Similarly, every stratum $L_{\sigma}$ is a torsor over $T_{\sigma}:=\prod_i T_{\sigma_i}$ (see (2.4)), and there is a canonical surjective morphism $T_B\to T_{\sigma}$ so that $L_B$ is a union of $T_B$–orbits. In order to show that $L_B$ is a toric variety, it remains to show that these actions are compatible. This again can be done using the explicit construction of $\overline{L}_n$ and induction. For a change, we will provide a direct toric construction. We start with a more systematic treatment of the combinatorics involved.
[**2.4.1. Partitions of finite sets.**]{} For any finite set $B$, we call a partition $\sigma$ of $B$ [*a totally ordered set of non–empty subsets of $B$ whose union is $B$ and whose pairwise intersections are empty.*]{} If a partition consists of $N$ subsets, it is called an $N$–partition. If its components are denoted $\sigma_1,\dots ,\sigma_N$, or otherwise listed, this means that they are listed in their structure order. Another partition can be denoted $\tau$, $\sigma^{(1)}$ etc. Notice that no particular ordering of $B$ is a part of the structure. This is why we replaced $\{1,\dots ,n\}$ here by an unstructured set $B.$
Let $\sigma$ be a partition of $B$, $i,j\in B.$ We say that $\sigma$ [*separates $i$ and $j$*]{} if they belong to different components of $\sigma$. We then write $i\sigma j$ in order to indicate that the component containing $i$ comes earlier than the one containing $j$ in the structure order.
Let $\tau$ be an $N+1$–partition of $B$. If $N\ge 1,$ it determines a well ordered family of $N$ 2–partitions $\sigma^{(a)}$: $$\sigma^{(a)}_1:=\tau_1\cup\dots\cup\tau_{a},\
\sigma^{(a)}_2:=\tau_{a+1}\cup\dots\cup\tau_{N+1},\ a=1,\dots ,N\, .
\eqno(2.8)$$ In reverse direction, call a family of 2–partitions $(\sigma^{(i)})$ [*good*]{} if for any $i\ne j$ we have $\sigma^{(i)}\ne \sigma^{(j)}$ and either $\sigma^{(i)}_1\subset \sigma^{(j)}_1,$ or $\sigma^{(j)}_1\subset \sigma^{(i)}_1.$ Any good family is naturally well–ordered by the relation $\sigma^{(i)}_1\subset \sigma^{(j)}_1$, and we will consider this ordering as a part of the structure. If a good family of 2–partitions consists of $N$ members, we will usually choose superscripts $1,\dots ,N$ to number these partitions in such a way that $\sigma^{(i)}_1\subset \sigma^{(j)}_1$ for $i<j.$
Such a good family produces one $(N+1)$–partition $\tau$: $$\tau_1:=\sigma_1^{(1)},\ \tau_2:=\sigma_1^{(2)}\setminus
\sigma_1^{(1)},\ \dots ,\
\tau_N:=\sigma_1^{(N)}\setminus
\sigma_1^{(N-1)},\ \tau_{N+1}=\sigma_2^{(N)}.
\eqno(2.9)$$ This correspondence between good $N$–element families of 2–partitions and $(N+1)$–partitions is one–to–one, because clearly $\sigma_1^{(i)}=\tau_1\cup\dots\cup\tau_i$ for $1\le i\le N.$
Consider the case when $\tau^{(1)}=\sigma$ is a 2–partition, and $\tau^{(2)}=\tau$ is an $N$–partition, $N\ge 2$. Their union is good, iff there exists $a\le N$ and a 2–partition $\alpha =(\tau_{a1},\tau_{a2})$ of $\tau_a$ such that $$\sigma=(\tau_1\cup\dots\cup\tau_{a-1}\cup
\tau_{a1}, \tau_{a2}\cup
\tau_{a+1}\cup\dots\cup\tau_{N}).
\eqno(2.10)$$ In this case we denote $$\sigma *\tau = \tau (\alpha ):=
(\tau_1,\dots ,\tau_{a-1},
\tau_{a1}, \tau_{a2},
\tau_{a+1},\dots ,\tau_{N}).
\eqno(2.11)$$
[**2.4.2. Lemma.**]{} Let $\tau$ be a partition of $B$ of length $\ge 1,$ and $\sigma$ a 2–partition. Then one of the three mutually exclusive cases occurs:
\(i) $\sigma$ coincides with one of the partitions $\sigma^{(a)}$ in (2.8). In this case we will say that $\sigma$ breaks $\tau$ between $\tau_a$ and $\tau_{a+1}.$
\(ii) $\sigma$ coincides with one of the partitions (2.10). In this case we will say that $\sigma$ breaks $\tau$ at $\tau_a$.
\(iii) None of the above. In this case we will say that $\sigma$ does not break $\tau$. This happens exactly when there is a neighboring pair $(\tau_b,\tau_{b+1})$ of elements of $\tau$ with the following property: $$\tau_b\setminus\sigma_1\ne \emptyset,\
\tau_{b+1}\cap\sigma_1\ne \emptyset .
\eqno(2.12)$$ We will call $(\tau_b,\tau_{b+1})$ a bad pair (for $\sigma$).
[**Proof**]{}. Consider the sequence of sets $$\sigma_1\cap\tau_1, \sigma_1\cap\tau_2, \dots , \sigma_1\cap\tau_N .$$ Produce from it a sequence of numbers 0,1,2 by the following rule: replace $\sigma_1\cap\tau_b$ by 2, if it coincides with $\tau_b$, by 0 if it is empty, and by 1 otherwise. Cases (i) and (ii) above together will furnish all sequences of the form $(2\dots 20\dots 0)$, $(2\dots 210\dots 0)$, $(10\dots 0)$. Each remaining admissible sequence will contain at least one pair of neighbors from the list 01, 02, 11, 12. For the respective pair of sets, (2.12) will hold.
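The case distinction of the Lemma, together with the 0/1/2 encoding used in its proof, amounts to a small decision procedure. A sketch (assuming $\sigma$ is a genuine 2–partition, so that the code is never all 2's or all 0's):

```python
def how_sigma_breaks(sigma1, tau):
    """Classify the 2-partition (sigma1, B \\ sigma1) against the ordered partition
    tau (a list of disjoint sets): encode each part tau_b by 2 (contained in sigma1),
    0 (disjoint from sigma1) or 1 (otherwise), as in the proof of Lemma 2.4.2."""
    code = [2 if part <= sigma1 else (0 if not (part & sigma1) else 1) for part in tau]
    ones = [b for b, c in enumerate(code) if c == 1]
    if not ones:                                  # candidate pattern 2...2 0...0
        a = code.count(2)
        if all(c == 0 for c in code[a:]):
            return ("breaks between", a - 1, a)   # between tau_a and tau_{a+1} (0-based)
    elif len(ones) == 1:                          # candidate pattern 2...2 1 0...0
        b = ones[0]
        if all(c == 2 for c in code[:b]) and all(c == 0 for c in code[b + 1:]):
            return ("breaks at", b)
    return ("does not break",)                    # some bad pair (tau_b, tau_{b+1}) exists

# sigma = ({1,2}, {3,4}) breaks tau = ({1}, {2,3}, {4}) at its middle part:
print(how_sigma_breaks({1, 2}, [{1}, {2, 3}, {4}]))   # ('breaks at', 1)
```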
[**2.5. Fan $F_B$.**]{} In this subsection we will describe a fan $F_B$ in the space $N_B\otimes{\bold{R}}$, where $N_B:=\roman{Hom}\,(\bold{G}_m,T_B)$, $T_B:=\bold{G}_m^B/\bold{G}_m$ as in the beginning of 2.4. Up to notation, we use \[Fu\] as the basic reference on fans and toric varieties.
Clearly, $N_B$ can be canonically identified with $\bold{Z}^B/\bold{Z}$, the latter subgroup being embedded diagonally. Similarly, $N_B\otimes{\bold{R}}=\bold{R}^B/\bold{R}$. We will write the vectors of this space (resp. lattice) as functions $B\to \bold{R}$ (resp. $B\to \bold{Z}$) considered modulo constant functions. For a subset $\beta\subset B$, let $\chi_{\beta}$ be the function equal 1 on $\beta$ and 0 elsewhere.
The fan $F_B$ consists of the following $l$–dimensional cones $C(\tau )$ labeled by $(l+1)$–partitions $\tau$ of $B$.
If $\tau$ is the trivial 1–partition, $C(\tau )=\{0\}$.
If $\sigma$ is a 2–partition, $C(\sigma )$ is generated by $\chi_{\sigma_1}$, or, equivalently, $-\chi_{\sigma_2}$, modulo constants.
Generally, let $\tau$ be an $(l+1)$–partition, and $\sigma^{(i)},\,i=1,\dots,l$, the respective good family of 2–partitions (2.9). Then $C(\tau )$ as a cone is generated by all $C(\sigma^{(i)})$.
It is not quite obvious that $F_B$ is well defined. We sketch the relevant arguments.
First, all cones $C(\tau )$ are strongly convex. In fact, according to \[Fu\], p. 14, it suffices to check that $C(\tau )\cap (-C(\tau ))=\{0\}$. But $C(\tau )$ consists of classes of linear combinations with non–negative coefficients of functions $$\chi_{\tau_1},\, \chi_{\tau_1}+\chi_{\tau_2},\,
\dots ,\, \chi_{\tau_1}+\dots +\chi_{\tau_l}$$ if $\tau$ has length $l+1$. A non–vanishing function of this type cannot be constant.
Second, the same argument shows that $C(\tau )$ is actually $l$–dimensional.
Third, since the cone $C(\tau )$ is simplicial, one sees that $(l-1)$–faces of $C(\tau )$ are exactly $C(\tau^{(i)})$ where $\tau^{(i)}$ is obtained from $\tau$ by uniting $\tau_i$ with $\tau_{i+1}$, which is equivalent to omitting $C(\sigma^{(i)})$ from the list of generators. More generally, $C(\tau^{\prime})$ is a face of $C(\tau )$ iff $\tau\le\tau^{\prime}$ as in (2.3), that is, if $\tau$ is a refinement of $\tau^{\prime}$.
Fourth, let $C(\tau^{(i)}),\,i=1,2,$ be two cones. We have to check that their intersection is a cone of the same type. An obvious candidate is $C(\tau )$ where $\tau$ is the crudest common refinement of $\tau^{(1)}$ and $\tau^{(2)}$. This is the correct answer.
In order to see this, let us give a different description of $F_B$ which will simultaneously show that the support of $F_B$ is the whole space. Let $\chi :B\to\bold{R}$ represent an element $\bar{\chi}\in N_B\otimes\bold{R}.$ It defines a unique partition $\tau$ of $B$ consisting of the level sets of $\chi$ ordered in such a way that the values of $\chi$ decrease. Clearly, $\tau$ depends only on $\bar{\chi}$, and $\chi$ modulo constants can be expressed as a linear combination of $\chi_{\tau_1}+\dots +\chi_{\tau_i}$, $1\le i\le l$, with positive coefficients. In other words, $\chi$ belongs to the interior part of $C(\tau )$. On the boundary, some of the strict inequalities between the consecutive values of $\chi$ become equalities. This proves the last assertion.
We see now that $F_B$ satisfies the definition of \[Fu\], p. 20, and so is a fan.
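The level–set description above is effectively an algorithm for locating the cone of $F_B$ that contains a given class. A minimal sketch:

```python
from collections import defaultdict

def cone_of(chi):
    """chi: dict sending the elements of B to numbers (a representative of a class
    modulo constants).  Returns the ordered partition tau whose parts are the level
    sets of chi, ordered by decreasing value; chi lies in the relative interior of
    the cone C(tau)."""
    levels = defaultdict(set)
    for b, value in chi.items():
        levels[value].add(b)
    return [levels[v] for v in sorted(levels, reverse=True)]

print(cone_of({1: 3.0, 2: 1.0, 3: 3.0, 4: 0.0}))   # [{1, 3}, {2}, {4}]
```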
[**2.6. Toric varieties $\overline{\Cal{L}}_B$.**]{} We now define $\overline{\Cal{L}}_B$ (later to be identified with $\overline{L}_B$) as the toric variety associated with the fan $F_B$.
To check that it is smooth, it suffices to show that each $C(\tau )$ is generated by a part of a basis of $N_B$ (see \[Fu\], p. 29). In fact, let us choose a total ordering of $B$ such that if $i\in\tau_k,\,j\in\tau_l$ and $k<l$, then $i<j$. Let $B_k\subset B$ consist of the first $k$ elements of $B$ in this ordering. Then the classes of the characteristic functions of $B_1, B_2,\dots ,B_{n-1}$, $n=\roman{card}\,B$, form a basis of $N_B$, and $\{\chi_{\sigma^{(i)}}\}$ is a part of it.
To check that $\overline{\Cal{L}}_B$ is proper, we have to show that the support of $F_B$ is the total space. We have already proved this.
As any toric variety, $\overline{\Cal{L}}_B$ carries a family of subvarieties which are the closures of the orbits of $T_B$ and which are in a natural bijection with the cones $C(\tau )$ in $F_B$. We denote them $\overline{\Cal{L}}_{\tau}$. They are smooth. The respective orbit which is an open subset of $\overline{\Cal{L}}_{\tau}$ is denoted $\Cal{L}_{\tau}$.
[**2.6.1. Forgetful morphisms and a family of pointed curves over $\overline{\Cal{L}}_B$.**]{} Assume that $B\subset B^{\prime}$. Then we have the projection morphism $\bold{Z}^{B^{\prime}}\to\bold{Z}^B$ which induces the morphism $f^{B^{\prime},B}:\,N_{B^{\prime}}\to N_B.$ It satisfies the property stated in the last lines of \[Fu\], p. 22: for each cone $C(\tau^{\prime})\in F_{B^{\prime}}$, there exists a cone $C(\tau )\in F_{B}$ such that $f^{B^{\prime},B}(C(\tau^{\prime}))\subset C(\tau ).$ In fact, $\tau$ is obtained from $\tau^{\prime}$ by deleting elements of $B^{\prime}\setminus B$ and then deleting the empty subsets of the resulting partition of $B$.
Therefore, we have a morphism $f^{B^{\prime},B}_*:\,
\overline{\Cal{L}}_{B^{\prime}}\to\overline{\Cal{L}}_B$ (\[Fu\], p. 23) which we will call [*forgetful one*]{} (it forgets elements of $B^{\prime}\setminus B$).
If $B^{\prime}\setminus B$ consists of one element, then the forgetful morphism $\overline{\Cal{L}}_{B^{\prime}}\to\overline{\Cal{L}}_B$ has a natural structure of a stable $(\{0,\infty\},B)$–pointed curve of genus zero.
[**Proof.**]{} Let us first study the fibers of the forgetful morphism. Let $\tau$ be a partition of $B$ of length $l+1$ and $\Cal{L}_{\tau}$ the respective orbit in $\overline{\Cal{L}}_B.$ Its inverse image in $\overline{\Cal{L}}_{B^{\prime}}$ is contained in the union $\cup \overline{\Cal{L}}_{\tau^{\prime}}$ where $\tau^{\prime}$ runs over partitions of $B^{\prime}$ obtained by adding the forgotten point either to one of the parts $\tau_i$, or inserting it in between $\tau_i$ and $\tau_{i+1}$, or else putting it at the very beginning or at the very end as a separate part.
The inverse image of any point $x\in\Cal{L}_{\tau}$ is acted upon by the multiplicative group $\bold{G}_m =\roman{Ker}\,(T_{B^{\prime}}\to T_B)$. This action breaks the fiber into a finite number of orbits which coincide with the intersections of this fiber with various $\Cal{L}_{\tau^{\prime}}$ described above. When $\tau^{\prime}$ is obtained by adding the forgotten point to one of the parts, this intersection is a torsor over the kernel, otherwise it is a point. As a result, we get that the fiber is a chain of $\bold{P}^1$’s, whose components are labeled by the components of $\tau$ and singular points by the neighboring pairs of components.
The forgetful morphism is flat, because locally in toric coordinates it is described as adjoining a variable and localization.
In order to describe the two white sections of the forgetful morphism, consider two partitions $(B^{\prime}\setminus B,B)$ and $(B,B^{\prime}\setminus B)$ of $B^{\prime}$ and the respective closed strata. It is easily seen that the forgetful morphism restricted to these strata identifies them with $\overline{\Cal{L}}_B$. We will call them $x_0$ and $x_{\infty}$ respectively.
Finally, to define the $j$–th black section, $j\in B$, consider the morphism of lattices $s_j:\,N_B\to N_{B^{\prime}}$ which extends a function $\chi$ on $B$ to the function $s_j(\chi )$ on $B^{\prime}$ taking the value $\chi (j)$ at the forgotten point. This morphism satisfies the condition of \[Fu\], p. 22: each cone $C(\tau )$ from $F_B$ lands in an appropriate cone $C(\tau^{\prime})$ from $F_{B^{\prime}}$. This must be quite clear from the description at the end of 2.5.1: $\tau^{\prime}$ is obtained from $\tau$ by adding the forgotten point to the same part to which $j$ belongs. Hence we have the induced morphisms $s_{j*}:\,\overline{\Cal{L}}_B\to
\overline{\Cal{L}}_{B^{\prime}}$ which obviously are sections. Moreover, they do not intersect $x_0$ and $x_{\infty}$, and they are distributed among the components of the reducible fibers exactly as expected.
[**2.6.3. Theorem.**]{} The morphism $\overline{\Cal{L}}_B\to\overline{L}_B$ inducing the family described in Proposition 2.6.2 is an isomorphism.
This can be proved by induction on $\roman{card}\,B$ with the help of the more detailed analysis of the forgetful morphism, as above. We omit the details because they are not instructive.
An important corollary of this Theorem is the existence of a surjective birational morphism $\overline{M}_{0,n+2}\to\overline{L}_n$ corresponding to any choice of two different labels $i,j$ in $(1,\dots ,n+2)$. In terms of the respective functors, this morphism blows down all the components of a stable $(n+2)$–labeled curve except for those that belong to the single path from the component containing the $i$–th point to the one containing the $j$–th point.
In fact, M. Kapranov has shown the existence of such a morphism for $\overline{\Cal{L}}_n$ in place of $\overline{L}_n$ (see \[Ka2\], p. 102). He used a different description of $\overline{\Cal{L}}_n$ in terms of the defining polyhedron, which he identified with the so called permutohedron, the convex hull of the $\bold{S}_n$–orbit of $(1,2,\dots ,n)$. He has also proved that $\overline{\Cal{L}}_n$ can be identified with the closure of the generic orbit of the torus in the space of complete flags in an $n$–dimensional vector space.
[**2.7. Combinatorial model of $H^*(\overline{\Cal{L}}_B)$.**]{} We will denote by $[\overline{\Cal{L}}_{\sigma}]_*$ (resp. $[\overline{\Cal{L}}_{\sigma}]^*$) the homology (resp. the dual cohomology) class of $\overline{\Cal{L}}_{\sigma}$.
The remaining parts of this section (and the Appendix) are dedicated to the study of linear and non–linear relations between these classes, in the spirit of \[KM\] and \[KMK\], but with the help of the standard toric techniques.
Consider a family of pairwise commuting independent variables $l_{\sigma}$ numbered by 2–partitions of $B$ and introduce the ring $$H^*_B:=\Cal{R}_B/I_B
\eqno(2.13)$$ where $\Cal{R}_B$ is freely generated by $l_{\sigma}$ (over an arbitrary coefficient ring $k$), and the ideal $I_B$ is generated by the following elements indexed by pairs $i,j\in B$: $$r^{(1)}_{ij}:=
\sum_{\sigma :\,i\sigma j} l_{\sigma}-
\sum_{\tau :\,j\tau i} l_{\tau} ,
\eqno(2.14)$$ $$r^{(2)}(\sigma ,\tau ):= l_{\sigma} l_{\tau}\qquad
\roman{if}\ i\sigma j\ \roman{and}\ j\tau i\ \roman{for\ some}\ i,j.
\eqno(2.15)$$
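As an elementary illustration of this presentation: for $\roman{card}\,B=3$ the relations (2.14) span a 2–dimensional subspace of the space spanned by the $2^3-2=6$ generators $l_{\sigma}$, leaving $2^3-3-1=4$ independent classes in degree one, in agreement with the coefficient of $q$ in $p_3$ (see 2.3.1). A brute–force sketch (numpy is used only for the rank computation):

```python
from itertools import combinations
import numpy as np

B = [1, 2, 3]
# the 2-partitions (sigma_1, sigma_2) of B: sigma_1 runs over proper nonempty subsets
two_partitions = [(set(s), set(B) - set(s))
                  for r in range(1, len(B))
                  for s in combinations(B, r)]

# the relations r^(1)_{ij} of (2.14): coefficient +1 on l_sigma if i comes before j,
# -1 if j comes before i, and 0 if sigma does not separate i and j
rows = []
for i, j in combinations(B, 2):
    rows.append([1 if (i in s1 and j in s2) else (-1 if (j in s1 and i in s2) else 0)
                 for s1, s2 in two_partitions])

rank = np.linalg.matrix_rank(np.array(rows))
print(len(two_partitions) - rank)     # 6 - 2 = 4 independent degree-one classes
```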
[**2.7.1. Theorem.**]{} a\) There is a well defined ring isomorphism $\Cal{R}_B/I_B\to A^*(\overline{\Cal{L}}_B,k)$ such that $l_{\sigma}\,\roman{mod}\,I_B\,\mapsto [\overline{\Cal{L}}_{\sigma}]^*$. The Chow ring $A^*(\overline{\Cal{L}}_B,k)$ and the cohomology ring $H^*(\overline{\Cal{L}}_B,k)$ are canonically isomorphic.
b\) The boundary divisors (strata corresponding to 2–partitions) intersect transversally.
[**Proof.**]{} We must check that the ideal of relations between $2^n-2$ dual classes of the boundary divisors $[\overline{\Cal{L}}_{\sigma}]^*$ contains and is generated by the following relations: $$R^{(1)}_{ij}:\qquad \sum_{\sigma :\,i\sigma j} [\overline{\Cal{L}}_{\sigma}]^*-
\sum_{\tau :\,j\tau i} [\overline{\Cal{L}}_{\tau}]^* =0.
\eqno(2.16)$$ If $i\sigma j$ and $j\tau i$, then $$R^{(2)}(\sigma ,\tau ):\qquad [\overline{\Cal{L}}_{\sigma}]^*\cdot [\overline{\Cal{L}}_{\tau}]^* =0.
\eqno(2.17)$$ We refer to the Proposition on p. 106 of \[Fu\] which gives a system of generators for this ideal for any smooth proper toric variety (Fulton additionally assumes projectivity which we did not check, but see \[Da\], Theorem 10.8 for the general proper case).
In our notation, these generators look as follows.
To get the complete system of linear relations, we must choose some elements $m$ in the dual lattice of $N_B$ spanning this lattice and form the sums $\sum_{\sigma}m(\chi_{\sigma_1})[\overline{\Cal{L}}_{\sigma}]^*$, where $\sigma$ runs over all 2–partitions. In our case, the dual lattice is spanned by the linear functionals $m_{ij}:\,\chi \mapsto \chi\,(i) -\chi\,(j)$ for all pairs $i,j\in B.$ Writing the respective relation, we get (2.16).
The complete system of nonlinear relations is given by the monomials $l_{\sigma^{(1)}}\dots l_{\sigma^{(k)}}$ such that $(C(\sigma^{(1)}),\dots ,C(\sigma^{(k)}))$ do not span a cone in $F_B$. This means that some pair $(C(\sigma^{(a)}),C(\sigma^{(b)}))$ already does not span a cone, because otherwise the respective 2–partitions would form a good family (cf. 2.4.1). And in view of Lemma 2.4.2 (iii), we can find $i,j\in B$ such that $i\sigma^{(a)}j$ and $j\sigma^{(b)}i$. Hence (2.16) and (2.17) together constitute a generating system of relations.
The remaining statements are true for all smooth complete toric varieties defined by simplicial fans.
[**2.8. Combinatorial structure of the cohomology ring.**]{} In the remaining part of this section we fix a finite set $B$ and study $H^*_B$ as an abstract ring.
For an $(N+1)$–partition $\tau$ define the respective [*good monomial*]{} $m(\tau )$ by the formula $$m(\tau )=l_{\sigma^{(1)}}\dots l_{\sigma^{(N)}}\in \Cal{R}_B .$$ If $\tau$ is the trivial 1–partition, we put $m(\tau ):=1.$ In view of the Theorem 2.7.1, $m(\tau )$ represents the cohomology class of $\overline{\Cal{L}}_{\tau}$.
Notice that if we have two good families of 2–partitions whose union is also good, then the product of the respective good monomials is a good monomial. This defines a partial operation $*$ on pairs of partitions $$m(\tau^{(1)})\,m(\tau^{(2)})=m(\tau^{(1)}*\tau^{(2)}).$$
[**2.8.1. Proposition.**]{} Good monomials and $I_B$ span $\Cal{R}_B.$ Therefore, images of good monomials span $H^*_B.$
[**Proof**]{}. We make induction on the degree. In degrees zero and one the statement is clear because $l_{\sigma}$ are good. If it is proved in degree $N$, it suffices to check that for any 2–partition $\sigma$ and any nontrivial partition $\tau$, $l_{\sigma}m(\tau )$ is a linear combination of good monomials modulo $I_B.$ We will consider the three cases of Lemma 2.4.2 in turn.
\(i) [*$\sigma$ breaks $\tau$ between $\tau_{a}$ and $\tau_{a+1}$*]{}.
This means that $l_{\sigma}$ divides $m(\tau )$.
Choose $i\in \tau_a, j\in \tau_{a+1}.$ In view of (2.14), we have $$\left(\sum_{\rho :\,i\rho j} l_{\rho}-
\sum_{\rho :\,j\rho i} l_{\rho}\right) m(\tau )\in I_B.
\eqno(2.18)$$ But if $j\rho i$, then $l_{\rho}m(\tau )\in I_B$ because of (2.15). Among the terms with $i\rho j$ there is one $l_{\sigma}.$ For all other $\rho$’s, $l_{\rho}$ cannot divide $m(\tau )$ since all other divisors of $m(\tau )$ put $i$ and $j$ in the same part of the respective partition. Therefore, $l_{\rho}m(\tau )$ either belongs to $I_B$, or is good. So finally (2.18) allows us to express $l_{\sigma}m(\tau )$ as a sum of good monomials and an element of $I_B:$ $$l_{\sigma}m(\tau )= -\sum_{\rho\ne\sigma , \,i\rho j} m(\rho *\tau )
\ \roman{mod}\, I_B$$ where the terms for which $\rho *\tau$ is not defined must be interpreted as zero. More precisely, there are two types of non–vanishing terms. One corresponds to all 2–partitions $\alpha$ of $\tau_a$ such that $i\in\tau_{a1}$ which we will write as $i\alpha$. Another corresponds to 2–partitions $\beta$ of $\tau_{a+1}$ with $j$ belonging to the second part, $\beta j$: $$l_{\sigma}m(\tau )= -\sum_{\alpha :\,i\alpha}m(\tau (\alpha))
-\sum_{\beta :\,\beta j}m(\tau (\beta)) \ \roman{mod}\, I_B .
\eqno(2.19)$$ Notice that there are several ways to write the right hand side, depending on the choice of $i,\,j.$ Hence good monomials are not linearly independent modulo $I_B.$
\(ii) [*$\sigma$ breaks $\tau$ at $\tau_{a}$*]{}.
According to the analysis above, this means that $$l_{\sigma}m(\tau )=m(\sigma *\tau )=m(\tau (\alpha))
\eqno(2.20)$$ for an appropriate partition $\alpha$ of $\tau_a$.
\(iii) [*$\sigma$ does not break $\tau$*]{}.
In this case, let $(\tau_b,\tau_{b+1})$ be a bad pair for $\sigma$. Then from (2.12) it follows that there exist $i,j\in B$ such that $i\sigma j$ and $j \sigma^{(b)}i$. Hence $l_{\sigma}m(\tau )$ is divisible by $r^{(2)}(\sigma ,\sigma^{(b)})$ and $$l_{\sigma}m(\tau ) = 0\ \roman{mod}\,I_B.$$
[**2.8.2. Linear combinations of good monomials belonging to $I_B$.**]{} Let $\tau =(\tau_1,\dots ,\tau_{N})$ be a partition of $B$. Choose $a\le N$ such that $|\tau_a|\ge 2$, and two elements $i,j\in \tau_a,\ i\ne j.$ For any ordered 2–partition $\alpha = (\tau_{a1},\tau_{a2})$ of $\tau_a$, denote by $\tau (\alpha )$ the induced $N+1$–partition of $B$ as above: $$(\tau_1,\dots ,\tau_{a-1},\tau_{a1},\tau_{a2},\tau_{a+1},\dots ,\tau_N) .$$ Finally, put $$r^{(1)}_{ij}(\tau , a):=\sum_{\alpha :\,i\alpha j} m(\tau (\alpha )) -
\sum_{\alpha :\,j\alpha i} m(\tau (\alpha )).
\eqno(2.21)$$ Choosing for $\tau$ the trivial 1–partition, we get (2.14) so that these elements span the intersection of $I_B$ with the space of good monomials of degree one.
Generally, all $r^{(1)}_{ij}(\tau , a)$ belong to $I_B.$ In fact, keeping the notations above, consider $$r_{ij}^{(1)}m(\tau )= \left( \sum_{\rho :\,i\rho j} l_{\rho}-
\sum_{\rho :\,j\rho i} l_{\rho}\right) m(\tau ) \in I_B .
\eqno(2.22)$$ Arguing as above, we see that the summand corresponding to $\rho$ in (2.18) either belongs to $I_B$, or is a good monomial, and the latter happens exactly for those partitions $\rho$ that are of the type $\tau (\alpha )$ with either $i\alpha j$, or $j\alpha i.$ Hence (2.21) lies in $I_B.$ This proves our claim.
[**2.9. Theorem.**]{} Elements (2.21) span the intersection of $I_B$ with the space generated by good monomials.
[**Proof.**]{} Define the linear space $H_{*B}$ generated by the symbols $\mu (\tau )$ for all partitions of $B$ as above which satisfy analogs of the linear relations (2.21): for all $(\tau,\tau_a,i,j)$ as above we have $$\sum_{\alpha :\,i\alpha j} \mu (\tau (\alpha )) -
\sum_{\alpha :\,j\alpha i} \mu (\tau (\alpha )) =0.
\eqno(2.23)$$ There exists an (obviously unique) structure of the $H^*_B$–module on $H_{*B}$ with the following multiplication table.
\(i) If $\sigma$ breaks $\tau$ between $\tau_a$ and $\tau_{a+1}$, then for any choice of $i\in \tau_a,\,j\in\tau_{a+1}$ $$l_{\sigma}\mu (\tau )= -\sum_{\alpha :\,i\alpha}\mu (\tau (\alpha))
-\sum_{\beta :\,\beta j}\mu (\tau(\beta )).
\eqno(2.24)$$ (cf. (2.19)).
\(ii) If $\sigma$ breaks $\tau$ at $\tau_a$, then $$l_{\sigma}\mu (\tau )=\mu (\sigma *\tau ).
\eqno(2.25)$$ (cf. (2.20)).
\(iii) If $\sigma$ does not break $\tau$, then $$l_{\sigma}\mu (\tau ) =0.
\eqno(2.26)$$
Our proof of the Technical Lemma consists in the direct verification that the prescriptions (2.24)–(2.26) are compatible with all relations that we have postulated. Unfortunately, such a strategy requires the painstaking case–by–case treatment of a long list of combinatorially distinct situations, and we relegate it to the Appendix.
[**2.9.2. Deduction of Theorem 2.9 from the Technical Lemma.**]{} Since elements (2.21) belong to $I_B,$ there exists a surjective linear map $s:\,H_{*B}\to H^*_B$, $\mu (\tau )\mapsto m(\tau ).$ Now denote by $\bold{1}$ the element $\mu (\varepsilon )$ where $\varepsilon$ is the 1–partition. Then $t:\,m(\sigma )\mapsto m(\sigma )\bold{1}$ is a linear map $H^*_B\to H_{*B}$. From (2.25) one easily deduces that $m(\tau )\bold{1} =\mu (\tau )$ so that $s$ and $t$ are mutually inverse. Therefore, (2.21) span the linear relations between the images of good monomials in $H^*_B.$
According to Theorem 2.7.1, $H_{*B}$, together with its structure of $H^*_B$–module, is a combinatorial model of the homology module $H_*(\overline{\Cal{L}}_B,k)$. The generators $\mu (\tau )$ correspond to $[\overline{\Cal{L}}_{\tau}]_*$.
**§3. Pencils of flat connections**
**and the Commutativity Equations**
[**3.1. Notation.**]{} Let $M$ be a (super)manifold over a field $k$ of characteristic zero in one of the standard categories (smooth, complex analytic, schemes, formal $\dots$). We use the conventions spelled out in \[Ma\], I.1. In particular, differentials in the de Rham complex $(\Omega^*_M,d)$ and connections are odd. This determines our sign rules; parity of an object $x$ is denoted $\widetilde{x}.$
Let $\Cal{F}$ be a locally free sheaf (of sections of a vector bundle) on $M,$ $\nabla_0$ a connection on $\Cal{F}$, that is an odd $k$–linear operator $\Cal{F}\to\Omega^1_M\otimes\Cal{F}$ satisfying the Leibniz identity $$\nabla_0(\varphi f)=d\varphi\otimes f+(-1)^{\widetilde{\varphi}}
\varphi\,\nabla_0f, \ \varphi\in \Cal{O}_M,\,f\in\Cal{F}.
\eqno(3.1)$$ This operator extends to a unique operator on the $\Omega_M^*$–module $\Omega_M^*\otimes \Cal{F}$ denoted again $\nabla_0$ and satisfying the same identity (3.1) for any $\varphi\in\Omega_M$. Any other connection differential $\nabla$ restricted to $\Cal{F}$ has the form $\nabla_0+\Cal{A}$ where $\Cal{A}:\,\Cal{F}\to\Omega^1_M\otimes \Cal{F}$ is an odd $\Cal{O}_M$–linear operator: $\Cal{A} (\varphi f)=
(-1)^{\widetilde{\varphi}}\varphi\Cal{A}(f).$ Any connection naturally extends to the whole tensor algebra generated by $\Cal{F},$ in particular, to $\Cal{E}nd\,\Cal{F}.$
The connection $\nabla_0$ is called flat, iff $\nabla_0^2=0.$ [*A pencil of flat connections*]{} is a line in the space of connections $\nabla_{\lambda}:=\nabla_0 +\lambda\Cal{A}$ such that $\nabla_{\lambda}^2=0$ ($\lambda$ is an even parameter). In the smooth, analytic or formal category, $\nabla_0$ is flat iff $\Cal{F}$ locally admits a basis of flat sections $f, \nabla_0f=0$.
[**3.2. Proposition.**]{} $\nabla_0 +\lambda\Cal{A}$ is a pencil of flat connections iff the following two conditions are satisfied:
\(i) Everywhere locally on $M$, we have $$\Cal{A}=\nabla_0\Cal{B}
\eqno(3.2)$$ for some $\Cal{B}\in\Cal{E}nd\,\Cal{F}$.
\(ii) Such an operator $\Cal{B}$ satisfies the quadratic differential equation $$\nabla_0\Cal{B}\wedge\nabla_0\Cal{B} =0.
\eqno(3.3)$$
[**Proof.**]{} Calculating the coefficient of $\lambda$ in $\nabla_{\lambda}^2=0$, we get $\nabla_0\Cal{A}=0$. But the complex $\Omega_M^*\otimes\Cal{F}$ is the resolution of the sheaf of flat sections $\roman{Ker}\,\nabla_0\subset\Cal{F}.$ This furnishes (i); (ii) means the vanishing of the coefficient of $\lambda^2.$
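To spell out the computation (a routine expansion, added here for the reader's convenience): since $\lambda$ is even while $\nabla_0$ and $\Cal{A}$ are odd, $$\nabla_{\lambda}^2=\nabla_0^2+\lambda\,(\nabla_0\Cal{A}+\Cal{A}\nabla_0)+\lambda^2\,\Cal{A}\wedge\Cal{A}
=\lambda\,\nabla_0(\Cal{A})+\lambda^2\,\Cal{A}\wedge\Cal{A}\,,$$ so the vanishing of $\nabla_{\lambda}^2$ identically in $\lambda$ amounts to $\nabla_0\Cal{A}=0$, whence locally $\Cal{A}=\nabla_0\Cal{B}$, together with $\Cal{A}\wedge\Cal{A}=\nabla_0\Cal{B}\wedge\nabla_0\Cal{B}=0$.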
[**3.2.1. Remarks.**]{} a) Write $\Cal{B}$ as a matrix in a basis of $\nabla_0$–flat sections of $\Cal{F}$, whose entries are local functions on $M.$ Then (3.3) becomes $$d\Cal{B}\wedge d\Cal{B}=0.
\eqno(3.4)$$ These equations written in local coordinates $(t^i)$ on $M$ were called “$t$–part of the $t$–$t^*$ equations” by S. Cecotti and C. Vafa. A. Losev in \[Lo1\] suggested to call them “the Commutativity Equations”.
b\) If $\nabla_0\varphi_0=0,$ then $$(\nabla_0+\lambda\nabla_0\Cal{B})(e^{-\lambda\Cal{B}}\varphi_0)=0.$$
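To make remark a) explicit in the simplest situation (purely even local coordinates $(t^i)$, so that no additional signs occur), write $d\Cal{B}=\sum_i\partial_i\Cal{B}\,dt^i$; then $$d\Cal{B}\wedge d\Cal{B}=\sum_{i<j}\left(\partial_i\Cal{B}\,\partial_j\Cal{B}-
\partial_j\Cal{B}\,\partial_i\Cal{B}\right)dt^i\wedge dt^j\,,$$ so that (3.4) says precisely that the matrices $\partial_i\Cal{B}$ pairwise commute, which explains the name.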
[**3.2.2. Pencils of flat connections related to Frobenius manifolds.**]{} Any solution to the Associativity Equations produces a pencil of flat connections.
To explain this we will use the geometric language due to B. Dubrovin (and the notation of \[Ma\], I.1.5). Consider a Frobenius manifold $(M,g, \circ )$ where $$\circ :\,\Cal{T}_M\otimes_{\Cal{O}_M}\Cal{T}_M\to \Cal{T}_M$$ is a (super)commutative associative multiplication on the tangent sheaf satisfying the potentiality condition, and $g$ is an invariant flat metric (no positivity condition is assumed, only symmetry and non–degeneracy). Denote by $\nabla_0$ the Levi–Civita connection of $g$. Finally, denote by $\Cal{A}$ the operator obtained from the Frobenius multiplication in $\Cal{T}_M$ (\[Ma\], I.1.4). In other words, consider the pencil of connections on $\Cal{F}=\Cal{T}_M$ whose covariant derivatives are $$(\nabla_{0}+\lambda\Cal{A})_X(Y):=\nabla_{0,X}(Y)+\lambda\,X\circ Y\,.$$ This pencil is flat (see \[Ma\], Theorem I.1.5, p. 20). In fact, $\Cal{B}$ written in a basis of $\nabla_0$–flat coordinates and the respective flat vector fields is simply the matrix of the second derivatives of a local potential $\Phi$ (with one subscript raised). This is the first structure connection of $M$.
This pencil admits an infinite dimensional deformation: one should take the canonical extension of the potential to the large phase space and consider the coordinates with gravitational descendants as parameters of the deformation.
Another family of flat connections, this time on the [*cotangent*]{} sheaf of a Frobenius manifold $M$ admitting an Euler vector field $E$ (see \[Ma\], pp. 23–24), is defined as follows. Denote the scalar product on vector fields $\check{g}_{\lambda}(X,Y)
:=g((E-\lambda )^{-1}\circ X,Y).$ The inverse form induces a pencil of flat metrics on the cotangent sheaf, whose Levi–Civita connections however do not form a pencil of flat connections in our sense (see \[Du1\], Appendix D, and \[Du3\] for a general discussion of such setup). This is the second structure connection of $M$.
[**3.2.3. Flat coordinates and gravitational descendants.**]{} One can show that 1–forms on $M$ flat with respect to the dualized first structure connection are closed and therefore locally exact. Their integrals are called deformed flat coordinates. In \[Du2\], Example 2.3 and Theorem 2.2, B. Dubrovin gives explicit formal series in $\lambda$ ($z$ in his notation) for suitably normalized deformed flat coordinates. Coefficients of these series involve some correlators with gravitational descendants, namely those for which the non–trivial operators $\tau_p$ are applied only at one point. In \[KM2\] and \[Ma\], VI.7.2, p.278, it was shown that two–point correlators of this kind determine a linear operator in the large phase space which transforms the modified correlators with descendants into non–modified ones (in any genus). This is important because a priori only modified correlators are defined for an arbitrary Cohomological Field Theory in the sense of \[KM1\], which is not necessarily quantum cohomology of a manifold.
[**3.2.4. Pencils of flat connections in a global setting.**]{} Pencils of flat connections appear also in the context of Simpson’s non–abelian Hodge theory. Briefly, consider a smooth projective manifold $M$ over $\bold{C}$. One can define two moduli spaces, $Mod_1$ and $Mod_2$. The first one classifies flat connections (on variable vector bundles $\Cal{F}$ with vanishing rational Chern classes) with semisimple Zariski closure of the monodromy group. The second one classifies semistable Higgs pairs $(\Cal{F}, \Cal{A})$ where $\Cal{A}$ is an operator as in 3.1, satisfying only the condition $\Cal{A}\wedge\Cal{A}=0.$ (In fact, one should only consider smooth points of the respective moduli spaces). N. Hitchin, C. Simpson, Fujiki et al. established that $Mod_1$ and $Mod_2$ are canonically isomorphic as $C^{\infty}$–manifolds, but their complex structures $I$, $J$ are different, and together with $K=IJ$ produce a hypercomplex manifold.
P. Deligne has shown that the respective twistor space is precisely the moduli space of the pencils of flat connections on $M$ (where the Higgs complex structure corresponds to the point $\lambda =\infty$ in our notation).
For details, see \[Si\].
[**3.3. Formal solutions to the Commutativity Equations and the homology of $\overline{L}_n.$**]{} In \[KM1\] and \[KMK\] it was shown that formal solutions to the Associativity Equations are cyclic algebras over the cyclic genus zero homology modular operad $(H_*(\overline{M}_{0,n+1}))$ (see also \[Ma\], III.4). The main goal of this section is to show the similar role of the homology of the spaces $\overline{L}_n$ in the theory of the Commutativity Equations. This was discovered and discussed on a physical level in \[Lo1\], \[Lo2\]. Here we supply precise mathematical statements with proofs.
Unlike the case of the Associativity Equations, we will have to deal here with modules over an algebra (depending explicitly on the base space) rather than with algebras over an operad. The main ingredient of the construction is the direct sum of the homology spaces of all $\overline{L}_n$ endowed with the multiplication coming from the boundary morphisms. We work with the combinatorial models of these spaces defined in 2.9.1.
We start with some preparations. Let $V=\oplus_{n=1}^{\infty}V_n$ be a graded associative $k$–algebra (without identity) in the category of vector $k$–superspaces over a field $k$. We will call it [*an $\bold{S}$–algebra*]{}, if for each $n$, an action of the symmetric group $\bold{S}_n$ on $V_n$ is given such that the multiplication map $V_m\otimes V_n\to V_{m+n}$ is compatible with the action of $\bold{S}_m\times\bold{S}_n$ embedded in an obvious way into $\bold{S}_{m+n}.$
If $V$ is an $\bold{S}$–algebra, then the sum of subspaces $J_n$ spanned by $(1-s)v,\,s\in\bold{S}_n,\,v\in V_n,$ is a double–sided ideal in $V.$ Hence the sum of the coinvariant spaces $V_{\bold{S}_n} =V_n/J_n$ is a graded ring which we denote $V_{\bold{S}}$.
If $V$, $W$ are two $\bold{S}$–algebras, then the diagonal part of their tensor product $\oplus_{n=1}^{\infty}V_n\otimes W_n$ is an $\bold{S}$–algebra as well.
Let $T$ be a vector superspace (below always assumed finite–dimensional). Its tensor algebra (without the rank zero part) is an $\bold{S}$–algebra.
As a less trivial example, consider $H_*:=\oplus_{n=1}^{\infty}
H_{*n}$ where we write $H_{*n}$ for $H_{*\{1,\dots ,n\}}$. The multiplication law is given by what becomes the boundary morphisms in the geometric setting: if $\tau^{(1)}$ (resp. $\tau^{(2)}$) is a partition of $\{1,\dots ,m\}$ (resp. of $\{1,\dots ,n\}$), then $$\mu (\tau^{(1)})\mu (\tau^{(2)}) = \mu (\tau^{(1)}\cup \tau^{(2)})
\eqno(3.5)$$ where the concatenated partition of $\{1,\dots ,m,\, m+1,\dots ,m+n\}$ is defined in an obvious way, shifting all the components of $\tau^{(2)}$ by $m$.
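As a toy example: if $\tau^{(1)}=(\{1,2\},\{3\})$ is an ordered partition of $\{1,2,3\}$ and $\tau^{(2)}=(\{1\},\{2\})$ one of $\{1,2\}$, then $\tau^{(1)}\cup \tau^{(2)}=(\{1,2\},\{3\},\{4\},\{5\})$, the components of $\tau^{(2)}$ being shifted by $3$ and appended after those of $\tau^{(1)}$, so that (3.5) reads $\mu (\tau^{(1)})\mu (\tau^{(2)})=\mu ((\{1,2\},\{3\},\{4\},\{5\}))$.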
Our main protagonist is the algebra of coinvariants of the diagonal tensor product of these examples: $$H_{*}T:= \left(\oplus_{n=1}^{\infty} H_{*n}\otimes T^{\otimes n}
\right)_{\bold{S}} .
\eqno(3.6)$$ We now fix $T$ and another vector superspace $F$ and assume that the ground field $k$ has characteristic zero.
[**3.3.1. Theorem.**]{} There is a natural bijection between the set of representations of $H_{*}T$ in $F$ and the set of pencils of flat connections on the trivial bundle with fiber $F$ on the formal completion of $T$ at the origin.
This bijection will be precisely defined and discussed below: see Proposition 3.6.1. Before passing to this definition and the proof of the Theorem, we will give a down–to–earth coordinate–dependent description of the representations of $H_*T$.
[**3.4. Matrix correlators.**]{} Fix $T$ and choose its parity homogeneous basis $(\Delta_a\,|\,a\in I)$ where $I$ is a finite set of indices.
For any $n\ge 1$, the space $H_{*n}\otimes T^{\otimes n}$ is spanned by the elements $$\mu (\tau^{(n)})\otimes \Delta_{a_1}\otimes\dots\otimes\Delta_{a_n}
\eqno(3.7)$$ where $\tau^{(n)}$ runs over all partitions of $\{1,\dots ,n\}$ whereas $(a_1,\dots ,a_n)$ runs over all maps $\{1,\dots ,n\}\mapsto I:\,
i\to a_i.$
In view of the Theorem 2.9, all linear relations between these elements are spanned by the following ones: choose $(a_1,\dots ,a_n)$ and $(\tau^{(n)}, \tau^{(n)}_r,\,i\ne j\in \tau^{(n)}_r)$, then $$\sum_{\alpha :\,i\alpha j} \mu (\tau^{(n)} (\alpha ))
\otimes \Delta_{a_1}\otimes\dots\otimes\Delta_{a_n} -
\sum_{\alpha :\,j\alpha i} \mu (\tau^{(n)} (\alpha ))
\otimes \Delta_{a_1}\otimes\dots\otimes\Delta_{a_n} =0
\eqno(3.8)$$ where the summation is taken over all 2–partitions $\alpha$ of $\tau^{(n)}_r$ separating $i$ and $j$.
The action of a permutation $i\mapsto s(i)$ on (3.7) is defined by $$s\left(\mu (\tau^{(n)})\otimes \Delta_{a_1}\otimes\dots\otimes\Delta_{a_n}
\right)
=
\varepsilon (s, (a_i))\, \mu (s(\tau^{(n)}))\otimes \Delta_{a_{s(1)}}\otimes\dots\otimes\Delta_{a_{s(n)}} .
\eqno(3.9)$$ Here $\varepsilon (s, (a_i))=\pm 1$ is the sign of the permutation induced by $s$ on the subfamily of odd $\Delta_{a_i}$’s, and $s(\tau^{(n)})$ is defined as follows: $$s(i)\in s(\tau^{(n)})_r\quad \roman{iff}\quad i\in \tau^{(n)}_r .
\eqno(3.10)$$ Finally, the multiplication rule between the generators in the diagonal tensor product is given by: $$\mu (\tau^{(m)})\otimes \Delta_{a_1}\otimes\dots\otimes\Delta_{a_m}\cdot\,
\mu (\tau^{(n)})\otimes \Delta_{b_1}\otimes\dots\otimes\Delta_{b_n}$$ $$=
\mu (\tau^{(m)}\cup\tau^{(n)})\otimes \Delta_{a_1}\otimes\dots\otimes\Delta_{a_m}\otimes
\Delta_{b_1}\otimes\dots\otimes\Delta_{b_n} .
\eqno(3.11)$$ Any linear representation $K:\,H_*T\to \roman{End}\,F$ can be described as a linear representation of the diagonal tensor product satisfying additional symmetry restrictions. To spell it out explicitly, we define [*the matrix correlators*]{} of $K$ as the following family of endomorphisms of $F$: $$\tau^{(n)}\langle \Delta_{a_1}\dots\Delta_{a_n}\rangle :=
K(\mu (\tau^{(n)})\otimes \Delta_{a_1}\otimes\dots\otimes\Delta_{a_n}) .
\eqno(3.12)$$
Matrix correlators of any representation satisfy the following relations:
(i) $\bold{S}_n$–symmetry: $$s^{-1}(\tau^{(n)})\langle \Delta_{a_1}\dots\Delta_{a_n}\rangle =
\varepsilon (s, (a_i))\, \tau^{(n)}\langle \Delta_{a_{s(1)}}\dots\Delta_{a_{s(n)}}\rangle \,.
\eqno(3.13)$$
\(ii) Factorization: $$(\tau^{(m)}\cup \tau^{(n)})\langle \Delta_{a_1}\dots\Delta_{a_m}
\Delta_{b_1}\dots\Delta_{b_n}\rangle
=
\tau^{(m)}\langle \Delta_{a_1}\dots\Delta_{a_m}\rangle \cdot
\tau^{(n)}\langle \Delta_{b_1}\dots\Delta_{b_n}\rangle \,.
\eqno(3.14)$$
\(iii) Linear relations: $$\sum_{\alpha :\,i\alpha j} \tau^{(n)} (\alpha )
\langle \Delta_{a_1}\dots\Delta_{a_n}\rangle -
\sum_{\alpha :\,j\alpha i} \tau^{(n)} (\alpha )
\langle \Delta_{a_1}\dots\Delta_{a_n}\rangle =0
\eqno(3.15)$$
Conversely, any family of elements of $\roman{End}\,F$ defined for all $n,\,(a_1,\dots ,a_n),\,\tau^{(n)}$ and satisfying (3.13)–(3.15) consists of matrix correlators of a well defined representation $K:\,H_*T\to\roman{End}\,F$.
In fact, we obtain (3.13) by applying $K$ to (3.9) written for $s^{-1}(\tau^{(n)})$ in place of $\tau^{(n)}$, because $K$, coming from $H_*T$, vanishes on the image of $1-s$. Moreover, (3.14) means the compatibility with the multiplication of the generators. Finally, (3.15) is a necessary and sufficient condition for the extendability of the system of matrix correlators to a linear map $K$.
Notice that we can replace here $\roman{End}\,F$ by an arbitrary associative superalgebra over $k$.
[**3.5. Top matrix correlators.**]{} Define [*top matrix correlators of $K$*]{} as the subfamily of correlators corresponding to the identical partitions $\varepsilon^{(n)}$ of $\{1,\dots ,n\}$: $$\langle \Delta_{a_1}\dots\Delta_{a_n}\rangle :=
\varepsilon^{(n)}\langle \Delta_{a_1}\dots\Delta_{a_n}\rangle\, .$$
Top matrix correlators satisfy the following relations: $$\langle \Delta_{a_1}\dots\Delta_{a_n}\rangle =
\varepsilon (s, (a_i))\,\langle \Delta_{a_{s(1)}}\dots\Delta_{a_{s(n)}}\rangle
\eqno(3.16)$$ and $$\sum_{\sigma:\,i\sigma j}\varepsilon (\sigma,(a_k))
\langle\prod_{k\in\sigma_1}\Delta_{a_k}\rangle\cdot
\langle\prod_{k\in\sigma_2}\Delta_{a_k}\rangle -
\sum_{\sigma:\,j\sigma i}\varepsilon (\sigma,(a_k))
\langle\prod_{k\in\sigma_1}\Delta_{a_k}\rangle\cdot
\langle\prod_{k\in\sigma_2}\Delta_{a_k}\rangle =0\,.
\eqno(3.17)$$ Here $\sigma$ runs over 2–partitions of $\{1,\dots ,n\}$. We choose additionally an arbitrary ordering of both parts $\sigma_1,
\sigma_2$ determining the ordering of $\Delta$’s in the angular brackets, and compensate this choice by the $\pm 1$–factor $\varepsilon (\sigma,(a_k))$.
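For orientation, in the simplest case $n=2$ with $\Delta_{a_1},\Delta_{a_2}$ even (so that all signs equal $+1$), the only 2–partitions of $\{1,2\}$ separating the two elements are $(\{1\},\{2\})$ and $(\{2\},\{1\})$, and (3.17) becomes $$\langle \Delta_{a_1}\rangle\langle \Delta_{a_2}\rangle -
\langle \Delta_{a_2}\rangle\langle \Delta_{a_1}\rangle =0\,,$$ that is, the one–point correlators pairwise commute; this is one more way to see where the name “Commutativity Equations” comes from.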
Conversely, any family of elements $\langle\Delta_{a_1}\dots\Delta_{a_n}\rangle\in \roman{End}\,F$ defined for all $n$ and $(a_1,\dots ,a_n)$ and satisfying (3.16), (3.17) is the family of top matrix correlators of a well defined representation $K:\,H_*T\to \roman{End}\,F$.
[**Proof.**]{} Clearly, (3.16) is a particular case of (3.13). To get (3.17), we apply (3.15) to the identical partition $\tau^{(n)}=\varepsilon^{(n)}$ and then replace each term by the double product of top correlators using (3.14).
Conversely, assume that we are given $\langle \Delta_{a_1}\dots\Delta_{a_n}\rangle$ satisfying (3.16) and (3.17). There is a unique way to extend this system to a family of elements $\tau^{(n)}
\langle \Delta_{a_1}\dots\Delta_{a_n}\rangle$ defined for all $N$–partitions $\tau^{(n)}$ and satisfying the factorization property (3.14) and at least a part of the symmetry relations (3.13): $$\tau^{(n)}
\langle \Delta_{a_1}\dots\Delta_{a_n}\rangle :=
\varepsilon (\tau^{(n)},(a_k))
\prod_{r=1}^{r=N}\langle\prod_{k\in\tau^{(n)}_r}\Delta_{a_k}\rangle \, .
\eqno(3.18)$$ Here, as in (3.17), we choose arbitrary orderings of each $\tau^{(n)}_r$ and compensate this by the appropriate sign so that the result does not depend on the choices made. All the relations (3.13) become automatically satisfied with this definition. In fact, the left hand side of (3.13) puts into $s^{-1}(\tau^{(n)})_r$ those $i$ for which $s(i)\in \tau^{(n)}_r$ (see (3.10)) so that the expression of both sides of (3.13) through the top correlators consists of the same groups taken in the same order. The coincidence of the signs is left to the reader.
It remains to check that (3.18) satisfy the linear relations (3.15). Recall now that to write a concrete relation (3.15) down we choose $\tau^{(n)},\, r$, $i,j\in\tau_r^{(n)}$ and $(a_1,\dots ,a_n)$ and then sum over 2–partitions $\alpha$ of $\tau_r^{(n)}$. Hence replacing each term of the left hand side of (3.15) by the prescriptions (3.18) we get $$\prod_{p=1}^{r-1}\langle\prod_{k\in\tau^{(n)}_p}\Delta_{a_k}\rangle\cdot
\left( \sum_{\alpha:\,i\alpha j}\pm\langle\prod_{k\in\alpha_1} \Delta_{a_k}\rangle\cdot
\langle\prod_{k\in\alpha_2}\Delta_{a_k}\rangle -
\sum_{\alpha:\,j\alpha i}\pm\langle\prod_{k\in\alpha_1} \Delta_{a_k}\rangle\cdot
\langle\prod_{k\in\alpha_2}\Delta_{a_k}\rangle
\right)$$ $$\cdot
\prod_{q=r+1}^{N}\langle\prod_{k\in\tau^{(n)}_q}\Delta_{a_k}\rangle \,.$$ This expression vanishes because its middle term is an instance of (3.17).
[**3.6. Precise statement and proof of the Theorem 3.3.1.**]{} Assume that we are given a representation $K:\,H_*T\to\roman{End}\,F.$ We will produce from it a formal solution of the Commutativity Equations using only its top correlators. Let $(x^a)$ be the basis of formal coordinates on $T$ dual to $(\Delta_a)$. Put $$\Cal{B} =
\sum_{n=1}^{\infty}\sum_{(a_1,\dots ,a_n)}
\frac{x^{a_n}\dots x^{a_1}}{n!}\,
\langle \Delta_{a_1} \dots \Delta_{a_n}\rangle \in k[[x]]\otimes \roman{End}\,F.
\eqno(3.19)$$
a\) We have $$d\Cal{B}\wedge d\Cal{B}=0.
\eqno(3.20)$$
b\) Conversely, let $\Delta (a_1,\dots ,a_n)\in\roman{End}\,F$ be a family of linear operators defined for all $n\ge 1$ and all maps $\{1,\dots ,n\}\to I:\,i\mapsto a_i$. Assume that the parity of $\Delta (a_1,\dots ,a_n)$ coincides with the sum of the parities of $\Delta_{a_i}$ and that for any $s\in\bold{S}_n$ $$\Delta (a_{s(1)},\dots ,a_{s(n)})=\varepsilon (s,(a_i))\,\Delta (a_1,\dots ,a_n)\,.$$ Finally, assume that the formal series $$\Cal{B} =
\sum_{n=1}^{\infty}\sum_{(a_1,\dots ,a_n)}
\frac{x^{a_n}\dots x^{a_1}}{n!}\,
\Delta ({a_1}, \dots ,{a_n}) \in k[[x]]\otimes \roman{End}\,F
\eqno(3.21)$$ satisfies the equations (3.20). Then there exists a well defined representation $K:\,H_*T\to \roman{End}\,F$ such that $\Delta ({a_1}, \dots ,{a_n})$ are the top correlators $\langle \Delta_{a_1} \dots \Delta_{a_n}\rangle$ of this representation.
Notice that any even element of $k[[x]]\otimes\roman{End}\,F$ without constant term can be uniquely written in the form (3.21).
[**Proof.**]{} Clearly, the equations $d\Cal{B}\wedge d\Cal{B}=0$ written for the series (3.21) are equivalent to a family of bilinear relations between the symmetric matrix–valued tensors $\Delta ({a_1} \dots {a_n})$. In view of the Proposition 3.5.1, it remains to check only that this family of relations is equivalent to the family (3.17). This is a straightforward exercise.
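As a quick illustration of this last step (in the purely even case, where all signs are $+1$): the part of $d\Cal{B}\wedge d\Cal{B}$ of degree zero in $x$ is $$\sum_{a<b}\left(\Delta (a)\,\Delta (b)-\Delta (b)\,\Delta (a)\right)dx^a\wedge dx^b\,,$$ so its vanishing is exactly the $n=2$ instance of (3.17).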
**§4. Stacks $\overline{L}_{g;A,B}$ and the extended modular operad**
[**4.1. Introduction.**]{} The basic topological operad $(\overline{M}_{0;n+1},
n\ge 2)$ of Quantum Cohomology lacks the $n=1$ term which is usually formally defined as a point. We argued elsewhere (cf. \[MaZ\], sec. 7 and \[Ma\], VI.7.6) that it would be very desirable to find a non–trivial DM–stack which could play the role of $\overline{M}_{0;2}$. There are several tests that such an object should pass:
a\) It must be a semigroup (because for any operad $\Cal{P}$, the operadic multiplication makes a semigroup of $\Cal{P}(1)$).
b\) It must be a part of an extended genus zero operad, say, $(\widetilde{L}_{0;n+1},
n\ge 1)$ geometrically related to $(\overline{M}_{0;n+1},
n\ge 2)$ in such a way that the theory of Gromov–Witten invariants with gravitational descendants could be formulated in this new context. In particular, it must geometrically explain two–point correlators with gravitational descendants.
c\) In turn, the extended genus zero operad must be a part of an extended modular operad containing moduli spaces of arbitrary genus, in such a way that algebras over classical modular operads produce extended algebras.
In this section we will try to show that the space $$\widetilde{L}_{0;2}:=\coprod_{n\ge 1} \overline{L}_n
\eqno(4.1)$$ passes at least a part of these tests. (Another candidate which might be interesting is $\roman{lim\,proj}\,\overline{L}_n$ with respect to the forgetful morphisms).
[**4.2. Semigroup structure.**]{} It is defined as the union of boundary (clutching) morphisms $$b:= (b_{n_1,n_2}):\ \widetilde{L}_{0;2}\times \widetilde{L}_{0;2}\to \widetilde{L}_{0;2}
\eqno(4.2)$$ where $$b_{n_1,n_2}:\ \overline{L}_{n_1}\times \overline{L}_{n_2}\to
\overline{L}_{n_1+n_2}$$ glues $x_{\infty}$ of the first curve to $x_0$ of the second curve and renumbers the black points of the second curve keeping their order (cf. \[MaZ\], section 7). This is the structure that induced our multiplication on $H_*$ in 3.3 above.
[**4.3. Extended operads.**]{} In (4.2), only white points $\{x_0,x_{\infty}\}$ are used to define the operadic composition whereas the black ones serve only to stabilize the strings of $\bold{P}^1$’s which otherwise would be unstable. This is a key observation for our attempt to define an extended operad.
A natural idea would be to proceed as follows. Denote by $\overline{M}_{g;A,B}$ the stack of stable $(A,B)$–pointed curves of genus $g$ (see Definition 1.1). Check that it is a DM–stack. Put $\widetilde{M}_{g;m+1}:=
\coprod_{n\ge 0} \overline{M}_{g;m+1,n}$ and define the operadic compositions via boundary maps, using only white points as above. (We sometimes write here and below $n$ instead of $\{1,\dots ,n\}$).
It seems however that this object is too big for our purposes and that it must be replaced by a smaller stack which we will define inductively, by using the Construction 1.3 which we will call here simply the adjoining of a generic black point. The components of this stack will be defined inductively.
If $g\ge 2$, $m\ge 0$, we start with $\overline{M}_{g;m}=\overline{M}_{g;m,\emptyset}$ and add $n$ generic black points, one in turn. Denote the resulting stack by $\overline{L}_{g;m,n}.$
For $g=1$, one should add one more sequence of stacks, corresponding to $m=0$. Since we want to restrict ourselves to Deligne–Mumford stacks, we start at $\overline{M}_{1;0,1}$ identified with $\overline{M}_{1;1}$ (see 1.2 a)), and add black points to get the sequence $\overline{L}_{1;0,n}$, $n\ge 1.$ These spaces are needed to serve as targets for the clutching morphisms gluing $x_0$ to $x_{\infty}$ on the same curve of genus zero: cf. below.
Finally, for $g=0$ we obtain our series of spaces $\overline{L}_n=\overline{L}_{0;2,n}$, $n\ge 1$ and moreover $\overline{L}_{0;m,n}$, for all $m\ge 3, n\ge 0$.
[**4.3.1. Combinatorial types of fibers.**]{} Let us recall that combinatorial types of classical (semi)stable curves with (only white) points labeled by a finite set $A$ are isomorphism classes of graphs, whose vertices are labeled by “genera” $g\ge 0$ and tails are bijectively labeled by elements of $A$. Stability means that vertices of genus 0 bound $\ge 3$ flags, and vertices of genus 1 bound $\ge 1$ flags. Graphs can have edges with only one vertex, that is, simple loops. See \[Ma\], III.2 for more details.
Starting with such a graph $\Gamma$, or rather with its geometric realization, we can obtain an infinite series of graphs, which will turn out to be exactly combinatorial types of (semi)stable $(A,B)$–pointed curves that are fibers of the families described above. Namely, subdivide edges and tails of $\Gamma$ by a finite set of new vertices of genus zero (on each edge or tail, this set may be empty). If a tail was subdivided, move the respective label (from $A$) to the newly emerged tail. Distribute the black tails labeled by elements of $B$ arbitrarily among the old and the new vertices. Call the resulting graph [*a stringy stable combinatorial type*]{} if it becomes stable after repainting black tails into white ones. Clearly, new vertices depict strings of $\bold{P}^1$’s stabilized by black points and eventually two special points on the end components.
a\) $\overline{L}_{g;m,n}$ is the Deligne–Mumford stack classifying $(m,n)$–pointed curves of genus $g$ of stringy stable combinatorial types. It is proper and smooth.
b\) Therefore, one can define boundary morphisms gluing two white points of two different curves $$\overline{L}_{g_1;m_1+1,n_1}\times \overline{L}_{g_2;m_2+1,n_2}
\to \overline{L}_{g_1+g_2;m_1+m_2+1,n_1+n_2}$$ and gluing two white points of the same curve: $$\overline{L}_{g;m+1,n}\to \overline{L}_{g+1;m-1,n}$$ such that the locally finite DM–stacks $$\widetilde{L}_{g;m+1}:=
\coprod_{n\ge 0} \overline{L}_{g;m+1,n}$$ will form components of a modular operad.
The statement a) can be proved in the same way as the respective statement 2.2 a).
It remains to see whether one can develop an extension of the Gromov–Witten invariants, preferably with descendants, to this context. The Remark 3.2.3 seems promising in this respect.
**Appendix. Proof of the Technical Lemma**
We break the proof into several steps whose content is indicated in the title of the corresponding subsection. A word of advice for the reader who might care to check the details: the most daunting task is to convince oneself that none of the alternatives has been inadvertently omitted.
[**A.1. The right hand side of (2.24) does not depend on the choice of $i,\,j$.**]{}
We must check that a different choice leads to the same answer modulo relations (2.23). We can pass from one choice to another by consecutively replacing only one element of the pair. Consider, say, the passage from $(i,j)$ to $(i^{\prime},j)$. Form the difference of the right hand sides of (2.24) written for $(i^{\prime},j)$ and for $(i,j)$.
In this difference, the terms corresponding to the partitions $\beta$ will cancel. The remaining terms will correspond to the partitions $\alpha$ of $\tau_a$ which separate $i$ and $i^{\prime}$. Their difference will vanish in $H_{*B}$ because of (2.23).
[**A.2. Multiplications by $l_{\sigma}$ are compatible with linear relations (2.23) between $\mu (\tau )$.**]{}
Choose and fix one linear relation (2.23), that is, a quadruple $(\tau ,\tau_a, i,j\in \tau_a)$, $i\ne j$. Choose also a 2–partition $\sigma$. We want to check that after multiplying the left hand side of (2.23) by $l_{\sigma}$ according to the prescriptions (2.24)–(2.26) we will get zero modulo all relations of the type (2.23). There are several basic cases to consider.
[*(i) $\sigma$ breaks $\tau$ at $\tau_b$, $b\ne a$.*]{} Then put $\tau^{\prime}=\sigma *\tau$. After multiplication we will get again (2.23) written for $\tau^{\prime}$ and one of its components $\tau_a$.
[*(ii) $\sigma$ breaks $\tau$ at $\tau_a$.*]{} Let $(\tau_{a1},
\tau_{a2})$ be the induced partition; it is now fixed. We must analyze $l_{\sigma}\mu (\tau (\alpha))$ for variable 2–partitions $\alpha$ of $\tau_a$ with $i\alpha j$ or $j\alpha i$.
Those $\alpha$ which do not break $(\tau_{a1},\tau_{a2})$ will contribute zero because of (2.26).
Those $\alpha$ which break $(\tau_{a1},\tau_{a2})$ will produce a 3–partition of $\tau_a$, say $(\tau_{a11},\tau_{a12},\tau_{a2})$ or else $(\tau_{a1},\tau_{a21},\tau_{a22})$. Finally, there will be one $\alpha$ which is induced by $\sigma$, that is, coincides with $(\tau_{a1},\tau_{a2})$. We must show that the sum total of the respective terms vanishes. However, the pattern of cancellation will depend on the positions of $i$ and $j$. In order to present the argument more concisely, we will first introduce the numbering of all possible positions [*with respect to a variable $\alpha$*]{} as follows. Partitions which break $(\tau_{a1},\tau_{a2})$ with $i\alpha j$: $$\roman{(I)}:\ i\in\tau_{a11},\,j\in\tau_{a12}\qquad
\roman{(II)}:\ i\in\tau_{a11},\,j\in\tau_{a2}$$ $$\roman{(III)}:\ i\in\tau_{a1},\,j\in\tau_{a22}\qquad
\roman{(IV)}:\ i\in\tau_{a21},\,j\in\tau_{a22}$$ Partitions which break $(\tau_{a1},\tau_{a2})$ and satisfy $j\alpha i$ will be denoted similarly, but with prime. Say, (III)${}^{\prime}$ means (III) with positions of $i$ and $j$ reversed.
Now we will explain the patterns of cancellation depending on the positions of $i,j$ with respect to $\sigma$. Recall that this latter data is fixed and determined by the choices we made at the beginning of this subsection.
If $i,j\in\tau_{a1}$, the only non–vanishing terms are of the types (I) and (I)${}^{\prime}$. Their sum over all $\alpha$ will vanish because of (2.23). Similarly, if $i,j\in\tau_{a2}$, (IV) and (IV)${}^{\prime}$ will cancel, and everything else will vanish.
Finally, assume that $i\in\tau_{a1},\,j\in\tau_{a2}$, that is, $\sigma$ separates $i,\,j$. Then we may have non–vanishing terms of the types (II) and (III) and in addition the terms coming from (the partition of $\tau_a$ induced by) $\sigma$ which must be treated using the formula (2.24), applied however to $(\tau_1,\dots ,\tau_{a-1},\tau_{a1},\tau_{a2},\tau_{a+1},
\dots )$ in place of $\tau$. Half of these latter terms (with $i\in\tau_{a11}$) will cancel (II), whereas the other half (with $j\in\tau_{a22}$) will cancel (III).
The case $j\in\tau_{a1},\,i\in\tau_{a2}$ is treated similarly.
[*(iii) $\sigma$ breaks $\tau$ between $\tau_b$ and $\tau_{b+1}$.*]{} In this case $\sigma$ breaks any $\tau (\alpha )$ in (2.23) between two neighbors as well. A contemplation will convince the reader that only the cases $b=a-1$ and $b=a$ may present non–obvious cancellations. Let us treat the first one; the second one is simpler.
For $\alpha =(\tau_{a1},\tau_{a2})$ we will calculate each term $l_{\sigma}\mu (\tau (\alpha ))$ using a formula of the type (2.24), first choosing some $k\in\tau_{a-1},\,l\in\tau_{a1}$ (in place of $i,j$ of (2.24): these letters are already bound). The choice of $k$ does not matter, but we will choose $l=i$ if $i\alpha j$, and $l=j$ if $j\alpha i$. We get then for $i\alpha j$: $$l_{\sigma}\mu (\tau (\alpha ))=
l_{\sigma}\mu (\dots \tau_{a-1}\tau_{a1}\tau_{a2}\dots )$$ $$=-\sum_{\beta :\,k\in\tau_{a-1,1}}
\mu (\dots \tau_{a-1,1}\tau_{a-1,2}\tau_{a1}\tau_{a2}\dots )
-\sum_{\gamma :\,i\in\tau_{a12}}
\mu (\dots \tau_{a-1}\tau_{a11}\tau_{a12}\tau_{a2}\dots )
\eqno(A.1)$$ where $\beta$ runs over 2–partitions of $\tau_{a-1}$ and $\gamma$ runs over 2–partitions of $\tau_{a1}.$ Write now a similar expression for $j\alpha i$ (with the choice $l=j$). The second sum in this expression will term–by–term cancel the second sum in (A.1), because our choices force $i\in\tau_{a12},\,j\in\tau_{a2}$ in both cases.
If we sum first over $\alpha$, we will see that the first two sums cancel modulo relations (2.23) because our choices imply $i\in\tau_{a1},\,j\in\tau_{a2}$ in the first sum of (A.1) and the reverse relation in the first sum written for $j\alpha i.$
[*(iv) $\sigma$ does not break $\tau$.*]{} In this case we choose a bad pair $(\tau_b,\tau_{b+1})$ for $\sigma$ and $\tau$ (see Lemma 2.4.2(iii)). One easily sees that if $a\ne b,\,b+1$, then it remains a bad pair for $\sigma$ and $\tau (\alpha )$ for any $\alpha$ in (2.23). Therefore, $l_{\sigma}$ annihilates all terms of (2.23) in view of (2.26).
We will show that in the exceptional cases we still can find a bad pair for $\sigma$ and $\tau (\alpha )$, but it will depend on $\alpha =(\tau_{a1},\tau_{a2})$, which does not change the remaining argument.
Assume that $b=a$, that is, $\tau_a\setminus\sigma_1\ne\emptyset ,
\tau_{a+1}\cap\sigma_1\ne\emptyset$ (see (2.12)). Then $(\tau_{a2},\tau_{a+1})$ is a bad pair for $\sigma$ and $\tau (\alpha )$, unless $\tau_{a2}\subset\sigma_1$, in which case $\sigma_1$ cannot contain $\tau_{a1}$ so that $(\tau_{a1},\tau_{a2})$ form a bad pair.
Similarly, if $b=a-1$, then $(\tau_{a-1},\tau_{a1})$ will be a bad pair unless $\tau_{a1}\cap\sigma_1=\emptyset$, in which case $(\tau_{a1},\tau_{a2})$ will be a bad pair.
By this time we have checked that multiplications by $l_{\sigma}$ are well defined linear operators on the space $H_{*B}$. We will now prove that they pairwise commute and therefore define an action of $\Cal{R}_B$ upon $H_{*B}$.
[**A.3. Multiplications by $l_{\sigma}$ pairwise commute.**]{}
We start with fixing $\tau$, $\sigma^{(1)}$ and $\sigma^{(2)}$. We want to check that $$l_{\sigma^{(1)}}(l_{\sigma^{(2)}}\mu (\tau )) =
l_{\sigma^{(2)}}(l_{\sigma^{(1)}}\mu (\tau )) .$$ We may and will assume that $\sigma^{(1)}\ne\sigma^{(2)}$. The following alternatives can occur for $\sigma^{(1)}$ and $\sigma^{(2)}$ separately:
\(i) [*$\sigma^{(1)}$ breaks $\tau$ at $\tau_a$.*]{}
\(ii) [*$\sigma^{(1)}$ breaks $\tau$ between $\tau_a$ and $\tau_{a+1}$.*]{}
\(iii) [*$\sigma^{(1)}$ does not break $\tau$.*]{}
(i)${}^{\prime}$ [*$\sigma^{(2)}$ breaks $\tau$ at $\tau_b$.*]{}
(ii)${}^{\prime}$ [*$\sigma^{(2)}$ breaks $\tau$ between $\tau_b$ and $\tau_{b+1}$.*]{}
(iii)${}^{\prime}$ [*$\sigma^{(2)}$ does not break $\tau$.*]{}
We will have to consider the combined alternatives (i)(i)${}^{\prime}$, (i)(ii)${}^{\prime}$, $\dots$ , (iii)(iii)${}^{\prime}$ in turn. The symmetry of $\sigma^{(1)}$ and $\sigma^{(2)}$ allows us to discard a few of them.
[*Subcase*]{} (i)(i)${}^{\prime}$
We will first assume that $a\ne b$, say $a<b$. Denote by $\alpha$ (resp. $\beta$) the partition induced by $\sigma^{(1)}$ (resp. $\sigma^{(2)}$) on $\tau_a$ (resp. $\tau_b$). Then $$l_{\sigma^{(1)}}(l_{\sigma^{(2)}}\mu (\tau ))=
l_{\sigma^{(2)}}(l_{\sigma^{(1)}}\mu (\tau ))=
\mu (\tau (\alpha )(\beta )) =\mu (\tau (\beta )(\alpha )) .$$ Now assume that $a=b$. If $\alpha$ breaks $\beta$, we will again have the desired equality, because $\alpha *\beta =\beta *\alpha$. If $\alpha$ does not break $\beta$, both sides will vanish.
After having treated this subcase, we add one more remark which will be used below, in the subsection A.5. Namely, $\alpha$ does not break $\beta$ exactly when $\sigma^{(1)}$ does not break $\sigma^{(2)}$. Therefore, if $l_{\sigma^{(1)}}l_{\sigma^{(2)}}$ is one of the quadratic generators of $I_B$, then consecutive multiplication by the respective elements annihilates $\mu (\tau )$.
[*Subcase*]{} (i)(ii)${}^{\prime}$
If $a\ne b,\,b+1$, the modifications induced in $\tau$ by the two multiplications are made in mutually disjoint places and therefore commute as above. Consider now the case $a=b$, the case $a=b+1$ being similar.
Denote by $(\tau_{a1},\tau_{a2})$ the partition induced by $\sigma^{(1)}$ on $\tau_a.$ Then we have $$l_{\sigma^{(1)}}\mu (\tau )=
\mu (\dots \tau_{a-1}\tau_{a1}\tau_{a2}\tau_{a+1}\dots )=\mu (\tau^{\prime}).$$ Clearly, $\sigma^{(2)}$ breaks $\tau^{\prime}$ between $\tau_{a2}$ and $\tau_{a+1}$ so that, after choosing $i\in\tau_{a2},j\in\tau_{a+1}$ we have $$l_{\sigma^{(2)}}(l_{\sigma^{(1)}}\mu (\tau ))=-\sum_{\alpha :\,i\alpha}
\mu (\tau^{\prime}(\alpha ))
-\sum_{\beta :\,\beta j}
\mu (\tau^{\prime}(\beta ))
\eqno(A.2)$$ where $\alpha$ runs over 2–partitions of $\tau_{a2}$ and $\beta$ runs over 2–partitions of $\tau_{a+1}$.
On the other hand, with the same choice of $i,\,j$ we have $$l_{\sigma^{(2)}}\mu (\tau )=-\sum_{\gamma :\,i\gamma}
\mu (\tau (\gamma ))
-\sum_{\beta :\,\beta j}
\mu (\tau (\beta ))
\eqno(A.3)$$ where $\gamma$ runs over 2–partitions of $\tau_{a}$ and $\beta$ runs over 2–partitions of $\tau_{a+1}$. After multiplication of (A.3) by $l_{\sigma^{(1)}}$, the second sum in (A.3) will become the second sum of (A.2). In the first sum, only partitions $\gamma$ breaking $(\tau_{a1},\tau_{a2})$ will survive, and they will produce exactly the first sum in (A.2).
[*Subcase*]{} (i)(iii)${}^{\prime}$
Here $\sigma^{(1)}$ breaks $\tau$ at $\tau_a$, and there exists a bad pair $(\tau_b,\tau_{b+1})$ for $\sigma^{(2)}$ and $\tau$. Since $l_{\sigma^{(2)}}\mu (\tau )=0$, it remains to check that $l_{\sigma^{(2)}}(l_{\sigma^{(1)}}\mu (\tau ))=0$. But $l_{\sigma^{(1)}}\mu (\tau ) =\mu (\tau^{\prime})$ as in the previous subcase. So it remains to find a bad pair for $\sigma^{(2)}$ and $\tau^{\prime}$.
If $a\ne b, b+1$, then $(\tau_b,\tau_{b+1})$ will be such a bad pair.
If $a=b$, denote by $(\tau_{a1},\tau_{a2})$ the partition of $\tau_a$ induced by $\sigma^{(1)}$. If $\tau_{a2}$ is not contained in $\sigma_1^{(2)}$, $(\tau_{a2},\tau_{a+1})$ will form a bad pair. Otherwise this role will pass to $(\tau_{a1},\tau_{a2})$.
Finally, consider the case when $a=b+1$. In the previous notation, if $\sigma_1^{(2)}\cap\tau_{a1}\ne \emptyset$, then $(\tau_{a-1},\tau_{a1})$ is the bad pair we are looking for, otherwise we should take $(\tau_{a1},\tau_{a2})$.
[*Subcase*]{} (ii)(ii)${}^{\prime}$
Here $\sigma^{(1)}$ (resp. $\sigma^{(2)}$) breaks $\tau$ between $a$ and $a+1$ (resp. between $b$ and $b+1$), and $a\ne b$.
If $a\ne b-1,\,b+1$, the modifications induced in $\tau$ by $\sigma^{(1)}$ and $\sigma^{(2)}$ do not interact and the respective multiplications commute.
By symmetry, it remains to consider the case $a=b-1$. Choose $i\in\tau_a,\, j\in\tau_{a+1}.$ Summing first over partitions $\alpha =(\tau_{a1},\tau_{a2})$ and $\beta =(\tau_{a+1,1},\tau_{a+1,2})$ we have $$l_{\sigma^{(1)}}\mu (\tau )=
-\sum_{\alpha :\,i\alpha}\mu (\dots \tau_{a1}\tau_{a2}\dots )
-\sum_{\beta :\,\beta j}\mu (\dots \tau_{a+1,1}\tau_{a+1,2}\dots ).$$ Now, $\sigma^{(2)}$ will break the terms of the first (resp. second) sum between $\tau_{a+1}$ and $\tau_{a+2}$ (resp. between $\tau_{a+1,2}$ and $\tau_{a+2}$). In order to multiply by $l_{\sigma^{(2)}}$ each term of these sums we choose the same $j\in\tau_{a+1}$ and some $l\in\tau_{a+2}$. Below we sum additionally over 2–partitions $\beta =(\tau_{a+1,1},\tau_{a+1,2})$ and $\gamma =(\tau_{a+2,1},\tau_{a+2,2})$ in the first two sums. In the second two sums the respective notation is $\beta^{\prime}=(\tau_{a+1,2,1},\tau_{a+1,2,2})$: $$l_{\sigma^{(2)}}(l_{\sigma^{(1)}}\mu (\tau ))=$$ $$= \sum\Sb \alpha :\,i\alpha \\ \beta :\,j\beta \endSb
\mu (\dots \tau_{a1}\tau_{a2} \tau_{a+1,1}\tau_{a+1,2}\dots )
+\sum\Sb \alpha :\,i\alpha \\ \gamma :\,\gamma l \endSb
\mu (\dots \tau_{a1}\tau_{a2}\tau_{a+1} \tau_{a+2,1}\tau_{a+2,2}\dots )$$ $$+\sum\Sb \beta :\,\beta j \\ \beta^{\prime} :\,j\beta^{\prime} \endSb
\mu (\dots \tau_{a+1,1}\tau_{a+1,2,1} \tau_{a+1,2,2}\tau_{a+2}\dots )
+\sum\Sb \beta :\,\beta j \\ \gamma :\,\gamma l \endSb
\mu (\dots \tau_{a+1,1}\tau_{a+1,2} \tau_{a+2,1}\tau_{a+2,2}\dots )
\eqno(A.4)$$ On the other hand, with the same notation we have: $$l_{\sigma^{(2)}}\mu (\tau )=
-\sum_{\beta :\,j\beta}\mu (\dots \tau_{a+1,1}\tau_{a+1,2}\dots )
-\sum_{\gamma :\,\gamma l}\mu (\dots \tau_{a+2,1}\tau_{a+2,2}\dots )$$ and $$l_{\sigma^{(1)}}(l_{\sigma^{(2)}}\mu (\tau ))=$$ $$= \sum\Sb \alpha :\,i\alpha \\ \beta :\,j\beta \endSb
\mu (\dots \tau_{a1}\tau_{a2} \tau_{a+1,1}\tau_{a+1,2}\dots )
+\sum\Sb \beta :\, j\beta \\ \beta^{\prime\prime} :\,\beta^{\prime\prime} j \endSb
\mu (\dots \tau_{a+1,1,1}\tau_{a+1,1,2} \tau_{a+1,2}\dots )$$ $$+\sum\Sb \alpha :\,i\alpha \\ \gamma :\,\gamma l \endSb
\mu (\dots \tau_{a1}\tau_{a2} \tau_{a+1}\tau_{a+2,1}\tau_{a+2,2}\dots )
+\sum\Sb \beta :\,\beta j \\ \gamma :\,\gamma l \endSb
\mu (\dots \tau_{a+1,1}\tau_{a+1,2} \tau_{a+2,1}\tau_{a+2,2}\dots )
\eqno(A.5)$$ where $\beta^{\prime\prime}=(\tau_{a+1,1,1},\tau_{a+1,1,2})$. Three of the four sums in (A.4) and (A.5) obviously coincide. The third sum in (A.4) coincides with the second sum in (A.5) because both are taken over 3–partitions of $\tau_{a+1}$ with $j$ in the middle part.
[*Subcase*]{} (ii)(iii)${}^{\prime}$
Here $\sigma^{(1)}$ breaks $\tau$ between $a$ and $a+1$, $\sigma^{(2)}$ does not break $\tau$. We must check that $l_{\sigma^{(2)}}(l_{\sigma^{(1)}}\mu (\tau ))=0$, by finding a bad pair for $\sigma^{(2)}$ and each term in the right hand side of $$l_{\sigma^{(1)}}\mu (\tau )=
-\sum_{\alpha :\,i\alpha}\mu (\dots \tau_{a1}\tau_{a2}\dots )
-\sum_{\beta :\,\beta j}\mu (\dots \tau_{a+1,1}\tau_{a+1,2}\dots ).$$ Denote by $(\tau_b,\tau_{b+1})$ a bad pair for $\sigma^{(2)}$ and $\tau$. As in the subcase (i)(iii)${}^{\prime}$, it will remain the bad pair unless $b\in \{a-1,a,a+1\},$ and will change somewhat in the exceptional cases.
More precisely, if $b=a-1$, then for the terms of the second sum $(\tau_{a-1},\tau_{a})$ will be bad. For the first sum, if $\sigma_1^{(2)}\cap \tau_{a1}\ne \emptyset$, the bad pair will be $(\tau_{a-1},\tau_{a1})$. Otherwise it will be $(\tau_{a1},\tau_{a2})$.
If $b=a$, then for the terms of the first sum $(\tau_{a2},\tau_{a+1})$ will be bad. For the second sum, if $\sigma_1^{(2)}\cap \tau_{a+1,1}\ne \emptyset$, the bad pair will be $(\tau_{a},\tau_{a+1,1})$. Otherwise it will be $(\tau_{a+1,1},\tau_{a+1,2})$.
Finally, if $b=a+1$, then for the terms of the first sum $(\tau_{a+1},\tau_{a+2})$ will be bad. For the second sum, it will be $(\tau_{a+1,2},\tau_{a+2})$.
In the last remaining case (iii)(iii)${}^{\prime}$ both multiplications produce zero.
To complete the proof of the Technical Lemma, it now remains to check that the elements (2.14), (2.15) generating $I_B$ annihilate $H_{*B}$.
[**A.4. Elements $r^{(1)}_{ij}$ annihilate $H_{*B}$.**]{}
Fix $i,j$ and a partition $\tau$. If $\tau$ does not separate $i$ and $j$, we have $i,j\in\tau_a$ for some $a$, and then $$r^{(1)}_{ij}\mu (\tau ) =
\left(\sum_{\sigma :\,i\sigma j}l_{\sigma}-
\sum_{\sigma :\,j\sigma i}l_{\sigma}\right)\mu (\tau )$$ $$=\sum_{\alpha :\,i\alpha j}\mu (\tau (\alpha))-
\sum_{\alpha :\,j\alpha i}\mu (\tau (\alpha))
\eqno(A.6)$$ where $\alpha$ runs over partitions of $\tau_a$. This expression vanishes because of (2.23).
Assume now that $\tau$ separates $i$ and $j$, say, $i\in\tau_a,
j\in\tau_b,\,a<b.$ In this case $l_{\sigma}\mu (\tau )=0$ for all $\sigma$ with $j\sigma i.$ The remaining terms of (A.6) vanish unless $\sigma$ breaks $\tau$ at some $\tau_c,\,a\le c\le b$, or else between $\tau_c$ and $\tau_{c+1}$ for $a\le c \le b-1.$ In the latter cases each term corresponding to one $\sigma$ can be replaced by a sum of terms corresponding to the 2–partitions $\alpha_c$ of $\tau_c$ with the help of (2.24) and (2.25).
Let us choose $k_c\in\tau_c$ for all $a\le c\le b$ so that $k_a=i,\,k_b=j$ and spell out the resulting expression: $$\sum_{\sigma :\,i\sigma j}l_{\sigma}\mu (\tau )=
\sum_{c=a}^{c=b}{}^{\prime}\left(\sum_{\alpha_c:\,k_c\alpha_c}+
\sum_{\alpha_c:\,\alpha_c k_c}\right) \mu (\tau (\alpha_c ))$$ $$-\sum_{c=a}^{c=b-1}\left(\sum_{\alpha_c:\,k_c\alpha_c}
+\sum_{\alpha_{c+1}:\,\alpha_{c+1}k_{c+1}}\right) \mu (\tau (\alpha_{c+1} ))\, .$$ Here prime at the first sum indicates that the terms with $\alpha_ai$ and $j\alpha_b$ should be skipped.
All the terms of this expression cancel.
[**A.5. Elements $r^{(2)}(\sigma^{(1)},\sigma^{(2)})$ annihilate $H_{*B}$.**]{}
These elements correspond to the pairs ($\sigma^{(1)},\,\sigma^{(2)}$) that do not break each other. If at least one of them, say $\sigma^{(1)}$, does not break $\tau$ either, then $l_{\sigma^{(1)}}\mu (\tau )=0$ so that $r^{(2)}(\sigma^{(1)},\sigma^{(2)})\mu (\tau )=0.$ If both $\sigma^{(1)},\,\sigma^{(2)}$ break $\tau$, a contemplation will convince the reader that they must break $\tau$ at one and the same component $\tau_a.$ This is the subcase (i)(i)${}^{\prime}$ of A.3, and we made the relevant comment there.
**References**
\[Da\] V. Danilov. [*The geometry of toric varieties.*]{} Russian Math. Surveys, 33:2 (1978), 97–154.
\[Du1\] B. Dubrovin. [*Geometry of 2D topological field theories.*]{} In: Springer LNM, 1620 (1996), 120–348.
\[Du2\] B. Dubrovin. [*Painlevé transcendents in two–dimensional topological field theory.*]{} Preprint math.AG/9803107.
\[Du3\] B. Dubrovin. [*Flat pencils of metrics and Frobenius manifolds.*]{} Preprint math.AG/9803106.
\[Fu\] W. Fulton. [*Introduction to toric varieties.*]{} Ann. Math. Studies, Nr. 131, Princeton University Press, Princeton NJ, 1993.
\[GeSe\] I. M. Gelfand, V. Serganova. [*Combinatorial geometries and torus strata on compact homogeneous spaces.*]{} Uspekhi Mat. Nauk 42 (1987), 107–134.
\[Ka1\] M. Kapranov. [*The permutoassociahedron, MacLane’s coherence theorem and asymptotic zones for the KZ equation.*]{} Journ. of Pure and Appl. Algebra, 83 (1993), 119–142.
\[Ka2\] M. Kapranov. [*Chow quotients of Grassmannians. I.*]{} Advances in Soviet Math., 16:2 (1993), 29–110.
\[Ke\] S. Keel. [*Intersection theory of moduli spaces of stable $n$–pointed curves of genus zero.*]{} Trans. AMS, 330 (1992), 545–574.
\[Kn1\] F. Knudsen. [*Projectivity of the moduli space of stable curves, II: the stacks $M_{g,n}.$*]{} Math. Scand. 52 (1983), 161–199.
\[Kn2\] F. Knudsen. [*The projectivity of the moduli space of stable curves III: The line bundles on $M_{g,n}$ and a proof of projectivity of $\overline{M}_{g,n}$ in characteristic 0.*]{} Math. Scand. 52 (1983), 200–212.
\[KM1\] M. Kontsevich, Yu. Manin. [*Gromov–Witten classes, quantum cohomology, and enumerative geometry.*]{} Comm. Math. Phys., 164:3 (1994), 525–562.
\[KM2\] M. Kontsevich, Yu. Manin. [*Relations between the correlators of the topological sigma–model coupled to gravity.*]{} Comm. Math. Phys., 196 (1998), 385–398.
\[KMK\] M. Kontsevich, Yu. Manin. [*Quantum cohomology of a product (with Appendix by R. Kaufmann)*]{}. Inv. Math., 124, f. 1–3 (1996), 313–339.
\[Lo1\] A. Losev. [*Commutativity equations, operator–valued cohomology of the “sausage” compactification of $(\bold{C}^*)^N/\bold{C}^*$ and SQM.*]{} Preprint ITEP–TH–84/98, LPTHE–61/98.
\[Lo2\] A. Losev. [*Passing from solutions to Commutativity Equations to solutions to Associativity Equations and background independence for gravitational descendents.*]{} Preprint ITEP–TH–85/98, LPTHE–62/98.
\[Ma\] Yu. I. Manin. [*Frobenius manifolds, quantum cohomology, and moduli spaces.*]{} AMS Colloquium Publications, vol. 47, Providence, RI, 1999, xiii+303 pp.
\[MaZ\] Yu. I. Manin, P. Zograf. [*Invertible Cohomological Field Theories and Weil–Petersson volumes.*]{} Preprint math.AG/9902051 (to appear in Ann. Inst. Fourier).
\[Si\] C. Simpson. [*The Hodge filtration on non–abelian cohomology.*]{} Preprint, 1996.
---
abstract: 'The iron mass in galaxy clusters is about 6 times larger than could have been produced by core-collapse supernovae (SNe), assuming the stars in the cluster formed with a standard initial mass function (IMF). SNe Ia have been proposed as the alternative dominant iron source. Different SN Ia progenitor models predict different “delay functions”, between the formation of a stellar population and the explosion of some of its members as SNe Ia. We use our previous measurements of the cluster SN Ia rate at high redshift to constrain SN Ia progenitor models and the star-formation epoch in clusters. The low observed rate of cluster SNe Ia at $z\sim0 - 1$ means that, if SNe Ia produced the observed amount of iron, they must have exploded at even higher $z$. This puts a $>95\%$ upper limit on the mean SN Ia delay time of $\tau<2$ Gyr ($<5$ Gyr) if the stars in clusters formed at $z_f<2$ ($z_f<3$), assuming $H_{o}=70$ km s$^{-1}$ Mpc$^{-1}$. In a companion paper, we show that, for some current versions of cosmic (field) star formation history (SFH), observations of field SNe Ia place a [*lower*]{} bound on the delay time, $\tau>3$ Gyr. If these SFHs are confirmed, the entire range of $\tau$ will be ruled out. Cluster enrichment by core-collapse SNe from a top-heavy IMF will then remain the only viable option.'
author:
- |
Dan Maoz$^{1}$ and Avishay Gal-Yam$^{1,2}$\
$^{1}$ School of Physics & Astronomy and Wise Observatory, Tel Aviv University, Tel Aviv 69978, Israel; [email protected]; [email protected]\
$^{2}$ Colton Fellow.\
date: 'Accepted - . Received - ;'
title: 'The Type-Ia Supernova Rate in $z \le 1$ Galaxy Clusters: Implications for Progenitors and the Source of Cluster Iron'
---
supernovae: general
Introduction
============
Galaxy clusters, by virtue of their deep gravitational potentials, are “closed boxes”, from which little matter can escape. They thus constitute ideal sites to study the time-integrated enrichment of the intergalactic medium (see e.g., Buote 2002, for a recent review). Observations of galaxy clusters have revealed a number of intriguing puzzles. One such puzzle follows from X-ray spectroscopy, and shows that the intracluster medium (ICM) gas consistently has a surprisingly high iron abundance, with a “canonical” value of about 0.3 the Solar abundance (e.g., Mushotzky & Loewenstein 1997; Fukazawa et al. 1998; Finoguenov et al. 2000; White 2000). Combined with the fact that most of the baryonic mass of the cluster is in the ICM, this translates into a large mass of iron. The tight correlation between total iron mass and stellar light from the early-type galaxy population (and the lack of a correlation with late-type galaxies) suggest that the stellar population, whose remnants and survivors now populate the early-type galaxies, produced the ICM iron (Renzini et al. 1993; Renzini 1997). The near constancy of the ICM iron abundance with cluster mass (Renzini 1997; Lin, Mohr, & Stanford 2003) and with redshift out to $z\sim 1$ (Tozzi et al. 2003) further argues against a significant role in ICM iron enrichment for recent infall and disruption of metal-rich dwarf galaxies. However, the iron mass is at least several times larger than that expected from core-collapse supernovae (SNe), based on the present-day stellar masses, and assuming a standard stellar initial mass function (IMF; e.g., Renzini et al. 1993; Loewenstein 2000). The problem is aggravated if one considers the large mass of iron that exists in the cluster galaxies themselves.
A possibly related problem is the energy budget of the ICM gas and the “entropy floor” observed in clusters, which suggest a non-gravitational energy source to the ICM, again several times larger than the expected energy input from core-collapse SNe (e.g., Lloyd-Davies, Ponman, & Cannon 2000; Tozzi & Norman 2001; Brighenti & Mathews 2001; Pipino et al. 2002).
Proposed solutions to these problems have included an IMF skewed toward high-mass stars (so that a large number of iron-enriching core-collapse SNe are produced per present-day unit stellar luminosity), or a dominant role for SNe Ia in the ICM Fe enrichment. For example, Brighenti & Mathews (1998) calculate that the cluster SN Ia rate must be $> 4.8 h^{2}$ SNu today and $> 9.6 h^{2}$ SNu at $z=1$ to explain production of most of the iron in the ICM with SNe Ia. \[1 SNu$=1~{\rm SN~century}^{-1}(10^{10}
L_{B\odot})^{-1}$.\] This contrasts with the local elliptical-galaxy SN rate of $(0.28\pm0.12)h^2$ SNu (Cappellaro et al. 1999) and argues against the type-Ia enrichment scenario. On the other hand, Renzini (1997) has pointed out that the approximately Solar abundance ratios measured in cluster ellipticals argue for a mix of SN-types, and hence also an IMF, that are not too different from the mix that exists in the Milky Way. Attempts to derive the SN mix in clusters by means of direct X-ray measurements of element abundance ratios in the ICM are still ambiguous, with some results favoring a dominance of Ia’s (e.g., Buote 2002; Tamura et al. 2002) and others a dominance of core-collapse SNe (e.g., Lima Neto et al. 2003; Finoguenov, Burkert, & Boehringer 2003).
In Gal-Yam, Maoz, & Sharon (2002), we used multiple deep [*Hubble Space Telescope (HST)*]{} archival images of galaxy clusters to discover distant field and cluster SNe. The sample was composed of rich clusters, with X-ray temperatures in the range $4-12$ keV, and a median of 9 keV. The candidate type-Ia SNe in the clusters then led to an estimate of the SN Ia rate in the central $250 h^{-1}$ kpc of medium-redshift $(0.18 \le z \le 0.37)$ and high-redshift $(0.83\le z \le 1.27)$ cluster sub-samples. The measured rates are low. To within errors, they are not different from SN Ia measurements in field environments, both locally (Cappellaro et al. 1999) and at high redshift (Pain et al. 2002; Tonry et al. 2003). It was argued that our 95 per cent upper limits on the cluster SN Ia rates rule out the particular model by Brighenti & Mathews (1998) for SNe Ia as the primary source of iron in the ICM.
The issue of SN Ia rate vs cosmic time is closely tied to the presently unsolved question regarding the progenitor populations of SNe Ia. Different models predict different delay times between the formation of a stellar population and the explosion of some of its members as SNe Ia (e.g., Ruiz-Lapuente & Canal 1998; Yungelson & Livio 2000, and references therein). The SN Ia rate vs time in a given environment (e.g., cluster, or field) will then be a convolution of the star-formation history in that environment with a “delay” or “transfer function”, which is the SN Ia rate vs time following a brief burst of star formation \[e.g., Sadat et al. 1998; Madau, Della Valle, & Panagia 1998 (MDP); Dahlen & Fransson 1999; Sullivan et al. 2000\].
In the present paper, the cluster iron mass problem is revisited, and some simple relations connecting the iron mass and the SN rate to updated values of the various observables are derived. We then use the observed upper limits on the cluster SN Ia rate to set constraints on the SN Ia progenitor models, on the formation of stellar populations in galaxy clusters, and on the cluster enrichment scenario. Throughout the paper we assume a flat cosmology with $\Omega_m=0.3$ and $\Omega_{\Lambda}=0.7$, and a Hubble parameter of $H_0=70$ km s$^{-1}$ Mpc$^{-1}$.
The iron problem, and the SN rates needed to resolve it
=======================================================
Detailed models of cluster metal enrichment have been calculated previously (e.g., Buote 2002; Pipino et al. 2002; Finoguenov et al. 2003). However, the specific problem of the total iron mass in clusters can be formulated rather simply as a function of observational parameters. The ratio of observed iron mass to iron mass expected from core-collapse SNe is $$\frac{M_{\rm Fe-observed}}{M_{\rm Fe-SNII}}
=\frac{ M f_{\rm bar}(f_{gas}
~Z_{{\rm Fe-gas}}+f_* ~Z_{{\rm Fe}*})} {M f_{\rm bar}~f_*~ f(>8 M_{\odot})~ f_{\rm Fe-SNII}}.
$$ Here $M$ is the mass of a cluster, $f_{\rm bar}$ is the baryon mass fraction, $f_{gas}$ is the mass fraction of the baryons in the ICM, and $Z_{\rm Fe-gas}$ is the mass fraction of that gas in iron. Similarly, $f_*=1-f_{gas}$ is the mass fraction of the baryons in stars and $Z_{\rm Fe*}$ is the iron abundance of the stars. In the denominator, $f(>8 M_{\odot})$ is the ratio of the initial stellar mass in stars of mass $>8 M_{\odot}$ (i.e., stars that underwent core-collapse), to the mass in stars of lower mass, and $f_{\rm Fe-SNII}$ is the iron yield of core-collapse SNe, expressed as a fraction of the progenitor masses.
Lin et al. (2003) have recently derived improved estimates of stellar and gas mass fractions in clusters using infrared data from the 2MASS survey. For rich clusters they find $f_{gas}=0.9$ and $f_*=0.1$. Their stellar mass fraction is based on the 2MASS K-band luminosity function (Kochanek et al. 2001) combined with dynamical stellar mass-to-light ratio measurements by Gerhard et al. (2001). Ettori (2003) has recently suggested that a significant fraction, 6-38%, of cluster baryons may be in a yet-undetected warm gas. However, if this new component has a similar iron abundance to that of the hot ICM, our arithmetic will not be affected. We adopt the “canonical” ICM iron abundance in rich clusters of $Z_{\rm Fe-gas}\approx 0.3 ~Z_{\odot {\rm Fe}}$ (e.g., Mushotzky & Loewenstein 1997; Fukazawa et al. 1998; Finoguenov et al. 2000; White 2000). This iron abundance relates to a photospheric Solar iron mass abundance of $Z_{\odot {\rm Fe}}=0.0026$ found by Anders & Grevesse (1989). \[Using the updated photospheric Solar abundance of $Z_{\odot {\rm Fe}}=0.00177$, given by Grevesse & Sauval (1999), which also agrees with the meteoritic Solar value of Anders & Grevesse (1989), would imply simply raising the ICM value accordingly\]. Most of the stellar mass is in the elliptical galaxies, for which we adopt $Z_{\rm Fe*}\approx 1.2 ~Z_{\odot {\rm Fe}}$, the median found by J$\o$rgensen (1999) for early-type galaxies in Coma.
To estimate, $f(>8 M_{\odot})$, the ratio of exploding to non-exploding initial stellar masses, an IMF, $dN/dm$, must be assumed: $$f(>8 M_{\odot})=\frac
{\int_{8M_{\odot}}^{m_{up}}~dN/dm~ m~ dm}
{\int_{m_{low}}^{8M_{\odot}}~dN/dm~ m~ dm},$$ where ${m_{low}}$ and ${m_{up}}$ are the lower and upper mass cutoffs of the IMF. For a Salpeter (1955) IMF, $dN/dm\propto m^{-2.35}$. Baldry and Glazebrook (2003) have recently modeled the local UV-to-IR luminosity density of galaxies assuming a range of IMFs and SFHs, and found the data to be consistent with the Salpeter IMF. A Salpeter slope with ${m_{low}}=0.1 M_{\odot}$ and ${m_{up}}=100 M_{\odot}$, gives $f(>8 M_{\odot})\approx 0.16$. However, other IMFs have been proposed. Figure 1 shows the dependence of $f(>8 M_{\odot})$ on ${m_{low}}$ for single-power-law IMFs (such as Salpeter’s) of various indices $\alpha$, and for “standard” IMFs. Varying ${m_{up}}$ has a weak effect on $f(>8 M_{\odot})$, as long as the IMF is steep enough.
Core collapse models generally agree on an iron yield of about $0.1 M_{\odot}$ per SN (e.g., Thielemann, Nomoto, & Hashimoto 1996). In observed core-collapse SNe, estimates of the total yield of Ni$^{56}$ (which decays to Fe$^{56}$ and drives the optical luminosity of SNe) have been obtained by different methods – the luminosity of the radioactive tail, the luminosity of the plateau phase, and the H$\alpha$ luminosity in the nebular phase (Elmhamdi, Chugai, & Danziger 2003). The different methods give consistent results for a given SN, but there is a large scatter among different SNe (e.g., Zampieri et al. 2003) and a range of yields as large as 0.0016 to 0.26 $M_{\odot}$ (Hamuy 2003), with a mean of $0.05 M_{\odot}$ (Elmhamdi et al. 2003). Progenitor masses are more difficult to estimate, but there are some indications that the iron yield and the progenitor mass are correlated, i.e., the iron yield is some fraction of the progenitor mass, of order 0.5-1% (P. Mazzali, private communication). Since the ratio of the mean iron yield found by Elmhamdi et al. (2003; $0.05 M_{\odot}$) to the minimum progenitor mass ($8 M_{\odot}$) is $0.63 \%$, and the ratio is probably even smaller for those events with more massive progenitors, we will err conservatively by assuming a large fractional iron mass yield of $f_{\rm Fe~ SNII}\approx 0.01$.
Combining the estimates above, we can parametrize the iron problem as $$\begin{aligned}
\frac{M_{\rm Fe-observed}}{M_{\rm Fe-SNII}} = 6.3
~\frac{(f_{gas}~Z_{{\rm Fe-gas}}+f_* ~Z_{{\rm Fe}*})}{(0.9\times 0.3+0.1\times 1.2)\, 0.0026}
\left(\frac{f_*}{0.1}\right)^{-1} \nonumber \\
\times \left[\frac{f(>8 M_{\odot})}{0.16}\right]^{-1}
\left(\frac{f_{\rm Fe-SNII}}{0.01}\right)^{-1} . \label{feobssnii}\end{aligned}$$ For the fiducial values, the discrepancy is a factor of about 6, as found by previous studies (e.g., Tozzi & Norman 2001). From Figure 1, we see that none of the conventional IMFs can solve the iron problem: the excess of the observed iron mass ranges from a factor of 3.6 \[for a Gould, Bahcall, & Flynn (1997) IMF\] to a factor of 6.6 \[for a Scalo (1998) IMF\]. From the figure, we can also see to what degree the IMF must be made “top heavy” in order to solve the problem with core-collapse SNe only; $m_{low}$ needs to be greater than $2 M_{\odot}$, or $\alpha>-1.9$, or some other suitable combination of $m_{low}>0.1M_{\odot}$ and $\alpha>-2.35$.
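For reference, Eq. \[feobssnii\] can be evaluated directly; the helper below is our own transcription (parameter names mirror the equation, with the gas and stellar iron abundances in solar units and the Anders & Grevesse solar value $0.0026$).

```python
def iron_excess(f_gas=0.9, Z_gas=0.3, f_star=0.1, Z_star=1.2,
                f_gt8=0.16, f_fe_snii=0.01, Z_sun_fe=0.0026):
    """Observed-to-SNII iron mass ratio, Eq. (feobssnii)."""
    fiducial = (0.9 * 0.3 + 0.1 * 1.2) * 0.0026
    abundance = (f_gas * Z_gas + f_star * Z_star) * Z_sun_fe
    return (6.3 * (abundance / fiducial) * (0.1 / f_star)
            * (0.16 / f_gt8) * (0.01 / f_fe_snii))

print(iron_excess())              # 6.3 for the fiducial values
print(iron_excess(f_gt8=0.28))    # a larger f(>8) lowers the excess proportionally
```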
The discrepancy can be lowered somewhat by assuming a larger stellar mass fraction, $f_*$. However, only a completely unrealistic value of $f_*\approx 1$ would lower the discrepancy to a non-crisis level. Stated differently, the iron mass within the cluster galaxies is the amount expected from core-collapse SNe and a normal IMF, while all the iron in the ICM is an excess over this expectation. This, too, has been found in detailed modeling (e.g., Brighenti & Mathews 1998).
If the dominant contribution to the observed iron mass in clusters is from SNe Ia, it is straightforward to predict the rate of these events, $R_{Ia}$, as a function of time $t$ or redshift $z$. As already noted, the SN Ia rate vs time in a given environment is the convolution of the star-formation history in that environment with a “delay” or “transfer function”, $D(t)$, which is the SN Ia rate vs time following a brief burst of star formation at $t=0$. In the case of clusters, all evidence indicates that star formation occurred considerably before $z\sim 1$. Let us assume, first, that star formation occurred in a brief burst, at cosmic time $t_f$, corresponding to a redshift $z_f$. Then simply $$R_{Ia}(t) \propto D(t-t_f).$$
Various authors have used different approaches to represent $D(t)$. For example, Ruiz-Lapuente & Canal (1998) and Yungelson & Livio (2000) have attempted to derive physically motivated versions of $D(t)$ for the different progenitor scenarios, including binary evolution and accretion physics. Such calculations are complex and, by necessity, include a large number of assumptions and poorly known parameters. Nevertheless, the resulting $D(t)$ functions have a number of generic features: a delay until the progenitor population forms; a fast rise to maximum; and a power-law or exponential decay. These general forms suggest an alternative, more phenomenological, parametrization of $D(t)$, as has been adopted by Sadat et al. (1998), MDP, and Dahlén & Fransson (1999).
We follow the delay function parameterization given by MDP. It is assumed that the progenitors of SNe Ia are white dwarfs, and therefore the overall time delay includes the mass-dependent lifetime of the progenitor as a main-sequence star, $\Delta t_{MS}$. Once the progenitor has become a white dwarf, it has a probability $\propto \exp(-{{\Delta t}\over \tau})$ to explode as a SN Ia, where $\Delta t$ is the time since the star left the main sequence. Following MDP, then $$\label{mdpdelay}
D(t) \propto \int_{m_{\rm min}(t)}^{m_{\rm max}} \exp\left(-{t-\Delta t_{MS}\over \tau}\right)\frac{dN}{dm}\,dm .$$ For consistency, a Salpeter (1955) IMF is assumed. With $t$ measured from the burst, the minimum and maximum initial masses that will lead to the formation of a WD that explodes as a SN Ia are $$\begin{aligned}
m_{\rm min}={\rm max}\left[3 M_{\odot},
\left({{t}\over{10~{\rm Gyr}}}\right)^{-0.4} M_{\odot}\right],~~~~
m_{\rm max}=8 M_{\odot}, \nonumber\end{aligned}$$ and $${{\Delta t_{MS}} \over {10~{\rm Gyr}}}=\left({{m} \over {M_{\odot}}}\right)^{-2.5}.$$ Figure 2 shows several examples of the MDP delay function. After an instantaneous starburst at $t=0$, the function is zero for 55 Myr (until the first white dwarfs form), then rises approximately as $t^{0.5}$, and reaches a peak at $t=0.64$ Gyr. It then declines exponentially with a timescale $\tau$.
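A direct numerical transcription of Eq. \[mdpdelay\] is given below (a Python sketch with arbitrary normalization; the mass grid and time grid are our choices). It reproduces the features just described: $D(t)=0$ before $\sim 55$ Myr, a peak near $0.64$ Gyr, and an exponential tail with timescale $\tau$.

```python
import numpy as np

def mdp_delay(t, tau=1.0, alpha=-2.35, t_scale=10.0):
    """MDP-style SN Ia delay function D(t) after a burst at t=0 (t in Gyr).

    t_MS/10 Gyr = (m/Msun)**-2.5; progenitors between max(3 Msun, mass with
    t_MS = t) and 8 Msun; Salpeter IMF dN/dm ~ m**alpha. Arbitrary norm.
    """
    if t < t_scale * 8.0**-2.5:            # ~0.055 Gyr: no WDs exist yet
        return 0.0
    m_min = max(3.0, (t / t_scale)**-0.4)
    m = np.linspace(m_min, 8.0, 400)
    t_ms = t_scale * m**-2.5
    integrand = np.exp(-(t - t_ms) / tau) * m**alpha
    return float(np.sum(integrand) * (m[1] - m[0]))

t = np.linspace(0.0, 12.0, 601)
D = np.array([mdp_delay(ti, tau=1.0) for ti in t])
print(t[D.argmax()])                       # peak near ~0.64 Gyr
```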
In a given cluster, the integral of $R_{Ia}(t)$ over time gives the total number of SNe Ia that have exploded in that cluster. If most of the cluster iron is from SNe Ia, this total number is just the observed iron mass (minus the small fraction expected from core-collapse SNe arising from a normal IMF, Eq. \[feobssnii\]), divided by the iron mass yield per SN Ia. The normalization of $R_{Ia}(t)$ is therefore set by $$\label{intria1}
\int R_{Ia}(t)~dt=\frac{M_{\rm Fe-observed}}{m_{\rm Fe-Ia}}
\left(1-\frac{M_{\rm Fe-SNII}}{M_{\rm Fe-observed}}\right).$$
The mean iron yield of a single SN Ia, $m_{\rm Fe-Ia}$, is generally agreed to be about $0.7\pm 0.1 M_{\odot}$. This emerges from modeling of the bolometric light curves of SNe Ia (e.g., Contardo, Leibundgut, & Vacca 2000, and references therein), as well as from SN Ia model calculations (e.g., Thielemann, Nomoto, & Yokoi 1986).
If star formation is not in an instantaneous burst, but rather begins at $t_f$ and lasts some period of time, the occurrence of some SNe Ia will be delayed. The SN Ia rate at later times, given the same total iron mass, will be necessarily greater than under the brief burst assumption. The assumption of a single, brief, star-formation burst at $t_f$ will therefore lead to a lower limit for the predicted SN Ia rate at later times.
Rewriting Eq. \[intria1\] with the fiducial values above and in Eq. \[feobssnii\], and normalizing the SN rate by the present-day $B$-band stellar luminosity of the cluster, we obtain $$\begin{aligned}
\frac{1}{L_B}\int R_{Ia}(t)~dt = 0.042\,\frac{\rm SN}{L_{B\odot}}
\left(\frac{M/L_{B}}{200}\right) \left(\frac{f_{\rm bar}}{0.17}\right)
~\frac{(f_{gas}~Z_{{\rm Fe-gas}}+f_* ~Z_{{\rm Fe}*})}{(0.9\times 0.3+0.1\times 1.2)\, 0.0026} \nonumber \\
\times \left(\frac{m_{\rm Fe-Ia}}{0.7 M_{\odot}}\right)^{-1}
\frac{1-\left(\frac{M_{\rm Fe-observed}}{M_{\rm Fe-SNII}}\right)^{-1}}{5/6}. \label{intria2}\end{aligned}$$ Here we have used a value of $M/L_B=(200 \pm 50) \frac{M_{\odot}}{L_{B\odot}}$ for the typical total mass-to-light ratio measured in rich clusters, assuming $H_0=70$ km s$^{-1}$ Mpc$^{-1}$. The central value and the error are obtained by taking the union of several recent determinations and their quoted uncertainties – Carlberg et al. (1996), Girardi & Giuricin (2000), Bahcall & Comerford (2002), and Girardi et al. (2002). Galaxy kinematics, modeling of the X-ray emission, and strong and weak gravitational lensing give generally consistent results for this parameter. The fraction of the cluster mass that is in baryons, $f_{\rm bar}$, is 10-30% (Ettori & Fabian 1999; Mohr, Mathiesen & Evrard 1999; Allen et al. 2002; Arnaud et al. 2002), and is thought to be representative of the universal baryonic mass fraction. We therefore adopt the cosmic value, 0.17, as recently measured with WMAP (Spergel et al. 2003).
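The normalization in Eq. \[intria2\] follows from straightforward arithmetic, as in the sketch below (our variable names; masses in $M_{\odot}$, luminosities in $L_{B\odot}$; the default SN II share of the iron is the fiducial $1/6$).

```python
def n_ia_per_lb(ml_b=200.0, f_bar=0.17, f_gas=0.9, Z_gas=0.3,
                f_star=0.1, Z_star=1.2, Z_sun_fe=0.0026,
                m_fe_ia=0.7, snii_fraction=1.0 / 6.0):
    """Time-integrated number of SNe Ia per unit L_B, after Eq. (intria2)."""
    m_bar = ml_b * f_bar                                  # baryon mass per L_B
    m_fe = m_bar * (f_gas * Z_gas + f_star * Z_star) * Z_sun_fe
    return m_fe * (1.0 - snii_fraction) / m_fe_ia

n = n_ia_per_lb()
print(n)          # ~0.04 SN per L_B,sun (the text quotes 0.042)
print(n * 1e3)    # equivalently ~40 SNu Gyr (1 SNu Gyr = 1e-3 SN / L_B,sun)
```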
The time-integrated number of SNe Ia per unit stellar luminosity obtained from Eq. \[intria2\] for the fiducial values, can also be expressed as $$0.042 \frac{\rm SN}{L_{B\odot}}=42~{\rm SNu~Gyr}.$$ In other words, to produce the iron mass seen in clusters with SNe Ia, there must have been in the past one SN Ia for every $23L_{B\odot}$ of present-day stellar luminosity. Equivalently, the mean SN Ia rate over a $\sim 10$ Gyr cluster age must have been 4.3 SNu. Since present day rates (e.g., in Virgo cluster ellipticals; Cappellaro et al. 1999) are much lower than this, the rate must have been much higher in the past.
Thus, the assumption of a brief star-formation burst at some time $t_f$ in the past, followed by a SN Ia rate $D(t-t_f)$ with characteristic time $\tau$, determines the form of $R_{Ia}(t)$. The observed iron mass determines the normalization of $R_{Ia}(t)$. Extended or multiple starbursts after $t_f$ can only lead to a higher $R_{Ia}(t)$ (except at times very soon after $t_f$). The only free parameters are therefore $t_f$ and $\tau$, which can be constrained by comparison to direct measurements of $R_{Ia}(t)$. Time and redshift are related, for our chosen cosmology, by $$\Delta t=H_0^{-1} \int_{z_1}^{z_2}(1+z)^{-1}[\Omega_m (1+z)^3+\Omega_{\Lambda}]^{-0.5} dz.$$
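The redshift-to-time conversion can be evaluated as in this sketch (flat cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ as quoted above; the values $\Omega_m=0.3$, $\Omega_\Lambda=0.7$ and the quadrature grid are our assumptions for illustration).

```python
import numpy as np

def delta_t(z1, z2, h0=70.0, omega_m=0.3, omega_l=0.7, n=4000):
    """Cosmic time elapsed between redshifts z2 and z1 (z2 > z1), in Gyr."""
    hubble_time = 977.8 / h0                     # 1/H0 in Gyr
    z = np.linspace(z1, z2, n)
    f = 1.0 / ((1.0 + z) * np.sqrt(omega_m * (1.0 + z)**3 + omega_l))
    return hubble_time * float(np.sum(0.5 * (f[:-1] + f[1:])) * (z[1] - z[0]))

# Time between cluster star formation at z_f = 2 and the z = 0.9 rate point:
print(delta_t(0.9, 2.0))                         # roughly 3 Gyr
```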
Comparison to Observations and Conclusions
==========================================
Figure 3 shows the expected $R_{Ia}$ vs. redshift based on Eqns. \[mdpdelay\] and \[intria2\], for several choices of $\tau$ and cluster stellar formation redshift $z_f$, and compares these predictions to the observations. Existing measurements of cluster SN Ia rates are by Reiss (2000) for $0.04<z<0.08$, and by Gal-Yam et al. (2002) for $z=0.25^{+0.12}_{-0.07}$, and $z=0.90^{+0.37}_{-0.07}$. The error bars show the 95% confidence intervals. Figure 4 is the same, but zooms in on the region $z<1.4$, where the data exist.
Figure 4 shows that the $z\sim 1$ SN Ia rate measurement is inconsistent with several of the plotted models. However, in the comparison, we must keep in mind that there are uncertainties in the parameters entering Eq. \[intria2\], which sets the normalization of the curves. The main uncertainty, $\pm 25\%$, is in the mean $M/L_{B}$ of clusters. Accounting also for the (smaller) uncertainties in the other parameters, we consider the measured 95% upper limit on $R_{Ia}$ at $z=0.9$ to be in conflict with the model with $z_f=2$, $\tau=2$ Gyr, and the model with $z_f=2$, $\tau=3$ Gyr. The upper limit on $R_{Ia}(t)$ is at $\sim 60\%$ of the predicted values for these models. The model with $z_f=3$, $\tau=5$ Gyr is marginally consistent with the $z=0.9$ data point, given the uncertainty in the prediction. However, this model gives an unacceptably high rate at low redshift, and therefore can also be rejected. The low observed SN Ia rate at $z\sim0-1$ means that, if cluster iron was produced by SNe Ia, those SNe must have occurred at earlier times, times that have yet to be probed by observations. To push the SNe Ia to such early times, one must invoke early star formation [*and*]{} a short SN Ia delay time.
A recent attempt by van Dokkum & Franx (2001) to deduce the epoch of star formation in cluster ellipticals by spectral synthesis modeling of the stellar populations, and accounting for biases in the selection of elliptical galaxies, finds stellar formation redshifts of $z_f=2^{+0.3}_{-0.2}$. If we adopt a particular formation redshift, say $z_f=2$, the resulting upper limit of $\tau<2$ Gyr places clear constraints on SN Ia progenitor models. Two of the models by Yungelson & Livio (2000) predict longer delays and are therefore ruled out. In the double degenerate model, two WDs merge, and the resulting object, having a mass larger than the Chandrasekhar limit, is subject to a runaway thermonuclear explosion triggered at its core. Yungelson & Livio find that the delay function of such systems has an exponential cutoff at $\sim11$ Gyr, at odds with the above upper limit on $\tau$. A similar discrepancy exists for a model where a WD accrets He-rich material from a He-star companion, leading to helium ignition on the surface of the WD and an edge-lit detonation of the star. This model has a delay function that cuts off at $\sim 5$ Gyr.
We emphasize that all these conclusions hold under the assumption that the stars in galaxy clusters were formed with a standard IMF, and therefore most of the iron is from SNe Ia. If the stars were formed with a sufficiently top-heavy IMF to produce the observed iron, few SNe Ia are expected at any redshift, and the SN Ia rate measurements place no constraints on the cluster star-formation epoch or on the SN Ia time delay.
Since our conclusions are based on the comparison of a low observed cluster SN Ia rate at $z\sim 1$ to the high rate predicted by some models, it is sensible to re-examine the reliability of the observation. One possibility to consider is that the rate measured by Gal-Yam et al. (2002), based on deep [*HST*]{} cluster images, was low because the SN detection efficiency was overestimated. Some faint cluster SNe that were missed would then be incorrectly accounted for in the rate calculations. This is highly unlikely, given that the actually detected cluster SNe were relatively bright, with $I<24$ mag. Much fainter SNe, down to 28 mag, [*were*]{} found by the survey, demonstrating its high sensitivity, but these SNe were background and foreground events, rather than cluster events. A second possibility is that the rate measured was skewed low by considering only the visibility time of normal SNe Ia. Ignoring a significant population of subluminous SNe Ia, which have short visibility times, will lower the derived rate. However, studies of local subluminous SNe Ia show that they have an extremely low iron yield, $\sim 0.007 M_{\odot}$ (e.g., Contardo et al. 2000). Thus, unless they are very common at $z\sim 1$, such SNe are irrelevant for iron production in clusters.
An interesting conclusion arises if we examine our results jointly with the results we report in a companion paper (Gal-Yam & Maoz 2003). There, we find that some versions of cosmic SFH, combined with particular SN Ia delay times, are incompatible with the observed redshift distribution of SNe Ia found by Perlmutter et al. (1999). For example, if the cosmic SFR rises between $z=0$ and $z\sim 1$ as sharply as implied by Lilly et al. (1996) and by Hippelein et al. (2003), and at $z>1$ as sharply as found by Lanzetta et al. (2002), then the observed SN Ia redshift distribution sets a 95%-confidence [*lower*]{} limit on the SN Ia delay time, of $\tau>3$ Gyr. Thus, if one had independent reasons to adopt this SFH scenario, [*and*]{} a cluster star-formation redshift $z_f<2$ (the latter implying an [*upper*]{} limit of $\tau<2$ Gyr at 95% confidence from the measured SN Ia rate), then [*all*]{} values of $\tau$ would be excluded. One would then be forced to the conclusion that SNe Ia cannot be the source of iron in clusters, leaving only the top-heavy IMF option.
To summarize, we have investigated the source of the total iron mass in rich galaxy clusters. Using updated values for the various observational parameters, we have rederived and quantified the excess of iron over the expectation from core-collapse SNe, provided the stars in cluster galaxies formed with a standard IMF. Assuming the source of the iron is SNe Ia, we then showed that the SN Ia rate vs. redshift can be predicted quite robustly, given two parameters – the star formation redshift in clusters, and the mean SN Ia delay time. We set constraints on these two parameters by comparing the predicted rates to the measurements of the SN Ia rate at $z\sim 1$ by Gal-Yam et al. (2002). The low observed rates at $z\sim 1$ force the iron-producing SNe Ia to have occurred at higher redshifts. This implies an early epoch of star formation [*and*]{} a short delay time. Specifically, we showed that models with a mean SN Ia delay time of $\tau \geq 2$ Gyr ($\geq 5$ Gyr) and $z_f\leq 2$ ($z_f\leq 3$) are ruled out at high confidence. Thus, if other avenues of inquiry show that SNe Ia explode via a mechanism that leads to a long delay time of the kind ruled out by our study, it will mean that core-collapse SNe from a top-heavy IMF must have formed the bulk of the observed mass of iron in clusters. The same conclusion will apply if other studies confirm a cosmic SFH that rises sharply with redshift. As shown in Gal-Yam & Maoz (2003), short ($<4$ Gyr) delay times are incompatible with such a SFH and the redshift distribution of field SNe Ia.
Finally, we note that our results have relied on the upper limits on the $z \sim 1$ cluster SN Ia rate, calculated by Gal-Yam et al. (2002) based on only a few SNe. Improved measurements of cluster SN rates at low, intermediate, and high redshifts can tighten the constraints significantly, and can potentially reveal the iron source directly.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank M. Hamuy, A. Filippenko, P. Mazzali, and B. Schmidt for valuable comments and discussions. This work was supported by the Israel Science Foundation — the Jack Adler Foundation for Space Research, Grant 63/01-1.
Allen, S. W., Schmidt, R. W., & Fabian, A. C. 2002, MNRAS, 334, L11
Anders, E., & Grevesse, N. 1989, GeCoA, 53, 197
Arnaud, M., Aghanim, N., & Neumann, D. M. 2002, A&A, 389, 1
Bahcall, N. A., & Comerford, J. M. 2002, ApJL, 565, L5
Baldry, I. K., & Glazebrook, K. 2003, ApJ, in press, astro-ph/0304423
Brighenti, F., & Mathews, W. G. 1998, ApJ, 515, 542
Brighenti, F., & Mathews, W. G. 2001, ApJ, 553, 103
Buote, D. A. 2002, to be published in the proceedings of the conference “IGM/Galaxy Connection - The Distribution of Baryons at z=0”, held at the University of Colorado, Boulder, USA, August 8-10, 2002, astro-ph/0210608
Cappellaro, E., Evans, R., & Turatto, M. 1999, A&A, 351, 459
Carlberg, R. G., Yee, H. K. C., Ellingson, E., Abraham, R., Gravel, P., Morris, S., & Pritchet, C. J. 1996, ApJ, 462, 32
Contardo, G., Leibundgut, B., & Vacca, W. D. 2000, A&A, 359, 876
Dahlén, T., & Fransson, C. 1999, A&A, 350, 349
Elmhamdi, A., Chugai, N. N., & Danziger, I. J. 2003, A&A, in press, astro-ph/0304144
Ettori, S. 2003, MNRAS, in press, astro-ph/0305296
Ettori, S., & Fabian, A. C. 1999, MNRAS, 305, 834
Finoguenov, A., David, L. P., & Ponman, T. J. 2000, ApJ, 544, 188
Finoguenov, A., Burkert, A., & Boehringer, H. 2003, ApJ, in press, astro-ph/0305190
Fukazawa, Y., Makishima, K., Tamura, T., Ezawa, H., Xu, H., Ikebe, Y., Kikuchi, K., & Ohashi, T. 1998, PASJ, 50, 187
Gal-Yam, A., & Maoz, D. 2003, MNRAS, submitted
Gal-Yam, A., Maoz, D., & Sharon, K. 2002, MNRAS, 332, 37
Gerhard, O., Kronawitter, A., Saglia, R. P., & Bender, R. 2001, AJ, 121, 1936
Girardi, M., & Giuricin, G. 2000, ApJ, 540, 45
Girardi, M., Manzato, P., Mezzetti, M., Giuricin, G., & Limboz, F. 2002, ApJ, 569, 720
Gould, A., Bahcall, J. N., & Flynn, C. 1997, ApJ, 482, 913
Grevesse, N., & Sauval, A. J. 1999, A&A, 347, 348
Hamuy, M. 2003, ApJ, 582, 905
Hippelein, H., et al. 2003, A&A, in press, astro-ph/0302116
Jørgensen, I. 1999, MNRAS, 306, 607
Kennicutt, R. C. 1983, ApJ, 272, 54
Kochanek, C. S., et al. 2001, ApJ, 560, 566
Kroupa, P. 2001, MNRAS, 322, 231
Lanzetta, K. M., Yahata, N., Pascarelle, S., Chen, H., & Fernández-Soto, A. 2002, ApJ, 570, 492
Lilly, S. J., Le Fevre, O., Hammer, F., & Crampton, D. 1996, ApJL, 460, L1
Lima Neto, G. B., Capelato, H. V., Sodré, L., & Proust, D. 2003, A&A, 398, 31
Lin, Y.-T., Mohr, J. J., & Stanford, S. A. 2003, ApJ, in press, astro-ph/0304033
Lloyd-Davies, E. J., Ponman, T. J., & Cannon, D. B. 2000, MNRAS, 315, 689
Loewenstein, M. 2000, ApJ, 532, 17
Madau, P., Della Valle, M., & Panagia, N. 1998, MNRAS, 297, L17 (MDP)
Miller, G. E., & Scalo, J. M. 1979, ApJS, 41, 513
Mohr, J. J., Mathiesen, B., & Evrard, A. E. 1999, ApJ, 517, 627
Mushotzky, R. F., & Loewenstein, M. 1997, ApJL, 481, L63
Pain, R., et al. 2002, ApJ, 577, 120
Perlmutter, S., et al. 1999, ApJ, 517, 565
Pipino, A., Matteucci, F., Borgani, S., & Biviano, A. 2002, New Astronomy, 7, 227
Reiss, D. 2000, PhD Thesis, University of Washington
Renzini, A. 1997, ApJ, 488, 35
Renzini, A., Ciotti, L., D’Ercole, A., & Pellegrini, S. 1993, ApJ, 419, 52
Ruiz-Lapuente, P., & Canal, R. 1998, ApJ, 497, L57
Sadat, R., Blanchard, A., Guiderdoni, B., & Silk, J. 1998, A&A, 331, L69
Salpeter, E. E. 1955, ApJ, 121, 161
Scalo, J. 1998, ASP Conf. Ser. 142: The Stellar Initial Mass Function (38th Herstmonceux Conference), 201
Spergel, D. N., et al. 2003, ApJ, submitted, astro-ph/0302209
Sullivan, M., et al. 2000, MNRAS, 319, 549
Tamura, T., Kaastra, J. S., Bleeker, J. A. M., & Peterson, J. R. 2002, in “New Visions of the X-ray Universe in the XMM-Newton and Chandra Era”, astro-ph/0209332
Thielemann, F.-K., Nomoto, K., & Yokoi, K. 1986, A&A, 158, 17
Thielemann, F.-K., Nomoto, K., & Hashimoto, M. 1996, ApJ, 460, 408
Tonry, J. L., et al. 2003, ApJ, in press, astro-ph/0305008
Tozzi, P., & Norman, C. 2001, ApJ, 546, 63
Tozzi, P., et al. 2003, ApJ, in press, astro-ph/0305223
van Dokkum, P. G., & Franx, M. 2001, ApJ, 553, 90
White, D. A. 2000, MNRAS, 312, 663
Yungelson, L., & Livio, M. 2000, ApJ, 528, 108
Zampieri, L., Pastorello, A., Turatto, M., Cappellaro, E., Benetti, S., Altavilla, G., Mazzali, P., & Hamuy, M. 2003, MNRAS, 338, 711
---
abstract: 'We employ density functional theory to calculate the self consistent electronic structure, free energy and linear source-drain conductance of a lateral semiconductor quantum dot patterned via surface gates on the 2DEG formed at the interface of a $GaAs-AlGaAs$ heterostructure. The Schrödinger equation is reduced from 3D to multi-component 2D and solved via an eigenfunction expansion in the dot. This permits the solution of the electronic structure for dot electron number $N \sim 100$. We present details of our derivation of the total dot-lead-gates interacting free energy in terms of the electronic structure results, which is free of capacitance parameters. Statistical properties of the dot level spacings and connection coefficients to the leads are computed in the presence of varying degrees of order in the donor layer. Based on the self-consistently computed free energy as a function of gate voltages, $V_i$, and N, we modify the semi-classical expression for the tunneling conductance as a function of gate voltage through the dot in the linear source-drain, Coulomb blockade regime. Among the many results presented, we demonstrate the existence of a shell structure in the dot levels which (a) results in envelope modulation of Coulomb oscillation peak heights, (b) which influences the dot capacitances and should be observable in terms of variations in the activation energy for conductance in a Coulomb oscillation minimum, and (c) which possibly contributes to departure of recent experimental results from the predictions of random matrix theory.'
address: |
RIKEN (The Institute of Physical and Chemical Research)\
2-1, Hirosawa, Wako-Shi\
Saitama 351-01, Japan\
e-mail [email protected]
author:
- 'M. Stopa'
title: '**Quantum dot self consistent electronic structure and the Coulomb blockade**'
---
Introduction
============
The Coulomb blockade and charging effects in the transport properties of semiconductor systems are particularly well suited to investigation through self-consistent electronic structure techniques. While the orthodox theory [@Lik], which parameterizes the energy of the system in terms of capacitances, works well for metal systems, the much larger ratio of Fermi wavelength to system size, $\lambda_F / L$, in mesoscopic semiconductor devices requires investigation of the interplay of quantum mechanics and charging.
In the first step beyond the orthodox theory, the “constant interaction” model of the Coulomb blockade supplemented the capacitance parameters, which were retained to characterize the gross electrostatic contributions to the energy, with non-interacting quantum levels of the dots and leads of the mesoscopic device [@Ruskies; @Been]. This theory was successful in explaining some of the fundamental features, specifically the periodicity, of Coulomb oscillations in the conductance of a source-dot-drain-gate system with varying gate voltage. Other effects, however, such as variations in oscillation amplitudes, were not explained.
In this paper we employ density functional (DF) theory to compute the self-consistently changing effective single particle levels of a lateral $GaAs-AlGaAs$ quantum dot, as a function of gate voltages, temperature $T$, and dot electron number $N$ [@RComm]. We also compute the total system free energy from the results of the self-consistent calculation. We are then able to calculate the device conductance in the linear bias regime without any adjustable parameters. Here we consider only weak ($\stackrel{\sim}{<} 0.1 \; T$) magnetic fields in order to study the effects of breaking time-reversal symmetry. We will present results for the edge state regime in a subsequent publication [@lp2].
We include donor layer disorder in the calculation and present results for the statistics of level spacings and partial level widths due to tunneling to the leads. Recently we have employed Monte-Carlo variable range hopping simulations to consider the effect of Coulomb regulated ordering of ions in the donor layer on the mode characteristics of split-gate quantum [*wires*]{} [@BR2]. The results of those simulations are here applied to quantum dot electronic structure.
A major innovation in this calculation is our method for determining the two dimensional electron gas (2DEG) charge density. At each iteration of the self-consistent calculation, at each point in the $x-y$ plane we determine the subbands $\epsilon_n (x,y)$ and wave functions $\xi^{xy}_n (z)$ in the $z$ (growth) direction. The full three dimensional density is then determined by a solution of the multi-component 2D Schrödinger equation and/or 2D Thomas-Fermi approximation.
Among the many approximations in the calculation are the following. We use the local density approximation (LDA) for exchange-correlation (XC), specifically the parameterized form of Stern and Das Sarma [@SternDas]. While the LDA is difficult to justify in small ($N \sim 50-100$) quantum dots it is empirically known to give good results in atomic and molecular systems where the density is also changing appreciably on the scale of the Fermi wavelength [@Slater].
In reducing the 3D Schrödinger equation to a multi-component 2D equation we cut off the expansion in subbands, often taking only the lowest subband into account. We also cut off the wavefunctions by placing another artificial $AlGaAs$ interface at a certain depth (typically $200 \; \stackrel{0}{A}$) away from the first interface, thereby ensuring the existence of subbands at all points in the $x-y$ plane. Generally the subband energy of this bare square well is much smaller than the triangular binding to the interface in all but those regions which are very nearly depleted.
The dot electron states in the zero magnetic field regime are simply treated as spin degenerate. For $B \ne 0$ an unrenormalized Landé g-factor of $-0.44$ is used. We employ the effective mass approximation uncritically and ignore the effective mass difference between $GaAs$ and $AlGaAs$ ($m^* = 0.067 \; m_0$). Similarly we take the background dielectric constant to be that of pure $GaAs$ ($\kappa =
12.5$) thereby ignoring image effects (in the $AlGaAs$). We ignore interface grading and treat the interface as a sharp potential step. These effects have been treated in other calculations of self-consistent electronic structure for $GaAs-AlGaAs$ devices [@SternDas] and have generally been found to be small.
We mostly employ effective atomic units wherein $1 \; Ry^* = m^* e^4/2 \hbar^2 \kappa^2 \approx 5.8 \; meV$ and $1 \; a_B^* = \hbar^2 \kappa/m^* e^2 \approx 100 \; \stackrel{0}{A}$.
The structure of the paper is as follows. In section II we first discuss the calculation of the electronic structure, focusing on those features which are new to our method. Further subsections then consider the treatment of discrete ion charge and disorder, calculation of the total dot free energy from the self-consistent electronic structure results, calculation of the source-dot-drain conductance in the linear regime and calculation of the dot capacitance matrix. Section III provides new results which are further subdivided into basic electrostatic properties, properties of the effective single electron spectra, statistics of level spacings and widths and conductance in the Coulomb oscillation regime. Section IV summarizes the principal conclusions which we derive from the calculations.
Calculations
============
Quantum dot self-consistent electronic structure
------------------------------------------------
We consider a lateral quantum dot patterned on a 2DEG heterojunction via metallic surface gates (Fig. \[fig1\]). At a semiclassical level, other gate geometries, such as a simple point contact or a multiple dot system, can be treated with the same method [@BR2; @G-res]. However, a full 3D solution of Schrödinger’s equation, even employing our subband
expansion procedure for the $z$ direction, is only tractable in the current method when a region with a small number of electrons ($N \le 100$) is quantum mechanically isolated, such as in a quantum dot.
### Poisson equation and Newton’s method
In principle, a self-consistent solution is obtained by iterating the solution of Poisson’s equation and [*some*]{} method for calculating the charge density (see following sections II.A.2 and II.A.3). In practice, we follow Kumar [*et al.*]{} [@Kumar] and use an ${\cal N}$-dimensional Newton’s method for finding the zeroes of the functional $\vec{F}(\vec{\phi}) \equiv {\bf \Delta} \cdot \vec{\phi} + \vec{\rho}(\vec{\phi})
+ \vec{q}$; where the potential, $\phi_i$, and density, $\rho_i$, on the ${\cal N}$ discrete lattice sites (${\cal N} \sim 100,000$) are written as vectors, $\vec{\phi}$ and $\vec{\rho}$. The vector $\vec{q}$ represents the inhomogeneous contribution from any Dirichlet boundary conditions, ${\bf \Delta}$ is the Laplacian (note that here it is a matrix, not a differential operator), modified for boundary conditions. Innovations for treating the Jacobian $\partial \rho_i / \partial \phi_j$ beyond 3D Thomas-Fermi, and for rapidly evaluating the mixing parameter $t$ (see Ref. [@Kumar]) are discussed below.
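Schematically, the damped Newton update for $\vec{F}(\vec{\phi})=0$ looks as follows (a minimal 1D Python sketch with a toy linear screening charge; the actual calculation is 3D, uses the Bank-Rose strategy for $t$ and a wave-function-based Jacobian, none of which is reproduced here).

```python
import numpy as np

def newton_poisson(rho_of_phi, drho_dphi, q, n=200, h=1.0, t=0.5, tol=1e-10):
    """Damped Newton iteration for F(phi) = Lap*phi + rho(phi) + q = 0
    on a 1D grid with Dirichlet ends (phi = 0 outside)."""
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2
    phi = np.zeros(n)
    for _ in range(200):
        F = lap @ phi + rho_of_phi(phi) + q
        if np.linalg.norm(F) < tol:
            break
        J = lap + np.diag(drho_dphi(phi))        # Jacobian dF/dphi
        phi -= t * np.linalg.solve(J, F)         # damped Newton step
    return phi

# Toy linear-screening response (illustrative only):
rho = lambda p: -0.1 * p
drho = lambda p: -0.1 * np.ones_like(p)
phi = newton_poisson(rho, drho, q=np.full(200, 1e-3))
```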
The Poisson grid spans a rectangular solid and hence the boundary conditions on six surfaces must be supplied. Wide regions of the source and drain must be included in order to apply Neumann boundary conditions on these ($x = $ constant) interfaces, so a non-uniform mesh is essential. It is also possible to apply Dirichlet boundary conditions on these interfaces using the ungated wafer (one dimensional) potential profile calculated off-line [@AFS]. In this case, failure to include sufficiently wide lead regions shows up as induced charge on these surfaces (non-vanishing electric field). To keep the total induced charge on all surfaces below $0.5$ electron, lead regions of $\sim 5 \; \mu m$ are necessary, assuming a surface gate to 2DEG distance (i.e. $AlGaAs$ thickness) of $1000 \stackrel{0}{A}$. In other words we need an aspect ratio of $50:1$. We note that we ignore background compensation and merely assume that the Fermi level is pinned at some fixed depth (“$z_{\infty}$” $\sim 2.5 \; \mu m$) into the $GaAs$ at the donor level. The donor energy for $GaAs$ is taken as $1 \; Ry^*$ below the conduction band. In the source and drain regions, the potential of the 2DEG Fermi surface is fixed by the desired (input) lead voltage.
We apply Neumann boundary conditions at the $y = $ constant surfaces. The $z=0$ surface of the device has Dirichlet conditions on the gated regions (voltage equal to the relevant desired gate voltage) and Neumann conditions, $\partial \phi / \partial n = 0$, elsewhere. This is equivalent to the “frozen surface” approximation of [@JHD2], further assuming a high dielectric constant for the semiconductor relative to air. Further discussion of this semiconductor-air boundary condition can be found in Ref. [@JHD2].
### Charge density, quasi-2D treatment
The charge density [*within*]{} the Poisson grid (i.e. not surface gate charge) includes the 2DEG electrons and the ions in the donor layer. The treatment of discreteness, order and disorder in the donor ionic charge $\vec{\rho}_{ion}$ has been discussed in Ref. [@BR2] in regards to quantum wire electronic structure. Some further relevant remarks are made below in section II.B.
As noted above, we take advantage of the quasi-2D nature of the electrons at the $GaAs-AlGaAs$ interface to simplify the calculation for their contribution to the total charge. Given $\vec{\phi}$, we begin by solving Schrödinger’s Eq. in the $z$-direction [*at every point*]{} in the $x-y$ plane, $$[-\frac{\partial^2}{\partial z^2} + V_B (z) + e \phi(x,y,z)] \xi^{xy}_n (z)
= \epsilon_n (x,y) \xi^{xy}_n (z) \label{eq:eqz}$$ where $V_B(z)$ is the potential due to the conduction band offset between $GaAs$ and $Al_x Ga_{1-x} As$. We generally employ fast Fourier transform with $16$ or $32$ subbands.
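As an illustration of this step, Eq. \[eq:eqz\] at a single $(x,y)$ point can be solved by direct finite-difference diagonalization (the calculation described above uses a fast-Fourier-transform basis; the finite-difference variant below and its numerical values are ours, in effective atomic units, with a hard-wall well standing in for $V_B(z)$).

```python
import numpy as np

def z_subbands(pot_z, z, n_sub=4):
    """Lowest subbands eps_n and wave functions xi_n(z) of Eq. (eqz) at one
    (x, y) point, by finite differences; pot_z = V_B(z) + e*phi(x,y,z)."""
    h = z[1] - z[0]
    n = len(z)
    H = (np.diag(2.0 / h**2 + pot_z)
         - np.diag(np.ones(n - 1) / h**2, 1)
         - np.diag(np.ones(n - 1) / h**2, -1))
    eps, xi = np.linalg.eigh(H)
    return eps[:n_sub], xi[:, :n_sub] / np.sqrt(h)   # sum |xi|^2 h = 1

# Bare hard-wall well of width 2 a_B* (~200 Angstrom), flat potential:
z = np.linspace(0.0, 2.0, 400)
eps, xi = z_subbands(np.zeros_like(z), z)
print(eps[0])   # close to (pi/2)^2 ~ 2.47 Ry* for the empty well
```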
In order that there be a discrete spectrum at each point in the $x-y$ plane, it is convenient to take $V_B(z)$ as a [*square well*]{} potential (Fig. \[fig1\]). That is, we effectively cutoff the wave function with a second barrier, typically $200 \stackrel{0}{A}$ from the primary interface. In undepleted regions the potential is still basically triangular and only the tail of the wave function is affected. However, near the border between depleted and undepleted regions the artificial second barrier will introduce some error into the electron density. This is because as a depletion region is approached, the binding [*electric field*]{} at the 2DEG interface (slope of the triangular potential) reduces, in addition to the interface potential itself rising. Consequently, all subbands become degenerate and [*near the edge electrons are three dimensional*]{} [@McEuenrecent]. We have checked that this departure from interface confinement, and in general in-plane gradients of $\xi^{x,y}_n (z)$ contribute negligibly to quantum dot level energies. However, theoretical descriptions of 2DEG edges commonly assume perfect confinement of electrons in a plane. In particular the description of edge excitations in the quantum Hall effect regime in terms of a chiral Luttinger liquid [@Wen] may be complicated in real samples by the emergence of this vanishing energy scale and collective modes related to it.
Assuming only a single $z$-subband now and dropping the index $n$, we determine the charge distribution in the $x-y$ plane from the effective potential $\epsilon (x,y)$, employing a 2D Thomas-Fermi approximation for the charge in the leads and solving a 2D Schrödinger equation in the dot. In order that the dot states be well defined, the QPC saddle points must be classically inaccessible. (If this is not the case it is still possible to use a Thomas-Fermi approximation throughout the plane for the charge density [@BR2; @G-res]). In the dot, the density is determined from the eigenstates by filling states according to a Fermi distribution either to a prescribed “quasi-Fermi energy” of the dot, or to a fixed number of electrons. It has been pointed out that a Fermi distribution for the level occupancies in the dot is an inaccurate approximation to the correct grand canonical ensemble distribution [@Been]. Nonetheless, for small dots ($N \stackrel{<}{\sim} 15$) Jovanovic [*et al.*]{} [@Jovanovic] have shown that, regarding the filling factor, the discrepancy between a Fermi function evaluation and that of the full grand canonical ensemble is $\sim 5\%$ at half filling and significantly smaller away from the Fermi surface. As $N$ increases the discrepancy should become smaller.
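For the leads, the 2D Thomas-Fermi step has a closed form once the local subband bottom $\epsilon(x,y)$ is known; the transcription below is ours (effective atomic units, in which the spin-degenerate 2D density of states of one subband is $1/2\pi$ per $Ry^*$ per $a_B^{*2}$).

```python
import numpy as np

def lead_density_2d(eps_xy, mu, kT=0.01):
    """2D Thomas-Fermi density n(x,y) in the leads (effective atomic units):
    n = (kT / 2 pi) ln[1 + exp((mu - eps) / kT)], written stably."""
    return (kT / (2.0 * np.pi)) * np.logaddexp(0.0, (mu - eps_xy) / kT)

# Example: a saddle-like subband bottom with the lead Fermi level at mu = 0:
xx, yy = np.meshgrid(np.linspace(-3, 3, 61), np.linspace(-3, 3, 61))
n = lead_density_2d(0.05 * (xx**2 - yy**2), mu=0.0)
```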
### Solution of Schrödinger’s equation in the dot
To solve the effective 2D Schrödinger’s equation in the dot, $$(-{\bf \nabla}^2 + \epsilon({\bf x}) ) f ({\bf x}) = E f ({\bf x}) \label{eq:eqE}$$ we set the 2D potentials throughout the [*leads*]{} to their values at the saddle points, thereby ensuring that the wave functions decay uniformly into the leads. Thus the energy of the higher lying states will be shifted upward slightly. In seeking a basis in which to expand the solution of Eq. \[eq:eqE\] we must consider the approximate shape of the potential. The quantum dots which we model here are lithographically approximately square in shape. However the potential at the 2DEG level and also the effective 2-D potential $\epsilon(r,\theta)$, (now in polar coordinates) are to lowest order azimuthally symmetric. The [*radial*]{} dependence of the potential is weakly parabolic across the center. Near the perimeter higher order terms become important (cf. figure \[fig3\]b and Eq. \[eq:phi\]).
As the choice of a good basis is not completely clear, we have tried two different sets of functions: Bessel functions and the so-called Darwin-Fock (DF) states [@Darwin]. The details of the solution for the eigenfunctions and eigenvalues differ significantly depending on whether we use the Bessel functions or the DF states. The Bessel function case is largely numerical whereas the DF functions together with polynomial fitting of the azimuthally symmetric part of the radial potential allow a considerable amount of the work to be done analytically. Further, neither of the two bases comes particularly close to fitting the somewhat eccentric shape of the actual dot potential. It is therefore gratifying that, comparing the eigenvalues determined from the two bases when reasonable cutoffs are used, we find agreement for up to the $50^{th}$ eigenenergy to three significant figures, or to within roughly $5 \; \mu eV$.
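Independently of the basis choice, the spectrum of Eq. \[eq:eqE\] can be cross-checked by brute-force sparse diagonalization on the $(x,y)$ grid, as in this sketch (not the expansion method described above; effective atomic units, with an illustrative nearly parabolic $\epsilon(x,y)$ of our own choosing).

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

def dot_levels(eps_xy, h, n_levels=50):
    """Lowest eigenvalues of (-Lap + eps(x,y)) f = E f on a square grid
    with spacing h and hard-wall boundaries."""
    n = eps_xy.shape[0]
    d1 = diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
               [0, 1, -1]) / h**2
    lap = kron(identity(n), d1) + kron(d1, identity(n))
    vals = eigsh(lap + diags(eps_xy.ravel()), k=n_levels,
                 which='SA', return_eigenvectors=False)
    return np.sort(vals)

h = 0.25
x = np.arange(-8.0, 8.0 + h / 2, h)
xx, yy = np.meshgrid(x, x)
print(dot_levels(0.01 * (xx**2 + yy**2), h)[:6])  # ~2D-oscillator ladder 0.2, 0.4, ...
```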
### Summary and efficiency
To summarize the calculation, we begin by choosing the device dimensions such as the gate pattern, the ionized donor charge density and its location relative to the 2DEG, the aluminum concentration for the height of the barrier, and the thickness of the $AlGaAs$ layer. We construct non-uniform grids in $x$, $y$ and $z$ that best fit the device within a total of about $10^5$ points. Gate voltages, temperature, source-drain voltages, and either the electron number $N$ or the quasi-Fermi energy of the dot are inputs. The iteration scheme begins with a guess of $\vec{\phi}^{(0)}$. The 1-D Schrödinger equation is solved at each point in the $x-y$ plane and an effective 2-D potential $\epsilon(x,y)$ for one or at most two subbands is thereby determined. Taking $|\xi^{xy}_n (z)|^2$ for the $z$-dependence of the charge density, we compute the 2D dependence in the leads using a 2D Thomas-Fermi approximation and in the dot by solving Schrödinger’s equation and filling the computed states according to a Fermi distribution. We compute $\vec{F} (\vec{\phi}^{(0)})$, which is a measure of how far we are from self-consistency, and solve for $\delta \vec{\phi}$, the potential increment, using a mixing parameter $t$. This gives the next estimate for the potential $\vec{\phi}^{(1)}$. The procedure is iterated and convergence is gauged by the norm of $F$.
In practice there are many tricks which one uses to hasten (or even obtain!) convergence. First, we use a scheme developed by Bank and Rose [@Bank; @Kumar] to search for an optimal mixing parameter $t$. Repeated calculation of Schrödinger’s equation, which is very costly, is in principle required in the search for $t$. Far from convergence the Thomas-Fermi approximation can be used in the dot as well as the leads. Nearer to convergence we find that diagonalizing $t \; \delta \vec{\phi}$ in a basis of about ten states near the Fermi surface, treating the charge in the other filled states as inert, is highly efficient. Periodically the full solution of Schrödinger’s equation is employed to update the wave functions.
The wave function information is also used to make a better estimate of $\partial \rho_i / \partial \phi_i$. The 3D Thomas-Fermi method for estimating this quantity does not account for the fact that the change in density at a given grid point will be most strongly influenced by the changes in the occupancies of the partially filled states at the Fermi surface. Thus use of these wave functions greatly improves the speed of the calculation.
Disorder
--------
Evidence of Coulombic [*ordering*]{} of the donor charge in a modulation doping layer adjacent to a 2DEG has recently accumulated [@Buks]. When the fraction ${\cal F}$ of ionized donors among all donors is less than unity, redistribution of the ionized sites through hopping can lead to ordering of the donor layer charge [@Efros; @BR2].
In this paper we consider the effects of donor charge distribution on the statistical properties of quantum dot level spectra, in particular the unfolded level spacings, and on the connection coefficients to the leads $\Gamma_p$ of the individual states (see below). These dot properties are calculated with ensembles of donor charge which range from completely random (identical to ${\cal F}=1$, no ion re-ordering possible) to highly ordered (${\cal F} \sim 1/10$). For a discussion of the glass-like properties of the donor layer and the Monte-Carlo variable range hopping calculation which is used to generate ordered ion ensembles, see Refs. [@BR2] and [@ISQM2].
Note that hopping is assumed to take place at temperatures ($\sim 160 \, K$) much higher than the sub-liquid Helium temperatures at which the dot electronic structure is calculated. Thus the ionic charge distributions generated in the Monte-Carlo calculation are, for the purposes of the 2DEG electronic structure calculation, considered fixed space charges which are specifically not treated as being in thermal equilibrium with the 2DEG.
The region where the donor charge can be taken as discrete is limited by grid spacing and hence computation time. In the wide lead regions and wide region lateral to the dot the donor charge is always treated as “jellium.” Also, to serve as a baseline, we calculate the dot structure with jellium across the dot region as well. We introduce the term “quiet dot” to denote this case.
Free energy
-----------
To calculate the total interacting free energy we begin from the semi-classical expression $$\begin{aligned}
F(&\{n_p\},Q_i,V_i) = \sum_p n_p \varepsilon_p^0 +
\frac{1}{2} \sum_i^M Q_i V_i \nonumber \\
& - \sum_{i \ne dot} \int dt \; V_i (t) I_i (t) \label{eq:cl}\end{aligned}$$ where $n_p$ are the occupancies of non-interacting dot energy levels $\varepsilon_p^0$; $Q_i$ and $V_i$ are the charges and voltages of the $M$ distinct “elements” into which we divide the system: dot, leads and gates. $I_i$ are the currents supplied by power supplies to the elements.
The [*self-consistent*]{} energy levels for the electrons in the dot are $\varepsilon_p = < \psi_p \mid - \nabla^2 + V_B (z) +e \phi ({\bf r})
\mid \psi_p >$. A sum over these levels double counts the electron-electron interaction. Thus, for the terms in Eq. \[eq:cl\] relating to the dot, we make the replacement: $$\begin{aligned}
& \sum_p n_p \varepsilon_p^0 + \frac{1}{2} Q_{dot} V_{dot} \rightarrow
\sum_p n_p \varepsilon_p \nonumber \\
& - \frac{1}{2} \int d{\bf r} \rho_{dot}({\bf r})
\phi ({\bf r}) + \frac{1}{2} \int d{\bf r} \rho_{ion}({\bf r}) \phi ({\bf r})\end{aligned}$$ where $\rho_{dot}({\bf r})$ refers only to the charge in the dot states and $\rho_{ion}({\bf r})$ refers to all the charge in the donor layer.
We have demonstrated [@BR1; @MCO] that previous investigations [@Been; @vanH] had failed to correctly include the work from the power supplies, particularly to the source and drain leads, in the energy balance for tunneling between leads and dot in the Coulomb blockade regime. Here, we assume a low impedance environment which allows us to make the replacement: $$\frac{1}{2} \sum_{i \ne dot} Q_i V_i -
\sum_{i \ne dot} \int dt \; V_i (t) I_i (t) \rightarrow
- \frac{1}{2} \sum_{i \ne dot} Q_i V_i.$$ The charges on the gates are determined from the gradient of the potential at the various surface regions, the voltages being given. Including only the classical electrostatic energy of the leads, the total free energy is [@RComm]: $$\begin{aligned}
& F(\{n_p\},N,V_i) = \sum_{p} n_p
\varepsilon_{p} - \frac{1}{2} \int d{\bf r}
\rho_{dot}({\bf r}) \phi ({\bf r}) \nonumber \\
& + \frac{1}{2} \int d{\bf r} \rho_{ion}({\bf r}) \phi ({\bf r})
- \frac{1}{2} \sum_{i \; \epsilon \; leads} \; \int d{\bf r}
\rho_i ({\bf r}) \phi ({\bf r}) \nonumber \\
& - \frac{1}{2} \sum_{i \; \epsilon \; gates} Q_i V_i \label{eq:free}\end{aligned}$$ where the energy levels, density, potential and induced charges are implicitly functions of $N$ and the applied gate voltages $V_i$. Note that the occupation number dependence of these terms is ignored. In the $T=0$ limit the electrons occupy the lowest $N$ states of the dot, and the free energy is denoted $F_0 (N,V_i)$.
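Once the self-consistent solution is in hand, Eq. \[eq:free\] is assembled by simple quadrature over the grid; the bookkeeping is sketched below (array names are ours, and all inputs are assumed to come from the converged calculation).

```python
import numpy as np

def total_free_energy(occ, eps_levels, rho_dot, rho_ion, rho_leads,
                      phi, dV, Q_gates, V_gates):
    """Total interacting free energy, Eq. (eq:free), at fixed occupancies.

    rho_* and phi live on the 3D grid (cell volume dV); Q_gates and
    V_gates are the induced gate charges and applied gate voltages.
    """
    F = np.dot(occ, eps_levels)
    F -= 0.5 * np.sum(rho_dot * phi) * dV     # remove double-counted e-e energy
    F += 0.5 * np.sum(rho_ion * phi) * dV     # donor-layer term
    F -= 0.5 * np.sum(rho_leads * phi) * dV   # lead electrostatic term
    F -= 0.5 * np.dot(Q_gates, V_gates)       # gate work terms
    return F
```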
Conductance
-----------
The master equation formula for the linear source-drain conductance though the dot, derived by several authors [@Been; @Ruskies; @Meir] for the case of a fixed dot spectrum, is modified to the self-consistently determined free energy case as follows [@RComm]: $$\begin{aligned}
& G(V_g) = \displaystyle{\frac{e^2}{k_B T} \sum_{\{ n_{i} \} }}
P_{eq}( \{ n_{i} \} ) \sum_{p} \delta_{n_{p},0}
\displaystyle{\frac{\Gamma_p^s \Gamma_p^d}{\Gamma_p^s +
\Gamma_p^d}} \nonumber \\
& \times f(F(\{n_i+p\},N+1,V_g)
- F(\{n_i\},N,V_g) - \mu ) \label{eq:cond}\end{aligned}$$ where the first sum is over dot level occupation configurations and the second is over dot levels. The equilibrium probability distribution $P_{eq} ( \{ n_i \} )$ is given by the Gibbs distribution, $$P_{eq} ( \{ n_i \} ) = \frac{1}{Z} exp[- \beta (F(\{n_i\},N,V_g) - \mu)]$$ and the partition function is $$Z \equiv \sum_{\{ n_{i} \} } exp[- \beta (F( \{ n_i \} ,N,V_g) - \mu)]$$ note that the sum on occupation configurations, $\{ n_{i} \}$, includes implicitly a sum on $N$. In Eq. \[eq:cond\] $f$ is the Fermi function, $\mu$ is the electrochemical potential of the source and drain and $\Gamma_p^{s(d)}$ are the elastic couplings of level $p$ to source (drain). The notation $\{ n_i + p \}$ denotes the set of occupancies $\{ n_i \}$ with the $p^{th}$ level, previously empty by assumption, filled. In Eq. \[eq:cond\] it is assumed that only a single gate voltage, $V_g$ (the “plunger gate”, cf. Fig. \[fig1\]), is varied.
Tunneling coefficients
----------------------
The elastic couplings in Eq. \[eq:cond\] are calculated from the self-consistent wave functions [@Bardeen]: $$\hbar \Gamma_{np} = 4 \kappa^2 W_n^2 (a,b) \; \left| \int dy \;
f_p (x_b,y) \chi^*_n (x_b,y) \right|^2 \label{eq:tun}$$ where $f_p (x_b,y)$ is the two dimensional part of the $p^{th}$ wave function evaluated at the midpoint of the barrier, $x_b$, and $\chi^*_n (x_b,y)$ is the $n^{th}$ channel wavefunction decaying into the barrier from the leads, $W_n(a,b)$ is the barrier penetration factor between the classical turning point in the lead and the point $x_b$, for channel $n$ computed in the WKB approximation, and $\kappa$ is the wave vector at the matching point. Though the channels are 1D we use the two dimensional density of states characteristic of the wide 2DEG region [@Matveev].
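Schematically, Eq. \[eq:tun\] combines a WKB barrier factor with a transverse overlap at the barrier mid-point; the sketch below is a simplified stand-in (effective atomic units with $\hbar$ set to one, a single barrier to the left of $x_b$, and none of the density-of-states bookkeeping mentioned above).

```python
import numpy as np

def gamma_np(f_p_y, chi_n_y, dy, eps_x, E, x, x_b):
    """Elastic width of dot level p to lead channel n, after Eq. (eq:tun).

    f_p_y, chi_n_y: dot and lead-channel wave functions sampled along y at
    the barrier midpoint x_b; eps_x: channel barrier profile along x."""
    overlap = np.sum(f_p_y * np.conj(chi_n_y)) * dy
    forbidden = (x <= x_b) & (eps_x > E)          # turning point to x_b
    W = np.exp(-np.sum(np.sqrt(eps_x[forbidden] - E)) * (x[1] - x[0]))
    kappa2 = max(eps_x[np.searchsorted(x, x_b)] - E, 0.0)
    return 4.0 * kappa2 * W**2 * abs(overlap)**2
```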
Capacitance
-----------
Quantum dot system electrostatic energies are commonly estimated on the basis of a capacitance model [@various]. When the self-consistent level energies and potential are known the total free energy can be computed without reference to capacitances. However, the widespread use of this model and the ease with which capacitances can be calculated from our self-consistent results (see below) encourages a discussion.
For a collection of $N$ metal elements with charges $Q_i$ and voltages $V_j$ the capacitance matrix, defined by [@BR1; @meandYasu] $Q_i = \sum_{j=1}^{N} C_{ij} V_j$, can be written in terms of the Green’s function $G_D ({\bf x,x^{\prime}})$ for Laplace’s equation satisfying Dirichlet boundary conditions on the element surfaces: $$C_{ij} = \frac{1}{4 \pi^2} \int d \Omega_i \int d \Omega_j
\hat{n}_j \cdot \vec{\nabla}_x (\hat{n}_i \cdot \vec{\nabla}_{x^\prime}
G_D ({\bf x,x^{\prime}}))$$ where the integrals are over element surfaces with $\hat{n}_j$ the outward directed normal.
In a system with an element of size $L$ not much greater than the screening length $\lambda_s$, the voltage of the component, and hence the capacitance, is not well defined [@meandYasu; @Buttikercap]. In this case, as discussed in reference [@meandYasu], the capacitance can no longer be written in terms of the solution of Poisson’s equation alone, but must take account of the full self-consistent determination of the $i^{th}$ charge distribution $\rho_i({\bf x})$ from the $j^{th}$ potential $\phi_j({\bf x})$ $\forall i,j$. In general the capacitance will then become a kernel in an integral relation. A relationship of this kind has recently been derived in terms of the Lindhard screening function by Büttiker [@Buttikercap].
To compute the dot self-capacitance from the calculated self-consistent electronic structure we have three separate procedures. In all three cases we vary the Fermi energy of the dot by some small amount to change the net charge in the dot. This requires that the QPCs be closed. For the first method the total charge variation of the dot is divided by the change in the electrostatic potential minimum of the dot. This is taken as the dot self-capacitance $C_{dd}$. A second procedure for the dot self-capacitance is to divide the change in the dot charge simply by the fixed, imposed change of the Fermi energy. This result is denoted $C_{dd}^\prime$. Since the change in the potential minimum of the dot is not always equal to the change of the Fermi energy these results are not identical. Finally, we can fit the computed free energy $F(N,V_g)$ to a parabola in $N$ at each $V_g$. If the quadratic term is $\alpha N^2$ then the final form for the self-capacitance is $C_{dd}^{\prime \prime} = 1/(2 \alpha)$ (primes are [*not*]{} derivatives here). This form, which also serves as a consistency check on our functional for the energy, is generally quite close to the first form and we present no results for it.
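In practice the three estimates reduce to finite differences of quantities already produced by the self-consistent runs; a transcription is sketched below (our own helper, with charge in units of $e$, energies in $Ry^*$ and capacitances in $e^2/Ry^*$).

```python
import numpy as np

def dot_self_capacitances(Q, phi_min, E_F, N=None, F_of_N=None):
    """Three estimates of the dot self-capacitance from closed-QPC runs:
    C_dd = dQ/d(phi_min), C'_dd = dQ/dE_F, and C''_dd = 1/(2 alpha) from a
    parabolic fit F(N) ~ alpha N^2 + ... (if F_of_N is supplied)."""
    C_dd = np.gradient(Q, phi_min)
    C_dd_prime = np.gradient(Q, E_F)
    C_dd_dprime = None
    if F_of_N is not None:
        alpha = np.polyfit(N, F_of_N, 2)[0]
        C_dd_dprime = 1.0 / (2.0 * alpha)
    return C_dd, C_dd_prime, C_dd_dprime
```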
For the capacitances between dot and gates or leads, the extra dot charge (produced by increasing the Fermi energy in the dot) is screened in the gates and the leads so that the net charge inside the system (including that on the gated boundaries) remains zero. The fraction of the charge screened in a particular element gives that element’s capacitance to the dot as a fraction of $C_{dd}$.
Results
=======
We consider only a small subspace of the huge available parameter space. For the results presented here we have fixed the nominal 2DEG density to $1.4 \times 10^{11} \; cm^{-2}$ and the aluminum concentration of the barrier to $0.3$. The lithographic gate pattern is shown in figure \[fig1\], as is the growth profile (including our artificial second barrier). Some results are presented with a variation of the total thickness $t$ of the AlGaAs (Fig. \[fig1\]).
To interpret the results we note the following considerations. Hohenberg-Kohn-Sham theory establishes only that the ground state energy of an interacting electron system can be written as a functional of the density [@HKS1; @HKS2]. The single particle eigenvalues $\varepsilon_p$ have, strictly speaking, no physical meaning. However, as pointed out by Slater [@Slater], the usefulness of DF theory depends to some extent on being able to interpret the energies and wave functions as some kind of single particle spectrum. In the Coulomb blockade regime it is particularly important to be clear what that interpretation, and its limitations, are.
A distinction is commonly made between the addition spectrum and the excitation spectrum for quantum dots [@McEuen; @Ashoori]. Differences between our effective single particle eigenvalues represent an approximation to the excitation spectrum. As a specific example, in the absence of depolarization and excitonic effects the first single particle excitation from the $N$-electron ground state with gate voltages $V_i$ is $\varepsilon_{N+1}(N,V_i)-\varepsilon_{N}(N,V_i)$.
The addition spectrum, on the other hand, depends on the energy difference between the ground states of the dot [*interacting with its environment*]{} at two different $N$. Thus, in our formalism, the addition spectrum is given by differences in $F(\{n_p\},N,V_i)$ at neighboring $N$, possibly further modulated by excitations, i.e. differences in the occupation numbers $\{ n_p \}$.
In contrast to experiment, the electronic structure can be determined for arbitrary $N$ and $V_i$ (so long as the dot is closed). This includes both non-integer $N$ as well as values which are far from equilibrium (differing chemical potential) with the leads. The “resonance curve” [@RComm] is given by the $N$ which minimizes $F_0 (N,V_g)$ at each $V_g$ (gates other than the plunger gate are assumed fixed). This occurs when the chemical potential of the dot equals those of the leads (which are taken as equal to one another and represent the energy zero) and gives the most probable electron number. Results presented below as a function of varying gate voltage, particularly the spectra in Figs. \[fig10\] and \[fig14\], are assumed to be along the resonance curve.
Electrostatics
--------------
Figure \[fig3\]a shows an example of a potential profile along with a corresponding density plot for a quiet dot containing $62$ electrons. The basic potential/density configuration, as well as the capacitances are highly robust. These data are computed completely in the 2D Thomas-Fermi approximation, single $z$-subband, at $T = 0.1 \; K$. Solution of Schrödinger’s equation or variation of $T$ result in only subtle changes. The depletion region spreading is roughly $100 \; nm$. Figure \[fig3\]b shows a set of potential and density profiles along the y-direction (transverse to the current direction) in steps of $3.3 \; a_B^*$ in $x$, from the QPC saddle point to the dot center. Note that the density at the dot center is only about $65 \%$ of the ungated 2DEG
density. Correspondingly the potential at the center is above the floor of the ungated 2DEG ($\sim -0.9 \; Ry^*$).
We discuss a simple model for the potential shape of a circular quantum dot below (Sec. III.B.1). Here we note only that the radial potential can be regarded as parabolic to lowest order with quartic and higher order corrections whose influence increases near the perimeter. In Thomas-Fermi studies on larger dots [@MCO; @G-res] with a comparable aspect ratio we find that the potential and density achieve only $90 \; \%$ of their ungated 2DEG value nearly $200 \; nm$ from the gate. Regarding classical billiard calculations for gated structures [@chaos1; @Been2; @Bird; @Ferry], therefore, even in the absence of impurities it is difficult to see how the “classical” Hamiltonian at the 2DEG level can be even approximately integrable unless the lithographic gate pattern is azimuthally symmetric [@square].
The importance of the remote ionized impurity distribution is demonstrated in figure \[fig4\] which shows a quantum dot with randomly placed ionized
donors on the left and with ions which have been allowed to reach quasi-equilibrium via variable range hopping, on the right. In both cases the total ion number in the area of the dot is fixed. The example shown here for the ordered case assumes, in the variable range hopping calculation, one ion for every five donors (${\cal F}=1/5$). As in Ref. [@BR2] we have, for simplicity, ignored the negative $U$ model for the donor impurities (DX centers), which is still controversial [@Buks; @Mooney; @Yamaguchi]. If the negative $U$ model, at some barrier aluminum concentration, is correct, the most ordered ion distributions will occur for ${\cal F}=1/2$, as opposed to the neutral DX picture employed here, where ordering increases monotonically as ${\cal F}$ decreases [@Heiblum_private].
For these assumptions figure \[fig5\] indicates that ionic ordering substantially reduces the potential fluctuations relative to the completely disordered case, even for relatively large ${\cal F}$. Here, using ensembles of dots with varying ${\cal F}$ we compare the effective 2D potential with a quiet dot (jellium donor layer) at the same gate voltages and same dot electron number. The distribution of the potential deviation is computed as: $$P(\Delta V) = \frac{1}{SN^2} \sum_s \sum_{i,j} \delta(\Delta V -
[V_{\cal F} (x_i,y_j) - V_{qd} (x_i,y_j)])$$ where $s$ labels samples (different ion distributions), typically up to $S=10$, $N$ is the total number of $x$ or $y$ grid points in the dot ($\sim 50$), and “qd” stands for quiet dot. The distributions for all ${\cal F}$ are asymmetric (Fig. \[fig5\]). Although the means are indistinguishably close to zero, the probability for large potential hills resulting from disorder is greater than for deep depressions. Also, the distributions for points above the Fermi surface (dashed lines) are broader by an order of magnitude (in standard deviation) than below, due to screening. Finally, saturation as ${\cal F} \rightarrow 0$ (inset Fig. \[fig5\]) shows that even if the ions are arranged in a Wigner crystal (the limiting case at ${\cal F} = 0$), potential fluctuations would be expected in comparison with ionic jellium.
The success of the capacitance model in describing experimental results of charging phenomena in mesoscopic systems has been remarkable [@various]. For our calculations as well, even the simplest formulations for the capacitance tend to produce smoothly varying results when gate voltages or dot charge are varied. Figure \[fig6\] shows the trend of the dot self-capacitances with $V_g$. Also shown are the equilibrium dot electron number $N$ and the minimum of the dot potential $V_{min}$ as functions of $V_g$. Note here that $V_{min}$ is the minimum of the 3D electrostatic potential rather than the effective 2D potential which is presented elsewhere (such as in Figs. \[fig3\] and \[fig4\]).
That $C_{dd}$ generally decreases as the dot becomes smaller is not surprising and has been discussed elsewhere [@ep2ds10]. All three forms of $C_{dd}$ are roughly in agreement giving a value $\sim 2 \; fF$ (the capacitance as calculated from the free energy is not shown). The fluctuations result from variations in the quantized level energies as the dot size and shape are changed by $V_g$. Note that [*numerical*]{} error is indiscernible on the scale of the figure. The pronounced collapse of $C_{dd}^{\prime}$ near $V_g = -1.15 \; V$, which is expanded in the upper panel, shows the presence of a region where the change of $N$ with $E_F$ is greatly suppressed. Since the change of $V_{min}$ with $E_F$ is similarly suppressed there is no corresponding anomaly in $C_{dd}$. Interestingly, the capacitance computed from the free energy also reveals no deep anomaly.
The anomaly at $V_g = -1.15 \; V$ and also the fluctuation in the electrostatic properties near $-1.1 \; V$ are related to a shell structure in the spectrum which we discuss below.
A frequently encountered model for the classical charge distribution in a quantum dot is the circular conducting disk with a parabolic confining potential [@Shikin; @Chklovskii]. It can be shown (solving, for example, Poisson’s equation in oblate spheroidal coordinates) that for such a model the 2D charge distribution in the dot goes as $$n(r) = n(0)(1-r^2/R^2)^{1/2} \label{eq:circ}$$ where $R$ is the dot radius and $n(0)=3N/(2 \pi R^2)$ is the density at the dot center. The “external” confining potential is assumed to go as $V(r)=V_0 + kr^2/2$ and $R$ is related to $N$ through $$R^3 = \frac{3 \pi}{4} \frac{e^2}{\kappa k} N$$ where $\kappa$ is the dielectric constant [@Shikin].
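The value of $n(0)$ quoted here is just the normalization of Eq. \[eq:circ\] to the total electron number: $$N = \int_0^R n(0)\left(1-\frac{r^2}{R^2}\right)^{1/2} 2\pi r\, dr = \frac{2\pi}{3}\, n(0)\, R^2 \quad \Longrightarrow \quad n(0) = \frac{3N}{2\pi R^2}.$$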
To justify this model, the authors of Ref. [@Shikin] claim that the calculations of Kumar [*et al.*]{} [@Kumar] show that “the confinement...has a nearly parabolic form for the external confining potential ([*sic*]{}).” This is incorrect. What Kumar [*et al.*]{}’s calculations show is that (for $N \stackrel{<}{\sim} 12$) the [*self-consistent*]{} potential, which includes the potential from the electrons themselves, is approximately cutoff parabolic. The [*external*]{} confining potential, as it is used in Ref. [@Shikin], would be that produced by the donor layer charge and the charge on the surface gates only. We introduce a simple model (see III.B.1 below) wherein the charge producing this confining potential is replaced by a circular disk of positive charge whose density is fixed by the doping density and whose radius is determined by the number of electrons [*in the dot*]{}. The gates can be thought of as merely cancelling the donor charge outside that radius. The essential point, then, is this: adding electrons to the dot decreases the (negative) charge on the gates and therefore increases the radius. One can make the assumption, as in Ref. [@Shikin], that the external potential is parabolic, but it is a mistake to treat that parabolicity, $k$, as independent of $N$.
This is illustrated in figure \[fig7\] where we have plotted contours for the [*change*]{} in the 2D density, as $E_F$ is incrementally increased,
as determined self-consistently (Thomas-Fermi everywhere, left panel) and as determined from Eq. \[eq:circ\]. The white curves display the density change profiles across the central axis of the dot. The total change in $N$ is the same in both cases, but clearly the model of Eq. \[eq:circ\] underestimates the degree to which new charge is added mostly to the perimeter.
Recently the question of charging energy renormalization via tunneling, as the conductance $G_0$ through a QPC approaches unity, has received much attention [@Matveev2; @Halperin; @Kane]. In a recent experiment employing two dots in series, a splitting of the Coulomb oscillation peaks has been observed as the central QPC (between the two dots) is lowered [@Westervelt]. Perturbation theory for small $G_0$, and a model which treats the decaying channel between the dots as a Luttinger liquid for $G_0 \rightarrow 1 \, (e^2/h)$, lead to expressions for the peak splitting which are linear in $G_0$ in the former case and go as $(1-G_0)\ln(1-G_0)$ in the latter case.
A crucial assumption of the model, however, is that the “bare” capacitance, specifically that between the dots $C_{d1-d2}$, remains approximately independent of the height of the QPC, even when an open channel connects the two dots. Thus the mechanism of the peak splitting is assumed to be qualitatively different from a model which predicts peak splitting entirely on an electrostatic basis when the inter-dot capacitance increases greatly [@Ruzin]. The independence of $C_{d1-d2}$ from the QPC potential is plausible insofar as most electrons, even when a channel is open, are below the QPC saddle points and hence localized on either one dot or the other. Further, if the screening length is short and if the channel itself does not accommodate a significant fraction of the electrons, there is little ambiguity in retaining $C_{d1-d2}$ to describe the gross electrostatic interaction of the dots, even when the dots are [*connected*]{} at the Fermi level.
In figure \[fig8\] we present evidence for this assumption by showing the capacitance between a dot and the [*leads*]{} as the QPC voltage is reduced. In the figure $V_{L(R)}$ is the effective 2D potential of the left (right) saddle point as the left QPC gate voltages $V_{QPC}$ only are varied. The dot is nearly open when the QPC voltages (both pins on the left) reach $\sim -1.34 \; V$. The results here use the full quantum mechanical solution (without the LDA exchange-correlation energy); however, the electrons in the lead continue to be treated with a 2D TF approximation. The dot “reconstruction” seen in figure \[fig5\] is visible
here also around $V_{QPC}=-1.365 \; V$. Note that the right saddle point is sympathetically affected when we change this left QPC. While the effect is faint, $\sim 5 \%$ of the change of the left saddle, the sensitivity of tunneling to saddle point voltage (see also below) has resulted in this kind of cross-talk being problematical for experimentalists. The figure also shows that the capacitance between the dot and one lead exceeds that to a (single) QPC gate or even to a plunger gate. However, the most important result of the figure is to show that the dot to lead capacitance is largely insensitive to QPC voltage. When the left QPC is as closed as the right ($V_{QPC} \sim -1.375 \; V$) the capacitances to the source and drain are equal. But even near the open condition the capacitance to the left lead (arbitrarily the “source”) exceeds that to the drain (which is still closed) only minutely. Therefore the assumption of a “bare” capacitance which remains constant even as contact is made with a lead (or, in the experiment, another dot) seems to be very well founded.
As noted above, the interaction between a gate and the 2DEG depends upon the distance of the gates from the 2DEG, i.e., the $AlGaAs$ thickness $t$. In figure \[fig9\] we show that, as we decrease $t$, simultaneously changing the gate voltages such that $N$ and the saddle point potentials remain constant, the total dot capacitance also decreases, but the distribution of the dot capacitance between leads, gates and (not shown) back gate changes only moderately. That gates closer to the 2DEG plane should produce dots of lower capacitance is made clear in the upper panel of the figure, which shows the potential and density profile (using TF) near a depletion region at the side of the dot at varying $t$ and constant gate voltage. For smaller $t$ the depletion region is widened but the density achieves its ungated 2DEG value (here $0.14 \; a_B^{* \, -2}$) more quickly; a potential closer to hard-walled is realized. In the presence of stronger confinement the capacitance decreases and the charging energy increases.
The profile of the tunnel barriers and the barrier penetration factors are also dependent on $t$. However we postpone a discussion of this until the section on tunneling coefficients.
Spectrum
--------
The bulk electrostatic properties of a dot are, to first approximation, independent of whether a Thomas-Fermi approximation is used or Schrödinger’s equation is solved. A notable exception to this is the fluctuation in the capacitances. Figure \[fig10\] shows the plunger gate voltage dependence of the energy levels. The Fermi level of the dot is kept constant and equal to that of the leads (it is the energy zero). Hence as the gate voltage increases (becomes less negative) $N$ increases.
Since the QPCs lie along the $x$-axis, the dot is never fully symmetric with respect to interchange of $x$ and $y$; however, the most symmetric configuration occurs for $V_g \sim -1.16 \; V$, towards the right side of the plot. The levels clearly group into quasi-shells with gaps between. The number of states per shell follows the degeneracy of a 2D parabolic potential, i.e. 1,2,3,4,... degenerate levels per shell (ignoring spin). There is a pronounced tendency for the levels to cluster at the Fermi surface, here given by $E=0$, which we discuss below.
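The quoted shell degeneracies are those of an isotropic two-dimensional oscillator: writing the single-particle energies as $$E_{n_x,n_y} = \hbar\omega\,(n_x+n_y+1), \qquad n_x,n_y = 0,1,2,\ldots,$$ the shell with $n_x+n_y = N_s$ contains $N_s+1$ states, i.e. $1,2,3,4,\ldots$ states in successive shells (ignoring spin), which is the pattern seen in Fig. \[fig10\].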
### Shell structure
Shell structure in atoms arises from the approximate constancy of individual electron angular momenta, and degeneracy with respect to $z$-projection. Since in two dimensions the angular momentum $m$ is fixed along the $z$ (transverse) direction, the isotropy of space is broken and the only remaining manifest degeneracy, and this only for azimuthally symmetric dots, is with respect to $\pm m$. A two dimensional parabolic potential, in the absence of magnetic field, possesses an accidental degeneracy for which a shell structure is recovered.
We have shown above that modelling a quantum dot as a classical, conducting layer in an [*external*]{} parabolic potential $kr^2/2$, where $k$ is independent of the number of electrons in the dot, ignores the image charge in the surface gates forming the dot and therefore fails to properly describe the evolving charge distribution as electrons are added to the dot. A more realistic model, which [*explains*]{} the approximate parabolicity of the [*self-consistent*]{} potential, and hence the apparent shell structure, is illustrated in figure \[fig11\].
The basic electrostatic structure of a quantum dot, in the simplest approximation, can be represented by two circular disks, of radius $R$ and homogeneous charge density $\sigma_0$, separated by a distance $a$. The positive charge outside $R$ is assumed to be cancelled by the surface gates. This approximation will be best for surface gates very close to the donor layer (i.e. small $t$). Larger $AlGaAs$ thicknesses will require a non-abrupt termination
of the positive charge. In either case, the electronic charge is assumed in the classical limit to screen the background charge as nearly as possible. This is similar to the postulate in which wide parabolic quantum wells are expected to produce approximately homogeneous layers of electronic charge [@parabola].
A simple calculation for the radial potential (for $a<R$) in the electron layer ($z=0$) gives, for the first few terms: $$\begin{aligned}
& \phi(r)= \frac{2 Ne}{\kappa R} [\sqrt{1-a/R} - 1 +
\frac{3}{8} \frac{a^2}{R^2} \frac{r^2}{R^2} \nonumber \\
& - \frac{15}{32} \frac{a^4}{R^4} \frac{r^2}{R^2} +
\frac{45}{128} \frac{a^2}{R^2} \frac{r^4}{R^4} + \cdots ] \label{eq:phi}\end{aligned}$$ where $Ne = \pi R^2 \sigma_0$ and $\kappa$ is the background dielectric constant. While the coefficient of the quartic term is comparable to that of the parabolic term, the dependences are scaled by the dot radius $R$. Hence, the accidental degeneracy of the parabolic potential is broken only by coupling via the quartic term near the dot perimeter. This picture clearly agrees with the full self-consistent results wherein the parabolic degeneracy is observed for low lying states and a spreading of the previously degenerate states occurs nearer to the Fermi surface.
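Numerically, the leading parabolic and quartic coefficients in Eq. \[eq:phi\] are indeed comparable, $3/8 \approx 0.375$ versus $45/128 \approx 0.352$, but the quartic term carries the additional factor $(r/R)^2$: $$\frac{45}{128}\,\frac{a^2}{R^2}\,\frac{r^4}{R^4} = \left(\frac{45}{128}\,\frac{a^2}{R^2}\,\frac{r^2}{R^2}\right)\frac{r^2}{R^2},$$ so for states concentrated well inside the dot the potential is effectively parabolic, and the degeneracy is broken appreciably only for states with weight near $r \sim R$.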
Comparison (not shown) of the potential computed from Eq. \[eq:phi\] with the radial potential profile (lowest curve, Fig. \[fig3\]b) from the full self-consistent structure shows good agreement in overall shape. However, the former is about $25 \%$ smaller (same $N$), indicating that the sharp cutoff of the positive charge is, for these parameters, too extreme. The agreement of Eq. \[eq:phi\] improves for larger $N$ and/or smaller $t$.
The wavefunction moduli squared associated with the Fig. \[fig10\] quiet dot levels for $V_g \sim -1.16 \; V$, $N \approx 54$ are shown schematically for levels $1$ through $10$ in figure \[fig12\], and for levels $11$ through $35$ in figure \[fig13\].
The lowest level in a shell is, for the higher shells, typically the most circularly symmetric. When the last member of a shell depopulates with $V_g$ the inner shells expand outward, as can be seen near $V_g = -1.15 \; V$ (Fig. \[fig10\]) where level $p=29$ depopulates. Since beginning to fill a new shell requires the inward compression of the other shells, and hence more energy, the capacitance decreases in a step when a shell is depopulated. The shell structure should have two distinct signatures in the standard (electrostatic) Coulomb oscillation experiment [@various]. First, since the self-capacitance drops appreciably (figure \[fig6\]) when the last member of a shell depopulates (here $N$ goes from $57$ to $56$), a concomitant discrete rise in the activation energy in the minimum between Coulomb oscillations can be predicted. Second, envelope modulation of peak heights [@RComm] occurs when excited dot states are thermally accessible as channels for transport, as opposed to the $T=0$ case where the only channel is through the first open state above the Fermi surface (i.e. the $(N+1)^{st}$ state). When $N$ is in the middle of a shell of closely spaced, spin degenerate levels, the entropy of the dot, $k_B \ln \Omega $, where $\Omega$ is the number of states accessible to the dot, is sharply peaked. For example, for six electrons occupying six spin degenerate levels (i.e. twelve altogether)
all within $k_B T$ of the Fermi surface, the number of channels available for transport is $924$. For eleven electrons in the shell, however, the number of channels reduces to $12$. Consequently, minima and peaks of envelope modulation (see also figure \[fig22\] below) of CB oscillations which are frequently observed are clear evidence of level bunching, if not an organized shell structure.
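The channel counts quoted above are simply binomial coefficients, counting the ways of distributing the shell electrons among the twelve spin-resolved levels within $k_B T$ of the Fermi surface: $$\Omega = \binom{12}{6} = 924 \qquad \mbox{versus} \qquad \Omega = \binom{12}{11} = 12.$$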
Recently experimental evidence has accumulated for the existence of a shell structure as observed by inelastic light scattering [@Lockwood] and via Coulomb oscillation peak positions in transport through extremely small ($N \sim 0-30$) vertical quantum dots [@Tarucha]. Interestingly, a [*classical*]{} treatment, via Monte-Carlo molecular dynamics simulation [@Peeters], also predicts a shell structure. There, the neutralizing positive background is assumed to produce a parabolic confining potential. A similar assumption is made in Ref. [@Akera] which analyzes a vertical structure similar to that of Ref. [@Tarucha]. We believe that continued advances in fabrication will result in further emphasis on such invariant, as opposed to merely statistical, properties of dot spectra.
As noted above, there is a strong tendency for levels at the Fermi surface to “lock.” Such an effect has been described by Sun [*et al.*]{} [@Sun] in the case of subband levels for parallel quantum wires. In dots, the effect can be viewed as electrostatic pressure on the individual wavefunctions thereby shifting level energies in such a way as to produce level [*occupancies*]{} which minimize the total energy. Insofar as a given set of level occupancies is electrostatically most favorable, level locking is a temperature dependent effect which increases as $T$ is lowered. This self-consistent modification of the level energies can also be viewed as an excitonic correction to excitation energies.
The difference between the cases of a quantum dot and that of parallel wires is one of localized versus extended systems. It is well known that, unlike Hartree-Fock theory, wherein self-interaction is completely cancelled since the direct and exchange terms have the same kernel $1/|{\bf r} - {\bf r^{\prime}}|$, in Hartree theory and even density functional theory in the LDA, uncorrected self-interaction remains [@Perdew]. While it is reasonable to expect that excited states will have their energies corrected downward by the remnants of an excitonic effect, we expect that LDA and especially Hartree calculations will generally overestimate this tendency to the extent that corrections for self-interaction are not complete.
The panel labelled “xc” in figure \[fig10\] illustrates the preceding point. In contrast to the large panel (on the left) these results have had the XC potential in LDA included. The differences between Hartree and LDA are generally subtle, but here the clustering of the levels at the Fermi surface is clearly mitigated by the inclusion of XC. The approximate parabolic degeneracy is evidently not broken by LDA, however, and the shell structure remains intact. Similarly for xc, the capacitances also show anomalies near the same gate voltages, where shells depopulate, as in figure \[fig5\], which is pure Hartree.
The two remaining panels in figure \[fig10\] illustrate the effects of disorder and ordering in the donor layer (XC not included). As with the “xc” panel, $V_g$ is varied between $-1.142$ and $-1.17 \; V$. The “disorder” panel represents a single fixed distribution of ions placed at random in the donor layer as discussed above. Similarly, the “order” panel represents a single ordered distribution generated from a random distribution via the Monte-Carlo simulation [@BR2; @ISQM2]; here ${\cal F} = 1/5$ (cf. two panels of Fig. \[fig4\]).
The shell structure, which is completely destroyed for fully random donor placement (see also Fig. \[fig15\]), is almost perfectly recovered in the ordered case. In both cases the energies are uniformly shifted upwards relative to the quiet dot by virtue of the discreteness of donor charge (cf. also discussion of Fig. \[fig5\] above). Closer examination of the disordered spectrum shows considerably more level repulsion than the other cases.
The application of a small magnetic field, roughly a single flux quantum through the dot, has a dramatic impact on both the spectrum, figure \[fig14\], and the wave functions, figure \[fig15\], top. The magnetic field dependence of the levels (not shown) up to $0.1 \; T$ exhibits shell splitting according to azimuthal quantum number as
well as level anti-crossing. By $0.05 \; T$ the level spacing (Fig. \[fig14\]) is substantially more uniform than at $B=0$ (Fig. \[fig10\]). Furthermore, while the $B=0$ quiet dot displays reconstruction due to the depopulation of shells at $V_g \approx -1.15$ and $-1.1 \; V$, the $B=0.05 \; T$ results show a similar pattern, a step in the levels, repeated
many times in the same gate voltage range. The physical meaning of this is clear. The magnetic field principally serves to remove the azimuthal dependence of the mod squared of the wave functions (Fig. \[fig15\]). In a magnetic field, the states at the Fermi surface also tend to be at the dot perimeter. Depopulation of an electron in a magnetic field, like depopulation of the last member of a shell for $B=0$, therefore removes charge from the perimeter of the dot and a self-consistent expansion of the remaining states outward occurs.
Statistical properties
----------------------
### Level spacings
The statistical spectral properties of quantum systems whose classical Hamiltonian is chaotic are believed to obey the predictions of random matrix theory (RMT) [@Andreev]. Arguments for this conjecture, however, invariably treat the Hamiltonian as a large finite matrix with averaging taken only near the band center. Additionally, an assumption which is often not clearly stated is that the system in question can be treated [*semi-classically*]{}, that is, in some sense the action is large on the scale of Planck’s constant and the wavelength [*of all relevant states*]{} is short on the scale of the system size. Clearly, for small quantum dots these assumptions are violated.
RMT predictions apply to level spacings $S$ and to transition amplitudes (for the “exterior problem,” level widths $\Gamma$) [@Brody]. RMT is also applied to scattering matrices in investigations of transport properties of quantum wires [@Slevin]. Ergodicity for chaotic systems is the claim that variation of some external parameter $X$ will sweep the Hamiltonian rapidly through its entire Hilbert space, whereupon energy averaging and ensemble (i.e. $X$) averaging produce identical statistics. In our study $X$ is either the set of gate voltages, the magnetic field or the impurity configuration and we consider the statistics of the lowest lying $45$ levels (spin is ignored here). Care must also be taken in removing the secular variations of the spacings or widths with energy, the so-called unfolding.
According to RMT, level repulsion leads to statistics of level spacings which are given by the “Rayleigh distribution:” $$P(S)=\frac{\pi S}{2D^2} \exp(-\pi S^2/4 D^2) \label{eq:stat}$$ where $D$ is the mean local spacing [@Brody; @Wigner]. Figure \[fig16\] shows the calculated histogram for the level spacings for the quiet dot as well as for the disordered, ordered and ordered with $B=0.05 \; T$ cases. Statistics are generated from (symmetrical) plunger gate variation, in steps of $0.001 \; V$, over a range of $0.1 \; V$, employing the spacings between the lowest $45$ levels; thus about $4500$ data points. Deviation from the Rayleigh distribution is evident. An important feature of our dot is symmetry under inversion through both axes bisecting the dot. It is well known that groups of states which are
un-coupled will, when plotted together, show a Poisson distribution for the spacings rather than the level repulsion of Eq. \[eq:stat\]. Thus we have also plotted (white bars) the statistics for those states which are totally even in parity. While the probability of degeneracy decreases, a $\chi^2$ test shows that the distribution remains substantially removed from the Rayleigh form.
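A minimal sketch of the statistical procedure described above (the array `levels`, holding the lowest $45$ level energies at each plunger-gate step, is assumed to come from the self-consistent calculation and is not part of the original; each spectrum is unfolded here simply by normalizing to its mean spacing, rather than the more careful local unfolding mentioned above):

```python
import numpy as np

def spacing_statistics(levels, bins=30):
    """Nearest-neighbour spacing histogram, normalized to the mean spacing.

    levels : array, shape (n_gate_steps, n_levels), level energies at each gate step
    """
    spacings = np.diff(np.sort(levels, axis=1), axis=1)
    s = (spacings / spacings.mean(axis=1, keepdims=True)).ravel()
    counts, edges = np.histogram(s, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Eq. (eq:stat) with D = 1, for comparison with the histogram
    rayleigh = 0.5 * np.pi * centers * np.exp(-0.25 * np.pi * centers**2)
    return centers, counts, rayleigh
```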
In contrast to this, the disordered case shows remarkable agreement with the RMT prediction. As with the spectrum in figure \[fig10\] we use a single ion distribution. However we also find (not shown) that fixing the gate voltage and varying the random ion distributions results in nearly the same statistics. When the ions are allowed to order the level statistics again deviate from the RMT model. This is somewhat surprising since Fig. \[fig5\] shows that, even for ${\cal F}= 1/5$, the standard deviation of the effective 2D potential below the Fermi surface from the quiet dot case, $\sim 0.05 \; Ry^*$, is still substantially greater than the mean level spacing $\sim 0.02 \; Ry^*$. We have recently shown that, as ${\cal F}$ goes from unity to zero, a continuous transition from the level repulsion of Eq. \[eq:stat\] to a Poisson distribution of level spacings results [@NanoMes]. Finally, the application of a magnetic field strong enough to break time-reversal symmetry clearly reduces the incidence of very small spacings, but the distribution is still significantly different from RMT.
### Level widths
In Eq. \[eq:tun\] we defined $W_n(a,b)$ as the barrier penetration factor from the classically accessible region of the lead to the matching point in the barrier, for the $n^{th}$ channel. The penetration factor [*completely*]{} through the barrier, $P_n \equiv W_n(a,c)$ where $c$ is the classical turning point on the dot side of the barrier, is plotted as a function of QPC voltage in figure \[fig17\]. $P_n$ is simply the WKB penetration for a given channel with a given self-consistent barrier profile, and can be computed at any energy. Here we have computed it at energies coincident with the dot levels. Therefore the dashes recapitulate the level structure, spaced now not in energy but in “bare” partial width. The [*actual*]{} width of a level depends upon the wave function for that state (cf. Eq. \[eq:tun\]). For energies above the barrier $ln(P)=0$. The solid lines represent $P$ [*at the Fermi surface*]{} computed for three different $AlGaAs$ thicknesses $t$ (as in figure \[fig9\]) and for both $n=1$ and $n=2$ (the dashes are computed for $t=12 \; a_B^*$). The QPC voltage is given relative to values at which $P$ for $n=1$ is the same for all three $t$ (hence the top three solid lines converge at $\Delta V_{QPC} = 0$).
Quite surprisingly $t$ has very little influence on the trend of $P$ with QPC voltage. Note that the ratio of barrier penetration between the second and first channels $P_2/P_1$ decreases substantially with increasing $t$ since the saddle profile becomes wider for more distant gates. Even for $t=7.5 \; a_B^*$ however, penetration via the second channel is about a factor of five smaller than via $n=1$.
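For reference, $P_n$ as used here is presumably just the textbook WKB barrier-penetration factor for the $n^{th}$ channel profile $V_n(x)$ along the current direction (with effective mass $m^*$), $$P_n(E) = \exp\left(-2\int_a^{c}\sqrt{\frac{2m^*}{\hbar^2}\left[V_n(x)-E\right]}\;dx\right),$$ evaluated between the classical turning points $a$ and $c$; this is meant only as a reminder of the standard form, the precise definition being that of Eq. \[eq:tun\] given earlier.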
Figure \[fig18\] shows the partial width for tunneling via $n=1$ through the barrier, now using the full Eq. \[eq:tun\], for the quiet dot. The barriers here are fairly wide. While this strikingly coherent structure is quickly destroyed by discretely localized donors even when donor ordering is allowed, the pattern is nonetheless highly informative. The principal division between upper and lower states is based on parity. States which are odd with respect to the axis bisecting the QPC should in fact have identically zero partial width (that they don’t is evidence of numerical error, mostly imperfect convergence).
Note that [*this*]{} division is largely preserved for discrete but ordered ions. The widest states (largest $\Gamma$) are labelled with their level index for comparison with their wave functions in Figs. \[fig13\] and \[fig14\]. Comparison shows they represent the states which are aligned along the direction of current flow. Thus in each shell there is likely to be a spread of tunneling coefficients, that is, two members of the same shell will not have the same $\Gamma$.
Statistics of the level partial widths are shown in figure \[fig19\], here normalized to their local mean values. While the statistics for the quiet dot are in substantial disagreement with RMT it is clear that discreteness of the ion charge, even ordered, largely restores ergodicity. The RMT prediction, the “Porter-Thomas” (PT) distribution, is also plotted. For non-zero $B$, panels (b) and (c), the predicted distribution is $\chi_2^2$ rather than PT. Even the completely disordered case (e) retains a fraction of vanishing partial width states. Since in our case the zero width states result from residual reflection symmetry, it would be interesting to compare the data from references [@Chang] and [@Marcus], which employ nominally symmetric and non-symmetric dots respectively, to see if the incidence of zero width states shows a statistically significant difference.
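For completeness, in terms of the normalized width $y=\Gamma/\bar{\Gamma}$ the RMT forms referred to here are the Porter–Thomas ($\chi^2_1$) and the $\chi^2_2$ distributions, $$P_{PT}(y)=\frac{e^{-y/2}}{\sqrt{2\pi y}}, \qquad P_{\chi^2_2}(y)=e^{-y},$$ appropriate for the time-reversal symmetric ($B=0$) and broken time-reversal symmetry ($B\neq 0$) cases respectively.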
One further statistical feature which we calculate is the autocorrelation function of the level widths as an external parameter $X$ is varied: $$C(\Delta X) =
\frac{\sum_{i,j} \delta \Gamma_i(X_j)\, \delta \Gamma_i(X_j + \Delta X)}
{\sqrt{\sum_{i,j} \delta \Gamma_i(X_j)^2}\,
\sqrt{\sum_{i,j} \delta \Gamma_i(X_j + \Delta X)^2}} \label{eq:auto}$$ where $\delta \Gamma_i(X) \equiv \Gamma_i(X)-\bar{\Gamma}_i(X)$, and where $\bar{\Gamma}_i(X)$ is again the [*local*]{} average, over levels at fixed $X$, of the level widths. Note that the sum on $i$ is over levels and the sum on $j$ is over starting values of $X$.
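A direct transcription of Eq. \[eq:auto\] (the array `gamma`, holding the partial widths $\Gamma_i(X_j)$ on the grid of parameter values, is assumed as input and is not part of the original calculation):

```python
import numpy as np

def width_autocorrelation(gamma, max_lag):
    """Autocorrelation C(Delta X) of level widths, following Eq. (eq:auto).

    gamma : array, shape (n_X_steps, n_levels), partial widths Gamma_i(X_j)
    """
    # delta Gamma_i(X_j): subtract the average over levels at each fixed X_j
    d = gamma - gamma.mean(axis=1, keepdims=True)
    corr = []
    for lag in range(max_lag + 1):
        a, b = d[: d.shape[0] - lag], d[lag:]
        num = np.sum(a * b)
        den = np.sqrt(np.sum(a ** 2)) * np.sqrt(np.sum(b ** 2))
        corr.append(num / den)
    return np.array(corr)  # C at integer multiples of the X step
```

Here the “local” average is taken simply as the mean over all $45$ levels at each $X_j$; restricting the average to a window of nearby levels is a straightforward modification.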
In figure \[fig20\] we show the autocorrelation function for varying magnetic field (cf. Ref. [@Marcus], figure \[fig4\]). The sample is ordered, ${\cal F}=1/5$.
Our range of $B$ only encompasses $[0,0.1] \; T$ in steps of $0.005 \; T$, so we have here averaged over all levels (i.e. $i=1-45$). The crucial feature, which has been noted in Refs. [@Marcus] and, for conductance correlation in open dots in [@Bird2], is that the correlation function becomes negative, in contradiction with a recent prediction based on RMT [@Alhassid]. Indeed, as noted by Bird [*et al.*]{} [@Bird2], an oscillatory structure seems to emerge in the data. Comparison with calculation here is hampered since the statistics are less good as $B$ increases.
Nonetheless, the RMT prediction is clearly erroneous. We speculate that the basis of the discrepancy is in the assumption [@Alhassid] that $C(\Delta X)=C(-\Delta X)$. Given this assumption [@Ferry2] the correlation becomes positive definite. Physically this means that, regardless of whether $B$ is positive or negative, the self-correlation of a level width will be independent of whether $\Delta B$ is positive or negative. This implies that the level widths should be independent of the absolute value of $B$, or any even powers of $B$, at least to lowest order in $\Delta B/B$. For real quantum dot systems this assumption is inapplicable.
Similar behaviour is observed with $X$ taken as the (plunger) gate voltage, for which we have considerably more calculated results, Fig. \[fig21\].
The upper panel is the analogue of Fig. \[fig20\], only we have broken the average on levels into separate groups of fifteen levels centered on the level listed on the figure (e.g., the “$28$” denotes a sum in Eq. \[eq:auto\] over $i=21,\ldots,35$). The lower panel shows the autocorrelation as a grey scale for the individual levels (averaging performed only over starting $V_g$). The very low lying levels, up to $\sim 10$, remain self-correlated across the entire range of gate voltage. This simply indicates that the correlation field is level dependent. However, rather than becoming uniformly grey in a Lorentzian fashion, as predicted by RMT [@Alhassid], individual levels tend to be strongly correlated or anti-correlated with their original values, and the disappearance of correlation only occurs as an average over levels.
Again we expect that the explanation for this behaviour lies in the shell structure. Coulomb interaction prevents states which are nearby in energy from having common spatial distributions. Thus in a given range of energy, when one state is strongly connected to the leads, other states are less likely to be. Further, the ordering of states appears to survive at least a small amount of disorder in the ion configuration.
Conductance
-----------
The final topic we consider here is the Coulomb oscillation conductance of the dot. We will here focus on the temperature dependence [@RComm], although statistical properties related to ion ordering are also interesting.
We have shown in Ref. [@RComm] that detailed temperature dependence of Coulomb oscillation amplitudes can be employed as a form of quantum dot spectroscopy. Roughly, in the low $T$ limit the peak heights give the individual level connection coefficients and, as temperature is raised activated conductance [*at the peaks*]{} depends on the nearest level spacings at the Fermi surface. In this regard we have explained envelope modulation of peak heights, which had previously not been understood, as clear evidence of thermal activation involving tunneling through excited states of the dot [@RComm].
Figure \[fig22\]a shows the conductance as a function of plunger gate voltage for the ordered dot at $T=250 \; mK$. Note that the magnitude of the conductance is small because the coupling coefficients are evaluated with relatively wide barriers for numerical reasons. Over this range the dot $N$ depopulates from $62$ (far left) to $39$. The level spacings and tunneling coefficients are all changing with $V_g$. At low temperature a given peak height is determined mostly by the coupling to the first empty dot level ($\Gamma_{N+1}$) and by the spacings between the $N^{th}$ level and the nearest other level (above or below). The relative importance of the $\Gamma$’s and the level spacings can obviously vary. In this example, Figs. \[fig22\]a and \[fig22\]b suggest that peak heights correlate more strongly with the level spacings. The double envelope coincides with the Fermi level passing through two shells. In general, the DOS fluctuations embodied in the shell structure and the observation (above) that within a shell a spreading of the $\Gamma$’s (with a most strongly coupled level) results from Coulomb interaction provide the two fundamental bases of envelope modulation.
Finally, we typically find that, when peak heights are plotted as a function of temperature (not shown) some peaks retain activated conductance down to $T=10 \; mK$. Since the dot which we are modelling is small on the scale of currently fabricated structures, this study suggests that claims to have reached the regime where all Coulomb oscillations represent tunneling through a single dot level are questionable.
Conclusions
===========
We have presented extensive data from calculations on the electronic structure of lateral $GaAs-AlGaAs$ quantum dots, with electron number in the range of $N=50-100$. Among the principal conclusions which we reach are the following.
The electrostatic profile of the dot is determined by metal gates at fixed voltage rather than a fixed space charge. As a consequence of this the model of the dot as a conducting disk with fixed, “external,” parabolic confinement is incorrect. Charge added to the dot resides much more at the dot perimeter than this model predicts.
The assumption of complete disorder in the donor layer is probably overly pessimistic. In such a case the 2DEG electrostatic profile is completely dominated by the ions and it is difficult to see how workable structures could be fabricated at all. The presence of even a small degree of ordering in the donor layer, which can be experimentally modified by a back gate, dramatically reduces potential fluctuations at the 2DEG level.
Dot energy levels show a shell structure which is robust to ordered donor layer ions, though for complete disorder it appears to break up. The shell structure is responsible for variations in the capacitance with gate voltage as well as envelope modulation of Coulomb oscillation peaks. The claims that Coulomb oscillation data through currently fabricated lateral quantum dots shows unambiguous transport through single levels are questionable, though some oscillations will saturate at a higher temperature than others.
The capacitance between the dot and a lead increases only very slightly as the QPC barrier is reduced. Thus the electrostatic energy between dot and leads is dominated by charge below the Fermi surface and splitting of oscillation peaks through double dot structures [@Westervelt] is undoubtedly a result of tunneling.
Finally, chaos is well known to be mitigated in quantum systems where barrier penetration is non-negligible [@Smilansky]. Insofar as non-integrability of the underlying classical Hamiltonian is being used as the justification for an assumption of ergodicity [@Jalabert] in quantum dots, our results suggest that further success in comparison with real (i.e. experimental) systems will occur only when account is taken, in for example the level velocity [@Alhassid; @Simons], of the correlating influences of quantum mechanics.
I wish to express my thanks for benefit I have gained in conversations with many colleagues. These include but are not limited to: Arvind Kumar, S. Das Sarma, Frank Stern, J. P. Bird, Crispin Barnes, Yasuhiro Tokura, B. I. Halperin, Catherine Crouch, R. M. Westervelt, Holger F. Hofmann, Y. Aoyagi, K. K. Likharev, C. Marcus and D. K. Ferry. I am also grateful for support from T. Sugano, Y. Horikoshi, and S. Tarucha. Computational support from the Fujitsu VPP500 Supercomputer and the Riken Computer Center is also gratefully acknowledged.
For a review see, D. Averin and K. K. Likharev, in: [*Mesoscopic Phenomena in Solids*]{} edited by B. L. Altshuler, P. A. Lee and R. A. Webb, Elsevier, Amsterdam, (1990).
D.V. Averin, A.N. Korotkov, K.K. Likharev, Phys. Rev. B [**44**]{}, 6199, (1991).
C.W.J. Beenakker, Phys. Rev. B [**44**]{}, 1646 (1991).
A short paper concerning these results has already appeared in: M. Stopa, Phys. Rev. B [**48**]{}, 18340 (1993).
M. Stopa, to be published.
M. Stopa, Phys. Rev. B [**53**]{}, 9595 (1996).
F. Stern and S. Das Sarma, Phys. Rev. B [**30**]{}, 840 (1984).
J. C. Slater, [*The Self-Consistent Field for Molecules and Solids*]{}, McGraw-Hill, New York, 1974.
M. Stopa, J. P. Bird, K. Ishibashi, Y. Aoyagi and T. Sugano, Phys. Rev. Lett. [**76**]{}, (1996).
A. Kumar, S. Laux and F. Stern, Phys. Rev. B [**42**]{}, 5166 (1990).
T. Ando, A. Fowler and F. Stern, Rev. Mod. Phys. [**54**]{}, 437 (1982).
For a discussion of the treatment of surface states see: J. H. Davies and I. A. Larkin, Phys. Rev. B [**49**]{}, 4800 (1994); J. H. Davies, Semicond. Sci. Technol. [**3**]{}, 995 (1988).
J. A. Nixon and J. H. Davies, Phys. Rev. B [**41**]{}, 7929 (1990).
Recently, G. Pilling, D. H. Cobden, P. L. McEuen, C. I. Duruöz and J. S. Harris Jr., [*Proceedings of the Eleventh International Conference on the Electronic Properties of Two Dimensional Systems*]{}, Surface Science (in press), have presented experimental evidence of electrons escaping from the 2DEG layer into the substrate upon tunneling beneath a barrier.
Studies of edge states in quantum Hall liquids, for example, uniformly assume perfectly 2D systems, see for example C. de C. Chamon and X. G. Wen, Phys. Rev. B [**49**]{}, 8227 (1994).
D. Jovanovic and J. P. Leburton, Phys. Rev B, [**49**]{}, 7474 (1994).
C. G. Darwin, Proc. Cambridge Philos. Soc. [**27**]{}, 86 (1931).
R. E. Bank and D. J. Rose, SIAM J. Numer. Anal., [**17**]{}, 806 (1980).
E. Buks, M. Heiblum and Hadas Shtrikman, Phys. Rev. B [**49**]{}, 14790 (1994); P. Sobkowicz, Z. Wilamowski and J. Kossut, Semicond. Sci. Technol. [**7**]{}, 1155 (1992).
A. L. Efros, Solid State Comm. [**65**]{}, 1281 (1988); T. Suski, P. Wisniewski, I. Gorczyca, L. H. Dmowski, R. Piotrzkowski, P. Sobkowicz, J. Smoliner, E. Gornik, G. Böm and G. Weimann, Phys. Rev. B [**50**]{}, 2723 (1994).
Y.Aoyagi, M. Stopa, H. F. Hofmann and T. Sugano, in [*Quantum Coherence and Decoherence*]{}, edited by K. Fujikawa and Y. A. Ono, Elsevier, Holland (1996).
J. P. Bird, K. Ishibashi, M. Stopa, R. P. Taylor, Y. Aoyagi and T. Sugano, Phys. Rev. B [**49**]{}, 11488 (1994).
H. van Houten, C.W.J. Beenakker, A.A.M Staring in [*Single Charge Tunneling*]{}, edited by H. Grabert and M.H. Devoret, NATO ASI Series B (Plenum, New York, 1991).
Y. Meir, N. Winegreen, P.A. Lee, Phys. Rev. Lett. [**66**]{}, 3048 (1991).
J. Bardeen, Phys. Rev. Lett. [**6**]{}, 57 (1961).
K. A. Matveev, Phys. Rev. B [**51**]{}, 1743 (1995).
J. H. F. Scott-Thomas, S. B. Field, M. A. Kastner, D. A. Antoniadis and H. I. Smith, Phys. Rev. Lett. [**62**]{}, 583 (1989); U. Meirav, M. A. Kastner and S. J. Wind, Phys. Rev. Lett. [**65**]{}, 771 (1990); L.P. Kouwenhoven, N.C. van der Vaart, A.T. Johnson, C.J.P.M. Harmans, J.G. Williamson, A.A.M. Staring, C.T. Foxon, proceedings of the German Physical Society meeting, Münster 1991; Festkörperprobleme/Advances in Solid State Physics (Volume 31); E. B. Foxman, P. L. McEuen, U. Meirav, N. Wingreen, Y. Meir, P. A. Belk, N. R. Belk, M. A. Kastner and S. J. Wind, Phys. Rev. B [**47**]{}, 10020 (1993).
M. Stopa, Y. Aoyagi and T. Sugano, Phys. Rev. B, [**51**]{}, 5494 (1995).
M. Stopa and Y. Tokura, in [*Science and Technology of Mesoscopic Structures*]{}, edited by S. Namba, C. Hamaguchi and T. Ando, Springer-Verlag, Tokyo, (1992).
M. Büttiker, J. Phys. Condens. Matter [**5**]{}, 9361 (1993).
P. Hohenberg and W. Kohn, Phys. Rev. [**136**]{} B864 (1964).
W. Kohn and L. Sham, Phys. Rev. [**140**]{} A1133 (1965).
P. L. McEuen, E. B. Foxman, U. Meirav, M. A. Kastner, Y. Meir, N. S. Wingreen and S. J. Wind, Phys. Rev. Lett. [**66**]{}, 1926 (1991).
R. C. Ashoori, H. L. Stormer, J. S. Weiner, L. N. Pfeiffer, S. J. Pearton, K. W. Baldwin and K. W. West, Phys. Rev. Lett. [**68**]{}, 3088 (1992).
R.Jalabert, H.U. Baranger and A. D. Stone, Phys. Rev. Lett. [**65**]{}, 2442 (1990).
C. W. J. Beenakker and H. van Houten, Phys. Rev. Lett. [**63**]{}, 1857 (1989).
J. P. Bird, D. M. Olatona, R. Newbury, R. P. Taylor, K. Ishibashi, M. Stopa, Y. Aoyagi, T. Sugano and Y. Ochiai, Phys. Rev. B [**52**]{}, R14336 (1995).
D. K. Ferry and G. Edwards, preprint.
In other words, since the dot floor is not flat, even a lithographically square dot, say, with gates very close to the 2DEG plane would seem to suffer rounding of the resulting potential departing appreciably from the flat bottomed, hard walled square shape envisioned.
P. M. Mooney, J. Appl. Phys. [**67**]{}, R1 (1990).
E. Yamaguchi, K. Shiraishi and T. Ono, Jour. Phys. Soc. Jap. [**60**]{}, 3093 (1991).
M. Heiblum, private communication.
M. Stopa, Y. Aoyagi and T. Sugano, Surf. Sci. [**305**]{}, 571 (1994).
V. Shikin, S. Nazin, D. Heitmann and T. Demel, Phys. Rev. B [**43**]{}, 11903 (1991); S. Nazin, K. Tevosyan and V. Shikin, Surf. Sci. [**263**]{}, 351 (1992).
D. B. Chklovskii, B. I. Shklovskii and L. I. Glazman, Phys. Rev. B [**46**]{}, 15606 (1992).
K. A. Matveev, preprint.
J. Golden and B. I. Halperin, preprint.
H. Yi and C. L. Kane, Report No. cond-mat/9500139.
F. R. Waugh, M. J. Berry, D. J. Mar, R. M. Westervelt, K. L. Campman and A. C. Gossard, Phys. Rev. Lett. [**75**]{}, 705 (1995).
I. M. Ruzin, V. Chandrasekhar, E. I. Levin and L. I. Glazman, Phys. Rev. B [**45**]{}, 13469 (1992).
See M. Stopa and S. Das Sarma, Phys. Rev. B [**47**]{}, 2122 (1993), and references therein.
D. J. Lockwood, P. Hawrylak, P. D. Wang, C. M. Sotomayor Torres, A. Pinczuk and B. S. Dennis, Phys. Rev. Lett. [**77**]{}, 354 (1996).
S. Tarucha, D. G. Austing, T. Honda, R. J. van der Hage and L. P. Kouwenhoven (to be published).
F. M. Peeters, V. A. Schweigert and V. M. Bedanov, Physica B [**212**]{}, 237 (1995).
Y. Tanaka and H. Akera, Phys. Rev. B [**53**]{}, 3901 (1996).
Y. Sun and G. Kirczenow, Phys. Rev. Lett. [**72**]{}, 2450 (1994).
For a discussion of corrections to self-interaction in density functional calculations, see: J. P. Perdew and A. Zunger, Phys. Rev. B [**23**]{}, 5048 (1981).
A. V. Andreev, O. Agam, B. D. Simons and B. L. Altshuler, Phys. Rev. Lett. [**76**]{}, 3947 (1996).
T. A. Brody, J. Flores, J. B. French, P. A. Mello, A. Pandey and S. S. M. Wong, Rev. Mod. Phys. [**53**]{}, 385 (1981).
See K. Slevin and T. Nagao, International Jour. Mod. Phys. B [**9**]{}, 103 (1995), and references therein.
E. P. Wigner, in [*Conference on Neutron Physics by Time-of-Flight*]{}, Gatlinburg, Tennessee, 1956 (ORNL-2309, Oak Ridge National Laboratory).
M. Stopa, Superlattices and Microstructures (in press).
A. M. Chang, H. U. Baranger, L. N. Pfeiffer, K. W. West and T. Y. Chang, Phys. Rev. Lett. [**76**]{}, 1695 (1996).
J. A. Folk, S. R. Patel, S. F. Godijn, A. G. Huibers, S. M. Cronenwett, C. M. Marcus, K. Campman and A. C. Gossard, Phys. Rev. Lett. [**76**]{}, 1699 (1996).
J. P. Bird, K. Ishibashi, Y. Aoyagi, T. Sugano and Y. Ochiai, Phys. Rev. B [**50**]{}, 18678 (1994).
Y. Alhassid and H. Attias, Phys. Rev. Lett. [**76**]{}, 1711 (1996).
I am indebted to Dr. David Ferry for an informative discussion on this point.
U. Smilansky, in [*Proceedings of the 1994 Les-Houches Summer School on “Mesoscopic Quantum Physics”*]{}, ed. by E. Akkermans, G. Montambaux and J. L. Pichard (in press).
R. A. Jalabert, A. D. Stone, and Y. Alhassid, Phys. Rev. Lett. [**68**]{}, 3468 (1992).
B. D. Simons and B. L. Altshuler, Phys. Rev. B [**48**]{}, 5422 (1993).
---
abstract: 'Padé approximation has two natural extensions to vector rational approximation through the so-called type I and type II Hermite-Padé approximants. The convergence properties of type II Hermite-Padé approximants have been studied. For such approximants, Markov and Stieltjes type theorems are available. To date, such results have not been obtained for type I approximants. In this paper, we provide Markov and Stieltjes type theorems on the convergence of type I Hermite-Padé approximants for Nikishin systems of functions.'
author:
- 'G. López Lagomasino, S. Medina Peralta'
title: 'On the convergence of type I Hermite-Padé approximants'
---
**Keywords:** Multiple orthogonal polynomials, Nikishin systems, type I Hermite-Padé approximants.
**AMS classification:** Primary 30E10, 42C05; Secondary 41A20.
Introduction {#section:intro}
============
Let $s$ be a finite Borel measure with constant (not necessarily positive) sign whose support $\mbox{supp}(s)$ contains infinitely many points and is contained in the real line $\mathbb{R}$. If $\mbox{supp}(s)$ is an unbounded set we assume additionally that $x^n \in L_1(s), n \in \mathbb{N}$. By $\Delta = \mbox{Co}(\mbox{supp}(s))$ we denote the smallest interval which contains ${\mathrm{supp}}(s)$. We denote this class of measures by ${\mathcal{M}}(\Delta)$. Let $$\widehat{s}(z) = \int\frac{d s(x)}{z-x}$$ be the Cauchy transform of $s$.
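For instance (a standard example, used again below only for illustration), if $ds(x)=dx$ on $\Delta=[-1,1]$ then $$\widehat{s}(z) = \int_{-1}^{1}\frac{dx}{z-x} = \log\frac{z+1}{z-1} = \frac{2}{z} + \frac{2}{3z^3} + \frac{2}{5z^5} + \cdots, \qquad z \in \overline{\mathbb{C}}\setminus[-1,1].$$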
Given any positive integer $n \in \mathbb{N}$ there exist polynomials $Q_n,P_n$ satisfying:
- $\deg Q_n \leq n,\quad \deg P_n \leq n-1, \quad Q_n \not\equiv 0,$
- $(Q_n \widehat{s} -P_n)(z) = \mathcal{O}(1/z^{n+1}),\quad z \to \infty.$
The ratio $\pi_n = P_n/Q_n$ of any two such polynomials defines a unique rational function called the $n$th term of the diagonal sequence of Padé approximants to $\widehat{s}$. Cauchy transforms of measures are important: for example, many elementary functions may be expressed through them, the resolvent function of a bounded selfadjoint operator adopts that form, and they characterize all functions holomorphic in the upper half plane whose image lies in the lower half plane and can be extended continuously to the complement of a finite segment $[a,b]$ of the real line taking negative values for $z < a$ and positive values for $z > b$ (then $\mbox{supp}(s) \subset [a,b]$), see [@KN Theorem A.6]. Providing efficient methods for their approximation is a central question in the theory of rational approximation.
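To fix ideas, for the example above one checks directly from i)-ii) that for $n=2$ one may take $Q_2(z)=z^2-\frac{1}{3}$ and $P_2(z)=2z$, so that $$\pi_2(z)=\frac{2z}{z^2-\frac{1}{3}}, \qquad \widehat{s}(z)-\pi_2(z)=\mathcal{O}\left(\frac{1}{z^{5}}\right), \quad z\to\infty;$$ the poles $\pm 1/\sqrt{3}$ are the nodes of the two-point Gauss quadrature formula for $\Delta=[-1,1]$.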
When $\Delta$ is bounded, A.A. Markov proved in [@Mar] (in the context of the theory of continued fractions) that $$\label{Mar} \lim_{n \to \infty} \pi_n(z) = \widehat{s}(z)$$ uniformly on each compact subset of $\overline{\mathbb{C}} \setminus \Delta$. It is easy to deduce that this limit takes place with a geometric rate. When $\Delta$ is a half line, T.J. Stieltjes in [@Sti] showed that (\[Mar\]) takes place if and only if the moment problem for the sequence $\left(c_n\right)_{n\geq 0}, c_n = \int x^n ds(x),$ is determinate. It is well known that the moment problem for measures of bounded support is determinate; therefore, Stieltjes’ theorem contains Markov’s result. In [@Car], T. Carleman proved when $\Delta \subset \mathbb{R}_+$ that $$\label{Carle} \sum_{n \geq 1} |c_{n}|^{-1/2n} = \infty$$ is sufficient for the moment problem to be determinate. For an arbitrary measure $s \in \mathcal{M}(\Delta)$, where $\Delta$ is contained in a half line, we say that it satisfies Carleman’s condition if, after an affine transformation which takes $\Delta$ into $\mathbb{R}_+$, the image measure satisfies Carleman’s condition.
Padé approximation has two natural extensions to the case of vector rational approximation. These extensions were introduced by Hermite in order to study the transcendency of $e$. Other applications in number theory have been obtained. See [@Ass] for a survey of results in this direction. Recently, these approximants and their associated Hermite-Padé polynomials have appeared in a natural way in certain models coming from probability theory and mathematical physics. A summary of this type of application can be found in [@Kuij].
Given a system of finite Borel measures $S = (s_1,\ldots,s_m)$ with constant sign and a multi-index ${\bf n} = (n_1,\ldots,n_m) \in \mathbb{Z}_+^m \setminus \{{\bf 0}\}, |{\bf n}|= n_1+\cdots+n_m$, where $\mathbb{Z}_+$ denotes the set of non-negative integers and $\bf 0$ the $m$-dimensional zero vector, there exist polynomials $a_{{\bf n},j}, j=0,\ldots,m,$ not all identically equal to zero, such that:
- $\deg a_{{\bf n},j} \leq n_j -1, j=1,\ldots,m, \quad \deg a_{{\bf n},0} \leq \max(n_j) -2,$
- $a_{{\bf n},0}(z) + \sum_{j=1}^m a_{{\bf n},j}(z) \widehat{s}_j(z) = \mathcal{O}(1/z^{|{\bf n}|}),\,\,\, z \to \infty \,\,\,(\deg a_{{\bf n},j} \leq -1$ means that $a_{{\bf n},j} \equiv 0$).
Analogously, there exist polynomials $Q_{\bf n}, P_{{\bf n},j}, j=1,\ldots,m$, satisfying:
- $\deg Q_{\bf n} \leq |{\bf n}|, Q_{\bf n} \not\equiv 0, \quad \deg P_{{\bf n},j} \leq |{\bf n}|-1, j=1,\ldots,m,$
- $Q_{\bf n}(z) \widehat{s}_j (z) - P_{{\bf n},j}(z) = \mathcal{O}(1/z^{n_j +1}), \quad z\to \infty, \quad j=1,\ldots,m.$
Traditionally, the systems of polynomials $(a_{{\bf n},0}, \ldots,a_{{\bf n},m})$ and $(Q_{\bf n},P_{{\bf n},1},\ldots,P_{{\bf n},m})$ have been called type I and type II Hermite-Padé approximants (polynomials) of $(\widehat{s}_1,\ldots,\widehat{s}_m)$, respectively. When $m=1$ both definitions reduce to that of classical Padé approximation.
From the definition, type II Hermite-Padé approximation is easy to view as an approximating scheme of the vector function $(\widehat{s}_1,\ldots,\widehat{s}_m)$ by considering a sequence of vector rational functions of the form $(P_{{\bf n},1}/Q_{\bf n},\ldots,P_{{\bf n},m}/Q_{\bf n}), {\bf n} \in \Lambda \subset \mathbb{Z}_+^m$, where $Q_{\bf n}$ is a common denominator for all components. Regarding type I, it is not obvious what is the object to be approximated or even what should be considered as the approximant. Our goal is to clarify these questions providing straightforward analogues of the Markov and Stieltjes theorems.
Before stating our main result, let us introduce what is called a Nikishin system of measures to which we will restrict our study. Let $\Delta_{\alpha}, \Delta_{\beta}$ be two intervals contained in the real line which have at most one point in common, $\sigma_{\alpha} \in {\mathcal{M}}(\Delta_{\alpha}), \sigma_{\beta} \in {\mathcal{M}}(\Delta_{\beta})$, and $\widehat{\sigma}_{\beta} \in L_1(\sigma_{\alpha})$. With these two measures we define a third one as follows (using the differential notation) $$d \langle \sigma_{\alpha},\sigma_{\beta} \rangle (x) := \widehat{\sigma}_{\beta}(x) d\sigma_{\alpha}(x).$$ Above, $\widehat{\sigma}_{\beta}$ denotes the Cauchy transform of the measure $\sigma_{\beta}$. The more appropriate notation $\widehat{\sigma_{\beta}}$ causes space consumption and aesthetic inconveniences. We need to take consecutive products of measures; for example, $$\langle \sigma_{\gamma}, \sigma_{\alpha},\sigma_{\beta} \rangle :=\langle \sigma_{\gamma}, \langle \sigma_{\alpha},\sigma_{\beta} \rangle \rangle.$$ Here, we assume not only that $\widehat{\sigma}_{\beta} \in L_1(\sigma_{\alpha})$ but also $\langle \sigma_{\alpha},\sigma_{\beta} \widehat{\rangle} \in L_1(\sigma_{\gamma})$ where $\langle \sigma_{\alpha},\sigma_{\beta} \widehat{\rangle}$ denotes the Cauchy transform of $\langle \sigma_{\alpha},\sigma_{\beta} {\rangle}$. Inductively, one defines products of a finite number of measures.
\[Nikishin\] Take a collection $\Delta_j, j=1,\ldots,m,$ of intervals such that, for $j=1,\ldots,m-1$ $$\Delta_j \cap \Delta_{j+1} = \emptyset, \qquad \mbox{or} \qquad \Delta_j \cap \Delta_{j+1} = \{x_{j,j+1}\},$$ where $x_{j,j+1}$ is a single point. Let $(\sigma_1,\ldots,\sigma_m)$ be a system of measures such that $\mbox{Co}({\mathrm{supp}}(\sigma_j)) = \Delta_j, \sigma_j \in {\mathcal{M}}(\Delta_j), j=1,\ldots,m,$ and $$\label{eq:autom}
\langle \sigma_{j},\ldots,\sigma_k {\rangle} := \langle \sigma_j,\langle \sigma_{j+1},\ldots,\sigma_k\rangle\rangle\in {\mathcal{M}}(\Delta_j), \qquad 1 \leq j < k\leq m.$$ When $\Delta_j \cap \Delta_{j+1} = \{x_{j,j+1}\}$ we also assume that $x_{j,j+1}$ is not a mass point of either $\sigma_j$ or $\sigma_{j+1}$. We say that $(s_1,\ldots,s_m) = {\mathcal{N}}(\sigma_1,\ldots,\sigma_m)$, where $$s_1 = \sigma_1, \quad s_2 = \langle \sigma_1,\sigma_2 \rangle, \ldots \quad , s_m = \langle \sigma_1, \sigma_2,\ldots,\sigma_m \rangle$$ is the Nikishin system of measures generated by $(\sigma_1,\ldots,\sigma_m)$.
Initially, E.M. Nikishin in [@Nik] restricted himself to measures with bounded support and no intersection points between consecutive $\Delta_j$. Definition \[Nikishin\] includes interesting examples which appear in practice (see, [@FL4 Subsection 1.4]). We follow the approach of [@FL4 Definition 1.2] assuming additionally the existence of all the moments of the generating measures. This is done only for the purpose of simplifying the presentation without affecting too much the generality. However, we wish to point out that the results of this paper have appropriate formulations with the definition given in [@FL4] of a Nikishin system.
When $m=2$, for multi-indices of the form ${\bf n} = (n,n)$ E.M. Nikishin proved in [@Nik] that $$\lim_{n \to \infty} \frac{P_{{\bf n},j}(z)}{Q_{\bf n}(z)} = \widehat{s}_j(z), \qquad j=1,2,$$ uniformly on each compact subset of $\overline{\mathbb{C}} \setminus \Delta_1$. In [@BL] this result was extended to any Nikishin system of $m$ measures including generating measures with unbounded support. The convergence for more general sequences of multi-indices was treated in [@FL1], [@FL2] and [@GRS].
In [@FL4 Lemma 2.9] it was shown that if $(\sigma_1,\ldots,\sigma_m)$ is a generator of a Nikishin system then $(\sigma_m,\ldots,\sigma_1)$ is also a generator (as well as any subsystem of consecutive measures drawn from them). When the supports are bounded and consecutive supports do not intersect this is trivially true. In the following, for $1\leq j\leq k\leq m$ we denote $$s_{j,k} := \langle \sigma_j,\sigma_{j+1},\ldots,\sigma_k \rangle, \qquad s_{k,j} := \langle \sigma_k,\sigma_{k-1},\ldots,\sigma_j \rangle.$$
To state our main results, the natural framework is that of multi-point type I Hermite-Padé approximation.
\[type1\] Let $(s_{1,1},\ldots,s_{1,m}) = \mathcal{N}(\sigma_1,\ldots,\sigma_m), {\bf n} = (n_1,\ldots,n_m) \in \mathbb{Z}_+^{m} \setminus \{{\bf 0}\},$ and $w_{\bf n},$ $ \deg w_{\bf n} \leq |{\bf n}| + \max(n_j)-2,$ a polynomial with real coefficients whose zeros lie in $\mathbb{C} \setminus\Delta_1$, be given. We say that $\left(a_{{\bf n},0}, \ldots, a_{{\bf n},m}\right)$ is a type I multi-point Hermite-Padé approximation of $(\widehat{s}_{1,1},\ldots,\widehat{s}_{1,m})$ with respect to $w_{\bf n}$ if:
- $\deg a_{{\bf n},j} \leq n_j -1, j=1,\ldots,m, \quad \deg a_{{\bf n},0} \leq n_0 -1, \quad n_0 := \max_{j=1,\ldots,m} (n_j) -1,\quad$ not all identically equal to $0$ ($n_j = 0$ implies that $a_{{\bf n},j} \equiv 0$),
- $\mathcal{A}_{{\bf n},0}/w_{\bf n} \in \mathcal{H}(\mathbb{C} \setminus \Delta_1)\quad$ and $\quad \mathcal{A}_{{\bf n},0}(z)/w_{\bf n}(z)= \mathcal{O}(1/z^{|{\bf n}|}), \quad z\to \infty, \quad$ where $$\mathcal{A}_{{\bf n},j} := a_{{\bf n},j} + \sum_{k= j+1}^m a_{{\bf n},k} \widehat{s}_{j+1,k}(z), \quad j=0,\ldots,m-1, \qquad \mathcal{A}_{{\bf n},m} := {a}_{{\bf n},m}.$$
If $\deg w_{\bf n} = |{\bf n}| + \max(n_j)-2$ the second part of ii) is automatically fulfilled. If $\deg w_{\bf n} = N < |{\bf n}| + \max(n_j)-2$, then $|{\bf n}| + \max(n_j)-2 -N$ (asymptotic) interpolation conditions are imposed at $\infty$. In general $|{\bf n}| + \max(n_j)-2$ interpolation conditions are imposed at points in $({\mathbb{C}}\setminus \Delta_1) \cup \{\infty\}$. The total number of free parameters (the coefficients of the polynomials $a_{{\bf n},j}, j=0,\ldots,m$) equals $|{\bf n}| + \max(n_j)-1$; therefore, the homogeneous linear system of equations to be solved in order that i)-ii) take place always has a non-trivial solution. Notice that when $w_{\bf n} \equiv 1$ we recover the definition given above for classical type I Hermite-Padé approximation.
An analogous definition can be given for type II multi-point Hermite-Padé approximants but we will not dwell into this. Algebraic and analytic properties regarding uniqueness, integral representations, asymptotic behavior, and orthogonality conditions satisfied by type I and type II Hermite-Padé approximants have been studied, for example, in [@BL], [@DrSt0], [@DrSt1], [@DrSt2], [@FL1], [@FL2], [@FL3], [@FL4], [@GRS], [@Nik0], and [@NS Chapter 4], which include the case of multi-point approximation.
Let $\stackrel{\circ}{\Delta}$ denote the interior of $\Delta$ with the Euclidean topology of the real line. We have
\[unicidad\] Let $(s_{1,1},\ldots,s_{1,m}) = \mathcal{N}(\sigma_1,\ldots,\sigma_m), {\bf n} = (n_1,\ldots,n_m) \in \mathbb{Z}_+^{m} \setminus \{{\bf 0}\},$ and $w_{\bf n},$ $ \deg w_{\bf n} \leq |{\bf n}| + \max(n_j)-2,$ a polynomial with real coefficients whose zeros lie in $\mathbb{C} \setminus\Delta_1$, be given. The type I multi-point Hermite-Padé approximation $\left(a_{{\bf n},0}, \ldots, a_{{\bf n},m}\right)$ of $(\widehat{s}_{1,1},\ldots,\widehat{s}_{1,m})$ with respect to $w_{\bf n}$ is uniquely determined except for a constant factor, and $\deg a_{{\bf n},j} = n_j -1, j=0,\ldots,m$. Moreover $$\label{ortog}
\int x^{\nu} \mathcal{A}_{{\bf n},1}(x) \frac{d\sigma_1(x)}{w_{\bf n}(x)} = 0, \qquad \nu = 0,\ldots,|{\bf n}| -2,$$ which implies that $\mathcal{A}_{{\bf n},1}$ has exactly $|{\bf n}| -1$ simple zeros in $\stackrel{\circ}{\Delta}_1$ and no other zeros in $\mathbb{C} \setminus \Delta_2$. Additionally, $$\label{resto}
\frac{\mathcal{A}_{{\bf n},0}(z)}{w_{\bf n}(z)} = \int \frac{ \mathcal{A}_{{\bf n},1}(x) d\sigma_1(x)}{w_{\bf n}(x)(z-x)}$$ and $$\label{a0} a_{{\bf n},0}(z) = - \int \frac{\sum_{j=1}^{m} (w_{\bf n}(x) a_{{\bf n},j}(z) - w_{\bf n}(z) a_{{\bf n},j}(x)) d{s}_{1,j}(x) }{(z-x) w_{\bf n}(x)}.$$
Notice that nothing has been said about the location of the zeros of the polynomials $a_{{\bf n},j}$. For special sequences of multi-indices this information can be deduced from the convergence of type I Hermite-Padé approximants. We have the following result (see also Lemma \[haus\]).
\[converge\] Let $S= (s_{1,1},\ldots,s_{1,m}) = \mathcal{N}(\sigma_1,\ldots,\sigma_m)$, $\Lambda \subset \mathbb{Z}_+^{m}$ an infinite sequence of distinct multi-indices, and $(w_{\bf n})_{{\bf n}\in \Lambda}, \deg w_{\bf n} \leq |{\bf n}| + \max(n_j)-2,$ a sequence of polynomials with real coefficients whose zeros lie in $\mathbb{C} \setminus\Delta_1$, be given. Consider the corresponding sequence $\left(a_{{\bf n},0},\ldots, a_{{\bf n},m}\right), {\bf n} \in \Lambda,$ of type I multi-point Hermite-Padé approximants of $S$ with respect to $(w_{\bf n})_{{\bf n} \in \Lambda}$. Assume that $$\label{cond1} \sup_{{\bf n}\in \Lambda}\left(\max_{j=1,\ldots,m}(n_j) - \min_{k=1,\ldots,m}(n_k) \right)\leq C < \infty,$$ and that either $\Delta_{m-1}$ is bounded away from $\Delta_m$ or $\sigma_m$ satisfies Carleman’s condition. Then, $$\label{fund1} \lim_{{\bf n} \in \Lambda} \frac{a_{{\bf n}, j}}{a_{{\bf n},m}} = (-1)^{m-j}\widehat{s}_{m,j+1},\qquad j=0,\ldots,m-1,$$ uniformly on each compact subset $\mathcal{K} \subset \mathbb{C} \setminus \Delta_m$. The accumulation points of sequences of zeros of the polynomials $a_{{\bf n},j}, j=0,\ldots,m, {\bf n} \in \Lambda$ are contained in $\Delta_m \cup \{\infty\}$. Additionally, $$\label{fund2} \lim_{{\bf n} \in \Lambda} \frac{\mathcal{A}_{{\bf n},j}}{a_{{\bf n},m}} = 0, \qquad j=0,\ldots,m-1,$$ uniformly on each compact subset $\mathcal{K} \subset \mathbb{C} \setminus (\Delta_{j+1} \cup \Delta_m)$.
We wish to underline that Theorem \[converge\] requires no special analytic property from the generating measures of the Nikishin system except for Carleman’s condition on $\sigma_m$.
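As an aside (a reminder we add, in the form used later in the proof of Theorem \[momentos\]), Carleman’s condition for a measure $\sigma$ supported on $[0,+\infty)$ with moments $c_n = \int x^n d\sigma(x)$ reads $$\sum_{n \geq 1} |c_n|^{-1/2n} = \infty.$$ In particular, every measure with compact support, say ${\mathrm{supp}}(\sigma) \subset [0,R]$, satisfies it, since $|c_n| \leq |\sigma| R^n$ gives $|c_n|^{-1/2n} \geq |\sigma|^{-1/2n} R^{-1/2} \to R^{-1/2} > 0$; thus the condition is only a genuine restriction when the support is unbounded.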
Notice that the sequences of rational functions $\left( {a_{{\bf n}, j}}/{a_{{\bf n},m}}\right), {\bf n} \in \Lambda, j=0,\ldots,m-1,$ allow us to recover the Cauchy transforms of the measures in the Nikishin system $\mathcal{N}(\sigma_m,\ldots,\sigma_1)$, in contrast with the sequences $\left( {P_{{\bf n}, j}}/{Q_{{\bf n}}}\right), {\bf n} \in \Lambda, j=1,\ldots,m,$ of type II multi-point Hermite-Padé approximants, which recover the Cauchy transforms of the measures in $\mathcal{N}(\sigma_1,\ldots,\sigma_m)$.
While this paper was being written, S.P. Suetin sent us [@RS1] and [@RS2]. The first of these papers announces the results contained in the second one. Those papers deal with the study of type I Hermite-Padé approximants for an interesting class of systems of two functions $(m=2)$ which form a generalized Nikishin system, in the sense that the second generating measure lives on a symmetric (with respect to the real line) compact set which does not separate the complex plane and is made up of finitely many analytic arcs. The authors obtain the logarithmic asymptotics of the sequences of Hermite-Padé polynomials $a_{{\bf n},j}, j=1,2,$ and an analogue of for $j=1$. Convergence is proved in capacity (see [@RS1 Theorem 1] and [@RS2 Corollary 1]).
For the proof of Theorem \[converge\] we need a convenient representation of the reciprocal of the Cauchy transform of a measure. It is known that for each $\sigma
\in {\mathcal{M}}(\Delta),$ where $\Delta$ is contained in a half line, there exists a measure $\tau \in
{\mathcal{M}}(\Delta)$ and ${\ell}(z)=a z+b, a = 1/|\sigma|, b \in {\mathbb{R}},$ such that $$\label{s22}
{1}/{\widehat{\sigma}(z)}={\ell}(z)+ \widehat{\tau}(z),$$ where $|\sigma|$ is the total variation of $\sigma.$ See [@KN Appendix] and [@stto Theorem 6.3.5] for measures with compact support, and [@FL4 Lemma 2.3] when the support is contained in a half line.
We call $\tau$ the inverse measure of $\sigma.$ Such measures appear frequently in our arguments, so we fix a notation to distinguish them. For measures denoted by $s$, the corresponding inverse measures carry the same sub-indices; the same convention applies to the polynomials $\ell$. For example, $${1}/{\widehat{s}_{j,k}(z)} ={\ell}_{j,k}(z)+
\widehat{\tau}_{j,k}(z).$$ We also write $${1}/{\widehat{\sigma}_{\alpha}(z)} ={\ell}_{\alpha }(z)+
\widehat{\tau}_{\alpha }(z).$$
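A classical illustration (ours, not taken from the paper): for the arcsine measure $d\sigma(x) = dx/(\pi\sqrt{1-x^2})$ on $\Delta = [-1,1]$ one has $\widehat{\sigma}(z) = (z^2-1)^{-1/2}$, taking the branch for which $\sqrt{z^2-1} \sim z$ at infinity, so $$\frac{1}{\widehat{\sigma}(z)} = \sqrt{z^2-1} = z - \left(z - \sqrt{z^2-1}\right) = \ell(z) + \widehat{\tau}(z),$$ with $\ell(z) = z$ (indeed $a = 1/|\sigma| = 1$ and $b=0$) and $d\tau(x) = -\frac{1}{\pi}\sqrt{1-x^2}\,dx$; that is, the inverse measure of the arcsine measure is, up to sign, the semicircle measure on $[-1,1]$.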
The following result has independent interest and will be used in combination with Lemma \[BusLop\] below in the proof of Theorem \[converge\].
\[momentos\] Let $(s_{1,1},s_{1,2}) = \mathcal{N}(\sigma_1,\sigma_2)$. If $\sigma_1$ satisfies Carleman’s condition, then so do $s_{1,2}$ and $\tau_1$.
This paper is organized as follows. In Section \[aux\] we prove Theorems \[unicidad\] and \[momentos\]. We also present some notions and results necessary for the proof of Theorem \[converge\]. Section \[proofmain\] contains the proof of Theorem \[converge\] and some extensions of the main result to sequences of multi-indices satisfying conditions weaker than , estimates of the rate of convergence in - for the case when $\Delta_m$ or $\Delta_{m-1}$ is bounded and $\Delta_m \cap \Delta_{m-1} = \emptyset$, and applications to other simultaneous approximation schemes.
Proof of Theorem \[unicidad\] and auxiliary results {#aux}
================================================
We begin with a lemma which allows to give an integral representation of the remainder of type I multi-point Hermite-Padé approximants.
\[reduc\] Let $(s_{1,1},\ldots,s_{1,m}) = \mathcal{N}(\sigma_1,\ldots,\sigma_m)$ be given. Assume that there exist polynomials with real coefficients $a_0,\ldots,a_m$ and a polynomial $w$ with real coefficients whose zeros lie in $\mathbb{C} \setminus \Delta_1$ such that $$\frac{\mathcal{A}(z)}{w(z)} \in \mathcal{H}(\mathbb{C} \setminus \Delta_1)\qquad \mbox{and} \qquad \frac{\mathcal{A}(z)}{w(z)} = \mathcal{O}\left(\frac{1}{z^N}\right), \quad z \to \infty,$$ where $\mathcal{A} := a_0 + \sum_{k=1}^m a_k \widehat{s}_{1,k} $ and $N \geq 1$. Let $\mathcal{A}_1 := a_1 + \sum_{k=2}^m a_k \widehat{s}_{2,k} $. Then $$\label{eq:3}
\frac{\mathcal{A}(z)}{w(z)} = \int \frac{\mathcal{A}_1(x)}{(z-x)} \frac{d\sigma_1(x)}{w(x)}.$$ If $N \geq 2$, we also have $$\label{eq:4}
\int x^{\nu} \mathcal{A}_1(x) \frac{d\sigma_1(x)}{w(x)} = 0, \qquad \nu = 0,\ldots, N -2.$$ In particular, $\mathcal{A}_1$ has at least $N -1$ sign changes in $\stackrel{\circ}{\Delta}_1 $.
We have $$\mathcal{A}(z) = a_0(z) + \sum_{k=1}^m a_{k}(z)\widehat{s}_{1,k}(z) \mp w(z) \int \frac{\mathcal{A}_1(x)}{(z-x)} \frac{d\sigma_1(x)}{w(x)} =$$ $$a_0(z) + \int \frac{\sum_{k=1}^{m} (w (x) a_{k}(z) - w (z) a_{k} (x)) d{s}_{1,k}(x) }{(z-x) w (x)} +
w (z) \int \frac{\mathcal{A}_1(x)}{(z-x)} \frac{d\sigma_1(x)}{w(x)} .$$ For each $k=1,\ldots,m$ $$\left( { w (x) a_{ k}(z) - w (z) a_{ k}(x) }\right)/{(z-x)}$$ is a polynomial in $z$. Therefore, $$P (z) := a_0(z) + \int \frac{\sum_{k=1}^{m} (w (x) a_{k}(z) - w (z) a_{k} (x)) d{s}_{1,k}(x) }{(z-x) w (x)}$$ represents a polynomial. Consequently $$\mathcal{A}(z) = P (z) + w(z) \int \frac{\mathcal{A}_1(x) d\sigma_1(x)}{(z-x)w (x)} =
w(z) \mathcal{O}(1/z^{N}), \quad z\to \infty.$$ These equalities imply that $$P (z) = w(z)\mathcal{O}(1/z), \qquad z \to \infty.$$ Therefore, $\deg P < \deg w$; moreover, since $\mathcal{A}/w$ and the integral are holomorphic in $\mathbb{C} \setminus \Delta_1$ and the zeros of $w$ lie in that region, $P/w$ is also holomorphic there, so $P$ vanishes at all the zeros of $w$ (counting multiplicities). Hence $P \equiv 0$. (If $w$ is a constant polynomial, then $P = w\,\mathcal{O}(1/z)$ tends to zero at infinity and we likewise get that $P \equiv 0$.) Thus, we have proved .
From our assumptions and , it follows that $$\frac{\mathcal{A}(z)}{w(z)} = \int \frac{\mathcal{A}_1(x)}{(z-x)} \frac{d\sigma_1(x)}{w(x)} = \mathcal{O}(1/z^{N}), \qquad z \to \infty.$$ Suppose that $N \geq 2$. We have the asymptotic expansion $$\int \frac{\mathcal{A}_1(x)}{(z-x)} \frac{d\sigma_1(x)}{w(x)} =$$ $$\sum_{\nu=0}^{N -2} \frac{d_{ \nu}}{z^{\nu +1}} + \int \frac{x^{N-1}\mathcal{A}_1(x)}{z^{N -1}(z-x) } \frac{d \sigma_1(x)}{w (x)}= \sum_{\nu=0}^{N-2} \frac{d_{ \nu}}{z^{\nu +1}} + \mathcal{O}( {1}/{z^{N}}), \quad z \to \infty,$$ where $$d_{ \nu} = \int x^{\nu} \mathcal{A}_1(x) \frac{d\sigma_1(x)}{ w (x)} ,\qquad \nu =0,\ldots,N -2.$$ Therefore, $$d_{ \nu} = 0, \qquad \nu=0,\ldots,N -2,$$ which is .
Suppose that $\mathcal{A}_1$ has at most $\widetilde{N} < N -1$ sign changes in $\stackrel{\circ}{\Delta}_1 $ at the points $x_1,\ldots, x_{\widetilde{N}}$. Take $q(x) = \prod_{k=1}^{\widetilde{N}} (x-x_k)$. According to $\eqref{eq:4}$ $$\int q(x) \mathcal{A}_1(x) \frac{d\sigma_1(x)}{ w (x)} = 0,$$ which is absurd because $q (a_1 + \sum_{k=2}^m a_k \widehat{s}_{2,k}) /w $ has constant sign in $\stackrel{\circ}{\Delta}_1 $ and $\sigma_1$ is a measure with constant sign in $\stackrel{\circ}{\Delta}_1 $ whose support contains infinitely many points. Thus, the number of sign changes must be greater than or equal to $N -1$, as claimed.
In [@FL4 Lemma 2.10], several formulas involving ratios of Cauchy transforms were proved. The most useful ones in this paper establish that $$\label{4.4}
\frac{\widehat{s}_{1,k}}{\widehat{s}_{1,1}} =
\frac{|s_{1,k}|}{|s_{1,1}|} - \langle \tau_{1,1},\langle s_{2,k},\sigma_1
\rangle \widehat{\rangle} , \qquad 2 \leq k \leq m.$$
We are now ready for the proof of Theorem \[unicidad\].
Let $(a_{{\bf n},0},\ldots,a_{{\bf n},m})$ be a type I multi-point Hermite-Padé approximation of $(\widehat{s}_{1,1},\ldots,\widehat{s}_{1,m})$ with respect to $w_{\bf n}$. From Definition \[type1\], formulas and follow directly from and , respectively. Relation is obtained from solving for $a_{{\bf n},0}$.
In the proof of Lemma \[reduc\] we saw that implies that $\mathcal{A}_{{\bf n},1}$ has at least $|{\bf n}|-1$ sign changes in $\stackrel{\circ}{\Delta}_1 $. We have that $(s_{2,2},\ldots,s_{2,m}) = \mathcal{N}(\sigma_2,\ldots,\sigma_m)$ forms a Nikishin system. According to [@FL4 Theorem 1.1], $\mathcal{A}_{{\bf n},1}$ can have at most $|{\bf n}| -1$ zeros in $\mathbb{C} \setminus \Delta_2$. Taking account of what we proved previously, it follows that $\mathcal{A}_{{\bf n},1}$ has exactly $|{\bf n}|-1$ simple zeros in $\stackrel{\circ}{\Delta}_1$ and it has no other zero in $\mathbb{C} \setminus \Delta_2$. This is true for any ${\bf n} \in \mathbb{Z}_+^m \setminus \{\bf 0\}$.
Suppose that for some ${\bf n} \in \mathbb{Z}_+^m \setminus \{\bf 0\}$ and some $j \in \{1,\ldots,m\}$, we have that $\deg a_{{\bf n},j} = \widetilde{n}_j -1 < n_j -1$. Then, according to [@FL4 Theorem 1.1] $\mathcal{A}_{{\bf n},1}$ could have at most $|{\bf n}| - n_j + \widetilde{n}_j -1 \leq |{\bf n}| -2$ zeros in $\mathbb{C} \setminus \Delta_2$. This is absurd because we have proved that it has $|{\bf n}| -1$ zeros in $\stackrel{\circ}{\Delta}_1$.
Now, suppose that for some ${\bf n} \in \mathbb{Z}_+^m \setminus \{\bf 0\}$, there exist two non collinear type I multi-point Padé approximants $\left(a_{{\bf n},0}, \ldots, a_{{\bf n},m}\right)$ and $\left(\widetilde{a}_{{\bf n},0}, \ldots, \widetilde{a}_{{\bf n},m}\right)$ of $(\widehat{s}_{1,1},\ldots,\widehat{s}_{1,m})$ with respect to $w_{\bf n}$. From it follows that $\left(a_{{\bf n},1}, \ldots, a_{{\bf n},m}\right)$ and $\left(\widetilde{a}_{{\bf n},1}, \ldots, \widetilde{a}_{{\bf n},m}\right)$ are not collinear. We know that $\deg a_{{\bf n},j} = \deg \widetilde{a}_{{\bf n},j} = n_j -1, j=1,\ldots,m$. Consequently, there exists some constant $C$ such that $\left({a}_{{\bf n},1}-C\widetilde{a}_{{\bf n},1}, \ldots, {a}_{{\bf n},m}-C \widetilde{a}_{{\bf n},m}\right) \neq {\bf 0}$ and $\deg(a_{{\bf n},{j}} - C\widetilde{a}_{{\bf n}, {j}}) < n_{ {j}} -1$ for some $j \in \{1,\ldots,m\}$. By linearity, $\left({a}_{{\bf n},0}-C\widetilde{a}_{{\bf n},0}, \ldots, {a}_{{\bf n},m}-C \widetilde{a}_{{\bf n},m}\right)$ is a multi-point type I Hermite-Padé approximant of $(\widehat{s}_{1,1},\ldots,\widehat{s}_{1,m})$ with respect to $w_{\bf n}$. This is not possible because $\deg(a_{{\bf n},{j}} - C\widetilde{a}_{{\bf n}, {j}}) < n_{ {j}} -1$. Therefore, non-collinear solutions cannot exist.
We still need to show that $\deg a_{{\bf n},0} = n_0-1$. To this end we need to transform $\mathcal{A}_{{\bf n},0}$. Let $j$ be the first component of $\bf n$ such that $n_j = \max_{k=1,\ldots,m} n_k $. Since $n_0 = n_j -1$, we have that either $j=1$ or $n_0 \geq n_k, k=1,\ldots,j-1$. If $j=1$, using and it follows that $$\mathcal{B}_{{\bf n},0} := \frac{\mathcal{A}_{{\bf n},0}}{\widehat{s}_{1,1}} = \ell_{1,1} a_{{\bf n},0} + \sum_{k=1}^m \frac{|s_{1,k}|}{|s_{1,1}|} a_{{\bf n},k} + a_{{\bf n},0} \widehat{\tau}_{1,1} - \sum_{k=2}^m a_{{\bf n},k} \langle \tau_{1,1}, \langle s_{2,k},\sigma_1 \rangle \widehat{\rangle},$$ where $$\mathcal{B}_{{\bf n},0}/w_{\bf n} \in \mathcal{H}(\mathbb{C} \setminus \Delta_1), \qquad \mathcal{B}_{{\bf n},0}(z)/w_{\bf n}(z)= \mathcal{O}(1/z^{|{\bf n}|-1}), \quad z\to \infty.$$ Using Lemma \[reduc\] it follows that $$\int x^{\nu} \mathcal{B}_{{\bf n},1}(x) \frac{d\tau_{1,1}(x)}{w_{\bf n}(x)} = 0, \qquad \nu = 0,\ldots, |{\bf n}|-3,$$ where $\mathcal{B}_{{\bf n},1} = a_{{\bf n},0} - \sum_{k=2}^m a_{{\bf n},k} \langle \langle\sigma_2,\sigma_1 \rangle,\sigma_3,\ldots,\sigma_k \widehat{\rangle}$. Hence $\mathcal{B}_{{\bf n},1}$ has at least $|{\bf n}|-2$ sign changes in $\stackrel{\circ}{\Delta}_1$. According to [@FL4 Theorem 1.1] the linear form $\mathcal{B}_{{\bf n},1}$ has at most $\deg a_{{\bf n},0} + n_2+\cdots+ n_m$ zeros in all of $\mathbb{C} \setminus \Delta_2$. Should $\deg a_{{\bf n},0} \leq n_0-2$, we would have that $\deg a_{{\bf n},0} + n_2+\cdots+ n_m \leq |{\bf n}|-3$, which contradicts that $\mathcal{B}_{{\bf n},1}$ has at least $|{\bf n}|-2$ zeros in $\stackrel{\circ}{\Delta}_1$. Thus, when $j=1$ it is true that $\deg a_{{\bf n},0} = n_0 -1$. In general, the proof is similar, as we will see.
Suppose that $j$, as defined in the previous paragraph, is $\geq 2$. Then, either $n_0 = n_k, k=1,\ldots,{j-1}$ or there exists $\overline{\jmath} < j$ for which $n_0 = n_k, k=1,\ldots,{\overline{\jmath}-1}$ and $n_0 > n_{\overline{\jmath}}$. In the first case, applying [@FL4 Lemma 2.12], we obtain that there exists a Nikishin system $(s_{1,1}^*,\ldots,s_{1,m}^*) =
{\mathcal{N}}(\sigma_1^*,\ldots,\sigma_m^*)$, a multi-index $\,{\bf n}^* = (n_0^*,\ldots,n_m^*) \in {\mathbb{Z}}_+^{m+1}$ which is a permutation of ${\bf n}$ with $n_0^* = n_j$, and polynomials with real coefficients $a_{{\bf n},k}^*, \deg a_{{\bf n},k}^* \leq n_k^* -1, k=0,\ldots,m$, such that $$\frac{\mathcal{A}_{{\bf n},0}}{\widehat{s}_{1,j}} = a_{{\bf n},0}^* + \sum_{k=1}^m a_{{\bf n},k}^* \widehat{s}_{1,k}^*.$$ Due to the structure of the values of the components of the multi-index, $a_{{\bf n},j}^* = (-1)^ja_{{\bf n},0}$ and $ n_j^* = n_0$ (see formula (31) in [@FL3]). We can proceed as before and find that $\deg a_{{\bf n},j}^* = n_j^* -1, j=1,\ldots,m$. In particular, $\deg a_{{\bf n},0} = n_0 -1$. In the other case, [@FL4 Lemma 2.12] gives that $$\frac{\mathcal{A}_{{\bf n},0}}{\widehat{s}_{1,j}} = a_{{\bf n},0}^* + \sum_{k=1}^m a_{{\bf n},k}^* \widehat{s}_{1,k}^*,$$ where $a_{{\bf n},\overline{\jmath}}^* = \pm a_{{\bf n},0} + C a_{{\bf n},\overline{\jmath}},$ $C\neq 0$ is some constant, and $n_{\overline{\jmath}}^* = n_0$ (see formula (31) in [@FL3]). Repeating the arguments employed above, we obtain that $\deg a_{{\bf n},j}^* = n_j^* -1, j=1,\ldots,m$. In particular, $\deg a_{{\bf n},0} = n_0 -1$, because we already know that $\deg a_{{\bf n},\overline{\jmath}} = n_{\overline{\jmath}} -1< n_0 -1$.
We wish to point out that in the statement of [@FL4 Theorem 1.1] there is a misprint on the last line, where $ {\mathbb{C}}$ should replace $\overline{\mathbb{C}}$. That is, it should refer to zeros at finite points. This can be checked by looking at the statements of [@FL4 Lemmas 2.1, 2.2] and the proof of [@FL4 Theorem 1.1] itself.
The notion of convergence in Hausdorff content plays a central role in the proof of Theorem \[converge\]. Let $B$ be a subset of the complex plane $\mathbb{C}$. By $\mathcal{U}(B)$ we denote the class of all coverings of $B$ by an at most countable set of disks. Set $$h(B)=\inf\left\{\sum_{i=1}^\infty
|U_i|\,:\,\{U_i\}\in\mathcal{U}(B)\right\},$$ where $|U_i|$ stands for the radius of the disk $U_i$. The quantity $h(B)$ is called the $1$-dimensional Hausdorff content of the set $B$.
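For orientation (these elementary examples are not in the original): $$h\big(\{z : |z - z_0| \leq r\}\big) = r, \qquad h\big([a,b]\big) = \frac{b-a}{2}, \qquad h\big(\{z_1,z_2,z_3,\ldots\}\big) = 0,$$ the first two equalities following by projecting any covering onto a diameter, and the last one by covering the $i$-th point with a disk of radius $\varepsilon 2^{-i}$. Thus a set may be infinite and still be negligible for $h$, which is what makes the notion of convergence introduced next weaker than uniform convergence.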
Let $(\varphi_n)_{n\in\mathbb{N}}$ be a sequence of complex functions defined on a domain $D\subset\mathbb{C}$ and $\varphi$ another function defined on $D$ (the value $\infty$ is permitted). We say that $(\varphi_n)_{n\in\mathbb{N}}$ converges in Hausdorff content to the function $\varphi$ inside $D$ if for each compact subset $\mathcal{K}$ of $D$ and for each $\varepsilon >0$, we have $$\lim_{n\to\infty} h\{z\in \mathcal{K} : |\varphi_n(z)-\varphi(z)|>\varepsilon\}=0$$ (by convention $\infty \pm \infty = \infty$). We denote this by writing $h$-$\lim_{n\to \infty} \varphi_n = \varphi$ inside $D$.
To obtain Theorem \[converge\] we first prove with convergence in Hausdorff content in place of uniform convergence (see Lemma \[haus\] below). We need the following notion.
Let $s \in \mathcal{M}(\Delta)$ where $\Delta$ is contained in a half line of the real axis. Fix an arbitrary $\kappa \geq -1$. Consider a sequence of polynomials $(w_n)_{n \in \Lambda}, \Lambda \subset \mathbb{Z}_+,$ such that $\deg w_n = \kappa_n \leq 2n + \kappa +1$, whose zeros lie in $\mathbb{R} \setminus \Delta$. Let $(R_n)_{n \in \Lambda}$ be a sequence of rational functions $R_n = p_n/q_n$ with real coefficients satisfying the following conditions for each $n \in \Lambda$:
- $\deg p_n \leq n + \kappa,\quad \deg q_n \leq n, \quad q_n \not\equiv 0,$
- $ {(q_n \widehat{s} - p_n)}/{w_n} \in \mathcal{H}(\mathbb{C}\setminus \Delta)\quad$ and $\quad {(q_n \widehat{s} - p_n)(z)}/{w_n(z)} = \mathcal{O}\left( {1}/{z^{n+1 - \ell}}\right), \quad z \to \infty,$ where $\ell \in \mathbb{Z}_+$ is fixed.
We say that $(R_n)_{n \in \Lambda}$ is a sequence of incomplete diagonal multi-point Padé approximants of $\widehat{s}$.
Notice that in this construction for each $n \in \Lambda$ the number of free parameters equals $2n + \kappa +2$, whereas the number of homogeneous linear equations to be solved in order to find $q_n$ and $p_n$ is equal to $2n + \kappa - \ell + 1$. When $\ell =0$ there is only one more parameter than equations, and $R_n$ is uniquely determined and coincides with a (near) diagonal multi-point Padé approximant. When $\ell \geq 1$ uniqueness is not guaranteed, hence the term incomplete.
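As a sanity check (our example, under the stated definition), take $\kappa = -1$, $\ell = 0$ and $w_n \equiv 1$. Then a)-b) reduce to $\deg p_n \leq n-1$, $\deg q_n \leq n$ and $(q_n\widehat{s} - p_n)(z) = \mathcal{O}(1/z^{n+1})$, which forces $p_n(z) = \int \frac{q_n(z)-q_n(x)}{z-x}\, ds(x)$ together with the orthogonality relations $$\int x^{\nu} q_n(x)\, ds(x) = 0, \qquad \nu = 0,\ldots,n-1;$$ hence $q_n$ is the $n$-th orthogonal polynomial with respect to $s$ and $R_n = p_n/q_n$ is the classical diagonal Padé approximant of $\widehat{s}$, so that Lemma \[BusLop\] contains, in particular, a Hausdorff content version of the Markov–Stieltjes convergence theorem.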
For sequences of incomplete diagonal multi-point Padé approximants, the following Stieltjes type theorem was proved in [@BL Lemma 2] in terms of convergence in Hausdorff content.
\[BusLop\] Let $s \in \mathcal{M}(\Delta)$ be given where $\Delta$ is contained in a half line. Assume that $(R_n)_{n \in \Lambda}$ satisfies a)-b) and either the number of zeros of $w_n$ lying on a closed bounded segment of $\mathbb{R} \setminus \Delta$ tends to infinity as $n\to\infty, n \in \Lambda$, or $s$ satisfies Carleman’s condition. Then $$h-\lim_{n \in \Lambda} R_n = \widehat{s},\qquad \mbox{inside}\qquad \mathbb{C} \setminus \Delta.$$
We will need to use Lemma \[BusLop\] for different measures, and Theorem \[momentos\] comes to our aid.
Without loss of generality, we can assume that $\Delta \subset \mathbb{R}_+$ and that $\sigma_1$ is positive. Let $(c_n)_{n \in \mathbb{Z}_+}$ and $(\widetilde{c}_n)_{n \in \mathbb{Z}_+}$ denote the sequences of moments of $\sigma_1$ and $s_{1,2}$, respectively. Since $\widehat{\sigma}_2$ has constant sign on $\mathbb{R}_+$, we have that $$|\widetilde{c}_n| = \int x^{n} |\widehat{\sigma}_2(x)| d\sigma_1(x) \leq \int_0^1 x^{n} |\widehat{\sigma}_2(x)| d\sigma_1(x) + \int_1^\infty x^{n} |\widehat{\sigma}_2(x)| d\sigma_1(x) \leq
|s_{1,2}| + C c_n,$$ where $C = \max \{|\widehat{\sigma}_2(x)|: x \in [1,+\infty)\} < \infty$ because $\lim_{x\to \infty}\widehat{\sigma}_2(x) =0$. Consequently, $$\sum_{n \geq 1} |\widetilde{c}_n|^{-1/2n} \geq \sum_{n \geq 1} (|s_{1,2}| + C c_n )^{-1/2n} \geq
\sum_{\{n:C c_n < |s_{1,2}|\}}(2|s_{1,2}|)^{-1/2n} + \sum_{\{n:C c_n \geq |s_{1,2}|\}}(2C c_n )^{-1/2n}.$$ If the first sum after the last inequality contains infinitely many terms then that sum is already divergent. If it has finitely many terms then Carleman’s condition for $\sigma_1$ guarantees that the second sum is divergent. Thus, $s_{1,2}$ satisfies Carleman’s condition.
To prove the second part we need to express the moments $(d_n)_{n\in \mathbb{Z}_+} $ of $\tau_1$ in terms of the moments of $\sigma_1$. In the proof of [@FL4 Lemma 2.3] we showed that the moments $(d_n)_{n \in \mathbb{Z}_+}$ are finite (since all the moments of $\sigma_1$ are finite) and can be obtained solving the system of equations $$\begin{array}{ccl}
1 & = & d_{-2} c_0 \\
0 & = & d_{-2} c_1 + d_{-1} c_0\\
0 & = & d_{-2} c_2 + d_{-1} c_1 + d_0 c_0\\
\vdots & = & \vdots \\
0 & = & d_{-2} c_{n+2} + d_{-1} c_{n+1} + \cdots + d_n c_0\,\,.
\end{array}$$ (The values of $d_{-2}$ and $d_{-1}$ turn out to be the coefficients $a$ and $b$, respectively, of the polynomial $\ell_1$ in the decomposition of $1/\widehat{\sigma}_1$; see the paragraph after formula (9) in [@FL4].)
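For instance (a check we add), the first three equations give $$d_{-2} = \frac{1}{c_0}, \qquad d_{-1} = -\frac{c_1}{c_0^2}, \qquad d_{0} = \frac{c_1^2 - c_0 c_2}{c_0^3},$$ in agreement, for $n=0$, with the determinant formula obtained next, since $\Omega_0 = c_1^2 - c_0 c_2$.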
To find $d_n$ we apply Cramer’s rule and we get $$\label{dn} d_n = (-1)^{n}\Omega_n/c_0^{n+3}$$ where $c_0^{n+3}$ gives the value of the determinant of the system and $$\Omega_n = \left|
\begin{array}{cccc}
c_1 & c_0 & 0 & \cdots \\
c_2 & c_1 & \ddots & \ddots \\
\vdots & \ddots & \ddots & \ddots \\
c_{n+2}& c_{n+1} & \cdots & c_1
\end{array}
\right|$$ is the determinant of a lower Hessenberg matrix of dimension $n+2$ with constant diagonal terms. The expansion of the determinant $\Omega_n$ has several characteristics:
- It has exactly $2^{n+1}$ non zero terms.
- For each $n \geq 0$, the sum of the subindexes of each non zero term equals $n+2$ (if a factor is repeated its subindex is counted as many times as it is repeated).
- The number of factors in each term is equal to $n+2$.
The last assertion is trivial. To calculate the number of non-zero terms, notice that from the first row we can only choose 2 non-zero entries. Once this is done, from the second row we can only choose 2 non-zero entries, and so forth, until we get to the last row, where only one non-zero entry is left to choose.
Regarding the second assertion, we use induction. When $n=0$ it is obvious. Assume that each non-zero term in the expansion of $\Omega_n$ has the property that the sum of its subindexes equals $n+2$ and let us show that each non-zero term in the expansion of $\Omega_{n+1}$ has the property that the sum of its subindexes equals $n+3$. Expanding $\Omega_{n+1}$ by its first row we have $$\Omega_{n+1} = c_1 \Omega_n - c_0 \Omega_{n}^*,$$ where $\Omega_n^*$ is obtained by substituting the first column of $\Omega_n$ with the column vector $(c_2,\ldots,c_{n+3})^t$ (the superscript $t$ means taking transpose). Using the induction hypothesis it easily follows that in each term arising from $c_1 \Omega_n$ and $c_0 \Omega_{n}^*$ the sum of its subindexes must equal $n+3$.
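For instance (our verification of the three properties when $n=1$), $$\Omega_1 = \left| \begin{array}{ccc} c_1 & c_0 & 0 \\ c_2 & c_1 & c_0 \\ c_3 & c_2 & c_1 \end{array} \right| = c_1^3 - c_0c_1c_2 - c_0c_1c_2 + c_0^2c_3,$$ so the expansion has $4 = 2^{n+1}$ non-zero terms, each with $3 = n+2$ factors, and in every term the subindexes add up to $3 = n+2$.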
Using the properties proved above we obtain that the general expression of $\Omega_n$ is $$\Omega_n = \sum_{j=1}^{n+2 }\sum_{\alpha_1 + \cdots + \alpha_{j} = n+2} \varepsilon_{\alpha} c_0^{n+2-j}c_{\alpha_1}\cdots c_{\alpha_{j}},$$ where $\alpha = (\alpha_1, \ldots, \alpha_{j}), 1 \leq \alpha_k \leq n+2, 1\leq k \leq j$ and $\varepsilon_{\alpha} = \pm1$. Thus $$\label{omegan} |\Omega_n| \leq \sum_{j=1}^{n+2 }\sum_{\alpha_1 + \cdots + \alpha_{j} = n+2} c_0^{n+2-j}c_{\alpha_1}\cdots c_{\alpha_{j}}.$$ In this sum, there is only one term which contains the factor $c_{n+2}$, and that is when $j=1$. That term is $c_0^{n+1}c_{n+2}$. In the rest of the terms $1 \leq {\alpha_k} \leq n+1$. Let us prove that $$\label{cota} c_0^{n+2-j}c_{\alpha_1}\cdots c_{\alpha_{j}} \leq c_0^{n+1}c_{n+2}\qquad \mbox{for all} \qquad \alpha.$$ In fact, using the Hölder inequality on each factor except the first, it follows that $$c_0^{n+2-j}c_{\alpha_1}\cdots c_{\alpha_{j}} \leq c_0^{n+2-j} \left(\int x^{n+2} d\sigma_1(x)\right)^{\sum_{k=1}^j \alpha_k/(n+2)} \left(\int d\sigma_1(x)\right)^{j - (\sum_{k=1}^j \alpha_k)/(n+2)}.$$ It remains to employ that $\sum_{k=1}^j \alpha_k=n+2$ to complete the proof of .
From , , and , we have that $$d_n \leq 2^{n+1}c_{n+2}/c_0^2$$ and the Carleman condition for $\tau_1$ readily follows.
An immediate consequence of Theorem \[momentos\] is the following
\[momentosNik\] Let $(s_{1,1},\ldots,s_{1,m}) = \mathcal{N}(\sigma_1,\ldots,\sigma_m)$ be such that $\Delta_1$ is contained in a half line and $\sigma_1$ satisfies Carleman’s condition. Then, for all $j=1,\ldots,m$ we have that $s_{1,j}$ and $\tau_{1,j}$ satisfy Carleman’s condition.
For $s_{1,1}$ the assertion is the hypothesis. Let $j\in \{2,\ldots,m\}$. Notice that $s_{1,j} = \langle \sigma_1, s_{2,j} \rangle$ and $(s_{1,1},s_{1,j}) = \mathcal{N}(\sigma_1,s_{2,j})$, so $s_{1,j}, j=2,\ldots,m,$ satisfies Carleman’s condition due to Theorem \[momentos\]. Since $s_{1,j},j=1,\ldots,m,$ satisfies Carleman’s condition, Theorem \[momentos\] also gives that $\tau_{1,j}, j=1,\ldots,m,$ satisfy Carleman’s condition.
Actually we will use this result for $(s_{m,m},\ldots,s_{m,1}) = \mathcal{N}(\sigma_m,\ldots,\sigma_1)$.
Proof of Theorem \[converge\] {#proofmain}
=============================
The first step consists in proving a weaker version of .
\[haus\] Assume that the conditions of Theorem \[converge\] are fulfilled. Then, for each fixed $j=0,\ldots,m-1$ $$\label{convHaus}
h-\lim_{n\in \Lambda}\frac{a_{{\bf n}, j}}{a_{{\bf n},m}} = (-1)^{m-j}\widehat{s}_{m,j+1}, \quad h-\lim_{n\in \Lambda}\frac{a_{{\bf n}, m}}{a_{{\bf n},j}} = (-1)^{m-j}\widehat{s}_{m,j+1}^{-1}\quad \mbox{inside} \quad \mathbb{C} \setminus \Delta_m.$$ There exists a constant $C_1$, independent of ${\bf n} \in \Lambda$, such that for each $j=0,\ldots,m$ and ${\bf n} \in \Lambda,$ the polynomials $a_{{\bf n},j}$ have at least $(|{\bf n}|/m) - C_1$ zeros in $\stackrel{\circ}{\Delta}_m $.
If $m=1$ the statement reduces directly to Lemma \[BusLop\], so without loss of generality we can assume that $m \geq 2$. Fix ${\bf n} \in \Lambda$.
In Theorem \[unicidad\] we proved that $\mathcal{A}_{{\bf n},1}$ has exactly $|{\bf n}| -1$ simple zeros in $\mathbb{C} \setminus \Delta_2$ and they all lie in $\stackrel{\circ}{\Delta}_1 $. Therefore, there exists a polynomial $w_{{\bf n},1}, \deg w_{{\bf n},1} = |{\bf n}| -1,$ whose zeros lie in $\stackrel{\circ}{\Delta}_1 $ such that $$\label{A1} \frac{\mathcal{A}_{{\bf n},1}}{w_{{\bf n},1}} \in \mathcal{H}(\mathbb{C} \setminus \Delta_2) \qquad \mbox{and} \qquad \frac{\mathcal{A}_{{\bf n},1}}{w_{{\bf n},1}} = \mathcal{O}\left(\frac{1}{z^{|{\bf n}| - \overline{n}_1}}\right), \qquad z\to \infty,$$ where $\overline{n}_j = \max \{n_k: k=j,\ldots,m\}$.
From and Lemma \[reduc\] it follows that $$\label{ortog2}
\int x^{\nu} \mathcal{A}_{{\bf n},2}(x) \frac{d\sigma_2(x)}{w_{{\bf n},1}(x)} = 0, \qquad \nu = 0,\ldots,|{\bf n}| - \overline{n}_1 -2,$$ and $$\label{resto2}
\frac{\mathcal{A}_{{\bf n},1}(z)}{w_{{\bf n},1}(z)} = \int \frac{ \mathcal{A}_{{\bf n},2}(x) d\sigma_2(x)}{w_{{\bf n},1 }(x)(z-x)}.$$ In particular, implies that $\mathcal{A}_{{\bf n},2}$ has at least $|{\bf n}| - \overline{n}_1 -1$ sign changes in $\stackrel{\circ}{\Delta}_2 $. (We cannot claim that $\mathcal{A}_{{\bf n},2}$ has exactly $|{\bf n}| - \overline{n}_1 -1$ simple zeros in $\mathbb{C} \setminus \Delta_3$ and that they all lie in $\stackrel{\circ}{\Delta}_2$ except if $\overline{n}_1 = n_1$.) Therefore, there exists a polynomial $w_{{\bf n},2}, \deg w_{{\bf n},2} = |{\bf n}| - \overline{n}_1 -1$, whose zeros lie in $\stackrel{\circ}{\Delta}_2$, such that $$\frac{\mathcal{A}_{{\bf n},2}}{w_{{\bf n},2}} \in \mathcal{H}(\mathbb{C} \setminus \Delta_3) \qquad \mbox{and} \qquad \frac{\mathcal{A}_{{\bf n},2}}{w_{{\bf n},2}} = \mathcal{O}\left(\frac{1}{z^{|{\bf n}| - \overline{n}_1 - \overline{n}_2}}\right) , \quad z \to \infty.$$
Iterating this process, using Lemma \[reduc\] several times, on step $j,\, j \in \{1,\ldots,m\},$ we find that there exists a polynomial $w_{{\bf n},j}, \deg w_{{\bf n},j} = |{\bf n}| -\overline{n}_1 -\cdots - \overline{n}_{j-1}- 1,$ whose zeros are points where $\mathcal{A}_{{\bf n},j}$ changes sign in $\stackrel{\circ}{\Delta}_j $ such that $$\label{Anj} \frac{\mathcal{A}_{{\bf n},j}}{w_{{\bf n},j}} \in \mathcal{H}(\mathbb{C} \setminus \Delta_{j+1}) \qquad \mbox{and} \qquad \frac{\mathcal{A}_{{\bf n},j}}{w_{{\bf n},j}} = \mathcal{O}\left(\frac{1}{z^{|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_j}}\right) , \quad z \to \infty.$$ This process concludes as soon as $|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_j \leq 0$. Since $\lim_{{\bf n} \in \Lambda} |{\bf n}| = \infty$, because of we can always take $m$ steps for all ${\bf n} \in \Lambda$ with $|{\bf n}|$ sufficiently large. In what follows, we only consider such ${\bf n}$’s.
When $n_1 = \overline{n}_1\geq \cdots \geq n_m = \overline{n}_m$, we obtain that $\mathcal{A}_{{\bf n},m} \equiv a_{{\bf n},m}$ has $n_m -1$ sign changes in $\stackrel{\circ}{\Delta}_m $ and since $\deg a_{{\bf n},m} \leq n_m-1$ this means that $\deg a_{{\bf n},m} = n_m-1$ and all its zeros lie in $\stackrel{\circ}{\Delta}_m $. (In fact, in this case we can prove that $\mathcal{A}_{{\bf n},j}, j=1,\ldots,m$ has exactly $|{\bf n}| - n_1-\cdots -n_{j-1} -1$ zeros in $\mathbb{C} \setminus \Delta_{j+1}$ that they are all simple and lie in $\stackrel{\circ}{\Delta}_j $, where $\Delta_{m+1} = \emptyset$, compare with [@FLLS Propositions 2.5, 2.7].)
In general, we have that $a_{{\bf n},m}$ has at least $|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{m-1}-1$ sign changes in $\stackrel{\circ}{\Delta}_m $; therefore, the number of zeros of $a_{{\bf n},m}$ which may lie outside of $\Delta_m$ is bounded by $$\deg a_{{\bf n},m} - (|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{m-1}-1) \leq \sum_{k=1}^{m-1} (\overline{n}_k -n_k) \leq (m-1)C,$$ where $C$ is the constant given in , which does not depend on ${\bf n} \in \Lambda$.
For $j=m-1$ we have that there exists $w_{{\bf n},m-1}, \deg w_{{\bf n},m-1} = |{\bf n}| - \overline{n}_{1} - \cdots - \overline{n}_{m-2} -1,$ whose zeros lie in $\stackrel{\circ}{\Delta}_{m-1}$ such that $$\frac{\mathcal{A}_{{\bf n},m-1}}{w_{{\bf n},m-1}} = \frac{a_{{\bf n},m-1} + a_{{\bf n},m} \widehat{\sigma}_m}{w_{{\bf n},m-1}} \in \mathcal{H}(\mathbb{C} \setminus \Delta_m)\quad \mbox{and} \quad \frac{\mathcal{A}_{{\bf n},m-1}}{w_{{\bf n},m-1}} = \mathcal{O}\left(\frac{1}{z^{|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{m-1}}}\right) ,\,\, z \to \infty,$$ where $\deg a_{{\bf n},m-1} \leq n_{m-1} -1, \deg a_{{\bf n},m} \leq n_{m} -1$. Thus, using it is easy to check that $(a_{{\bf n},m-1}/a_{{\bf n},m})_{ n \in \Lambda}$ forms a sequence of incomplete diagonal multi-point Padé approximants of $-\widehat{\sigma}_m$ satisfying a)-b) with appropriate values of $n,\kappa$ and $\ell$. Due to Lemma \[BusLop\] it follows that $$h-\lim_{n\in \Lambda} \frac{a_{{\bf n},m-1}}{a_{{\bf n},m}} = - \widehat{\sigma}_m, \qquad \mbox{inside} \qquad \mathbb{C} \setminus \Delta_m.$$ Dividing by $\widehat{\sigma}_m$ and using , we also have $$\frac{\mathcal{A}_{{\bf n},m-1}}{\widehat{\sigma}_m w_{{\bf n},m-1}} = \frac{a_{{\bf n},m-1} \widehat {\tau}_m + b_{{\bf n},m-1}}{w_{{\bf n},m-1}} \in \mathcal{H}(\mathbb{C} \setminus \Delta_m),$$ where $b_{{\bf n},m-1} = a_{{\bf n},m} + \ell_m a_{{\bf n},m-1}$ and $$\frac{\mathcal{A}_{{\bf n},m-1}}{\widehat{\sigma}_m w_{{\bf n},m-1}} = \mathcal{O}\left(\frac{1}{z^{|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{m-1}-1}}\right) ,\,\, z \to \infty.$$ Consequently, $(b_{{\bf n},m-1}/a_{{\bf n},m-1})_{ n \in \Lambda}$ forms a sequence of incomplete diagonal multi-point Padé approximants of $-\widehat{\tau}_m$ satisfying a)-b) with appropriate values of $n,\kappa$ and $\ell$. Then Lemma \[BusLop\] and Corollary \[momentosNik\] imply that $$h-\lim_{n\in \Lambda} \frac{b_{{\bf n},m-1}}{a_{{\bf n},m-1}} = - \widehat{\tau}_m, \qquad \mbox{inside} \qquad \mathbb{C} \setminus \Delta_m,$$ or, equivalently, $$h-\lim_{n\in \Lambda} \frac{a_{{\bf n},m}}{a_{{\bf n},m-1}} = - \widehat{\sigma}_m^{-1}, \qquad \mbox{inside} \qquad \mathbb{C} \setminus \Delta_m.$$ We have proved for $j=m-1$.
For $j= m-2$, we have shown that there exists a polynomial $w_{{\bf n},m-2}, \deg w_{{\bf n},m-2} = |{\bf n}| -\overline{n}_1 - \cdots \overline{n}_{m-3}- 1,$ whose zeros lie in $\stackrel{\circ}{\Delta}_{m-2} $ such that $$\frac{\mathcal{A}_{{\bf n},m-2}}{w_{{\bf n},m-2}} = \frac{a_{{\bf n},m-2} + a_{{\bf n},m-1} \widehat{\sigma}_{m-1} + a_{{\bf n},m} \langle \sigma_{m-1},\sigma_m\widehat{\rangle}}{w_{{\bf n},m-2}} \in \mathcal{H}(\mathbb{C} \setminus \Delta_{m-1})$$ and $$\frac{\mathcal{A}_{{\bf n},m-2}}{w_{{\bf n},m-2}} = \mathcal{O}\left(\frac{1}{z^{|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{m-2}}}\right), \qquad z \to \infty.$$ However, using and , we obtain $$\frac{a_{{\bf n},m-2} + a_{{\bf n},m-1} \widehat{\sigma}_{m-1} + a_{{\bf n},m} \langle \sigma_{m-1},\sigma_m\widehat{\rangle}}{\widehat{\sigma}_{m-1}} =$$ $$(\ell_{m-1} a_{{\bf n},m-2}+ a_{{\bf n},m-1} + C_1 a_{{\bf n},m})+ a_{{\bf n},m-2}\widehat{\tau}_{m-1} - a_{{\bf n},m} \langle {\tau}_{m-1}, \langle \sigma_{m}, \sigma_{m-1} \rangle \widehat{\rangle},$$ where $\deg \ell_{m-1} = 1$ and $C_1$ is a constant. Consequently, $ {\mathcal{A}_{{\bf n},m-2}}/{\widehat{\sigma}_{m-1} } $ adopts the form of $\mathcal{A}$ in Lemma \[reduc\], $ {\mathcal{A}_{{\bf n},m-2}}/({\widehat{\sigma}_{m-1}w_{{\bf n},m-2} }) \in \mathcal{H}(\mathbb{C} \setminus \Delta_{m-1})$, and $$\label{Anm} \frac{\mathcal{A}_{{\bf n},m-2}}{\widehat{\sigma}_{m-1}w_{{\bf n},m-2}} = \mathcal{O}\left(\frac{1}{z^{|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{m-2} -1}}\right), \qquad z \to \infty.$$ From in Lemma \[reduc\] it follows that for $\nu = 0,\ldots, |{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{m-2} -3$ $$\int x^{\nu} \left({a_{{\bf n},m-2}(x) - a_{{\bf n},m}(x) \langle \sigma_{m}, \sigma_{m-1} \widehat{\rangle}(x)} \right) \frac{d \tau_{m-1}(x)}{w_{{\bf n},m-2}(x)} = 0.$$
Therefore, $ a_{{\bf n},m-2}- a_{{\bf n},m} \langle \sigma_{m}, \sigma_{m-1} \widehat{\rangle} \in \mathcal{H}(\mathbb{C} \setminus \Delta_m)$ must have at least $|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{m-2} -2$ sign changes in $\stackrel{\circ}{\Delta}_{m-1}$. This means that there exists a polynomial $w_{{\bf n},m-2}^*, \deg w_{{\bf n},m-2}^* = |{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{m-2} -2,$ whose zeros are simple and lie in $\stackrel{\circ}{\Delta}_{m-1}$ such that $$\frac{a_{{\bf n},m-2}- a_{{\bf n},m} \langle \sigma_{m}, \sigma_{m-1} \widehat{\rangle}}{w_{{\bf n},m-2}^*} \in \mathcal{H}(\mathbb{C} \setminus \Delta_m)$$ and $$\frac{a_{{\bf n},m-2}- a_{{\bf n},m} \langle \sigma_{m}, \sigma_{m-1} \widehat{\rangle}}{w_{{\bf n},m-2}^*} = \mathcal{O}\left(\frac{1}{z^{{|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{m-3}-2\overline{n}_{m-2}-1}}}\right), \qquad z\to \infty.$$ Due to , this implies that $(a_{{\bf n},m-2}/a_{{\bf n},m}), n \in \Lambda,$ is a sequence of incomplete diagonal Padé approximants of $\langle \sigma_{m}, \sigma_{m-1} \widehat{\rangle}$. By Lemma \[BusLop\] and Corollary \[momentosNik\] we obtain its convergence in Hausdorff content to $\langle \sigma_{m}, \sigma_{m-1} \widehat{\rangle}$. To prove the other part in , we divide by $\langle \sigma_m,\sigma_{m-1}\widehat{\rangle}(z)$ use and proceed as we did in the case $j=m-1$.
Let us prove in general. Fix $j \in \{0,\ldots,m-3\}$ (for $j=m-2,m-1$ it has already been proved). Having in mind we need to reduce $\mathcal{A}_{{\bf n},j}$ so as to eliminate all $a_{{\bf n},k}, k=j+1,\ldots,m-1$. We start by eliminating $a_{{\bf n},j+1}$. Consider the ratio $\mathcal{A}_{{\bf n},j}/\widehat{\sigma}_{j+1}$. Using and we obtain $$\frac{\mathcal{A}_{{\bf n},j}}{\widehat{\sigma}_{j+1}} = \left(\ell_{j+1} a_{{\bf n},j}+ \sum_{k=j+1}^m \frac{|s_{j+1,k}|}{|\sigma_{j+1}|} a_{{\bf n},k} \right) + a_{{\bf n},j}\widehat{\tau}_{j+1} - \sum_{k=j+2}^m a_{{\bf n},k} \langle {\tau}_{j+1}, \langle s_{j+2,k}, \sigma_{j+1} \rangle \widehat{\rangle},$$ which has the form of $\mathcal{A}$ in Lemma \[reduc\], where $ {\mathcal{A}_{{\bf n},j}}/({\widehat{\sigma}_{j+1}w_{{\bf n},j}}) \in \mathcal{H}(\mathbb{C} \setminus \Delta_{j+1})$, and $$\frac{\mathcal{A}_{{\bf n},j}}{\widehat{\sigma}_{j+1}w_{{\bf n},j}} = \mathcal{O}\left(\frac{1}{z^{|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{j} -1}}\right), \qquad z \to \infty.$$ From of Lemma \[reduc\], we obtain that for $\nu = 0,\ldots,|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{j} -3$ $$0 = \int x^{\nu} \left( a_{{\bf n},j}(x) - \sum_{k=j+2}^m a_{{\bf n},k} \langle s_{j+2,k}, \sigma_{j+1} \widehat{\rangle}(x) \right)\frac{d\tau_{j+1}(x)}{w_{{\bf n},j}(x)},$$ which implies that the function in parentheses under the integral sign has at least $|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{j} -2$ sign changes in $\stackrel{\circ}{\Delta}_{j+1} $. In turn, it follows that there exists a polynomial $\widetilde{w}_{{\bf n},j }, \deg \widetilde{w}_{{\bf n},j } = |{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{j} -2$, whose zeros are simple and lie in $\stackrel{\circ}{\Delta}_{j+1} $ such that $$\frac{a_{{\bf n},j} - \sum_{k=j+2}^m a_{{\bf n},k} \langle s_{j+2,k}, \sigma_{j+1} \widehat{\rangle} }{\widetilde{w}_{{\bf n},j }} \in \mathcal{H}(\mathbb{C} \setminus \Delta_{j+2})$$ and $$\frac{a_{{\bf n},j} - \sum_{k=j+2}^m a_{{\bf n},k} \langle s_{j+2,k}, \sigma_{j+1} \widehat{\rangle} }{\widetilde{w}_{{\bf n},j }} = \mathcal{O}\left(\frac{1}{z^{|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{j-1}-2\overline{n}_{j} -1}}\right), \qquad z \to \infty.$$ Notice that $a_{{\bf n},j+1}$ has been eliminated and that $$\langle s_{j+2,k}, \sigma_{j+1} {\rangle} = \langle \langle \sigma_{j+2},\sigma_{j+1}\rangle, \sigma_{j+3}, \ldots,\sigma_k \rangle, \qquad k = j+3,\ldots,m.$$
Now we must do away with $a_{{\bf n},j+2}$ in $a_{{\bf n},j} - \sum_{k=j+2}^m a_{{\bf n},k} \langle s_{j+2,k}, \sigma_{j+1} \widehat{\rangle}$ (in case that $j+2 < m$). To this end, we consider the ratio $$\frac{a_{{\bf n},j} - \sum_{k=j+2}^m a_{{\bf n},k} \langle s_{j+2,k}, \sigma_{j+1} \widehat{\rangle} }{\langle \sigma_{j+2},\sigma_{j+1}\widehat{\rangle}}$$ and repeat the arguments employed above with $\mathcal{A}_{{\bf n},j}$. After $m-j-2$ reductions obtained applying consecutively Lemma \[reduc\], we find that there exists a polynomial which we denote $w_{{\bf n},j}^*, \deg w_{{\bf n},j}^* = |{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{j-1}-(m-j-1)\overline{n}_{j} -2$, whose zeros are simple and lie in $\stackrel{\circ}{\Delta}_{m-1} $ such that $$\frac{a_{{\bf n},j} - (-1)^{m-j} a_{{\bf n},m} \langle \sigma_{m},\ldots, \sigma_{j+1} \widehat{\rangle}}{w_{{\bf n},j}^*} \in \mathcal{H}(\mathbb{C} \setminus \Delta_m)$$ and $$\frac{a_{{\bf n},j}- (-1)^{m-j} a_{{\bf n},m} \langle \sigma_{m},\ldots, \sigma_{j+1} \widehat{\rangle}}{w_{{\bf n},j}^*} = \mathcal{O}\left(\frac{1}{z^{{|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{j-1}-(m-j)\overline{n}_{j}-1}}}\right), \qquad z\to \infty.$$ Dividing by $(-1)^{m-j} \langle \sigma_{m},\ldots, \sigma_{j+1} \widehat{\rangle},$ from here we also get that $$\frac{a_{{\bf n},j}(-1)^{m-j}\langle \sigma_{m},\ldots, \sigma_{j+1} \widehat{\rangle}^{-1} - a_{{\bf n},m} }{w_{{\bf n},j}^*} \in \mathcal{H}(\mathbb{C} \setminus \Delta_m)$$ and $$\frac{a_{{\bf n},j}(-1)^{m-j}\langle \sigma_{m},\ldots, \sigma_{j+1} \widehat{\rangle}^{-1}- a_{{\bf n},m} }{w_{{\bf n},j}^*} = \mathcal{O}\left(\frac{1}{z^{{|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{j-1}-(m-j)\overline{n}_{j}-2}}}\right), \qquad z\to \infty.$$ On account of , these relations imply that $(a_{{\bf n},j}/a_{{\bf n},m}), {\bf n} \in \Lambda,$ is a sequence of incomplete diagonal multi-point Padé approximants of $(-1)^{m-j}\langle \sigma_{m},\ldots, \sigma_{j+1} \widehat{\rangle}$ and $(a_{{\bf n},m}/a_{{\bf n},j}), {\bf n} \in \Lambda,$ is a sequence of incomplete diagonal multi-point Padé approximants of $(-1)^{m-j}\langle \sigma_{m},\ldots, \sigma_{j+1} \widehat{\rangle}^{-1}$. Since $\langle \sigma_{m},\ldots, \sigma_{j+1} \widehat{\rangle}^{-1} = \widehat{\tau}_{m,j+1} + \ell_{m,j+1},$ from Lemma \[BusLop\] and Corollary \[momentosNik\] we obtain .
Going one step further using Lemma \[reduc\], we also obtain that $$0 = \int x^{\nu} a_{{\bf n},j}(x) \frac{d \tau_{m,j+1}}{w_{{\bf n},j}^*(x)},\qquad \nu=0,\ldots,|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{j-1}-(m-j)\overline{n}_{j}-4$$ which implies that $a_{{\bf n},j}$ has at least $|{\bf n}| - \overline{n}_1 - \cdots-\overline{n}_{j-1}-(m-j)\overline{n}_{j}-3$ sign changes in $\stackrel{\circ}{\Delta}_{m} $. From we obtain that there exists a non-negative constant $C_1$, independent of $n\in \Lambda$, such that the number of zeros of $a_{{\bf n},j}, j=0,\ldots,m,$ in $\Delta_m$ is bounded from below by $(|{\bf n}|/m) - C_1$. This settles the last statement.
In the case of decreasing components in $\bf n$, we saw that all the zeros of $a_{{\bf n},m}$ lie in $\Delta_m$ and [@gon Lemma 1] would allow us to derive immediately uniform convergence on each compact subset of $\mathbb{C} \setminus \Delta_m$ from the convergence in Hausdorff content. For other configurations of the components we have to work a little harder.
Let $\overline{\jmath}$ be the last component of $(n_0,\ldots,n_m)$ such that $n_{\overline{\jmath}} = \min_{j=0,\ldots,m} (n_j)$. Let us prove that $\deg a_{{\bf n},\overline{\jmath}} = n_{\overline{\jmath}} -1$, that all its zeros are simple and lie in $\stackrel{\circ}{\Delta}_{m} $.
From [@FL4 Theorem 3.2] (see also [@FL3 Theorem 1.3]) we know that there exists a permutation $\lambda$ of $(0,\ldots,m)$ which reorders the components of $(n_0,n_1,\ldots,n_m)$ decreasingly, $n_{\lambda(0)} \geq \cdots \geq n_{\lambda(m)},$ and an associated Nikishin system $(r_{1,1},\ldots,r_{1,m}) = {\mathcal{N}}(\rho_{1},\ldots,\rho_m)$ such that $$\mathcal{A}_{{\bf n},0} = (q_{{\bf n},0} + \sum_{k=1}^m q_{{\bf n},k} \widehat{r}_{1,k})\widehat{s}_{1,\lambda(0)}, \qquad \deg q_{{\bf n},k} \leq n_{\lambda(k)} -1, \qquad k=0,\ldots,m.$$ The permutation may be taken so that for all $0 \leq j <k \leq m$ with $n_j = n_k$ we also have $\lambda(j) < \lambda(k)$. In this case, see formulas (31) in the proof of [@FL3 Lemma 2.3], it follows that $q_{{\bf n},m} = \pm a_{{\bf n},\overline{\jmath}}$. Reasoning with $q_{{\bf n},0} + \sum_{k=1}^m q_{{\bf n},k} \widehat{r}_{1,k}$ as we did with $\mathcal{A}_{{\bf n},0}$ we obtain that $\deg q_{{\bf n},m} = n_{\lambda(m)} -1$ and that its zeros are all simple and lie in $\stackrel{\circ}{\Delta}_{m} $. Since $n_{\lambda(m)} = n_{\overline{\jmath}}$ and $q_{{\bf n},m} = \pm a_{{\bf n},\overline{\jmath}}$, the statement holds.
The index $\overline{\jmath}$ as defined above may depend on the multi-index ${\bf n} \in \Lambda$. Given $\overline{\jmath} \in \{0,\ldots,m\}$, let us denote by $\Lambda(\overline{\jmath})$ the set of all ${\bf n} \in \Lambda$ such that $\overline{\jmath}$ is the last component of $(n_0,\ldots,n_m)$ such that $n_{\overline{\jmath}} = \min_{j=0,\ldots,m} (n_j)$. Fix $\overline{\jmath}$ and suppose that $\Lambda(\overline{\jmath})$ contains infinitely many multi-indices. If $\overline{\jmath} = m$, then [@gon Lemma 1] and the first limit in imply that $$\lim_{n\in \Lambda(m)}\frac{a_{{\bf n}, j}}{a_{{\bf n},m}} = (-1)^{m-j}\widehat{s}_{m,j+1}, \qquad j=0,\ldots,m-1,$$ uniformly on each compact subset of $\mathbb{C}\setminus \Delta_m$, as needed.
Assume that $\overline{\jmath} \in \{0,\ldots,m-1\}$. Since all the zeros of $a_{{\bf n},\overline{\jmath}}$ lie in $\stackrel{\circ}{\Delta}_m,$ using [@gon Lemma 1] and the second limit in for $j = \overline{\jmath}$, we obtain that $$\label{convinv} \lim_{{\bf n} \in \Lambda(\overline{\jmath})} \frac{a_{{\bf n},m}}{a_{{\bf n},\overline{\jmath}}} = \frac{1}{(-1)^{m-\overline{\jmath}}\widehat{s}_{m,\overline{\jmath} +1}},$$ uniformly on each compact subset of $\mathbb{C}\setminus \Delta_m$. The function on the right-hand side of is holomorphic and never zero on $\mathbb{C} \setminus \Delta_m$ and the approximating functions are holomorphic on $\mathbb{C} \setminus \Delta_m$. Using Rouché’s theorem it readily follows that on any compact subset $\mathcal{K} \subset \mathbb{C} \setminus \Delta_m$ for all sufficiently large $|{\bf n}|, {\bf n} \in \Lambda(\overline{\jmath}),$ the polynomials $a_{{\bf n},m} $ have no zero on $\mathcal{K}$. This is true for any $\overline{\jmath} \in \{0,\ldots,m\}$ such that $\Lambda(\overline{\jmath})$ contains infinitely many multi-indices. Therefore, the only accumulation points of the zeros of the polynomials ${a_{{\bf n},m}}$ are in $\Delta_m \cup \{\infty\}$.
Hence, on any bounded region $D$ such that $\overline{D} \subset \mathbb{C} \setminus \Delta_m$, for each fixed $j=0,\ldots,m-1,$ and all sufficiently large $|{\bf n}|, {\bf n} \in \Lambda$, we have that $ {a_{{\bf n},j}}/{a_{{\bf n},m}} \in \mathcal{H}(D)$. From [@gon Lemma 1] and the first part of it follows that $$\label{convunifgen} \lim_{{\bf n} \in \Lambda} \frac{a_{{\bf n},j}}{a_{{\bf n},m}} = (-1)^{m-j}\widehat{s}_{m,j+1}, \qquad j=0,\ldots,m-1,$$ uniformly on each compact subset of $D$. Since $D$ was chosen arbitrarily, as long as $\overline{D} \subset \mathbb{C} \setminus \Delta_m$, it follows that the convergence is uniform on each compact subset of $\mathbb{C} \setminus \Delta_m$ and we have . Since the right-hand side of is a function which does not vanish in $\overline{D} \subset \mathbb{C} \setminus \Delta_m$, Rouché’s theorem implies that for each $j=0,\ldots,m-1$ the accumulation points of the zeros of the polynomials $a_{{\bf n},j}$ must be in $\Delta_m \cup \{\infty\}$, as claimed. (For $j=m$ this was proved above.)
Now, $$\frac{\mathcal{A}_{{\bf n},j}}{a_{{\bf n},m}} = \frac{a_{{\bf n},j}}{a_{{\bf n},m}} + \sum_{k=j+1}^{m-1} \frac{a_{{\bf n},k}}{a_{{\bf n},m}} \widehat{s}_{j+1,k} + \widehat{s}_{j+1,m}.$$ According to formula (17) in [@FL4 Lemma 2.9] $$0 \equiv (-1)^{m-j }\widehat{s}_{m,j+1} + \sum_{k=j+1}^{m-1}(-1)^{m-k} \widehat{s}_{m,k+1} \widehat{s}_{j+1,k} +
\widehat{s}_{j+1, m}, \quad z \in \mathbb{C} \setminus (\Delta_{j+1} \cup \Delta_m).$$ Subtracting one expression from the other we have that $$\label{difer} \frac{\mathcal{A}_{{\bf n},j}}{a_{{\bf n},m}} = \left(\frac{a_{{\bf n},j}}{a_{{\bf n},m}} - (-1)^{m-j }\widehat{s}_{m,j+1}\right) + \sum_{k=j+1}^{m-1} \left(\frac{a_{{\bf n},k}}{a_{{\bf n},m}} - (-1)^{m-k} \widehat{s}_{m,k+1} \right) \widehat{s}_{j+1,k}.$$ Consequently, for each $j=0,\ldots,m-1$, from we obtain .
Suppose that $\Delta_m$ is bounded. Let $\Gamma$ be a positively oriented closed simple Jordan curve that surrounds $\Delta_m$. Define $\kappa_{{\bf n},j}(\Gamma), j=0,\ldots,m$ to be the number of zeros of ${a_{{\bf n},j}}$ outside $\Gamma$. As above, given $\overline{\jmath} \in \{0,\ldots,m\}$, let us denote by $\Lambda(\overline{\jmath})$ the set of all ${\bf n} \in \Lambda$ such that $\overline{\jmath}$ is the last component of $(n_0,\ldots,n_m)$ which satisfies $n_{\overline{\jmath}} = \min_{j=0,\ldots,m} (n_j)$.
\[ceros\] Suppose that the assumptions of Theorem \[converge\] hold and $\Delta_m$ is bounded. Then for all sufficiently large $|{\bf n}|, {\bf n} \in \Lambda(\overline{\jmath}),$ $$\label{kappaj} \kappa_{{\bf n},j}(\Gamma) = \left\{
\begin{array}{ll}
n_j - n_{\overline{\jmath}}\,\,, & j=0,\ldots,m-1, \\
n_m - n_{\overline{\jmath}} -1, & j=m.
\end{array}
\right.$$ The rest of the zeros of the polynomials $a_{{\bf n},j}$ accumulate (or lie) on $ {\Delta_{m}} $.
Fix $\overline{\jmath} \in \{0,\ldots,m-1\}$. Assume that $\Lambda(\overline{\jmath})$ contains infinitely many multi-indices. Using the argument principle and it follows that $$\lim_{{\bf n}\in \Lambda(\overline{\jmath})} \frac{1}{2\pi i} \int_{\Gamma} \frac{({a_{{\bf n},m}}/{a_{{\bf n},\overline{\jmath}}})^{\prime}(z)}{({a_{{\bf n},m}}/{a_{{\bf n},\overline{\jmath}}})(z)} d z = \frac{1}{2\pi i} \int_{\Gamma} \frac{(1/\widehat{s}_{m,\overline{\jmath} +1})^{\prime}(z)}{(1/\widehat{s}_{m,\overline{\jmath} +1})(z)} d z = 1,$$ because $1/\widehat{s}_{m,\overline{\jmath} +1}$ has one pole and no zeros outside $\Gamma$ (counting the point $\infty$). Recall that $\deg a_{{\bf n},j} = n_j -1, j=0,\ldots,m$ and that all the zeros of $a_{{\bf n},\overline{\jmath}}$ lie on $\Delta_m$. Then, for all sufficiently large $|{\bf n}|, {\bf n} \in \Lambda(\overline{\jmath}),$ $$(n_m -1) - (n_{\overline{\jmath}} -1) - \kappa_{{\bf n},m}(\Gamma) = 1.$$ Consequently, $$\label{kappam} \kappa_{{\bf n},m}(\Gamma) = n_m - n_{\overline{\jmath}} -1, \qquad {\bf n} \in \Lambda(\overline{\jmath}).$$ Analogously, from , for $j=0,\ldots,m-1$, we obtain $$\lim_{{\bf n}\in \Lambda} \frac{1}{2\pi i} \int_{\Gamma} \frac{({a_{{\bf n},j}}/{a_{{\bf n}, m}})^{\prime}(z)}{({a_{{\bf n},j}}/{a_{{\bf n},m}})(z)} d z = \frac{1}{2\pi i} \int_{\Gamma} \frac{ \widehat{s}_{m, {j} +1}^{\prime}(z)}{ \widehat{s}_{m, {j} +1} (z)} d z = -1.$$ Therefore, for all sufficiently large $|{\bf n}|, {\bf n} \in \Lambda,$ $$n_j - n_m + \kappa_{{\bf n},m}(\Gamma) - \kappa_{{\bf n},j}(\Gamma) = -1, \qquad j=0,\ldots,m-1,$$ which together with gives . The last statement follows from the fact that the only accumulation points of the zeros of the $a_{{\bf n},j}$ are in $\Delta_m \cup \{\infty\}$.
The conclusion of Theorem \[converge\] remains valid if in place of we require that $$\label{cond2} n_j = \frac{|{\bf n}|}{m} + o(|{\bf n}|),\qquad |{\bf n}| \to \infty , \qquad j=1,\ldots,m.$$ To prove this we need an improved version of Lemma \[BusLop\] in which the parameter $\ell$ in b) depends on $n$ but $\ell(n) = o(n), n \to \infty$. The proof of Lemma 2 in [@BL] admits this variation with some additional technical difficulties, in part resolved in the proof of [@FL2 Corollary 1].
If either $\Delta_m$ or $\Delta_{m-1}$ is a compact set and $\Delta_{m-1} \cap \Delta_{m} = \emptyset$, it is not difficult to show that convergence takes place in and with geometric rate. More precisely, for $j=0,\ldots,m-1,$ and $\mathcal{K} \subset \mathbb{C} \setminus \Delta_m$, we have $$\label{asin1} \limsup_{{\bf n} \in \Lambda} \left\|\frac{a_{{\bf n}, j}}{a_{{\bf n},m}} - (-1)^{m-j}\widehat{s}_{m,j+1} \right\|_{\mathcal{K}}^{1/|{\bf n}|} = \delta_j < 1.$$ For $j=0,\ldots,m-1,$ and $\mathcal{K} \subset \mathbb{C} \setminus (\Delta_{j+1} \cup \Delta_m)$ $$\label{asin2} \limsup_{{\bf n} \in \Lambda} \left\|\frac{\mathcal{A}_{{\bf n},j}}{a_{{\bf n},m}}\right\|_{\mathcal{K}}^{1/|{\bf n}|} \leq \max\{\delta_k:j \leq k \leq m-1\} < 1.$$ The second relation trivially follows from the first and $\eqref{difer}$. The proof of the first is similar to that of [@FL2 Corollary 1]. It is based on the fact that the number of interpolation points on $\Delta_{m-1}$ is $\mathcal{O}(|{\bf n}|), |{\bf n}| \to \infty,$ and that the distance from $\Delta_m$ to $\Delta_{m-1}$ is positive. Relations and are also valid if is replaced with .
Asymptotically, still means that the components of ${\bf n}$ are of the same size. One can relax this condition requiring, for example, that the generating measures are regular in the sense of [@stto Chapter 3], in which case the exact asymptotics of and can be given (see [@Nik0], [@NS Chapter 5, Section 7], [@FLLS Theorem 5.1, Corollary 5.3], and [@RS1 Theorem 1]).
The previous results can be applied to other approximation schemes. Let $S^1= \mathcal{N}(\sigma_0^1,\ldots,\,\sigma^1_{m_1}), S^2
=\mathcal{N}(\sigma^2_0,\ldots,\,\sigma^2_{m_2}), \sigma_0^1 =
\sigma_0^2$ be given. Fix ${\bf
n}_1=(n_{1,0},\,n_{1,1},\ldots,\,n_{1,m_1})\in{{\mathbb{Z}}}_+^{m_1+1}$ and ${\bf
n}_2=(n_{2,0},\,n_{2,1},\ldots,\,n_{2,m_2})\in{{\mathbb{Z}}}_+^{m_2+1},
|{\bf n}_2| = |{\bf n}_1| -1$. Let ${\bf n} = ({\bf n}_1,{\bf n}_2)$. There exists a non-zero vector polynomial with real coefficients $(a_{{\bf
n},0},\ldots,a_{{\bf n},m_1}),$ $\deg (a_{{\bf n},k}) \leq
n_{1,k} -1, k=0,\ldots,m_1,$ such that for $j=0,\ldots,m_2,$ $$\int x^{\nu} \mathcal{A}_{{\bf n},0}(x) d s^2_{j}(x) =0, \qquad \nu =0,\ldots,n_{2,j}
-1,$$ where $$\mathcal{A}_{{\bf n},0} = a_{{\bf n},0} + \sum_{k=1}^{m_1} a_{{\bf n},k} \widehat{s}^1_{1,k}.$$ In other words $$\label{definition**}
\int \left(b_{{\bf n},0}(x) + \sum_{j=1}^{m_2} b_{{\bf n},j}(x) \widehat{s}_{1,j}^2(x)\right) \mathcal{A}_{{\bf n},0}(x) d \sigma^2_{0}(x) =0, \qquad \deg b_{{\bf n},j} \leq n_{2,j} -1.$$ This implies that $\mathcal{A}_{{\bf n},0}$ has exactly $|{\bf n}_2|$ zeros in $\mathbb{C} \setminus \Delta_1^1$, they are all simple and lie in $\stackrel{\circ}{\Delta^1 _0}$ (see [@FL4 Theorem 1.2]). Here $\Delta_0^1 = \mbox{Co}({\mathrm{supp}}(\sigma_0^1))$ and $\Delta_1^1 = \mbox{Co}({\mathrm{supp}}(\sigma_1^1))$. Therefore, $\left(a_{{\bf n},0}, \ldots, a_{{\bf n},m_1}\right)$ is a type I multi-point Hermite-Padé approximation of $(\widehat{s}^1_{1,1},\ldots,\widehat{s}^1_{1,m_1})$ with respect to $w_{\bf n}$, and the results of this paper may be applied.
[99]{}
J. Bustamante and G. López Lagomasino. Hermite-Padé approximation for Nikishin systems of analytic functions. Sb. Math. [**77**]{} (1994), 367–384.
T. Carleman. Les fonctions quasi-analytiques. Gauthier Villars, Paris, 1926.
K. Driver and H. Stahl. Normality in Nikishin systems. Indag. Math. N.S. [**5**]{} (1994), 161–187.
K. Driver and H. Stahl. Simultaneous rational approximants to Nikishin systems. I. Acta Sci. Math. (Szeged) [**60**]{} (1995), 245–263.
K. Driver and H. Stahl. Simultaneous rational approximants to Nikishin systems. II. Acta Sci. Math. (Szeged) [**61**]{} (1995), 261–284.
U. Fidalgo and G. López Lagomasino. Rate of convergence of generalized Hermite-Padé approximants of Nikishin systems. Constr. Approx. [**23**]{} (2006), 165–196.
U. Fidalgo and G. López Lagomasino. General results on the convergence of multi-point Hermite-Padé approximants of Nikishin systems. Constr. Approx. [**25**]{} (2007), 89–107.
U. Fidalgo, A. López, G. López Lagomasino, and V.N. Sorokin. Mixed type multiple orthogonal polynomials for two Nikishin systems. Constr. Approx. [**32**]{} (2010), 255–306.
U. Fidalgo and G. López Lagomasino. Nikishin systems are perfect. Constr. Approx. [**34**]{} (2011), 297–356.
U. Fidalgo and G. López Lagomasino. Nikishin systems are perfect. Case of unbounded and touching supports. J. of Approx. Theory 163 (2011), 779–811.
A.A. Gonchar. On the convergence of generalized Padé approximants of meromorphic functions. Sb. Math. [**27**]{} (1975), 503–514.
A.A. Gonchar, E.A. Rakhmanov, and V.N. Sorokin. On Hermite-Padé approximants for systems of functions of Markov type. Sb. Math. [**188**]{} (1997), 671–696.
M.G. Krein and A.A. Nudel’man. The Markov Moment Problem and Extremal Problems. Transl. of Math. Monog. Vol. 50, Amer. Math. Soc., Providence, R.I., 1977.
A.B.J. Kuijlaars, Multiple orthogonal polynomial ensembles. Contemp. Math. Vol. 507, 2010, 155–176.
A.A. Markov. Deux démonstrations de la convergence de certains fractions continues. Acta Math. [**19**]{} (1895), 93–104.
E.M. Nikishin. On simultaneous Padé approximants. Math. USSR Sb. [**41**]{} (1982), 409–425.
E.M. Nikishin. Asymptotics of linear forms for simultaneous Padé approximants. Soviet Math (Izv. VUZ) [**30**]{} (1986) 43–52.
E.M. Nikishin and V.N. Sorokin. Rational Approximations and Orthogonality, Amer. Math. Soc., Providence, RI, 1991.
E.A. Rakhmanov and S.P. Suetin. Asymptotic behaviour of the Hermite–Padé polynomials of the $1$st kind for a pair of functions forming a Nikishin system. Uspekhi Mat. Nauk, [**67**]{} (2012), 177–178.
E.A. Rakhmanov and S.P. Suetin. Distribution of zeros of Hermite-Padé polynomials for a pair of functions forming a Nikishin system. Sb. Math. (submitted).
H. Stahl and V. Totik. General Orthogonal Polynomials. Cambridge University Press, Cambridge, 1992.
T.J. Stieltjes. Recherches sur les fractions continues. Ann. Fac. Sci. Univ. Toulouse [**8**]{} (1894) J1–J122, [**9**]{} (1895), A1–A47, reprinted in his Oeuvres Complètes, Tome 2, Noordhoff, 1918, pp. 402–566.
W. Van Assche. Analytic number theory and rational approximation, in Coimbra Lecture Notes on Orthogonal Polynomials, A. Branquinho and A. Foulquié Eds., pp. 197–229. Nova Science Pub., New York, 2008.
---
abstract: 'We present an analysis of the intrinsic UV absorption in the Seyfert 1 galaxy Mrk 279 based on simultaneous long observations with the [*Hubble Space Telescope*]{} (41 ks) and the [*Far Ultraviolet Spectroscopic Explorer*]{} (91 ks). To extract the line-of-sight covering factors and ionic column densities, we separately fit two groups of absorption lines: the Lyman series and the CNO lithium-like doublets. For the CNO doublets we assume that all three ions share the same covering factors. The fitting method applied here overcomes some limitations of the traditional method using individual doublet pairs; it allows for the treatment of more complex, physically realistic scenarios for the absorption-emission geometry and eliminates systematic errors that we show are introduced by spectral noise. We derive velocity-dependent solutions based on two models of geometrical covering – a single covering factor for all background emission sources, and separate covering factors for the continuum and emission lines. Although both models give good statistical fits to the observed absorption, we favor the model with two covering factors because: (a) the best-fit covering factors for both emission sources are similar for the independent Lyman series and CNO doublet fits; (b) the fits are consistent with full coverage of the continuum source and partial coverage of the emission lines by the absorbers, as expected from the relative sizes of the nuclear emission components; and (c) it provides a natural explanation for variability in the Ly$\alpha$ absorption detected in an earlier epoch. We also explore physical and geometrical constraints on the outflow from these results.'
author:
- 'Jack R. Gabel, Nahum Arav, Jelle S. Kaastra, Gerard A. Kriss, Ehud Behar, Elisa Costantini, C. Martin Gaskell, Kirk T. Korista, Ari Laor, Frits Paerels, Daniel Proga, Jessica Kim Quijano, Masao Sako, Jennifer E. Scott, Katrien C. Steenbrugge'
title: |
X-ray/UV Observing Campaign on the Mrk 279 AGN Outflow:\
A Global Fitting Analysis of the UV Absorption
---
Introduction
============
Mass outflow, seen as blueshifted absorption in UV and X-ray spectra, is an important component of active galactic nuclei [AGNs; see recent review in @cren03]. This “intrinsic absorption” is ubiquitous in nearby AGNs, appearing in over half of Seyfert 1 galaxies having high-quality UV spectra obtained with the [*Hubble Space Telescope*]{} [[*HST*]{}; @cren99] and the [*Far Ultraviolet Spectroscopic Explorer*]{} [[*FUSE*]{}; @kris02]. Spectra from the [*Advanced Satellite for Cosmology and Astrophysics (ASCA)*]{} identified X-ray “warm absorbers”, seen as absorption edges, in a similar percentage of objects [@reyn97; @geor98]. Large total ejected masses have been inferred for these outflows, with implied mass outflow rates exceeding the accretion rate of the central black hole in some cases, indicating that mass outflow plays an important role in the overall energetics of AGNs [e.g. @reyn97]. Recent studies have recognized and explored the potential effect of outflows on all scales of the AGN environment, from feeding the central supermassive black hole [@blan99; @blan04], to influencing the evolution of the host galaxy [@silk98; @scan04] and the metallicity of the intergalactic medium [@cava02].
Measured ionic column densities provide the basis for interpretation of the physical nature of AGN outflows. Detailed UV spectral studies over the past decade have shown that measurements of these crucial parameters are often not straightforward. Analyses of absorption doublets and multiplets have revealed that the absorbers typically only partially occult the background emission sources. Without proper treatment of this effect, the column densities could be severely in error [e.g. @wamp93; @barl97; @hama97]. Additional complications that could affect column density measurements are different covering factors for different background emission sources [@gang99; @gabe03], velocity-dependent covering factors [e.g. @arav99], and inhomogeneous distributions of absorbing material [@deko02].
Many recent investigations of AGN outflows have focused on intensive multiwavelength observations of Seyfert 1 galaxies. Seyferts are well suited for these studies because they include the brightest AGNs in the UV and X-ray. The X-ray spectra contain the imprint of the bulk of the outflow’s mass, which can now be deblended into individual absorption lines with the high-resolution capabilities of the [*Chandra X-ray Observatory (CXO)*]{} and [*XMM-Newton Space Observatory*]{}, allowing detailed study. The high quality UV spectra available with [*HST*]{} and [*FUSE*]{} provide a complementary, precise probe of the complex absorption troughs. Due to the relatively narrow absorption in Seyfert outflows, the important UV doublets and multiplets are typically unblended, allowing measurements of these key diagnostic lines.
We have undertaken an intensive multiwavelength observing campaign with [*HST*]{}/STIS, [*FUSE*]{}, and [*CXO*]{} to study the intrinsic absorption in the Seyfert 1 galaxy Mrk 279. Mrk 279 was selected for this study because of its UV and X-ray brightness and the rich absorption spectrum in both bands, including unblended and well-resolved UV doublets [see @scot04 hereafter SK04]. Additionally, it has minimal contamination by Galactic absorption and a relatively weak contribution from a narrow emission line region (NLR), both of which can complicate measurements of the absorption properties. As part of a series of papers devoted to this campaign, we present here a detailed study of the UV absorption in the combined STIS and [*FUSE*]{} spectra. We develop a new approach for measuring the covering factors and column densities in the absorbers, making full use of the high quality far-UV spectrum. These measurements provide the foundation for subsequent analysis and interpretation of the mass outflow in Mrk 279, and provide novel geometric constraints. In parallel papers, we present analysis of the X-ray spectrum [@cost04], inhomogeneous models of the UV absorption [@arav04], and density diagnostics based on O[v]{} K-shell X-ray lines [@kaas04]. In future papers, we will present photoionization models of the UV and X-ray absorption and analysis of absorption variability. In the next section, we describe the [*HST*]{} and [*FUSE*]{} observations and present an overview of the absorption spectrum; in §3, we review the standard doublet technique for measuring intrinsic absorption and, together with an Appendix, discuss important limitations of this method; the formalism of our fitting method and results for Mrk 279 are described in §4; in §5, the fits are interpreted, and implications for physical constraints on the outflow are explored; finally, a summary is presented in §6.
Observations and the Intrinsic Absorption Spectrum
==================================================
Simultaneous [*HST*]{}/STIS and [*FUSE*]{} Observations of Mrk 279
------------------------------------------------------------------
The nucleus of Mrk 279 was observed for a total of 41 ks (16 orbits) with the Space Telescope Imaging Spectrograph (STIS) on board [*HST*]{} between 2003 May 13 – 18 and for 91 ks with [*FUSE*]{} between 2003 May 12 – 14. The STIS observation used the E140M grating, which covers 1150 – 1730 Å, and was obtained through the 0$\arcsec$.2 $\times$ 0$\arcsec$.2 aperture. The spectrum was processed with CALSTIS v2.16, which removes the background light from each echelle order using the scattered light model from @lind00. Low residual fluxes in the cores of saturated Galactic lines indicate accurate removal of scattered light: typical fluxes in the cores are less than $\pm$2.5% of the local unabsorbed continuum flux levels, and mean fluxes averaged over the absorption cores are $<$ 3% of the noise in the troughs. The final spectrum was sampled in 0.012 – 0.017 Å bins, thereby preserving the $\sim$ 6.5 km s$^{-1}$ kinematic resolution of STIS/E140M.
We found that the standard pipeline processing did not yield a fully calibrated spectrum due to two effects: a) the echelle ripple structure, due to the characteristic efficiency of the detector along each order, is not completely removed [see @heap97], and b) the sensitivity of the MAMA detectors has degraded with time, and the change has not been incorporated in the pipeline for the echelle gratings. In order to correct for these effects in the Mrk 279 spectrum, we used multiple spectra of the white dwarf spectrophotometric standard, BD+28 4122, one of which was taken close in time to our observation. First, a composite stellar spectrum of BD+28 4122 composed of FOS and STIS data [@bohl01] was used to flux-calibrate a 1997 STIS spectrum of BD+28 4122 that does not exhibit the echelle ripple structure. This flux-calibrated spectrum was then used to correct a STIS spectrum of BD+28 4122, taken on 3 May 2003 with the same grating and aperture as our Mrk 279 observation, and which does show the same ripple structure seen in the Mrk 279 spectrum. These two corrections were performed by dividing the fiducial spectrum for each step by the comparison spectrum, fitting a polynomial to the result, and then multiplying the comparison spectrum by that polynomial. We used the polynomials from the second step described above to correct the spectrum of Mrk 279. To obtain a smooth correction for each order in the Mrk 279 spectrum, we applied averages of the polynomials corresponding to the four adjacent orders. Although we were able to remove most of the echelle ripple structure in this way, some lower amplitude residual curvature remains in some orders. However, the intrinsic absorption features measured in this study are well-corrected.
  Component   $v_c$ (km s$^{-1}$)   Width (km s$^{-1}$)
  ----------- --------------------- ---------------------
  1           85                    40
  2           $-$265                50
  2a          $-$290                30
  2b          $-$325                30
  2c          $-$355                65
  3           $-$390                20
  4           $-$460                20
  4a          $-$490                65
  5           $-$550                30

  : Measured centroid radial velocities and widths of the kinematic absorption components (Table 1).
The [*FUSE*]{} spectrum, obtained through the 30$\arcsec \times$ 30$\arcsec$ aperture, covers 905 – 1187 Å. The spectrum was processed with the current standard calibration pipeline, CALFUSE v2.2.3. The eight individual spectra obtained with [*FUSE*]{}, from the combination of four mirror/grating channels and two detectors, were coadded for all exposures. Mean residual fluxes measured in the cores of saturated Galactic lines are consistent with zero within the noise (i.e., standard deviation of the fluxes) in the troughs of these lines, indicating accurate background removal. The spectrum was resampled into $\sim$ 0.02 – 0.03 Å bins to increase the signal-to-noise ratio (S/N) and preserve the full resolution of [*FUSE*]{}, which is nominally $\sim$ 20 km s$^{-1}$.
To place the [*FUSE*]{} and STIS spectra on the proper wavelength scale, we followed the procedure described in SK04. The centroids of prominent, unblended Galactic interstellar absorption lines were measured and used as fiducials in comparing to the Galactic 21 cm H[i]{} line in the line-of-sight to Mrk 279 [@wakk01]. The lines measured in the STIS spectrum are consistent with the 21 cm H[i]{} line within the measurement uncertainties and thus required no correction. The [*FUSE*]{} spectrum showed substantial shifts relative to the adopted standard. Due to non-linear offsets in the wavelength scale, local shifts were measured and applied individually to each spectral region containing intrinsic absorption features.
To normalize the absorption, we fit the total intrinsic (i.e., unabsorbed) AGN emission in the [*FUSE*]{} and STIS spectra over each intrinsic absorption feature. This was done empirically by fitting cubic splines to unabsorbed spectral regions adjacent to the features, at intervals of $\sim$ 5 Å . We also derived models for the individual contributions of the different emission sources (continuum and emission lines), since they are required for our analysis. For the continuum source, we fit a single power law ($f_{\lambda} \propto \lambda^{-\alpha}$) to the observed flux at two widely separated wavelengths that are relatively uncontaminated with absorption or line emission features, $\lambda =$ 955 and 1500 Å. After first correcting for Galactic extinction [using the extinction law of @card89 with $E(B-V)=0.016$], we find a best-fit spectral index $\alpha =$ 1.6. This power law model matches the few other line-free regions of the UV spectrum, i.e. $\lambda \approx$ 1150 – 1180, 1330, and 1390 Å, to within a few percent. Thus, for the emission line model, we simply subtracted the continuum power law model from the empirical fit to the total emission.
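Since the power law is anchored at only two wavelengths, the spectral index follows from simple algebra. The short Python sketch below illustrates the arithmetic; the flux values are hypothetical placeholders chosen only so that the example reproduces an index close to the quoted $\alpha =$ 1.6, and the Galactic extinction correction is assumed to have been applied already.

```python
import numpy as np

def power_law_index(lam1, f1, lam2, f2):
    """Spectral index alpha for f_lambda = A * lambda**(-alpha),
    fixed by the (extinction-corrected) fluxes at two anchor wavelengths."""
    alpha = np.log(f1 / f2) / np.log(lam2 / lam1)
    norm = f1 * lam1**alpha          # the normalization A
    return alpha, norm

# Hypothetical anchor fluxes at 955 and 1500 A (arbitrary units):
alpha, norm = power_law_index(955.0, 2.1e-13, 1500.0, 1.0e-13)
continuum = lambda lam: norm * lam**(-alpha)   # alpha comes out near 1.6
```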
The Far-UV Absorption Spectrum
------------------------------
The full far-UV spectrum from our [*FUSE*]{} and STIS observations is shown in Figure 1. The active nucleus in Mrk 279 was in a relatively high flux state during this epoch; the UV continuum flux was similar to a 2000 January [*FUSE*]{} observation, and $\approx$ 7 times stronger than in [*FUSE*]{} and STIS spectra obtained in 2002 May. Full treatment of these earlier observations is given in SK04. Qualitatively, the intrinsic absorption spectrum in our new observations is similar to the earlier epochs (although some important variations were detected that will be the subject of a later study). Here, we give a brief overview, and refer the reader to SK04 for a more thorough phenomenological discussion of the absorption.
[Figure 1: The full far-UV spectrum of Mrk 279 from the combined 2003 May [*FUSE*]{} and STIS observations.]{width="17cm"}
Absorption from a range of ions is detected around the systemic velocity of the host galaxy, between $v = -$600 and $+$150 km s$^{-1}$; we adopt the redshift for Mrk 279 from SK04, $z =$ 0.0305 $\pm$ 0.0003. Normalized absorption profiles for some of the prominent lines are shown in Figure 2. The absorption is seen to be resolved into multiple distinct kinematic components at the resolution of STIS E140M and [*FUSE*]{}, revealing striking differences in the kinematic structure of different ions. Low-ionization species appear in several narrow components (see Si[iii]{} in Figure 2, but also Si[ii]{}, C[ii]{}, C[iii]{}, and N[iii]{} in SK04 Figures 7 – 14). However, the more highly ionized O[vi]{}, N[v]{}, and C[iv]{} doublets, which are the primary UV signatures of intrinsic absorption in AGNs, are much broader and have different centroid velocities. The Lyman lines exhibit the kinematic structure of the low-ionization components, but also appear in the lower outflow velocity region coinciding with the high-ionization lines, $v \approx -$300 to $-$200 km s$^{-1}$.
![Normalized absorption profiles from 2003 May STIS and [*FUSE*]{} spectra. The spectra are plotted as a function of radial velocity with respect to the systemic redshift of the host galaxy. The centroid velocities of kinematic components associated with the AGN outflow are identified with dashed vertical lines. Components identifying low-ionization absorbers likely unrelated to the outflow (see text) are shown with dotted lines. The difference in kinematic structure is evident in comparing the high-ionization CNO doublets with Si[iii]{}. The Ly$\delta$ profile is not plotted at $v < -$400 km s$^{-1}$ due to contamination with Galactic absorption at these velocities. \[fig2\]](f2.ps){width="8.5cm"}
We adopt the component numbering system from SK04, which was based on the kinematic structure in Ly$\beta$. In Figure 2, dotted vertical lines mark the centroids of the components in SK04 that exhibit narrow absorption structure in low-ionization species in the current spectrum but which have no corresponding structure in the high-ionization CNO doublets. Centroids of the components seen in the high-ionization lines are identified with dashed lines; we have added component 2c to the SK04 system based on structure in the C[iv]{} and N[v]{} profiles. Measured centroid radial velocities and widths of the components are listed in Table 1. The differences in ionization and kinematic structure between these two groups of components strongly suggest they are physically distinct. Based on their ionization, narrow widths, distinct centroid velocities, and (in component 4) the low density implied by the stringent upper limit on the C[ii]{} column density in the excited fine-structure level, SK04 concluded that at least some of the low-ionization components are not associated with the AGN outflow. Instead, they posited that these components arise in gas located at relatively large distances from the nucleus: perhaps in material from an interaction with the companion galaxy MCG$+$12-13-024, in high-velocity clouds associated with the host galaxy of Mrk 279, or, in the case of component 1, in the interstellar medium of the host galaxy.
In this study, we restrict our attention to the [*bona fide*]{} intrinsic absorption, i.e., that presumed to be directly associated with an outflow from the AGN. We take this to include all absorption from the broad O[vi]{}, N[v]{}, and C[iv]{} doublets; Figure 2 shows that any absorption associated with the narrow low-ionization components will at most only affect the outer wings of the outflow components in these lines. Conversely, Figure 2 shows that Lyman line absorption from the low-ionization components is strong and heavily blended with the intrinsic absorption components; thus, we limit our analysis of H[i]{} to the uncontaminated region, $v \approx -$300 to $-$200 km s$^{-1}$.
The Doublet Method: Overview and Limitations
============================================
We present here a brief review of the standard technique for measuring UV absorption in AGN outflows and describe some limitations of this method to highlight the motivation for our method of analysis. In earlier studies of intrinsic absorption, the red members of doublet pairs were often found to be deeper than expected relative to the blue lines, based on their intrinsic 2:1 optical depth ratios. In many cases this was interpreted as due to partial coverage of the background nuclear emission by the absorbing gas, e.g., @wamp93, @barl97, @hama97; [other possibilities are scattering from an extended region and emission from an extended source unrelated to the central engine of the AGN; @cohe95; @good95; @krae01]. If partial coverage is not accounted for, the absorption ionic column densities can be severely underestimated, which will dramatically affect the interpretation of the outflow. The expression for the observed absorption that includes the effects of line-of-sight covering factor ($C$) and optical depth ($\tau$) is: $$I(v) = (1 - C(v)) + C(v) e^{-\tau(v)},$$ where $I$ is the normalized flux, and all quantities are written as a function of radial velocity, $v$. Since the optical depths of the UV doublet pairs are in the simple 2:1 ratio, equation 1 can be solved for the covering factor and optical depths of each doublet [@barl97; @hama97]. The resulting expressions, which we will refer to as the doublet solution, are: $$C = \frac{I_r^2 - 2 I_r + 1}{I_b - 2 I_r + 1},$$ $$\tau_r = -\ln(\frac{I_r - I_b}{1 - I_r}),$$ where $r$ and $b$ subscripts identify the red and blue members of the doublet, and the equation for $\tau$ was derived by @arav02. These expressions can be evaluated for unblended (i.e., sufficiently narrow) absorption doublets with members that are individually resolved, and derived as a function of radial velocity. While this has provided a revolutionary advance in the study of intrinsic AGN absorption, there are some key limitations to this method, described below.
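For reference, the doublet solution of equations 2 and 3 amounts to a few lines of array arithmetic. The sketch below is a minimal Python illustration, not the code used in this work; note that spectral noise can drive the denominator of equation 2 through zero or make the argument of the logarithm in equation 3 negative, which is the pathology examined quantitatively in §3.2 and the Appendix.

```python
import numpy as np

def doublet_solution(I_b, I_r):
    """Velocity-resolved covering factor and red-line optical depth from a
    resolved doublet (equations 2 and 3), assuming the intrinsic 2:1 optical
    depth ratio and a single background emission source."""
    I_b = np.asarray(I_b, dtype=float)   # normalized flux, blue member
    I_r = np.asarray(I_r, dtype=float)   # normalized flux, red member
    C = (I_r**2 - 2.0 * I_r + 1.0) / (I_b - 2.0 * I_r + 1.0)
    tau_r = -np.log((I_r - I_b) / (1.0 - I_r))
    return C, tau_r
```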
Multi-Component Nature of the Background Emission
-------------------------------------------------
An implicit assumption in the doublet solution is that the absorption is imprinted on a uniform, homogeneous background emission source, since it allows for the solution of only a single $C$ and $\tau$. However, the AGN emission is comprised of multiple, physically distinct sources, i.e., a continuum source and emission line regions (including multiple kinematic components), which have different sizes, morphologies, and flux distributions. Thus, in cases where the absorber only partially occults the total background emission, the distinct sources would be expected to have different line-of-sight covering factors, in general. This possibility was first explored by Ganguly et al. (1999) for the continuum source and broad emission line region (BLR), and was demonstrated in the intrinsic absorption systems by Ganguly et al., @gabe03, @hall03, and SK04.
To account for multiple discrete background emission sources, equation 1 can be expanded to give the normalized flux of the j$^{th}$ line $$I_j = \Sigma_i [ R_j^i (C_j^i e^{-\tau_j} + 1 - C_j^i) ],$$ where the i$^{th}$ individual emission source contributes a fraction $R_j^i = F_j^i / \Sigma_i [F_j^i]$ to the total intrinsic flux and has covering factor $C_j^i$. The [*effective*]{} covering factor for each line is the weighted combination of individual covering factors, and can be written: $$C_{j} = \Sigma_i C_j^i R_j^i$$ These are expansions of the expressions given in @gang99 to include an arbitrary number of emission sources.
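A minimal numerical illustration of equations 4 and 5, with assumed flux fractions and covering factors (the numbers are placeholders, not fitted values for Mrk 279):

```python
import numpy as np

def normalized_flux(R, C, tau):
    """Equation 4: normalized flux of one line given the flux fractions R_i
    and covering factors C_i of each emission source, and the line optical depth."""
    R, C = np.asarray(R, float), np.asarray(C, float)
    return np.sum(R * (C * np.exp(-tau) + 1.0 - C))

def effective_covering_factor(R, C):
    """Equation 5: flux-weighted combination of the individual covering factors."""
    R, C = np.asarray(R, float), np.asarray(C, float)
    return np.sum(R * C)

# Example with assumed numbers: a fully covered continuum source providing
# 60% of the flux and an emission line (40% of the flux) covered at 0.7.
R = [0.6, 0.4]
C = [1.0, 0.7]
print(effective_covering_factor(R, C))   # 0.88
```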
The multi-component nature of the background emission has several important implications for the analysis of AGN outflows:
$\bullet$ From equation 5 it can be seen that lines of the same ion could have different effective covering factors, which may introduce an error into the doublet equation. This happens when the underlying emission fluxes differ from the spectral position of one line to the other and is illustrated in the N[v]{} doublet absorption shown in Figure 3. Here, the emission line flux underlying the blue member is $\approx$ 15% greater than under the red line, while the continuum flux under the two lines is identical. The magnitude of this error depends on the slopes of the flux distributions between the doublet lines and differences in individual covering factors of the distinct emission sources. @gang99 showed this effect is typically small when considering the continuum/BLR distinction, due to the gradual slope of the BLR; however, if the doublet lines are near saturation, it could have a very large effect since small flux differences correspond to large optical depth differences in these cases. Also, an underlying narrow emission line component could have a pronounced effect on the solution [@arav02; @krae02; @gabe04].
![Spectrum of Ly$\alpha$ - N[v]{} (top panel) and Ly$\beta$ - O[vi]{} (bottom panel), illustrating complexities in treating covering factors in intrinsic absorption measurements. The continuum flux level, plotted as dashed lines, is essentially identical for each pair of lines, while the emission-line fluxes underlying each line differs greatly. This has important implications for the relative [*effective*]{} covering factors. Additionally, the nature of the BLR emission underlying these lines is complex: Ly$\beta$ absorption lies on the high-velocity blue wing of O[vi]{}, while N[v]{} absorption has a contribution from the red wing of the Ly$\alpha$ BLR.\[fig3\]](f3.ps){width="8.5cm"}
$\bullet$ Without separation of the covering factors of the distinct sources, covering factors derived from a doublet pair cannot be applied to measure column densities of other lines. This is evident in Figure 3; clearly the effective covering factor derived for N[v]{} is not applicable to Ly$\alpha$ if the individual continuum and emission-line covering factors differ since Ly$\alpha$ has much more underlying line flux. Similarly, Ly$\beta$, plotted in the bottom panel in Figure 3, will not generally have the same effective covering factor as Ly$\alpha$, due to the different emission line contributions under each line. However, if the individual covering factors are known, effective covering factors can be constructed for any line using equation 5. This is important for measuring singlet lines or contaminated multiplets that have no independent measure of the covering factor.
$\bullet$ Finally, the doublet solution misses potentially important, unique constraints on the absorption and emission geometry. For example, combined with estimates of the sizes of the individual sources derived from other techniques, the individual covering factors constrain the size of the absorber, e.g. @gabe03, and the relative location of the different emission components and absorber as projected on the plane of the sky. The individual covering factors can also serve as a unique probe of more detailed geometry of the background emission. Consider for example the O[vi]{} – Ly$\beta$ spectrum shown in Figure 3. The O[vi]{} doublet absorbs its own emission line flux at the blueshifted velocity of the outflow ($v_{BLR} \approx -$600 to $-$ 200 km s$^{-1}$), while the Ly$\beta$ absorber sits primarily on the high-velocity blue wing of the O[vi]{} BLR at $v_{BLR} \approx -$2200 km s$^{-1}$. The Ly$\beta$ line emission is relatively weak. Similarly, there is a contribution from the extreme red wing of the Ly$\alpha$ BLR profile under the N[v]{} absorption. Constraints on the emission line covering factors for these lines could be used to probe the kinematic-geometric structure of the BLR; the absorber can thus serve as a filter to view and explore the background AGN sources.
Systematic Errors in the $C$ – $\tau$ Solutions
-----------------------------------------------
Another limitation is that the doublet method always gives a solution, but it is often difficult to gauge its accuracy due to the non-linear dependency of the solution on measurement errors. To explore this, we have generated synthetic absorption profiles that include random fluctuations simulating spectral noise, and calculated $C$ and $\tau$ using the doublet equations. Illustrative results are shown in an Appendix, whereas a complete quantitative treatment will be presented in a later paper. We find there are systematic errors in the solutions that can give misleading results. These errors are not random about the actual value, but rather systematically underestimate the actual covering factor, as seen in the Appendix; the discrepancy in the solution increases with weaker absorption doublets and decreased S/N. Additionally, @gang99 demonstrated that the finite instrumental line spread function can lead to further systematic errors in the doublet solution.
Optimization Fitting of the Intrinsic Absorption: Lyman Series and CNO Doublet Global Line Fits
===============================================================================================
Motivated by the limitations of the traditional doublet method described above, we introduce here a different approach for measuring intrinsic absorption. The underlying principle is to increase the number of lines that are simultaneously fit in order to (a) explore additional parameters contributing to the formation of observed absorption troughs and (b) overconstrain the set of equations. This allows for the treatment of more complex, physically realistic scenarios of the absorption-emission geometry. Because the solution minimizes the residuals of a simultaneous fit to multiple lines, noise in the spectrum is generally smoothed out, in contrast to the erratic behavior of the doublet solution demonstrated in §3.2 and the Appendix.
Formalism
---------
Our fitting algorithm employs the Levenberg-Marquardt non-linear least-squares minimization technique to solve equation 4 for specified absorption parameters ($C_j^i$, $\tau_j$)[^1]. It is similar in principle to that used in SK04 to analyze the Lyman lines in earlier spectra of Mrk 279. Given a total of n observed absorption lines ($I_j$) as constraints, up to n$-$1 parameters can be modeled. No a priori assumptions are made about the kinematic distribution of the covering factors and optical depths of the absorbing material (e.g., Gaussian) – indeed, one goal is to solve for the velocity-dependent absorption parameters to constrain the kinematic-geometric structure of the mass outflow. Thus, we derive fits to the absorption equations for each velocity bin. This also avoids errors in the solutions resulting from averaging over variable profiles. The algorithm minimizes the $\chi^2$ function, with each data point appropriately weighted by the 1 $\sigma$ errors, which are a combination of spectral noise and estimated uncertainties in fitting the intrinsic underlying fluxes. For the latter, the continuum flux uncertainties were determined from the residuals between the power-law model and the line-free regions of the spectrum. We estimated uncertainties in the emission-line fluxes by testing different empirical fits over the absorption features, finding the range that gave what we deemed reasonable line profile shapes.
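As an illustration of the per-bin fit (not the actual code used for this work), the sketch below fits the three parameters of model B for lines sharing a lower level, such as the Lyman series, in a single velocity bin; the optical depths of the individual lines are tied to a reference line through known ratios. Because the covering factors are bounded to [0, 1], scipy's bounded trust-region variant is used in place of plain Levenberg-Marquardt, but the weighted $\chi^2$ that is minimized is the same. All names and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_bin_model_b(I_obs, sigma, R_cont, R_line, tau_ratio):
    """Fit C_c, C_l, and a reference optical depth in one velocity bin for
    lines sharing a lower level; tau_ratio[j] gives tau_j relative to the
    reference line.  All inputs are per-line arrays for this bin."""
    I_obs, sigma = np.asarray(I_obs, float), np.asarray(sigma, float)
    R_cont, R_line = np.asarray(R_cont, float), np.asarray(R_line, float)
    tau_ratio = np.asarray(tau_ratio, float)

    def residuals(p):
        C_c, C_l, tau_ref = p
        tau = tau_ratio * tau_ref
        model = (R_cont * (C_c * np.exp(-tau) + 1.0 - C_c)
                 + R_line * (C_l * np.exp(-tau) + 1.0 - C_l))
        return (model - I_obs) / sigma

    # Bounded least-squares; covering factors restricted to [0, 1].
    return least_squares(residuals, x0=[0.9, 0.7, 1.0],
                         bounds=([0, 0, 0], [1, 1, np.inf]))
```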
The key requirement in employing this technique is to link multiple absorption lines for simultaneous fitting. There are two general ways to do this:
$\bullet$ [*Lines from the Same Energy Level:*]{} The most straightforward way is to fit all available lines arising from the same ionic energy level, thereby eliminating uncertainties in ionic abundances or level populations. If the relative underlying fluxes from distinct emission sources differs between the lines, the individual covering factors of those emission sources can be derived. In the subsequent analysis, we fit all of the uncontaminated Lyman series lines in Mrk 279. These lines are ideal because they span a very large range in optical depth and have significantly different amounts of underlying emission-line flux [see @gabe03]. Additionally, the full set of lines is accessible in low redshift AGNs with combined [*FUSE*]{} and STIS spectra. We note the Fe[ii]{} UV multiplets, which appear in a small fraction of AGN absorbers [@deko01; @krae01], are another promising set of lines for this analysis.
$\bullet$ [*Global Fitting Approach:*]{} The second approach involves linking lines from different ions (or any group of lines arising from different levels) by placing physically motivated constraints on their absorption parameters. In our analysis of Mrk 279 below, we fit the six combined lines of the O[vi]{}, N[v]{}, and C[iv]{} doublets by assuming they share the same covering factors. Another potential application of this method is to link the absorption for a given line in spectra from different epochs via assumptions about the relative values of the absorption parameters between epochs (e.g., assuming the covering factors did not change). The validity of the assumptions used to link the equations can then be tested by the result of the fit.
Covering Factor and Optical Depth Solutions for Mrk 279
-------------------------------------------------------
For the intrinsic absorption in Mrk 279, we independently fit the two groups of lines described above: the Lyman series lines and the combined CNO doublets, i.e., the global fit. For each set of lines, we tested two different models of the absorption covering factor. In model A, a single covering factor was assumed to describe all lines, i.e., no distinction was made between the different emission sources. In model B, independent covering factors for the continuum source and emission lines, $C^c$ and $C^l$, were assumed. In this case, the general expressions in equations 4 and 5 reduce to those in @gang99.
For the Lyman lines, the solvable range is limited to $-$300 $\lesssim v \lesssim -$200 km s$^{-1}$, due to blending with the narrow, low-ionization components in the high velocity region of the outflow (see §2.2). This is due to the failure of equation 4 where multiple absorption components with different covering factors contribute in the same velocity bin; there is no straightforward way to disentangle how the different absorbers overlap as projected against the background emission sources. Figure 2 shows Ly$\alpha$, Ly$\beta$, and Ly$\gamma$ give the best constraints for the Lyman series analysis, exhibiting clean well-defined absorption profiles at relatively high S/N. Weak absorption in Ly$\delta$ is also present in component 2, while Ly$\epsilon$ is contaminated with Galactic H$_2$ absorption and thus omitted from the fitting. Ly$\zeta$ is not detected in components 2 – 2a within the limits of the spectral noise, thus to reduce the effect of noise on the solution, we set the normalized flux to unity for this line. All lines of higher order than Ly$\zeta$ were omitted from the analysis since they exhibit no intrinsic absorption and provide no additional constraints. Thus, there are five lines as constraints to fit the two and three free parameters for the Lyman solution in model A and B, respectively.
For the global fitting of the O[vi]{}, N[v]{}, and C[iv]{} doublets, the absorption equations were linked by assuming their covering factors are equal (separately for the continuum and emission lines in model B). The optical depth of each ion is a free parameter. As described in §2.2 and seen in Figure 2, these lines are not strongly contaminated by the narrow, low-ionization absorption that dominates the Lyman lines; at most, there is only weak contamination in the wings of the broad, intrinsic absorption features by these low-ionization systems.
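In the same spirit, a hypothetical sketch of the residual vector for one velocity bin of the global CNO fit under model B: the six doublet lines share $C^c$ and $C^l$, each ion carries its own optical depth, and the blue member of each doublet is assigned twice the optical depth of the red member.

```python
import numpy as np
from scipy.optimize import least_squares

# One velocity bin of the global CNO fit (model B).  R_cont and R_line hold
# the continuum and emission-line flux fractions under each of the six
# lines, ordered (O VI, N V, C IV) x (blue, red); I_obs and sigma likewise.
def residuals(p, I_obs, sigma, R_cont, R_line):
    C_c, C_l, tau_o6, tau_n5, tau_c4 = p
    tau = np.array([2 * tau_o6, tau_o6, 2 * tau_n5, tau_n5,
                    2 * tau_c4, tau_c4])
    model = (R_cont * (C_c * np.exp(-tau) + 1.0 - C_c)
             + R_line * (C_l * np.exp(-tau) + 1.0 - C_l))
    return (model - I_obs) / sigma

# fit = least_squares(residuals, x0=[0.95, 0.7, 2.0, 1.0, 0.5],
#                     args=(I_obs, sigma, R_cont, R_line),
#                     bounds=([0, 0, 0, 0, 0], [1, 1, np.inf, np.inf, np.inf]))
```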
![Best-fit covering factor and column density/optical depth solutions from $\chi^2$ minimization for the single covering factor geometric model (model A). Top panels show $C$ solutions, solved independently for the Lyman series (left panels) and combined CNO doublets (right). Bottom panels show the H[i]{} column density and C[iv]{} (dashed histogram), N[v]{} (dotted), and O[vi]{} (solid) optical depth solutions. The contaminated, and thus unreliable, region of H[i]{} absorption is plotted with dotted histograms. \[fig4a\]](f4a.ps){width="8.5cm"}
![ \[fig4b\]](f4b.ps){width="8.5cm"}
![Same as Figure 4 for the two covering factor geometric model (model B). Continuum covering factors are plotted in black and emission-line covering factors in red. \[fig5a\]](f5a.ps){width="8.5cm"}
![ \[fig5b\]](f5b.ps){width="8.5cm"}
![Best-fit intrinsic absorption profiles for the Lyman lines (left) and CNO doublets (right) from the two geometric models. The normalized observed spectrum is plotted in black, model A profiles are in blue, and model B profiles in red. The model profiles were derived from the best-fit covering factors and column densities shown in Figures 3 and 4 using equation 4. The contaminated region of the Lyman line fit (dotted lines in Figures 3,4) is not plotted. \[fig6a\]](f6a.ps){width="8.5cm"}
![ \[fig6b\]](f6b.ps){width="8.5cm"}
Best-fits to the covering factors and optical depths (or equivalently column densities) for both line groups are shown in Figure 4 (model A) and Figure 5 (model B). In all cases, the parameter space search for the covering factors was restricted to the physically meaningful range, 0 $\leq C^i \leq$ 1. The plotted error bars represent the formal 1 $\sigma$ statistical errors in the best-fit parameters. Specifically, these correspond to values giving $\Delta \chi^2 =$1, and were computed from the diagonal elements of the covariance matrix for the optimal fit [@bevi69]. For cases where the covering factor solution is a boundary value (0,1), e.g. much of the $C^c$ solution for the CNO doublets in model B, no covariance matrix elements are computed for that parameter since it is not a minimum in the solution. For these cases, we estimated uncertainties by deriving solutions for models keeping the parameter fixed, and finding the value giving $\Delta \chi^2 =$1 from the best-fit solution at the boundary value. To ensure the computations did not erroneously stop at local minima, we generated solutions using different starting points for the parameter search space; identical results were found in all cases. The fitted profiles, derived by inserting the best-fit solutions into equation 4, for both models (model A in blue, model B in red) are compared with the observed normalized absorption profiles in Figure 6.
Errors and Uncertainties
------------------------
There are some possible errors in this fitting that should be mentioned. First, the emission lines are treated as all arising from a single component. Line emission from distinct kinematic components, i.e., broad, intermediate, and narrow line regions, that are covered to different degrees by the absorbers could introduce an error into the solution (although the narrow line region in Mrk 279 is relatively weak). Also, there are cases where absorption features sit on the BLR emission from different lines and thus sample different velocities of the BLR profile (i.e., Ly$\beta$ and N[v]{} described in §3.1 and shown in Figure 3). This could introduce an error into the solution if there are spatial inhomogeneities in the BLR gas as a function of velocity. Second, the area on the sky sampled by the [*FUSE*]{} aperture is four orders of magnitude greater than that sampled by the STIS aperture. While this has no consequences for the continuum and BLR emission, which are unresolved and much smaller than the 0$\arcsec$.2 STIS aperture, any extended emission might affect the solutions. This could include scattering of nuclear emission by an extended scatterer [e.g. @krae01] or extended O[vi]{} NLR emission. Finally, we have assumed that the absorption optical depth of each line ($\tau_j$) is uniform across the lateral extent of the emission sources, and thus that the absorber is completely homogeneous [see @deko02]. Models departing from this assumption are presented in @arav04.
Discussion and Interpretation
=============================
Favored Absorption Model: Independent Continuum and Emission Line Covering Factors
----------------------------------------------------------------------------------
Figure 6 shows both geometric models are able to match the intrinsic absorption features well at most outflow velocities, indicating the solutions for these models and parameters are degenerate over much of the profiles. There are some regions fit somewhat better by model B, particularly C[iv]{} in the low-velocity outflow component. However, there is additional, stronger evidence that supports the two covering factor model for the outflow, as outlined below.
$\bullet$ [*Consistent Covering Factor Solutions From the Independent Lyman Line and Global CNO Doublet Fitting Methods:*]{} Figure 5 shows that both the Lyman and global CNO doublet best-fit solutions are consistent with full coverage of the continuum source over most of the velocity range with a valid Lyman solution. Further, the independent solutions for the two groups of lines are plotted together in Figure 7 for a direct comparison; the emission line covering factor profiles computed in model B are shown in the top panel and the single covering factors derived in model A are in the bottom panel. The H[i]{} and CNO $C^l$ fits are nearly identical over most of the core region of the absorption, with $C^l \approx$ 0.7 (although in the wing of the absorption, $v > -$250 km s$^{-1}$, the solutions diverge somewhat). In contrast, the Lyman method solution in model A is systematically less than the global doublet solution by about 0.1 – 0.15 at all velocities.
![Comparison of covering factor solutions for the two independent methods – Lyman and global CNO doublet lines. Best-fit solutions to the emission-line covering factors from model B are shown in the top panel. The single covering factor solutions from model A are in the bottom panel. \[fig7\]](f7.ps){width="8.5cm"}
$\bullet$ [*Consistency with the Emission Source Sizes:*]{} The continuum and emission line covering factor solutions in model B are also physically consistent with our understanding of the sizes of those emission sources. Reverberation mapping studies of AGNs have shown the BLR is substantially larger than the ionizing continuum source. These studies measure the BLR to be several to tens of light-days across [e.g. @pete98; @kasp00], while the UV continuum source is likely at least an order of magnitude smaller in size and possibly much smaller [e.g. @laor89; @prog03]. Thus, the solution of a fully covered continuum source and partial coverage of the emission lines is consistent with the nuclear emission geometry. The fact that this result was arrived at separately with the independent Lyman and global CNO fits is compelling.
$\bullet$ [*Variability in Ly$\alpha$ Absorption:*]{} In Figure 8, the normalized Ly$\alpha$ profile from our observation is compared with the STIS spectrum obtained a year earlier, showing the absorption was shallower in the previous epoch by $\sim$ 0.15 in normalized flux units. This variability is not due to a lower H[i]{} column density; the strength of Ly$\beta$ in a contemporaneous [*FUSE*]{} observation indicates Ly$\alpha$ was saturated in the earlier epoch (SK04), as it is in the 2003 spectrum, hence the profiles in both epochs simply delineate the unocculted flux. Thus, a change in covering factor must be responsible for the observed variability. Noting that the emission line-to-continuum flux ratio was greater in the 2002 spectrum than in 2003, we tested whether the different covering factors of the respective sources could explain the variability. To this end, we constructed a synthetic absorption profile for Ly$\alpha$ in the 2002 epoch based on the results from our analysis of the 2003 spectrum. Specifically, we set $C^c =$1 at all velocities, and solved for $C^l$ in the 2003 Ly$\alpha$ profile. The observed continuum and emission line fluxes in the 2002 spectrum were then weighted by these covering factors according to equation 4. The resulting model profile, shown in the bottom panel of Figure 8, matches the observed absorption very well. This provides a natural explanation for the variability in the Ly$\alpha$ absorption profile: the covering factors of each source individually remain the same, but a change in the [*effective*]{} covering factor occurs due to different relative strengths of the distinct background emission components.
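A sketch of this construction (the function and variable names are illustrative, and all quantities are arrays over velocity): the 2003 fit supplies $C^c =$ 1, $C^l$, and the saturated optical depth, and only the flux weights are replaced by their 2002 values.

```python
import numpy as np

def predicted_profile(F_cont, F_line, C_c, C_l, tau):
    """Predicted absorption profile for a new epoch (equation 4), using the
    continuum/emission-line fluxes of that epoch and the covering factors
    and optical depths carried over from the fitted epoch.  For saturated
    Ly-alpha (tau >> 1) the exponential terms are negligible, so the profile
    simply traces the unocculted fraction of the total flux."""
    F_tot = F_cont + F_line
    R_c, R_l = F_cont / F_tot, F_line / F_tot
    return (R_c * (C_c * np.exp(-tau) + 1.0 - C_c)
            + R_l * (C_l * np.exp(-tau) + 1.0 - C_l))
```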
![Variability in Ly$\alpha$ absorption. The top panel shows Ly$\alpha$ absorption in components 2–2a was shallower in a May 2002 STIS spectrum (solid histogram) than in May 2003 (dashed). The bottom panel shows a model of the 2002 profile (dotted), constructed with individual covering factors derived from the 2003 spectrum (Figure 5), matches the observed profile well. This indicates the variation in absorption depth is likely due to a change in [*effective*]{} covering factor resulting from different relative strengths of the continuum and emission-line fluxes in the two epochs.\[fig8\]](f8.ps){width="8.5cm"}
Comparison of Solutions from Different Methods
----------------------------------------------
The covering factor and optical depth solutions for C[iv]{}, N[v]{}, and O[vi]{} from the two-$C$ global fit (model B) are compared with the single doublet solutions (equations 2 and 3) in Figure 9. The global fit effective covering factors are weighted combinations of the individual covering factors shown in Figure 5, derived using equation 5. In Figure 9, solid red lines show $C$ profiles for the short-wavelength members of each doublet and dotted lines show results for the long-wavelength lines; they are nearly identical in all cases, indicating negligible errors in the doublet solution due to different contributions from underlying line emission (see first bullet in §3.1).
![Comparison of covering factor and optical depth solutions derived with the global-fit (red) and single doublet (black) methods. Left panels: Covering factors from the global-fitting method are the effective values derived by weighting the individual covering factors in model B by their respective fluxes. The short-wavelength doublet members are plotted with solid red lines and the long-wavelength members with dotted lines. Right panels: Optical depths for the long-wavelength lines of the doublets. \[fig9a\]](f9a.ps){width="8.5cm"}
![ \[fig9b\]](f9b.ps){width="8.5cm"}
Figure 9 shows the solutions from the two methods differ significantly in some regions, with important implications for the kinematic-geometric and -ionization structure in the outflow. For example, the doublet covering factor solution, plotted in black, exhibits much more velocity-dependence than the global fit. This is seen, for example, in the wings of C[iv]{} and N[v]{} in component 4a and the red wing of component 2, where $C$ decreases strongly in the wings, while the optical depth profiles show little or no relation to the absorption trough structure when compared with Figure 2. Further, the kinematic structure in these ions is much different from the O[vi]{} solution. In these cases, the doublet solution implies the observed absorption profiles are determined almost entirely by velocity-dependent covering factor. Additionally, in some regions, particularly in C[iv]{}, the doublet solutions for $C$ and $\tau$ exhibit sharp fluctuations over small velocity intervals that do not coincide with structure in the observed absorption features, nor with the solutions from the other ions (e.g., components 2a – 2c, and 4a).
Comparison with the observed profiles in Figure 2 shows these effects in the doublet solution are strongly correlated with the absorption strength. Smaller derived covering factors are associated with weaker absorption, both in the wings of individual kinematic components and in overall absorption features, such as C[iv]{} components 2c and 2a. As discussed in the Appendix and §3.2, synthetic absorption profiles show the doublet solution suffers from systematic errors consistent with this general trend – spectral noise causes an underestimate of the covering factor. The simulations we present in the Appendix indicate the solutions become unreliable for cases where $\tau \lesssim$ 1 in the blue doublet member, and increasingly so for lower $\tau$. This, combined with the large, seemingly random fluctuations seen in certain regions suggests some features in the doublet solutions in Figure 9 are artifacts due to noise and not real features in the outflow. However, one puzzling aspect of the solutions, in comparison to the results in the Appendix, is that there are not more negative values for the covering factors. The simulations predict that for sufficiently low optical depth, there is a high probability that $C <$0. This should be especially apparent for intermediate values of $\tau$, such as the middle two models presented in the Appendix, where dramatic fluctuations between large positive and negative values are expected. It is unclear why this is not more prevalent in the solutions in Figure 9.
In Table 2, we compare integrated column densities measured from the different solutions: the two geometric coverage models presented in §4.2, plus the traditional single doublet solution for the CNO doublets. The integrated column densities for the doublet solution are larger than the global fits in all cases. In component 2$+$2a, the doublet solution is 60% greater than the 2-$C$ model for C[iv]{}, while in component 4a, it is 45% and 70% greater for C[iv]{} and N[v]{}, respectively. Comparing the two global fit models, the CNO column densities from model B are greater than the single $C$ model by 0 – 30% , while the H[i]{} column density is 50% less.
Constraints on the Nuclear Absorption – Emission Geometry
---------------------------------------------------------
Here we explore constraints on the nuclear absorption and emission geometry available from the covering factor solutions derived in §4.2. Full analysis of the physical conditions in the outflow, using photoionization modeling of the combined UV and X-ray absorption, will be presented in a future study.
As discussed in §5.1, we adopt the 2-$C$ global fit (model B) for the covering factors and column densities. The covering factor solutions in Figure 5 are consistent with the UV continuum source being fully in our sight-line through the absorber, for all kinematic components, while the emission lines are only partially so. This constrains the relative line-of-sight geometry of the absorbers and nuclear emission sources and places a lower limit on the transverse size of the outflow. Monitoring of time lags between BLR and continuum variations in Mrk 279 by @maoz90 gives the size of the BLR, $R_{BLR} =$ 12 light days. Thus, the projected size of the UV absorbers on the plane of the sky, against the AGN emission, is at least $(C^l)^{1/2} \times R_{BLR}$, or $\approx$10 light days in the cores of the absorption components, and possibly decreasing in the wings.
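For reference, inserting the core value $C^l \approx$ 0.7 from §5.1 and $R_{BLR} =$ 12 light days gives $$ (C^l)^{1/2} \, R_{BLR} \approx \sqrt{0.7} \times 12 \ \mbox{light days} \approx 10 \ \mbox{light days}, $$ which is the lower limit on the projected transverse extent of the absorber quoted above.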
  Ion         Comp.    Model A   Model B   Doublet   Inhomogeneous
  ----------- -------- --------- --------- --------- ---------------
  C[iv]{}     2$+$2a   1.6       2.1       3.3       1.5
  C[iv]{}     4a       0.9       1.1       1.6       0.9
  N[v]{}      2$+$2a   $>$6.6    $>$6.6    $>$8.1    4.7
  N[v]{}      4a       2.1       2.1       3.6       2.0
  O[vi]{}     2$+$2a   $>$14     $>$14     $>$14     9.8
  O[vi]{}     4a       $>$13     $>$13     $>$13     9.5
  H[i]{}      2$+$2a   7.5       5.0       ...       4.4

  : Integrated ionic column densities from the different solutions (Table 2). The last column lists the values from the inhomogeneous-absorber analysis of [@arav04].
An additional probe of the nuclear geometry is possible because the Ly$\beta$ absorption lies on the high-velocity blue wing of the O[vi]{} BLR profile, at $v_{BLR} \approx -$2200 km s$^{-1}$, as discussed in §3.1 and illustrated in Figure 3. Thus, it samples different BLR kinematics than most other absorption lines, which absorb BLR velocities coinciding with the absorption outflow velocities ($v_{BLR} \approx -$300 km s$^{-1}$). Therefore, comparison of the Ly$\beta$ emission-line covering factor and those associated with other lines serves as a probe of the kinematic-spatial structure of the BLR. The potential effect of complex velocity-dependent structure in the BLR on the absorption covering factors was explored by @sria99. Since we assumed a priori that all lines share the same individual covering factors in the Lyman series fit in §4.2, a legitimate comparison requires obtaining independent constraints on the covering factor of the O[vi]{} BLR blue wing by the Ly$\beta$ absorber. This comes straightforwardly from the observed Ly$\gamma$ absorption line, since Ly$\gamma$ has no underlying line emission. Using the result that $C^c =$1, the intrinsic optical depth ratio for Ly$\beta$ : Ly$\gamma$, and the observed Ly$\gamma$ profile, the absorption profile for Ly$\beta$ can be derived as a function of $C^l$ from equation 4. Some illustrative results are shown in Figure 10. This shows, for example, that both an unocculted and fully occulted high velocity O[vi]{} BLR are ruled out by the data. We find that values ranging from 0.5 $\lesssim C^l \lesssim$ 0.8 are required to fit the majority of the Ly$\beta$ absorption in components 2 – 2a. This is similar to the emission line covering factor derived from the global fit to the CNO doublets (Figure 5b). Additionally, the Ly$\alpha$ $C^l$ profile can be derived independently for comparison with Ly$\beta$, in the same manner as for Ly$\beta$ (i.e., using $C^c =$1, the observed Ly$\gamma$ profile, and the Ly$\alpha$ : Ly$\gamma$ $\tau$ ratio). The result is identical, within uncertainties, to the solution for the combined Lyman lines in components 2 – 2a shown in Figure 5a. Therefore, the absorption covering factor of the O[vi]{} BLR emission at $v_{BLR} \approx -$2200 km s$^{-1}$ is similar to $C^l$ at lower emission-line velocities (by the CNO doublets and Ly$\alpha$). These results may provide constraints for models of the BLR, e.g., testing disk vs. spherical geometries and outflow vs. rotational kinematics for the BLR.
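A sketch of this construction in Python, assuming $C^c =$ 1 and approximate $f\lambda$ values for the Lyman lines (the atomic data below are standard but approximate, and the function and variable names are illustrative):

```python
import numpy as np

# Approximate f * lambda values for the Lyman lines; tau is proportional to
# f * lambda for lines arising from the same lower level.
FLAM = {'Lya': 0.4164 * 1215.67,
        'Lyb': 0.0791 * 1025.72,
        'Lyg': 0.0290 * 972.54}

def lyb_from_lyg(I_lyg, R_cont_b, R_line_b, C_l, C_c=1.0):
    """Predict the Ly-beta profile from the observed Ly-gamma profile for a
    trial emission-line covering factor C_l, assuming C_c = 1 and no line
    emission under Ly-gamma (equation 4)."""
    I_lyg = np.asarray(I_lyg, float)
    # With C_c = 1 and no underlying line flux, I_lyg = exp(-tau_lyg).
    tau_lyg = -np.log(np.clip(I_lyg, 1e-6, 1.0))
    tau_lyb = tau_lyg * FLAM['Lyb'] / FLAM['Lyg']
    return (R_cont_b * (C_c * np.exp(-tau_lyb) + 1.0 - C_c)
            + R_line_b * (C_l * np.exp(-tau_lyb) + 1.0 - C_l))
```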
[Figure 10: Illustrative Ly$\beta$ absorption profiles derived from the observed Ly$\gamma$ profile for a range of emission-line covering factors $C^l$ (see text).]{width="17cm"}
Summary
=======
We have presented a study of the intrinsic UV absorption in the Seyfert 1 galaxy Mrk 279 from an analysis of combined long observations with [*HST*]{}/STIS and [*FUSE*]{}. These spectra were obtained simultaneously in May 2003 as part of an intensive multiwavelength observing campaign.
We present a review of the standard technique for measuring intrinsic UV absorption parameters based on individual doublet pairs, showing some key limitations of this method: 1) It cannot treat multiple background emission sources. This introduces a potential error in the solution and misses important geometric constraints on the outflow. 2) Using synthetic absorption profiles, we show it systematically underestimates the covering factor (and overestimates $\tau$) in response to spectral noise. The discrepancy in the solution is shown to be strongly dependent on absorption strength.
To measure the UV absorption parameters in Mrk 279, we independently fit two groups of lines: the Lyman series lines and the combined CNO lithium-like doublets. The doublet fitting involved a global fitting approach, which assumes the same covering factors apply to all ions. By increasing the number of lines that are simultaneously fit, more complex and physically realistic models of the absorption-emission geometry can be explored. Solutions for two different geometrical models, one assuming a single covering factor for all background emission and the other separate covering factors for the continuum and emission lines, both give good statistical fits to the observed absorption. However, several lines of evidence support the model with two covering factors: 1) the independently fit Lyman lines and CNO doublets give similar solutions to the covering factors of both emission sources; 2) the fits are consistent with absorbers that fully occult the continuum source and partially cover the emission lines, consistent with the relative sizes of the emission sources; and 3) observed variability in the Ly$\alpha$ absorption depth can be explained naturally by this model as a change in effective covering factor resulting from a change in the relative strengths of the emission components.
Comparison of the traditional solutions based on individual doublets and the global-fit solutions shows the former exhibits much stronger velocity dependence. This is seen as decreases in covering factor in the wings of individual kinematic components, and as peculiar fluctuations in both $\tau$ and $C$ in other regions of relatively weak absorption. In light of the systematic errors shown to be inherent in the individual doublet solution, we conclude some of these effects are likely artifacts of the solution and should be interpreted with caution.
The covering factor solutions from our global fit constrain the relative line-of-sight geometry of the absorbers and nuclear emission sources. The derived emission line covering factor, combined with the size of the BLR, constrains the projected size of the absorber to be $\gtrsim$ 10 light days. We utilize the coverage of the high velocity O[vi]{} BLR by the Ly$\beta$ absorber to explore kinematic structure in the BLR; we find no evidence for dependence of the absorber’s BLR covering factor on the BLR velocity.
Support for this work was provided by NASA through grants HST-AR-9536, HST-GO-9688, and NAGS 12867 and through [*Chandra*]{} grant 04700532. We thank D. Lindler and J. Valenti for their assistance in correcting the STIS spectrum and C. Markwardt for making his software publicly available. We also thank the referee P. Hall for comments that helped clarify and improve this study.
Appendix
========
Here, we address how the expressions for covering factor and optical depth from the doublet solution (equations 2 and 3) depend on noise in the spectrum. We have generated synthetic absorption profiles for doublet pairs that include random fluctuations to simulate spectral noise. We derive the $C$ – $\tau$ solutions for these synthetic profiles to determine any trends in the solution as a function of noise level and absorption strength.
Our synthetic profiles were derived in the following way. The optical depth profiles were assumed to be Gaussian, parameterized by the width ($\sigma$) and peak optical depth in the core of the blue line ($\tau_{max}$). The “true” normalized absorption profiles for the lines are then derived from equation 1, with the covering factors set at a constant value across the profile for each doublet pair, and $\tau(v)_r = \tau(v)_b$/2 at all velocities. Thus, we have assumed in effect a single background emission source to avoid the complications described in §3.1. For comparison with our study of Mrk 279, we have set the velocity resolution of our synthetic profiles to that of the STIS E140M grating, and the absorption width ($\sigma$ = 50 km s$^{-1}$) to be approximately consistent with intrinsic absorption components 2–2c. To simulate spectral noise, we generated a normally distributed random number associated with each velocity bin in the synthetic spectra. The noise level was normalized by selecting the desired S/N in the unabsorbed continuum and then weighting the noise by flux level, S/N$(v) \propto I(v)^{1/2}$, according to Poisson statistics.
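A minimal sketch of this Monte Carlo, following the parameter choices quoted in the text ($\sigma =$ 50 km s$^{-1}$, $C =$ 0.8, 3% continuum noise); the velocity sampling and the random seed are arbitrary choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_doublet(tau_max, C=0.8, snr=33.3, sigma_v=50.0,
                      dv=6.5, vmax=300.0):
    """Gaussian optical depth profile for the blue member (tau_r = tau_b / 2),
    true profiles from equation 1, and Poisson-like noise scaled so that the
    continuum S/N matches the requested value (3% noise -> snr ~ 33)."""
    v = np.arange(-vmax, vmax + dv, dv)        # ~E140M resolution element
    tau_b = tau_max * np.exp(-0.5 * (v / sigma_v)**2)
    tau_r = 0.5 * tau_b
    I_b = 1.0 - C + C * np.exp(-tau_b)
    I_r = 1.0 - C + C * np.exp(-tau_r)
    noise = 1.0 / snr                          # continuum noise level
    I_b = I_b + rng.normal(0.0, noise * np.sqrt(I_b))
    I_r = I_r + rng.normal(0.0, noise * np.sqrt(I_r))
    return v, I_b, I_r

# Doublet solution (equation 2) applied to one noisy realization:
v, I_b, I_r = synthetic_doublet(tau_max=0.5)
C_derived = (I_r**2 - 2 * I_r + 1) / (I_b - 2 * I_r + 1)
```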
We generated profiles for a range of S/N, $C$ (real), and $\tau_{max}$. Here, we give some brief illustrative results, while reserving a full analysis for a later study. The left panels in Figure 11 show synthetic profiles for doublet pairs with $\tau_{max} =$ 2, 1, 0.5, and 0.25. In all cases, $C =$0.8 and the noise level was normalized to be 3% in the continuum. The middle two panels show the corresponding covering factor and optical depth profiles derived directly from equations 2 and 3, with the actual values marked with dashed lines for comparison. Errors in the doublet solution are immediately apparent. The covering factors are systematically underestimated, and the magnitude of error is strongly dependent on absorption strength. This is seen both in the lower covering factors derived in the cores of features with lower $\tau_{max}$ (means and standard deviations measured over the central 150 km s$^{-1}$ are printed in each plot), and in the decrease in $C$ computed in the wings of each profile.
*Figure 11. Synthetic doublet profiles for $\tau_{max} =$ 2, 1, 0.5, and 0.25 (left), the covering factor and optical depth profiles derived from equations 2 and 3 (middle), and the quantities $D$ and $N/C$ (right).*
These systematic errors are due to non-linear effects in the doublet equations. This is seen most clearly by comparing the numerator, $N = (I_r - 1)^2$, and denominator, $D = I_b - 2 I_r + 1$, in the expression for covering factor (equation 2). The right panels of Figure 11 show the values of $D$ and $N / C$ (in red), which would be identical in each velocity bin for infinite S/N. Due to the forms of $N$ and $D$, these quantities have very different dependences on noise; $\Delta N / N$ becomes much smaller than $\Delta D / D$ for weak absorption and, at sufficiently small $\tau$, $N$ is less than the noise level of $D$. As $N$ decreases relative to the noise in $D$ for weaker absorption, the probability that $0 \leq D \leq N / C$ becomes vanishingly small, and the average value of $N / D$ becomes increasingly small.
These errors could have pronounced effects on the interpretation of the outflow. Each solution that underestimates $C$ overestimates $\tau$. Thus, ionic column densities are systematically overestimated, with increasing relative discrepancies in weaker doublets, leading to errors in determining the ionization structure and total gas in the absorber via photoionization models. Additionally, the errors in the covering factor solutions will affect geometric inferences. For example, due to the high-ionization state of AGN outflows, lower-ionization species appearing in UV spectra are generally weaker. Thus, the increasing discrepancy in weaker absorption doublets may lead to the misinterpretation of ion-dependent covering factors. Also, weaker absorption in the wings of an absorption feature could lead to apparent velocity-dependent covering factors that are instead due to optical depth variations, or at least exaggerate the effect.
[^1]: Using software provided by C. Markwardt, http://cow.phys.wisc.edu/$\sim$craigm/idl/idl.html, which is based on the MINPACK-1 optimization software of J. Moré available at www.netlib.org
---
author:
- 'David Marpaung, Chris Roeloffzen, René Heideman, Arne Leinse, Salvador Sales, and José Capmany'
bibliography:
- 'IEEEabrv.bib'
- 'LPR-IMWP.bib'
title: Integrated microwave photonics
---
Introduction {#sec:intro}
============
Microwave photonics (MWP) [@CapmanyNatPhoton2007; @SeedsMWP2002; @SeedsMWP2006; @YaoMWP2009], a discipline which brings together the worlds of radio-frequency engineering and optoelectronics, has attracted great interest from both the research community and the commercial sector over the past 30 years and is set to have a bright future. The added value that this area of research brings stems from the fact that, on the one hand, it enables the realization of key functionalities in microwave systems that are either complex or not even directly possible in the radio-frequency domain and, on the other hand, that it creates new opportunities for information and communication technology (ICT) systems and networks.
While initially the research activity in this field was focused on defense applications, MWP has recently expanded to address a considerable number of civil applications [@CapmanyNatPhoton2007; @SeedsMWP2002; @SeedsMWP2006; @YaoMWP2009], including cellular, wireless and satellite communications, cable television, distributed antenna systems, optical signal processing and medical imaging. Many of these novel application areas demand ever-increasing values of speed, bandwidth and dynamic range, while at the same time requiring devices that are small, lightweight and low-power, and that exhibit large tunability and strong immunity to electromagnetic interference. Despite the fact that digital electronics is widely used nowadays in these applications, the speed of digital signal processors (DSPs) is normally less than several gigahertz (a limit established primarily by the electronic sampling rate). In order to preserve the flexibility brought by these devices, there is a need for equally flexible front-end analog solutions to precede the DSP. In other words, there is a need for a wideband and highly flexible *analog signal processing engine*. Microwave photonics offers this functionality by exploiting the unique capabilities of photonics to bring advantages in terms of size, weight and power (SWAP) budgets in radio-frequency signal processing.
One of the main driving forces for MWP as an emerging technology is expected to come from broadband wireless access networks installed in shopping malls, airports, hospitals, stadiums, and other large buildings. The market for microwave photonic equipment is likely to grow with consumer demand for wireless gigabit services. For instance, the IEEE WiMAX (Worldwide Interoperability for Microwave Access) standard has recently been upgraded to handle data rates of 1 Gbit/s, and it is envisaged that many small, WiMAX-based stations or picocells will soon start to spring up. In fact, with the proliferation of tablet devices such as the iPad, more wireless infrastructure will be required. Furthermore, it is also expected that the demand for microwave photonics will be driven by the growth of fiber links directly to the home and the proliferation of converged and in-home networks. To cope with this growth scenario, future networks will be expected to support wireless communications at data rates reaching multiple gigabits per second. In addition, the extremely low power consumption of an access network comprised of pico- or femtocells would make it much greener than current macrocell networks, which require high-power base stations.
For the last 25 years, MWP systems and links have relied almost exclusively on discrete optoelectronic devices, standard optical fibers and fiber-based components, which have been employed to support several functionalities like RF signal generation, distribution, processing and analysis. These configurations are bulky, expensive and power-consuming, while lacking in flexibility. We believe that a second generation, termed *Integrated Microwave Photonics* (IMWP), which aims at the incorporation of MWP components/subsystems in photonic circuits, is crucial for the implementation of both low-cost and advanced analog optical front-ends and, thus, instrumental in achieving the aforementioned evolution objectives. This paper reviews the salient advances reported in recent years in this emerging field.
Fundamentals of microwave photonics {#sec:fundMWP}
===================================
The heart of any MWP system is an *MWP link*. As depicted in Figure \[MWPlink\] (a), the link consists of a modulation device for electrical-to-optical (E/O) conversion, connected by an optical fiber to a photodetector that performs the O/E conversion. Most MWP links used today employ intensity modulation-direct detection (IMDD), although, as will be discussed in Section \[sec:APL\], phase or frequency modulation schemes in combination with either direct detection or coherent detection are also gaining popularity. From the point of view of the modulation device, the modulation schemes employed in MWP links can be divided into two broad categories: direct modulation and external modulation. In the former, the modulation device is a directly modulated laser (DML) that acts as both light source and modulator, while in external modulation the modulation device consists of a continuous wave (CW) laser and an external electro-optic modulator (EOM). Virtually all MWP links use p-i-n photodetectors for the O/E conversion. An excellent review of the range of devices that have been used in MWP links can be found in Chapter 2 of Ref. [@CoxBook2004].
An *MWP system* is established by adding functionalities between the two conversions, i.e. by processing in the optical domain (Figure \[MWPlink\] (b)). The advantages of optical processing include the large bandwidth, constant attenuation over the entire microwave frequency range, small size, light weight, immunity to electromagnetic interference and the potential for large tunability and low power consumption. The capabilities of such MWP systems include the generation, distribution, control and processing of microwave signals. Some of the key functionalities in this case are high-fidelity microwave signal transport, true time delay and phase shifting of microwave signals, frequency-tunable and highly selective microwave filtering, frequency up- and down-conversion, and microwave carrier and waveform generation.
In order to obtain full functionality from an MWP system, the MWP link needs to reach sufficient performance. The main hurdle here is the fact that the E/O and O/E conversions add loss, noise and distortion to the RF signal being processed. Moreover, the relation between the RF loss and the optical loss in the MWP link is quadratic, which means that it is imperative to minimize the optical losses. For these reasons, the best part of the MWP activities of the 1980s and 1990s was dedicated to designing and optimizing the performance of MWP links. In the next section the figures of merit of MWP links are described. Comprehensive reviews of the progress in MWP link performance are reported in [@CoxMTT1997; @CoxMTT2006].
![Schematics of (a) an MWP link and (b) a simple MWP system. The MWP link basically consists of a modulation device for E/O conversion and a photodetector for O/E conversion. Such an MWP link with added functionalities between the conversions constitutes an MWP system.[]{data-label="MWPlink"}](MWPlink){width="\linewidth"}
Figures of merit {#subsec:FOM}
----------------
The important figures of merit for MWP links are link gain, noise figure, input/output intercept points and spurious free dynamic range (SFDR). These metrics show the impact of losses, noise and nonlinearities in the link.
The **link gain** describes the RF-to-RF power transfer in the MWP link or system. Due to the limited conversion efficiencies of the modulation device and the photodetector, it is common for the MWP link to show negative link gain on the decibel scale, i.e. a net loss. This is especially true for the case of direct modulation, since the link gain then depends on only three parameters, namely the laser slope efficiency, the photodetector responsivity and the optical loss in the system. In the case of lossy impedance matching[^1] employed at the laser and the photodetector, the link gain of a direct modulation link can be expressed as
$$\label{eq:DMLgain}
g_{\mathrm{link,DM}} = \frac{1}{4}\left(\frac{s_{\mathrm{LD}}r_{\mathrm{PD}}}{L}\right)^2$$
where $s_{\mathrm{LD}}$ is the laser slope efficiency in W/A, $r_{\mathrm{PD}}$ is the photodiode responsivity in A/W and $L$ is the optical loss in the link, with $L\in\left[1,\infty\right)$. In the case of external modulation, the link gain is a function of relatively more parameters, involving the laser, the EOM and the photodetector. For example, for a link employing a CW laser with output optical power $P_{\mathrm{o}}$ and a Mach-Zehnder modulator (MZM) with an insertion loss of $L_{\mathrm{MZ}}$, an RF half-wave voltage of $V_{\pi,\mathrm{RF}}$, and biased at the angle $\phi_{\mathrm{b}}=\pi V_{\mathrm{b}}/V_{\pi,\mathrm{DC}}$, the link gain for the case of lossy impedance matching can be expressed as
$$\label{eq:MZgain}
g_{\mathrm{link,MZ}}=\left(\frac{\pi\,r_{\mathrm{PD}}\,R_{\mathrm{L}}\,P_{\mathrm{o}}\,\sin{\phi_{\mathrm{B}}}}{4\,L_{\mathrm{MZ}}\,V_{\pi,\mathrm{RF}}}\right)^2$$
where $R_{\mathrm{L}}$ is the load resistance and $V_{\pi,\mathrm{RF}}$ is the half-wave voltage. A careful look at Eq. (\[eq:MZgain\]) reveals that the link gain scales quadratically with the input optical power from the laser. This means that the link gain can be increased by pumping more optical power into the system. This technique has been used effectively to demonstrate MWP links with net gain (i.e. positive link gain) instead of loss; a value as high as +44 dB has been demonstrated using an ultra-low half-wave voltage MZM [@UrickElectLett2006]. Comparing Eq. (\[eq:DMLgain\]) and Eq. (\[eq:MZgain\]), one can see that the gain of a direct modulation MWP link is relatively more difficult to increase, since most of the time the laser and photodetector have a fixed range of slope efficiency and responsivity.
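For concreteness, the sketch below evaluates Eq. (\[eq:DMLgain\]) and Eq. (\[eq:MZgain\]) in Python for one set of representative component values; all numbers (slope efficiency, responsivity, optical power, losses, half-wave voltage) are illustrative choices and are not taken from the cited demonstrations.

```python
import numpy as np

def link_gain_dm(s_ld, r_pd, loss):
    """Direct-modulation RF link gain, Eq. (eq:DMLgain).
    s_ld: laser slope efficiency (W/A); r_pd: responsivity (A/W);
    loss: optical power loss factor L >= 1."""
    return 0.25 * (s_ld * r_pd / loss) ** 2

def link_gain_mzm(r_pd, R_L, P_o, phi_b, L_mz, V_pi_rf):
    """Externally modulated (MZM) RF link gain, Eq. (eq:MZgain)."""
    return (np.pi * r_pd * R_L * P_o * np.sin(phi_b)
            / (4.0 * L_mz * V_pi_rf)) ** 2

def to_db(g):
    """Linear power gain to decibels."""
    return 10.0 * np.log10(g)

if __name__ == "__main__":
    # Illustrative component values (not taken from the cited references)
    print("DM link gain : %.1f dB" % to_db(link_gain_dm(s_ld=0.3, r_pd=0.8, loss=2.0)))
    print("MZM link gain: %.1f dB" % to_db(link_gain_mzm(
        r_pd=0.8, R_L=50.0, P_o=0.1, phi_b=np.pi / 2, L_mz=2.0, V_pi_rf=4.0)))
```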
The E/O and O/E conversions also add **noise** to the MWP system. The dominant noise sources are thermal noise, shot noise and relative intensity noise (RIN). In systems with optical amplifiers, the amplified spontaneous emission (ASE) noise of the amplifier will often dominate over the other sources. The total noise power in the link (with a receiver electrical bandwidth $B$) comprises the electrical powers delivered to *a matched load* by the three sources considered above. Hence:
$$\label{eq:noisepower}
p_{\mathrm {N}}= \left(1+g_{\mathrm{link}}\right)\,p_{\mathrm {th}}+\frac{1}{4}\,p_{\mathrm {shot}}+\frac{1}{4}\,p_{\mathrm {rin}}$$
where $p_{\mathrm {th}}$, $p_{\mathrm {shot}}$ and $p_{\mathrm {rin}}$ are the thermal noise, shot noise and RIN powers, defined as
$$p_{\mathrm {th}}=kTB\,$$
$$p_{\mathrm {shot}}=2q\,{I_{\mathrm {D}}}BR_{\rm L}\,$$
$$p_{\mathrm {rin}}=\mathrm{RIN}\,{I_{\mathrm {D}}}^2BR_{\mathrm {L}}\,.$$
The quantity $I_{\mathrm{D}}$ in the equations above is the average photocurrent. For an MWP link with MZM, the photocurrent can be expressed as
$$\label{eq:IavMZ}
I_{\mathrm {D,MZ}}=\frac{r_{\mathrm {PD}}\,P_{\rm o}}{2L_{\mathrm{MZ}}}\left(1-\cos\phi_{\mathrm B}\right)\,.$$
The **noise figure** is a useful metric that measures the signal-to-noise ratio (SNR) degradation in the system, expressed in decibels. It is determined by the noise power and the link gain and can be written as
$$\label{eq:noisefigure}
{\mathrm {NF}}= 10\log_{10}\left(\frac{p_{\mathrm N}}{g_{\mathrm{link}}kTB}\right)\,.$$
The noise figure is often used as an important measure of the usefulness of MWP links and systems. In recent years, efforts have been directed towards realizing MWP links with a sub-10 dB noise figure. Several groups have succeeded in achieving this by low biasing a low-$V_{\pi}$ MZM in conjunction with a very high power optical source and a customized high power-handling photodetector [@AckermanIMS2007; @KarimPTL2007]. Low biasing the MZM away from quadrature $\left(\phi_{\mathrm{b}}=\pi/2\right)$ towards the minimum transmission point $\left(\phi_{\mathrm{b}}=0\right)$ is advantageous for the noise figure because the noise power decreases faster with the bias than the link gain does, as can be seen by comparing Eq. (\[eq:noisepower\]) and Eq. (\[eq:MZgain\]), with the bias dependence of the photocurrent given by Eq. (\[eq:IavMZ\]). A sub-10 dB noise figure has also been achieved by using a high power source, a dual-output MZM and a balanced detection scheme [@AckermanIMS2007; @McKinneyPTL2007]. By carefully matching the path lengths of the fibers going to the balanced photodetector (BPD), the RIN, which is common-mode noise in the two paths, can be canceled. The most recent review of progress towards MWP links with $G > 0~\mathrm{dB}$ and $\mathrm{NF} < 20~\mathrm{dB}$ can be found in [@UrickIMS2011].
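The noise terms of Eq. (\[eq:noisepower\]) and the resulting noise figure of Eq. (\[eq:noisefigure\]) can be evaluated along the same lines. The sketch below does this for one operating point; the photocurrent, RIN level and link gain are assumptions made here for the example, not values from the cited work.

```python
import numpy as np

k_B = 1.380649e-23      # Boltzmann constant (J/K)
q   = 1.602176634e-19   # electron charge (C)

def noise_figure_db(g_link, I_D, rin_db_hz, R_L=50.0, T=290.0, B=1.0):
    """Noise figure (dB) of an IMDD link from Eqs. (eq:noisepower)/(eq:noisefigure).
    g_link: linear RF link gain; I_D: average photocurrent (A);
    rin_db_hz: laser RIN (dB/Hz); B: receiver bandwidth (Hz)."""
    p_th   = k_B * T * B                                      # thermal noise
    p_shot = 2.0 * q * I_D * B * R_L                          # shot noise
    p_rin  = 10.0 ** (rin_db_hz / 10.0) * I_D ** 2 * B * R_L  # RIN
    p_N = (1.0 + g_link) * p_th + 0.25 * p_shot + 0.25 * p_rin
    return 10.0 * np.log10(p_N / (g_link * k_B * T * B))

if __name__ == "__main__":
    # Illustrative: a link with -8 dB gain, 10 mA photocurrent, -160 dB/Hz RIN
    print("NF = %.1f dB" % noise_figure_db(g_link=0.15, I_D=10e-3, rin_db_hz=-160.0))
```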
The E/O and O/E conversions in the MWP link also add **nonlinear distortions** to the output RF signal. The most common way to probe these nonlinearities is the so-called two-tone test. In such a test, the input to the link is a pair of closely spaced tones, for example at frequencies $f_{1}$ and $f_{2}$. Due to the nonlinear response of the link (i.e. of components like the EOM or the photodetector), these tones will generate new frequency components called intermodulation distortions (IMDs). The second-order intermodulation (IMD2) is generated by the quadratic nonlinearity in the link, and its frequency components appear at the sum and the difference of the modulating frequencies $\left(f_{1} \pm f_{2}\right)$. The third-order intermodulation (IMD3) is generated by the cubic nonlinearity in the link and appears at the sum and the difference of twice one frequency and the other frequency $\left(2f_{1} \pm f_{2}, 2f_{2} \pm f_{1}\right)$. An illustration of the output spectrum of a two-tone test of an MWP link, depicting the fundamental tones and the IMDs, is shown in Figure \[twotone\]. The spectrum in this figure reveals that the distortion components that fall closest to the fundamental signals are the IMD3 terms at $2f_{1}-f_{2}$ and $2f_{2}-f_{1}$, which most of the time cannot be filtered out. Thus, there is hardly any usable signal bandwidth that is free from these spurious signals. For this reason the IMD3 is regarded as the main limiting distortion factor in MWP links. As for the even-order distortions, the IMD2 falls relatively far from the fundamental signals. But as the signal bandwidth increases, the separation between the signals and these distortion terms reduces. For a wideband system with a multioctave signal bandwidth, i.e. the case where the highest frequency component of the signal, $f_{\mathrm{high}}$, is more than twice the lowest frequency component, $f_{\mathrm{low}}$, IMD2 will interfere with the signal. This is in contrast with a narrowband system with sub-octave bandwidth $\left(f_{\mathrm{high}}< 2f_{\mathrm{low}}\right)$, where IMD2 can easily be filtered out.
![An illustration of a typical two-tone test output RF spectrum of an MWP link. IMD: intermodulation distortion, HD: harmonic distortion. []{data-label="twotone"}](twotone){width="\linewidth"}
It is often useful to investigate how the power of each component in the output spectrum shown in Figure \[twotone\] varies with the input signal power. Such a plot is shown in Figure \[SFDR\]. Here the fundamental signal and the IMD2 and IMD3 powers are plotted in decibels. The fundamental signal, being linearly dependent on the input signal, appears as a line with a slope of one. The IMD$2$ power has a quadratic relation with the input RF power and thus appears as a line with a slope of two with respect to the input signal power. The IMD3, having a cubic dependence on the input power, appears as a line with a slope of three. At some point, the extrapolated fundamental power and the $n^{\mathrm{th}}$-order IMD power will intersect. This intersection is known as the **intercept point**. Depending on which power this point is referred to, for each distortion order an input intercept point (IIP$n$) and an output intercept point (OIP$n$) can be defined. These two intercept points are related to each other by the link gain via the relation $\mathrm{OIP}n\left(\mathrm{dBm}\right)=\mathrm{IIP}n\left(\mathrm{dBm}\right)+G_{\mathrm{link}}\left(\mathrm{dB}\right)$. It is important to mention, however, that these intercept points cannot be measured directly, since the fundamental power undergoes compression [@KolnerAO1987]. For this reason, the intercept points are deduced from the extrapolation of the measured fundamental and IMD powers.
It is useful to inspect the expressions for the intercept points of an MWP link with an MZM. The reason is that the nonlinearity profile is well known, owing to the well-defined sinusoidal transfer function of the MZM. Since the performance of such a link is well explored, its intercept points are often used as benchmarks for judging the performance of novel types of MWP links and systems. We will see later on in Section \[sec:APL\] that this is indeed the case. The IIP2 and IIP3 of an MZM link can be written as
$$\label{eq:IIP2MZ}
{\rm IIP2_{\mathrm {MZ}}}=\frac{2}{R_{\mathrm L}}\left({\frac{V_{\pi,\mathrm {RF}}}{\pi }}\tan{\phi_{\rm B}}\right)^2$$
$$\label{eq:IIP3MZ}
{\rm IIP3_{\mathrm {MZ}}}=\frac{4\,\left(V_{\pi,\mathrm{RF}}\right)^2}{{\pi}^2\,R_{\mathrm{L}}}$$
The IIP2 is very sensitive to the bias angle and ideally goes to infinity at quadrature, because the even-order distortion vanishes at this bias point. The IIP3 is, however, independent of the bias angle and depends virtually only on the modulator RF half-wave voltage. The simplicity of the IIP3 expression is very useful when comparing the performance of different types of links. The OIP3 of an MZM link, on the other hand, is bias dependent. However, the expression for the OIP3 at quadrature bias is very simple, namely
$$\label{eq:OIP3MZMquad}
{\rm OIP3_{\rm Q}}={I^2_{\mathrm {Q}}}\,R_{\mathrm{L}}\,,$$
where $I_{\mathrm {Q}}$ is the average (DC) photocurrent in the quadrature bias case, which can be obtained by substituting $\phi_{\mathrm{B}}=\pi/2$ into Eq. (\[eq:IavMZ\]).
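The two-tone test and the extraction of the intercept points can also be illustrated numerically. The sketch below drives an idealized quadrature-biased MZM photocurrent model with two tones, reads the fundamental and IMD3 powers from an FFT, and extrapolates the OIP3 assuming slopes of one and three; the tone frequencies are chosen to fall exactly on FFT bins, and all parameter values are illustrative assumptions.

```python
import numpy as np

def two_tone_mzm(V_drive, f1=0.95e9, f2=1.05e9, V_pi_rf=4.0,
                 I_q=5e-3, R_L=50.0, fs=64e9, T=4e-6):
    """Two-tone test of an ideal quadrature-biased MZM link.

    The detected photocurrent is modeled as
    i(t) = I_q * (1 + sin(pi * v(t) / V_pi_rf)),
    with I_q the average (quadrature) photocurrent and v(t) the sum of the
    two RF tones.  Returns the output powers (dBm, delivered to a matched
    load) of the fundamental at f1 and of the IMD3 term at 2*f1 - f2.
    """
    n_samp = int(round(fs * T))
    t = np.arange(n_samp) / fs                            # time grid
    v = V_drive * (np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t))
    i = I_q * (1.0 + np.sin(np.pi * v / V_pi_rf))         # photocurrent

    spec = np.fft.rfft(i) / n_samp                        # amplitude spectrum
    freqs = np.fft.rfftfreq(n_samp, 1.0 / fs)

    def p_dbm(f):
        amp = 2.0 * np.abs(spec[np.argmin(np.abs(freqs - f))])  # current amplitude
        p_load = amp**2 * R_L / 8.0      # power delivered to a matched load
        return 10.0 * np.log10(p_load / 1e-3)

    return p_dbm(f1), p_dbm(2*f1 - f2)

if __name__ == "__main__":
    p1, p3 = two_tone_mzm(V_drive=0.1)
    oip3 = p1 + 0.5 * (p1 - p3)          # extrapolation with slopes 1 and 3
    print("fundamental: %.1f dBm, IMD3: %.1f dBm, OIP3: %.1f dBm" % (p1, p3, oip3))
```

For the parameter values used here, the extrapolated OIP3 should come out close to $I_{\mathrm{Q}}^{2}R_{\mathrm{L}}\approx +1$ dBm, consistent with Eq. (\[eq:OIP3MZMquad\]).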
![The relation of the input RF power to an MWP link with the output RF powers of the fundamental tone and the IMD products expressed in decibels. From such a graph, key link metrics such as gain, intercept points and SFDR can be deduced.[]{data-label="SFDR"}](SFDR){width="\linewidth"}
The figure of merit that incorporates the effect of noise and nonlinearity in the MWP link is the **spurious-free dynamic range (SFDR)**. The SFDR is defined as the ratio of input powers at which, on the one hand, the fundamental signal power is equal to the noise power and, on the other hand, the $n^{\rm th}$-order intermodulation distortion (IMD$n$) power is equal to the noise power. In terms of output powers, this can be interpreted as the maximum output SNR that can be achieved while keeping the IMD$n$ power below the noise floor, as illustrated in Figure \[SFDR\]. For link designers, it is desirable to express the $n^{\mathrm{th}}$-order SFDR (SFDR$_{n}$) in terms of other measurable link parameters such as the link gain, noise figure, and the intercept points. Such expressions can also be deduced from Figure \[SFDR\]. The SFDR$_{n}$ in terms of IIP$n$ can be written as
$$\label{eq:SFDRin}
{\rm SFDR}_{n}=\frac{n-1}{n}\left({\rm IIP}n-{\rm NF}+174\right)\,$$
where IIP$n$ is expressed in dBm and the NF in dB (the constant 174 corresponds to the thermal noise power of $-174$ dBm in a 1 Hz bandwidth). Alternatively we can express the SFDR in terms of OIP$n$, yielding
$$\label{eq:SFDRout}
{\rm SFDR}_{n}=\frac{n-1}{n}\left({\rm OIP}n-{\rm NF}-G_{\mathrm{link}}+174\right)\,.$$
Again, here $G_\mathrm{link}$ is the link gain in decibels. SFDR$_{n}$ is usually expressed in ${\rm dB}\cdot{\rm Hz}^{\left(\frac{n-1}{n}\right)}$. This is essentially the same as saying that the SFDR is measured in dB in a 1 Hz noise bandwidth; for a receiver bandwidth of $B$ Hz it scales as
$$\label{eq:SFDRscale}
{\rm SFDR}_{n}\left(B\,{\rm Hz}\right)={\rm SFDR}_{n}\left(1\,{\rm Hz}\right)-\left(\frac{n-1}{n}\right)10\log_{10}\left(B\right)\,.$$
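As a simple numerical companion to Eq. (\[eq:SFDRout\]) and Eq. (\[eq:SFDRscale\]), the sketch below computes the third-order SFDR of a link from its OIP3, noise figure and gain; the input values are illustrative and correspond roughly to the quadrature-biased example used earlier.

```python
import numpy as np

def sfdr_db(n, oip_dbm, nf_db, g_db, bandwidth_hz=1.0):
    """n-th order SFDR from Eq. (eq:SFDRout), rescaled to the given
    receiver bandwidth with Eq. (eq:SFDRscale).  Inputs in dBm/dB."""
    sfdr_1hz = (n - 1) / n * (oip_dbm - nf_db - g_db + 174.0)
    return sfdr_1hz - (n - 1) / n * 10.0 * np.log10(bandwidth_hz)

if __name__ == "__main__":
    oip3, nf, gain = 1.0, 25.0, -8.0   # illustrative link parameters (dBm, dB, dB)
    print("SFDR3 = %.1f dB.Hz^(2/3)" % sfdr_db(3, oip3, nf, gain))
    print("SFDR3 in 1 MHz bandwidth = %.1f dB" % sfdr_db(3, oip3, nf, gain, 1e6))
```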
Optimizing the SFDR of an MWP link in a wideband manner has been the holy grail of microwave photonics. As mentioned earlier, using an MZM one can achieve noise figure reduction via low biasing, and according to Eq. (\[eq:SFDRin\]) and Eq. (\[eq:SFDRout\]) the third-order SFDR of such a link can thereby be increased. However, the low biasing reduces the IIP2 dramatically, making the link suitable only for narrowband (i.e., less than one octave bandwidth) signals. Moreover, the SFDR of the MZM link is always bounded by the third-order nonlinearities of the MZM. For this reason, linearization techniques have been pursued in the past to overcome the MZM nonlinearities. However, the distortion cancellation often works only over a relatively narrow operating bandwidth and is critically sensitive to the modulator parameters [@Cummings1998]. A type of link that has been theoretically predicted to show a very large dynamic range (over ${150~\rm dB}\cdot{\rm Hz}^{2/3}$) is the so-called Class-B MWP link [@DarcieJLT2007]. The realization of such a link, however, has proven challenging. The properties of Class-B MWP links will be discussed in more detail in Section \[sec:APL\]. Currently, successful techniques to push the dynamic range of MWP links have achieved SFDRs in the range of ${120-130~\rm dB}\cdot{\rm Hz}^{2/3}$. The most recent review of techniques achieving a wideband high SFDR is reported in [@UrickSPIE2012].
Applications {#subsec:function}
------------
The MWP concept has found widespread application over the last 20 years. The earliest application of MWP techniques was microwave signal distribution [@SeedsMWP2002]. Here the MWP link is used as a direct replacement for coaxial cables, exploiting its advantages in size, weight, flexibility and flat attenuation over the entire frequency range of interest. This concept has been extended to antenna remoting, where the MWP link is used to separate the highly sensitive and complex signal-processing part of a radio receiver from the antenna. In this way, the signal-processing part can be protected when antennas are deployed in harsh environments, for example in radar systems [@Roman1998] or in radio astronomy [@Montebugnoli2005]. The concept is also very attractive for distributed or multiple-antenna systems, where a large number of antennas are needed to extend the coverage of a service, for example in mobile communications. In this case, the MWP concept is used to centralize the signal-processing chain (modulation, filtering, etc.) and separate it from the antennas, so that the antenna architecture can be simplified. This is the concept of radio over fiber [@SeedsMWP2006; @LimJLT2010], which is currently the main commercial driver[^2] for MWP.
Although signal distribution is the main driver for MWP, applications like microwave signal generation and processing are catching up. The generation of high-purity microwave signals and highly complex ultra-broadband waveforms are among the latest developments in MWP. The main attraction of signal generation using MWP techniques is the large frequency tunability and the potential of reaching very high frequencies (up to the THz region) with relatively simple techniques compared to the traditional microwave/electronic approach. Moreover, the distribution of such high frequency signals over extremely low-loss optical fibers is also attractive, since distribution over coaxial cables would be very lossy. For waveform generation, MWP techniques offer broad bandwidth and full reconfigurability of the phase and amplitude of the RF waveforms [@YaoNatPhoton2010; @YaoOptComm2011].
For microwave signal processing, MWP techniques have enabled filtering, tunable true time delay and wideband phase shifting of microwave signals. The added value of the MWP concept lies in the operation bandwidth and the potential for fast and agile reconfiguration of these functionalities. Combining these basic functionalities leads to the realization of MWP processors for optical beamforming and phased array antenna systems.
Integrated microwave photonics {#sec:integratedMWP}
==============================
There are several factors that still keep the MWP concept from being widely implemented in real-life applications beyond laboratory setups. The first factor is performance, particularly in terms of dynamic range. Typically, MWP systems show prominent functionalities (for example true time delay or pulse shaping) over a large bandwidth, but the performance in terms of dynamic range is not good enough to actually replace the traditional microwave solution. The other factors are reliability and cost. Most MWP systems are composed of discrete components, i.e. lasers, modulators and detectors connected by fiber pigtails. This poses several problems. First, discrete components occupy a larger size, while interconnections with fiber pigtails reduce the sturdiness of the system; both lead to reduced reliability. Second, the use of discrete components leads to a high system cost, since each component bears its own packaging cost. The use of discrete components may also lead to higher power consumption. These factors have counted against MWP solutions replacing traditional microwave solutions, which have reached maturity over years of development.
The promise of ultra-broad bandwidth and excellent reconfigurability of MWP systems remains tantalizing if the drawbacks mentioned earlier can be addressed. If MWP systems become more viable in terms of cost, power consumption and reliability, they will be able to take over microwave signal-processing tasks, going beyond merely replacing coaxial cables. Many believe that these challenges can be addressed by RF and photonics integration [@JacobsOFC2007; @GasullaPhotonicsJournal2011; @CapmanyNatPhoton2011; @WoodwardMWP2011]. With photonic integration, one can achieve a reduction in footprint, inter-element coupling losses, packaging cost and power dissipation, since a single cooler can be used for multiple functions [@ColdrenMWP2010]. Thus, MWP functionalities can be brought a step closer to real applications and, subsequently, to the commercial marketplace.
Even though at first sight the concept of integrated MWP seems very much in line with the recent trend of large-scale PIC technology [@JalaliMicrowaveMag2006; @SorefJSTQE2006; @JalaliJLT2006; @LiangElectLett2009; @ColdrenJLT2011; @SmitLPR2012], certain differences occur in terms of requirements and market/application scale. Large-scale photonic integration has been driven heavily by so-called “digital” applications, like high-capacity optical communications and optical interconnects. This concept relies heavily on increasing speed and component counts, as well as on incorporating as many functionalities (active and passive) as possible in a single chip/technology platform [@KishJSTQE2011; @OrcuttOpex2011; @OrcuttOpex2012; @KoehlOPN2011]. Since currently there is no single technology platform that offers the best performance in all aspects, large-scale integration often compromises the total photonic system performance.
Due to the stringent requirements of handling analog signals, PIC technology for integrated MWP should show high performance, often higher than that expected for digital applications. Moreover, at the present state, MWP addresses a lower-volume market, and hence lower-volume PIC production. These are the aspects that, we believe, will force PIC technology players to take a different approach to integrated MWP.
In the next section we will highlight a host of PIC technologies that have recently been demonstrated in MWP systems. Two key criteria that will be discussed for these technologies are their performance and their availability to the MWP community. As has been discussed in the previous section, an important objective in MWP processing is to optimize the system link gain while maintaining a healthy noise figure. This objective dictates that the insertion loss of the PIC in the MWP system should be minimized. In most cases, this leads to stringent requirements on the propagation loss and the fiber-to-chip coupling losses of the PIC.
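As a rough illustration of this point, the sketch below converts an assumed chip propagation loss and fiber-to-chip coupling loss into a total optical insertion loss and into the corresponding RF gain penalty, which is twice the optical loss in decibels because of the quadratic optical-to-RF loss relation recalled in Section \[sec:fundMWP\]; all numbers are illustrative.

```python
def rf_penalty_db(prop_loss_db_per_cm, length_cm, facet_loss_db, n_facets=2):
    """Optical insertion loss of a PIC and the resulting RF gain penalty.

    The RF penalty is twice the optical loss in dB, because the RF link
    gain scales with the square of the optical transmission.
    """
    optical_loss_db = prop_loss_db_per_cm * length_cm + n_facets * facet_loss_db
    return optical_loss_db, 2.0 * optical_loss_db

if __name__ == "__main__":
    # Illustrative: a 10-cm on-chip path at 0.1 dB/cm with 1 dB loss per facet
    opt, rf = rf_penalty_db(0.1, 10.0, 1.0)
    print("optical insertion loss: %.1f dB -> RF gain penalty: %.1f dB" % (opt, rf))
```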
As for the availability of PIC technology, integrated MWP will benefit hugely from initiatives like ePIXfab [@DumonEpixfab] and Jeppix [@Jeppix] in Europe and OpSIS [@HochbergNatPhotonics2010] in the USA, which allow users to access fabrication technologies (for silicon photonics or indium phosphide) that would otherwise be too costly for individual users to bear. This is already reflected in the growing number of reported integrated MWP devices and systems that have been enabled by these initiatives. In the next section some of the available platforms for integrated MWP are reviewed.
Photonic integration technology {#sec:technology}
===============================
At this point, commercial wafer-scale fabrication of photonic devices has crystallized into several major technologies: compound semiconductors (GaAs, InP), nonlinear crystals (LiNbO~3~), dielectrics (silica and silicon nitride based waveguides) and elemental semiconductors (silicon-on-insulator, SOI). Each technology boasts specific strengths, like light generation and detection, modulation, passive routing with low propagation loss, electronic integration, ease of packaging, etc. Nevertheless, integration in a single platform without sacrificing overall system performance has not been achieved [@AurrionIPC2011]. In the past 20 years, four platforms have been frequently used to demonstrate integrated MWP functionalities: InP, silica planar lightwave circuits (PLCs), silicon-on-insulator (SOI) and Si~3~N~4~/SiO~2~, known as the TriPleX waveguide technology. In this section we will focus on these technologies and briefly comment on other technologies such as LiNbO~3~, polymers and chalcogenides in Subsection \[subsec:other\].
Indium Phosphide (InP) {#subsec:InP}
----------------------
The InP platform inherently supports light generation, amplification, modulation, detection, variable attenuation, and switching in addition to passive functionalities. For this reason, InP photonics is highly attractive for large-scale photonic integration, as has been consistently demonstrated by the company Infinera [@KishJSTQE2011]. In this case the PICs are highly complex, with component counts (lasers, modulators, arrayed waveguide gratings) of more than 400 integrated on a single chip. These PICs are developed for high-speed digital optical communications (100/500 Gb/s). As for MWP applications [@ColdrenMWP2010; @ColdrenJLT2011], InP PICs have been developed for a number of purposes, like optical beamforming [@StulemeijerPTL1999], fully programmable MWP filters using ring resonator structures [@NorbergPTL2010; @NorbergJLT2011; @GuzzonOpex2011; @GuzzonJQE2012] and a monolithically integrated optical phase-locked loop (OPLL) for coherent detection schemes [@LiPTL2011; @BhardwajEL2011; @KrishnamachariMOTL2011].
It is well known that the propagation losses of passive InP optical waveguides can be an order of magnitude higher than those of waveguides based on silica or silicon [@LiangElectLett2009]. For example, in [@StulemeijerPTL1999] a waveguide propagation loss of 1.4 dB/cm was reported. For some applications this large propagation loss needs to be compensated by optical gain from active components like semiconductor optical amplifiers (SOAs). This is especially important for the cascaded stages of resonator filters in the active MWP filters reported in [@NorbergPTL2010; @NorbergJLT2011; @GuzzonOpex2011], which are highly sensitive to losses. An issue arising with such active filters is the noise added by the SOAs, which might limit the SFDR of the device. More details about the SFDR of such filters can be found in [@GuzzonJQE2012]. This filter structure will be revisited in Section \[subsec:coherent\], where MWP filters are discussed in more detail.
The capability of InP PICs to support modulation and detection functionalities has been exploited for coherent receivers in phase-modulated MWP links. The most critical element in such a receiver is a linear OPLL acting as a linear phase demodulator. Through feedback, the OPLL forces the phase of a local (tracking) phase modulator to mirror the phase of an incoming optical signal. Thus, the output from the photodetector is a scaled replica of the RF input [@LiPTL2011]. The tracking phase modulator has to track the phase deviation out of the photodetector nearly instantly, dictating that the loop delay must be very small. This calls for photonic and electronic circuit integration. The most recent approaches to the realization of such coherent receivers will be discussed in more detail in Section \[subsec:OPLL\].
![(a) Illustration of programmable filter array chip. (b) A single filter stage and its functional components (SOAs, PMs and 3 dB MMI couplers) shown schematically. (c) Scanning electron microscopy (SEM) image of a programmable photonic filter device wire bonded to a carrier. (from [@NorbergJLT2011] courtesy of the IEEE).[]{data-label="InPGuzzon"}](InPPIC){width="\linewidth"}
Silica PLCs
-----------
Silica glass planar lightwave circuits (PLCs) are widely used as key devices for wavelength division multiplexing (WDM) transmission and fiber-to-the-home (FTTH) systems because of their excellent optical properties and mass-producibility. In such applications, PLCs have been used for wavelength multi/demultiplexers, optical add/drop or cross-connect switches and programmable filters [@HimenoJSTQE1998]. Silica-based waveguides are very popular due to their very low propagation loss. The lowest propagation loss in such a waveguide at $\lambda=1550$ nm, 0.85 dB/m, has been demonstrated using a phosphorus-doped silica-on-silicon waveguide [@AdarJLT1994]. However, this was shown in a waveguide with a low refractive index contrast of 0.7%. Such a low contrast is less attractive for photonic chip integration since it allows only large bending radii and hence a larger chip size.
Several MWP functionalities have been demonstrated in silica PLCs over the years. Horikawa et al. demonstrated a true time-delay beamforming network based on silica waveguides with an index contrast of 1.5%, a minimum bending radius of 2 mm and a propagation loss of 0.1 dB/cm [@HorikawaIMS1995; @HorikawaOFC1996]. More recently, Grosskopf et al. demonstrated a beamforming network in lower index contrast silica waveguides ($\Delta n = 0.7$%) with a minimum radius of 10 mm [@GrosskopfFIO2003]. In 2005, Rasras et al. [@RasrasPTL2005] proposed a wide-tuning-range optical delay line in high ($\Delta n = 2$%) index contrast waveguides. This device integrates four-stage ring resonator all-pass filters (APFs) with cascaded fixed spiral-type delay waveguides and enables continuous tuning ranges up to 2.56 ns. The minimum bending radius and the reported propagation loss are 1 mm and 0.07 dB/cm, respectively. More details on these functions can be found in Section \[sec:delay\].
Besides delay lines and beamformers, silica PLCs have also been used to demonstrate an integrated frequency discriminator [@LaGassePTL1997; @WyrwasMWP2011; @WyrwasThesis2012] (more details in Section \[subsec:PMIM\]) and an arbitrary waveform generator [@SamadiOptComm2011] (see Section \[subsec:arbitrary\]). Recent investigations of silica PLCs for MWP applications aim at increasing the index contrast to 4% (and above) and at reducing the footprint of the devices [@CallenderSPIE2012].
Silicon photonics {#subsec:SOI}
-----------------
Silicon photonics is one of the most exciting and fastest growing photonic technologies in recent years. The initial pull of this technology is its compatibility with the mature silicon IC manufacturing. Another motivation is the availability of high-quality silicon-on-insulator (SOI) planar waveguide circuits that offer strong optical confinement due to the high index contrast between silicon $(n = 3.45)$ and SiO~2~ $(n = 1.45)$. This opens up miniaturization and large scale integration of photonic devices. Moreover, it has also been shown that silicon has excellent material properties like high third-order optical nonlinearities which, together with the high optical confinement in the SOI waveguides, enable functionalities like amplification, modulation, lasing, and wavelength conversion [@JalaliJLT2006; @RongNatPhoton2007]. Various review papers[^3] have been published highlighting recent breakthroughs and novel devices in this technology [@JalaliMicrowaveMag2006; @SorefJSTQE2006; @JalaliJLT2006] .
The past 15 years have also seen a significant increase in the implementation of silicon photonics in MWP systems. As highlighted earlier in this section, the progress in silicon photonics for MWP takes a slightly different direction compared to applications like photonic interconnects or high-speed data communications. In these latter fields, the use of silicon photonics is focused on large-scale monolithic integration combining passive waveguides, modulators, detectors and sometimes light sources. But silicon is not ideal for electro-optic modulators and detectors at the 1550 nm operating wavelength [@JalaliJLT2006; @HochbergNatPhoton2012]. Thus, from the perspective of system performance, silicon modulators, detectors and lasers cannot yet meet the stringent requirements of MWP. For this reason, most of the advances in silicon photonics for MWP have focused on passive reconfigurable devices and/or devices exploiting optical nonlinearities.
The propagation loss of SOI waveguides shows a large variation depending on the waveguide dimensions and processing conditions. Two types of waveguides are commonly used in the silicon photonics community: *shallow ridge or rib waveguides* with a width of 1-8 $\mu$m and *silicon strip waveguides (or nanowires)* with dimensions of approximately 500 nm wide by 250 nm thick. The rib waveguides exhibit relatively low losses, down to 0.1-0.5 dB/cm, but are limited in bending radius to hundreds of micrometers [@FischerPTL1996; @DongOpex2010_loss; @RasrasJLT2009; @IbrahimOpex2011; @KhanOpex2011; @GiuntoniOpex2012]. Strip waveguides, on the other hand, exhibit much higher losses, with the lowest reported values in the order of 1-3 dB/cm [@XiaNatPhoton2007; @XiaoOpex2007; @GnanEL2008; @BogaertsJLT2009], but they also allow ultra-compact devices due to the tight minimum bending radius, which is in the order of a few micrometers. Because of the high index contrast of SOI waveguides, surface roughness due to imperfect etching results in high scattering losses. When an etchless process is used, however, the loss of SOI strip waveguides/nanowires can be as low as 0.3 dB/cm [@CardenasOpex2009]. Typical cross sections of nanowires, rib waveguides and the etchless strip waveguide are depicted in Figure \[SOIwaveguide\] (a), (b) and (c), respectively.
![(a) Silicon strip waveguide/nanowire (from [@XiaoOpex2007], courtesy of the OSA). (b) Rib waveguides (from [@RongNatPhoton2007], courtesy of the Macmillan Publishers Ltd). (c) Etchless SOI strip (from [@CardenasOpex2009], courtesy of the OSA).[]{data-label="SOIwaveguide"}](SOIwaveguides){width="\linewidth"}
Most of the integrated MWP devices in SOI have been demonstrated using rib waveguides rather than nanowires. This is expected, since MWP systems have strict requirements regarding losses. In 1997, Yegnanarayanan et al. [@YegnanarayananPTL1997] demonstrated the first optical delay lines in SOI for true time-delay phased array antennas. They used eight-channel 3 $\mu$m wide waveguides with an incremental time delay of 12.3 ps measured over a 2-20 GHz frequency range. MWP filters have also been demonstrated in SOI waveguides. Rasras et al. [@RasrasJLT2007; @RasrasJLT2009] demonstrated bandpass and notch MWP filters based on Mach-Zehnder interferometer (MZI) tunable couplers and optical ring resonators (ORRs) fabricated in silicon-buried channel waveguides with a width of 2 $\mu$m and a propagation loss of 0.25 dB/cm. Dong et al. [@DongOpex2010_filter] demonstrated a bandpass filter with a narrow passband using a 5th-order ORR fabricated in shallow-ridge waveguides with a width of 1 $\mu$m and a height of 0.25 $\mu$m. The waveguide propagation loss is 0.5 dB/cm and the ring radius is 248 $\mu$m. The filter characteristics will be discussed in more detail in Section \[subsec:coherent\]. A similar waveguide structure with a wider width (2 $\mu$m) was used in the delay lines of a programmable unit cell filter for RF signal processing reported in [@ToliverOFC2010; @FengOpex2010]. In another demonstration of an MWP filter, two types of silicon rib waveguides, a narrow waveguide which is 0.5 $\mu$m wide and a wide waveguide which is 3 $\mu$m wide, were used in the lattice filter configuration shown in Figure \[Ibrahim\]. The narrow and wide waveguides are connected with a linear taper. The narrow waveguides were used to obtain a smaller bending radius and fast and efficient reconfiguration of the filter response, while the wide waveguides were used for their low propagation loss (measured value 0.5 dB/cm). Other demonstrations of integrated MWP in wide SOI rib waveguides include optical delay lines [@KhanOpex2011; @GiuntoniOpex2012], which will be discussed in more detail in Section \[sec:delay\].
![(a) Schematic of a single unit cell, red boxes indicate phase shifter electrodes. (b) Simulation showing the mode size in both the narrow (0.5 $\mu$m) and wide (3 $\mu$m) waveguides. (c) Schematic of a four-unit-cell filter (from [@IbrahimOpex2011], courtesy of the OSA).[]{data-label="Ibrahim"}](IbrahimOpex){width="\linewidth"}
Integrated MWP with SOI strip waveguides has been demonstrated for optical delay lines [@CardenasOpex2010; @MortonPTL2012], arbitrary waveform generation [@ShenOpex2010; @KhanNatPhotonics2010] and ultrawideband (UWB) signal generation [@YunhongIPC2011; @YueOL2012; @MirshafieiPTL2012]. In [@MortonPTL2012], SOI waveguides with dimensions of 250 nm by 500 nm were used to fabricate 20 ORRs in a balanced side-coupled integrated spaced sequence of resonators (SCISSOR) structure. As reported in [@CardenasOpex2010], these waveguides exhibit a propagation loss of 4.5 dB/cm. In [@ShenOpex2010; @KhanNatPhotonics2010] an eight-channel reconfigurable optical filter consisting of cascaded microring resonators and tunable MZI couplers was fabricated in SOI waveguides with dimensions of 500 nm $\times$ 250 nm. The rings have a radius of 5 $\mu$m and the propagation loss is 3.5 dB/cm. In [@YueOL2012] the optical nonlinearity of SOI waveguides is exploited to generate UWB monocycles: non-degenerate two-photon absorption (TPA) in a 4.046 cm long silicon waveguide with a 776 $\times$ 300 nm$^{2}$ cross section is used to create a 143 ps Gaussian monocycle pulse. Details of the arbitrary waveform and UWB generation techniques are discussed in Section \[sec:generation\].
TriPleX technology (Si~3~N~4~/SiO~2~) {#subsec:triplex}
------------------------------------
Recently, many MWP functionalities, like beamforming [@ZhuangPTL2007; @MeijerinkJLT2010; @ZhuangJLT2010; @MarpaungEuCAP2011; @MarpaungMWP2011; @BurlaAO2012], optical frequency discrimination [@MarpaungMWP2010; @MarpaungOpex2011], UWB pulse shaping [@MarpaungOpex2011] and MWP filtering [@ZhuangOpex2011; @BurlaOpex2011], have been demonstrated in the TriPleX waveguide technology platform[^4]. This waveguide technology is based on a combination of silicon nitride (Si~3~N~4~) waveguide layer(s) filled and encapsulated by silica (SiO~2~) cladding layers. The constituent SiO~2~ and Si~3~N~4~ layers are fabricated with CMOS-compatible, industrial-standard low-pressure chemical vapor deposition (LPCVD) equipment, which enables low-cost volume production [@HeidemanSPIE2009]. TriPleX allows for extremely low-loss integrated optical waveguides on both silicon and glass substrates for all wavelengths between 405 nm (near UV) and 2.35 $\mu$m, providing maximum flexibility from an integration standpoint. Several significantly different waveguide geometries (Figure \[TriplexWG\]) can be obtained by varying individual steps in the generic fabrication process. The details of the fabrication steps for these waveguide structures are given in [@HeidemanJSTQE2012].
![Schematics (top row) and corresponding SEM images of realized structures (bottom row) of three typical single-mode channel layouts: a symmetrical, box-shaped layout with minimal modal birefringence (left column), and two asymmetrical layouts with large modal birefringence: the double-stripe (=-shaped, center column), and a single-stripe layout (right column).[]{data-label="TriplexWG"}](TriPleXWG){width="\linewidth"}
The three geometries shown in Figure \[TriplexWG\] are called *box-shape* (left), *double-stripe* ($=$-shape, middle) and *single-stripe* (right), respectively. All modal characteristics are controlled and tuned by the design of the geometry. While the values of the parameters of these geometries are quite similar (each type of waveguide typically has core dimensions in the order of 1 $\mu\mathrm{m}^2$), their wavelength dependence, modal characteristics, birefringence and, therefore, intended applications differ greatly.
The *single-stripe* TriPleX structure (Figure \[TriplexWG\], right) is well suited for (opto-fluidic) sensing applications. This layout goes with large modal birefringence, which is a prerequisite for most integrated optical interferometric sensing schemes to prevent signal fading. The single-stripe layout is also compatible with microfluidics [@HeidemanJSTQE2012]. But more importantly, this waveguide structure has been used to demonstrate ultra-low propagation loss [@TienOpex2010; @BautersOpex2011_1; @TienOpex2011; @DaiOpex2011; @BautersOpex2011_2; @DaiLSA2012]. The Si~3~N~4~ single stripe optical waveguide has a high aspect ratio Si~3~N~4~ core (see Figure \[TriplexWG\], right) to minimize the scattering loss at the sidewall, which is the dominant loss mechanism [@BautersOpex2011_1]. With an optimized fabrication process, the Si~3~N~4~-on-SiO~2~ optical waveguide shows a loss as low as 0.045 dB/m [@BautersOpex2011_2], which is presently a world record for planar waveguides. This value has been measured on a spiral waveguide with a core thickness of 40 nm and width of 13 $\mu$m and a bonded thermal oxide upper cladding. The measured propagation loss is depicted in Figure \[TriplexstripeLoss\]. Several typical photonic integrated devices like arrayed waveguide gratings (AWGs) [@DaiOpex2011] and ultra-high Q ring resonators [@TienOpex2011] have also been demonstrated.
![Propagation loss vs. wavelength for a Si~3~N~4~-on-SiO~2~ (TriPleX single stripe) waveguide with a 40-nm-thick by 13-$\mu$m-wide core and bonded thermal oxide upper cladding, measured on a 1-m-long spiral waveguides (inset) (from [@DaiLSA2012], courtesy of the Macmillan Publishers Ltd).[]{data-label="TriplexstripeLoss"}](TriPleXStripeLoss){width="\linewidth"}
The *box-shaped* TriPleX layout (Figure \[TriplexWG\], left) is best exploited for telecom applications: due to its symmetrical layout, the polarization birefringence is largely reduced [@MorichettiJLT2007]. For this geometry a library of standard optical components with predictable characteristics is available, currently offered through multi-project wafer (MPW) service runs[^5]. The box-shaped geometry has been applied to a variety of applications, for example polarization-independent, thermally tunable ring resonators acting as mirrors to create narrow-spectral-bandwidth lasers tunable over the entire telecom C-band [@Oldenbeuving2012]. For MWP applications, the box-shaped waveguide has been used to fabricate a programmable optical beamformer [@MeijerinkJLT2010; @ZhuangJLT2010; @BurlaOpex2011; @BurlaAO2012] and frequency discriminators for high-SFDR phase-modulated MWP links [@MarpaungMWP2010; @MarpaungOpex2011].
The beamformer reported in [@MeijerinkJLT2010; @ZhuangJLT2010] consists of 8 inputs, 2 balanced outputs, 8 ORRs for tunable true time delays, more than 23 tunable couplers and an optical sideband filter. The minimum bend radius used in the chip is 700 $\mu$m and the waveguide propagation loss is 0.6 dB/cm. The details of this beamformer will be discussed in Section \[subsec:OBFN\]. The FM discriminator was fabricated using a higher index-contrast box-shaped waveguide. It consists of five fully tunable race-track ORRs with a bend radius of 150 $\mu$m. The measured waveguide propagation loss in this chip amounts to 1.2 dB/cm, dominated by the sidewall roughness of the waveguide. The details of the FM discriminator are discussed in Section \[subsec:PMIM\].
![(a) Propagation loss of the TriPleX double-stripe waveguide structure versus different bend radii in the race track-shaped ORR. (b) Measured filter shapes of the single and double ring-assisted MZI (Inset: schematic of the filter architecture and mask layout design) (from [@ZhuangOpex2011], courtesy of the OSA).[]{data-label="TriplexLoss"}](TriPleXLoss){width="\linewidth"}
The *double-stripe* ($=$-shaped) TriPleX layout (Figure \[TriplexWG\], middle) is standardized especially for MWP applications. This geometry comes with large polarization birefringence, tight bending radii and low channel attenuation at 1.55 $\mu$m. The propagation loss of such waveguides, measured in a ring resonator as a function of the bending radius, is depicted in Figure \[TriplexLoss\] (a). As shown in the result, the maximum waveguide propagation loss was measured to be 0.12 dB/cm in the ORR with a bend radius of 50 $\mu$m, and an average waveguide propagation loss as low as 0.095 dB/cm was achieved for bend radii larger than 70 $\mu$m [@ZhuangOpex2011].
Using the double-stripe waveguides, an MWP filtering function has been demonstrated [@MarpaungMWP2011; @ZhuangOpex2011]. Two types of optical sideband filters (OSBFs) for the optical single-sideband suppressed-carrier modulation (OSSB-SC) scheme have been fabricated and characterized. One filter consists of an asymmetric MZI with an ORR inserted in its shorter arm. The other is an upgraded version of the first, obtained by adding a second ORR to the longer arm of the asymmetric MZI. Both filters were designed to be fully programmable using the thermo-optic tuning mechanism. For the design of both filters, a waveguide bend radius of 125 $\mu$m was used, which results in footprints of 0.3$\times$1.5 cm (MZI + ORR) and 0.4$\times$1.5 cm (MZI + 2 ORRs). The measured filter responses show high frequency selectivity, as depicted in Figure \[TriplexLoss\] (b).
All three standardized TriPleX geometries can be coupled very efficiently to the outside world through the use of integrated spot-size converters. These components are used to adiabatically transform the profile of the optical mode between two sections having different modal characteristics, and are commercially available with a typical coupling efficiency between individual components better than 80% (i.e. a coupling loss smaller than 1 dB).
Other technology {#subsec:other}
----------------
Besides the four platforms discussed above, integrated MWP has been demonstrated with a host of other materials, like GaAs [@NgSPIE1994; @CombrieCLEO2010], LiNbO~3~ [@HorikawaMTT1995; @MitchellLEOS2007; @IlchenkoPTL2008; @WangOpex2009; @WangPTL2010; @WijayantoElectLett2012], polymers [@YeniayJLT2004; @HowleyPTL2005; @HowleyJLT2007; @YeniayPTL2010] and chalcogenide glasses [@MaddenOpex2007; @EggletonNatPhotonics2011; @EggletonLPR2012; @PantOpex2011; @PantOL2012; @PelusiNatPhotonics2009; @ByrnesCLEO2012].
A summary of integrated MWP demonstrations since 1994 until mid-2012 is shown in Table \[tlab\].
--------- ---------------------- --------------------- ---------------- --------- ------------- -------------------------------------------------------------
Year Functionality Key component PIC Technology Loss Bend radius $1^{\mathrm{st}}$ author \[Ref\]
(dB/cm) $(\mu$m)
1994 Beamforming Rib waveguides GaAlAs/GaAs 1 3000 Ng [@NgSPIE1994; @NgPTL1994]
1995 Beamforming Switched delay Silica 0.1 2000 Horikawa [@HorikawaIMS1995; @HorikawaOFC1996]
1995 Beamforming Phase shifter LiNbO~3~ 0.3 - Horikawa [@HorikawaMTT1995]
1997 FM Discrim. MZI filter Silica - - Lagasse [@LaGassePTL1997]
1997 Delay Rib waveguide SOI - 5000 Yegnanarayanan [@YegnanarayananPTL1997]
1999 Beamforming Phase shifter InP 1.4 250 Stulemeijer [@StulemeijerPTL1999]
2000 Delay Channel waveguide Polymer 0.02 - Tang [@TangOptEng2000]
2003 Beamforming Switched delay Silica - 10000 Grosskopf [@GrosskopfFIO2003]
2005 Delay Switched delay, ORR Silica 0.07 1000 Rasras [@RasrasPTL2005]
2005/07 Delay, beamforming Switched delay Polymer 0.64 1750 Howley [@HowleyPTL2005; @HowleyJLT2007]
2005 Delay lines Photonic crystal SOI 64 - Jiang [@JiangSPIE2005]
2007 Beamforming ORR, MZI TriPleX 0.55 700 Zhuang [@ZhuangPTL2007]
2007/09 MWP filter ORR, MZI SOI 0.25 7 Rasras [@RasrasJLT2007; @RasrasJLT2009]
2008 Coherent receiver PM, BPD InP - - Ramaswamy [@RamaswamyJLT2008]
2008 Delay ORR SiON 0.35 570 Melloni [@MelloniOL2008]
2008/09 Differentiator, UWB ORR SOI - 40 Liu [@LiuOpex2008; @LiuElectLett2009]
2009/10 UWB PPLN waveguide LiNbO~3~ - - Wang [@WangOpex2009; @WangPTL2010]
2009 RF phase shifter ORR SOI - 20 Chang [@ChangPTL2009]
2009 RF spectrum analyzer NL waveguide Chalcogenide 0.5 3000 Pelusi [@PelusiNatPhotonics2009]
2010 Integrator ORR Silica 0.06 47.5 Ferrera [@FerreraNatCommunications2010]
2010 Delay ORR SOI 4.5 7 Cardenas [@CardenasOpex2010]
2010 Arb. waveform gen. Add-drop ORR SOI 3.5 5 Shen [@ShenOpex2010], Khan [@KhanNatPhotonics2010]
2010 MWP filter MZI, delay SOI 0.9 - Toliver [@ToliverOFC2010], Feng [@FengOpex2010]
2010 MWP filter ORR SOI 0.5 248 Dong [@DongOpex2010_filter]
2010 Beamforming AWG, delay line Polymer 0.06 - Yeniay [@YeniayPTL2010]
2010 Delay Photonic crystal GaAs - - Combrie [@CombrieCLEO2010]
2010 MWP filter ORR, SOA Hybrid silicon - - Chen [@ChenMTT2010]
2010/11 MWP filter ORR, SOA InP 1 - Norberg [@NorbergPTL2010; @NorbergJLT2011; @GuzzonOpex2011]
2010/11 FM discrim., UWB Add-drop ORR TriPleX 1.2 150 Marpaung [@MarpaungOpex2010; @MarpaungOpex2011]
2010/11 Beamforming, SCT ORR TriPleX 0.6 700 Zhuang [@ZhuangJLT2010], Burla [@BurlaOpex2011]
2011 OSSB filter RAMZI TriPleX 0.1 70 Zhuang [@ZhuangOpex2011]
2011 FM discrim. MZI, RAMZI Silica 0.045 - Wyrwas [@WyrwasMWP2011; @WyrwasThesis2012]
2011 OPLL coh. receiver PM, BPD InP 10 - Li [@LiPTL2011], Bhardwaj [@BhardwajEL2011]
2011 OPLL coh. receiver PM, BPD InP - - Krishnamachari [@KrishnamachariMOTL2011]
2011 Delay Bragg grating SOI 0.5 - Khan [@KhanOpex2011]
2011 MWP filter MZI, ORR SOI 0.5 300 Ibrahim [@IbrahimOpex2011; @DjordjevicPTL2011]
2011 MWP filter ORR TriPleX 0.029 2000 Tien [@TienOpex2011]
2011 MWP filter MZI, ORR SOI 0.7 20 Alipour [@AlipourOpex2011]
2011 Arb. waveform gen. MZI Silica 0.7 2000 Samadi [@SamadiOptComm2011]
2011 UWB ORR SOI - 10 Ding [@YunhongIPC2011]
2012 Delay ORR SOI - 7 Morton [@MortonPTL2012]
2012 Delay, MWP filter SBS in waveguide Chalcogenide 0.3 - Pant [@PantOL2012], Byrnes[@ByrnesCLEO2012]
2012 MWP filter Microdisk III-V/SOI - 4.5 Lloret [@LloretOpex2012]
2012 UWB NL waveguide SOI - - Yue [@YueOL2012]
2012 Photonic ADC Modulator, mux, PD SOI - - Grein [@GreinCLEO2011], Khilo [@KhiloOpex2012]
2012 FM Discrim. RAMZI InP - 150 Fandiño [@FandinoECIO2012]
2012 UWB ORR SOI 5 20 Mirshafiei [@MirshafieiPTL2012]
--------- ---------------------- --------------------- ---------------- --------- ------------- -------------------------------------------------------------
High dynamic range microwave photonic link {#sec:APL}
==========================================
With the development of new photonic technologies and components, new types of MWP links have been reported. As mentioned in Subsection \[subsec:FOM\], these investigations were driven by the need for higher performance links. Theoretical investigations have shown that an ideal Class-B photonic link would feature an SFDR in excess of 180 dB$\cdot$Hz for a relatively high photocurrent of 50 mA [@ZhangPTL2007]. In such an ideal Class-B photonic link, the input RF signal is half-wave rectified. Positive voltage is converted linearly to intensity and transmitted on one optical link. Negative voltage is transmitted as intensity over a second, matched link. A balanced detector is used to subtract the two detected complementary half-wave rectified signals and restore the original RF signal with zero DC bias [@DarcieJLT2007]. In such a link a significant reduction of the shot noise and RIN can be expected due to the absence of the DC bias optical power. However, the realization of such a link has been found difficult, especially with the IMDD scheme, either using MZMs [@DarciePTL2007] or directly modulated lasers [@MarpaungMWP2006].
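As an illustration of the Class-B principle described above, the following minimal numerical sketch (with normalized, purely illustrative signal levels) shows how the two half-wave rectified intensity signals recombine at the balanced detector into a replica of the input RF voltage without any DC bias:

```python
import numpy as np

# Minimal sketch of the Class-B link principle (normalized, illustrative values).
t = np.linspace(0, 1e-9, 1000, endpoint=False)   # 1 ns time window
v_rf = np.sin(2 * np.pi * 5e9 * t)               # 5 GHz input RF voltage

# Each link carries one half-wave rectified copy as optical intensity,
# so neither link requires a DC (bias) optical power.
intensity_pos = np.clip(v_rf, 0.0, None)         # link 1: positive half-waves
intensity_neg = np.clip(-v_rf, 0.0, None)        # link 2: negative half-waves

# Balanced detection subtracts the two photocurrents and restores the RF signal.
i_balanced = intensity_pos - intensity_neg
assert np.allclose(i_balanced, v_rf)             # original waveform, zero DC offset
```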
![Two types of phase-modulated (PM) MWP links. (a) Direct detection (DD) with a frequency discriminator. (b) Coherent detection (CD) with an optical phase-locked loop (OPLL).[]{data-label="MWPL"}](PMMWPL){width="\linewidth"}
For this reason many have turned to phase or frequency modulation schemes to increase the MWP link performance and to eventually realize the ideal Class-B link. Phase modulation is highly attractive because it is intrinsically highly linear and its operation does not require biasing. Frequency modulation is identical to phase modulation but with a modulation depth that is linearly dependent on the modulation frequency. Moreover, FM lasers have been demonstrated with high modulation efficiency, thereby promising a high link gain if implemented in MWP links [@WyrwasJLT2009]. The main challenge in phase or frequency modulation is to restore the modulating microwave signals (i.e. demodulation) in a linear manner. Two options that have recently gained popularity are the direct detection scheme using a frequency discriminator (Figure \[MWPL\] (a)) and the coherent detection scheme using an optical phase-locked loop (Figure \[MWPL\] (b)).
Direct detection scheme with frequency discriminator {#subsec:PMIM}
----------------------------------------------------
In this approach, a phase-modulated signal is converted to intensity modulation (PM-IM conversion) using an optical discriminator, thereby allowing a simple direct detection scheme. This approach is attractive for the additional degree of freedom in tailoring the characteristic of the optical filter discriminator to enhance the MWP link performance. The photonic discriminator can be designed to increase the link linearity and/or suppress the noise in the MWP link. Different filter types have been proposed as the photonic discriminator, the simplest one being a Mach-Zehnder interferometer (MZI) [@LaGassePTL1997; @UrickMTT2007; @McKinneyJLT2009]. The linearized version of the MZI filter approach has shown a very large SFDR (above ${125~\rm dB}\cdot{\rm Hz}^{2/3}$ at 5 GHz) but suffers from a limited bandwidth. In another approach, Darcie et al. proposed the use of a phase modulator with a pair of fiber Bragg gratings as the frequency discriminators [@DarcieOFC2006; @DarcieJLT2007; @DriessenJLT2008]. The FBGs were custom designed to realize a linear slope for the PM-IM conversion in order to realize the Class-B photonic link. However, these FBGs and the required optical circulators are bulky, thus preventing a compact discriminator. An SFDR of ${110~\rm dB}\cdot{\rm Hz}^{2/3}$ has been shown with this approach.
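To make the PM-IM conversion step concrete, the following baseband sketch passes a weakly phase-modulated optical field through an MZI discriminator biased at quadrature; the delay, bias and modulation values are illustrative assumptions, not parameters of the cited discriminators:

```python
import numpy as np

# Baseband sketch of PM-IM conversion in an MZI discriminator (values assumed).
fs = 200e9                        # sample rate: 200 GS/s
t = np.arange(0, 20e-9, 1 / fs)
f_rf = 2e9                        # 2 GHz modulating tone
beta = 0.1                        # small phase modulation index (rad)
field = np.exp(1j * beta * np.sin(2 * np.pi * f_rf * t))   # phase-modulated field

tau = 5e-12                       # MZI differential delay (FSR = 1/tau = 200 GHz)
theta = np.pi / 2                 # quadrature bias gives a linear PM-IM slope
delayed = np.roll(field, int(round(tau * fs))) * np.exp(1j * theta)

intensity = np.abs(field + delayed) ** 2 / 4    # photodetected intensity
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
f_axis = np.fft.rfftfreq(len(t), 1 / fs)
print(f"detected tone at {f_axis[np.argmax(spectrum)] / 1e9:.1f} GHz")  # ~2.0 GHz
```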
To realize compact frequency discriminators many have turned to photonic integrated circuits. The idea to use an integrated photonic filter was initially proposed by Xie et al. [@XiePTL2002_grating; @XiePTL2002_ring] in 2002. However, the study did not lead to any device realization or experiment. In 2010 Marpaung et al. demonstrated the first PIC frequency discriminator for an MWP link [@MarpaungMWP2010; @MarpaungOpex2010]. The chip consists of five fully-reconfigurable optical ring resonators in add-drop configuration (Figure \[Discriminator\] (a)). The chip was fabricated using the box-shaped TriPleX waveguides (Figure \[Discriminator\] (b)). Programmability of the chip is achieved using a thermo-optic tuning mechanism. The discriminator was used to demonstrate linear operation, indicated by the high IIP2 (46 dBm) and IIP3 (36 dBm) achieved at a single bias point (Figure \[Discriminator\] (e)). For shot-noise-limited performance, the predicted SFDR is ${113~\rm dB}\cdot{\rm Hz}^{2/3}$ at 2 GHz. However, the high chip insertion loss prevents this SFDR from being reached, owing to the high amplified spontaneous emission (ASE) noise from the EDFAs. Reducing the fiber-to-chip coupling and waveguide propagation losses will dramatically increase this SFDR.
Subsequently, a discriminator chip consisting of cascaded Mach-Zehnder interferometers (MZIs) has been demonstrated [@WyrwasOFC2010; @WyrwasMWP2011; @WyrwasThesis2012]. The phase discriminator is a $6^{\rm{th}}$ order finite impulse response (FIR) lattice filter (Figure \[Discriminator\] (c)) fabricated in a silica-on-silicon, planar lightwave circuit (PLC) process by Alcatel-Lucent Bell Labs (Figure \[Discriminator\] (d)). It has 6 stages of symmetrical MZIs (switches) and asymmetrical MZIs (delay line interferometers) with an FSR of 120 GHz. The discriminator is tunable using chromium heaters deposited on the waveguides and is dynamically tuned to minimize the link distortion. At the optimal wavelength, the RF input power (two tones around 2 GHz) into the link is varied and the IMD3 and fundamental powers are measured. The data are shown in Figure \[Discriminator\] (f). For a photocurrent of 0.11 mA, the measured OIP3 is -19.5 dBm, which is a 6.7 dB OIP3 improvement over an MZI with the same received photocurrent. For shot-noise-limited noise performance, the link SFDR is ${112~\rm dB}\cdot{\rm Hz}^{2/3}$. If the photocurrent can be increased to 10 mA, the OIP3 increases to 19.7 dBm and the shot-noise-limited SFDR is ${125~\rm dB}\cdot{\rm Hz}^{2/3}$.
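The SFDR figures quoted above follow from the standard relation ${\rm SFDR}=\tfrac{2}{3}\,({\rm OIP3}-N_{\rm out})$, with the output noise floor $N_{\rm out}$ expressed in dBm/Hz. The sketch below evaluates this for an assumed shot-noise-limited case; the photocurrent, load and OIP3 values are illustrative, and the exact noise accounting (ASE, RIN, balanced detection) differs from link to link, so this is not meant to reproduce the quoted numbers exactly:

```python
import math

# Generic shot-noise-limited SFDR estimate from OIP3 (assumed, illustrative values).
q = 1.602e-19        # electron charge (C)
R = 50.0             # load resistance (ohm), assumed
I_pd = 10e-3         # average photocurrent (A), assumed
OIP3_dBm = 20.0      # output third-order intercept point (dBm), assumed

shot_psd_W_per_Hz = 2 * q * I_pd * R                  # output shot-noise PSD
N_dBm_per_Hz = 10 * math.log10(shot_psd_W_per_Hz / 1e-3)
SFDR = (2.0 / 3.0) * (OIP3_dBm - N_dBm_per_Hz)        # in dB.Hz^(2/3)
print(f"noise floor = {N_dBm_per_Hz:.1f} dBm/Hz, SFDR = {SFDR:.1f} dB.Hz^(2/3)")
```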
Research activities towards on-chip FM discriminators are expected to increase significantly. Recently, a device which includes a tunable optical filter acting as a frequency discriminator and a high-speed balanced photodetector integrated in the same chip has been proposed by researchers at UPV Valencia, Spain [@FandinoECIO2012]. The filter has a cascade of two integrated ring-assisted Mach-Zehnder interferometers (RAMZIs) in each of the two branches. The filter is fabricated in InP technology. Deeply-etched (1.7 $\mu$m) rib waveguides with an InGaAsP core are used to enable sharp bends (150 $\mu$m) and to minimize the chip area to $6\times6$ mm$^2$. The performance of the MWP link using this chip has not yet been reported.
Coherent detection with integrated optical phase locked loop {#subsec:OPLL}
------------------------------------------------------------
The second approach to achieve linear phase demodulation is to use a coherent optical link and to detect the optical PM using an optical phase locked loop (OPLL) [@RamaswamyJLT2008; @KrishnamachariMOTL2011; @LiPTL2011; @BhardwajEL2011]. This alternative is complex, but offers potentially high performance. The challenge of this approach is to realize a phase tracking receiver that can follow the linear phase modulation applied at the transmitter to ultimately realize a linear transmission. As shown in Figure \[MWPL\] (b), the phase of the received signal is detected using a balanced detector by comparison with that of a local oscillator (LO) laser. The output signal is then reapplied to a receiver PM to modulate the LO phase and drive the loop such that the voltage driving the receiver PM is a replica of the transmitted signal. While attractive and simple in theory, this is difficult to achieve in practice, particularly at the multi-GHz rates encountered in microwave photonics. The loop gain must be high to improve linearity over simpler approaches. This can be accomplished using either high optical power at the receiver or electronic amplification. To ensure loop stability a low-pass filter (LPF) must be used. In practice, the loop delay must be a small fraction (e.g. 1/5) of the period of the maximum RF frequency. This not only calls for monolithic integration of both the electronics and photonics, it also demands the elimination of any unwanted signal paths between the two.
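The rule of thumb above translates into a very tight delay budget at microwave frequencies, which is what pushes this approach towards monolithic integration. A simple sketch (the exact admissible fraction depends on the loop design; the frequencies are illustrative):

```python
# Loop-delay budget implied by the rule of thumb above (delay <= 1/5 of the
# RF period); the listed frequencies are illustrative.
for f_rf_GHz in (0.3, 1.0, 2.7, 10.0):
    period_ps = 1e3 / f_rf_GHz            # RF period in picoseconds
    max_delay_ps = period_ps / 5.0        # allowed loop delay
    print(f"{f_rf_GHz:5.1f} GHz -> loop delay budget ~ {max_delay_ps:6.1f} ps")
```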
In [@RamaswamyJLT2008], an InP photonic integration platform was fabricated consisting of a balanced uni-traveling carrier (UTC) photodetector pair, a $2\times2$ waveguide multimode interference (MMI) coupler and tracking phase modulators in a balanced configuration. The schematic and the SEM image of the realized device are shown in Figure \[OPLL\] (a). The tracking optical phase modulators are driven differentially so as to add opposite-sign phase shifts to the incoming signal and LO, resulting in a cancellation of even-order nonlinearities and common-mode noise. Additionally, driving the modulators in a differential fashion doubles the drive voltage presented to the modulator, thereby doubling the available phase swing. The photonic integrated circuit (PIC) was wirebonded to the electronic integrated circuit (EIC) used to provide feedback gain and filtering. Using this device a 3-dB loop bandwidth of 1.45 GHz was demonstrated, and SFDRs of ${125~\rm dB}\cdot{\rm Hz}^{2/3}$ at 300 MHz and ${113~\rm dB}\cdot{\rm Hz}^{2/3}$ at 1 GHz were achieved. The reduced SFDR at 1 GHz is due to the large loop delay ($\approx$ 35 ps) of this receiver, with a substantial portion coming from the wirebonds.
![OPLL realizations in InP for coherent detection MWP links. (a) SEM and block diagram of the integrated optoelectronic receiver reported in [@RamaswamyJLT2008], courtesy of the IEEE. (b) Layout of the PIC and (c) flip-chip bonded PIC and EIC reported in [@KrishnamachariMOTL2011], courtesy of Wiley. Photograph (d) and layout (e) of the ACP-OPLL reported in [@LiPTL2011], (d) courtesy of the SPIE, (e) courtesy of the IEEE. []{data-label="OPLL"}](OPLL){width="\linewidth"}
To reduce the loop delay, the same group proposed a novel ultra compact coherent receiver PIC containing two push-pull phase modulators, a balanced UTC photodetector pair and an ultrashort frustrated total internal reflection (FTIR) trench coupler. Moreover, a smaller delay can be obtained by compact integration of the PIC and EIC via flip-chip bonding [@KrishnamachariMOTL2011]. The fully fabricated PIC is shown in Figure \[OPLL\] (b). The flip-chip bonded PIC and EIC are shown in Figure \[OPLL\] (c). Using this device an SFDR of ${122~\rm dB}\cdot{\rm Hz}^{2/3}$ at 300 MHz has been achieved.
A different approach to control such a short loop propagation delay has been proposed by Li et al. [@LiJLT2009]. The configuration consists of the so-called attenuation-counterpropagating (ACP) in-loop phase modulator, where the optical and microwave fields propagate in opposite directions and the microwave field is strongly attenuated. This unique configuration eliminates the propagation delay of the in-loop phase modulator at the expense of a tolerable decrease in bandwidth. The proof of concept of this ACP-OPLL was experimentally demonstrated using an MZ-like structure fabricated in LiNbO~3~ butt-coupled to a bulk UTC BPD. A standard two-tone test was performed to probe the linearity of the device and the SFDR was measured to be ${134~\rm dB}\cdot{\rm Hz}^{2/3}$ at a frequency of 100 MHz. In an attempt to extend the high SFDR to higher frequencies, a monolithically integrated ACP-OPLL consisting of the phase modulators and the UTC BPD was fabricated in an InP-based material platform [@LiPTL2011; @BhardwajEL2011]. The photograph and the layout of the realized device are shown in Figure \[OPLL\] (d-e). The OPLL was designed to reach an OIP3 $>$ 40 dBm and an SFDR of ${140~\rm dB}\cdot{\rm Hz}^{2/3}$ over a bandwidth beyond 2.7 GHz [@LiPTL2011]. However, due to a faulty BPD, the loop showed a small bandwidth ($<$ 200 MHz) and a relatively low OIP3 (13 dBm). Thus the SFDR was limited to ${124.5~\rm dB}\cdot{\rm Hz}^{2/3}$ at 150 MHz. The predicted shot-noise-limited SFDR is ${130.1~\rm dB}\cdot{\rm Hz}^{2/3}$ at 150 MHz.
A quick comparison of the two detection schemes leads to the conclusion that the frequency discriminator approach has shown better performance at higher frequencies. The operating frequency is actually not a limiting factor for these discriminators, since it is bounded by the filter FSR, which can be relatively large. The OPLL approach with a PIC is very promising for very large SFDR, up to ${140~\rm dB}\cdot{\rm Hz}^{2/3}$, but the current implementations are limited to low frequencies.
Microwave photonic filters {#sec:MWPfilter}
==========================
A microwave photonic filter [@MinasianMTT2006; @CapmanyJLT2005; @CapmanyJLT2006] is a photonic subsystem designed with the aim of carrying out tasks equivalent to those of an ordinary microwave filter within a radio frequency (RF) system or link, bringing supplementary advantages inherent to photonics such as low loss, high bandwidth and immunity to electromagnetic interference (EMI), and also providing features which are very difficult or even impossible to achieve with traditional technologies, such as fast tunability and reconfigurability. The term microwave is used loosely throughout the literature to designate either RF, microwave, or millimeter-wave signals. Figure \[MPF1\] shows a generic reference layout of an MWP filter.
![Generic reference model of a Microwave Photonic filter.[]{data-label="MPF1"}](MPF_1){width="\linewidth"}
An input RF signal (with spectrum sidebands centered at frequencies $\pm f_{\mathrm{RF}}$, shown in point 1) coming from a generator or detected by means of a single antenna or an antenna array is used to modulate the output of an optical source, which upconverts its spectrum to the optical region (point 2), such that the sidebands are now centered at $\nu\pm f_{\mathrm{RF}}$, where $\nu$ represents the central frequency of the optical source. The combined optical signal is then processed by an optical system composed of several photonic devices and characterized by an optical field transfer function $H\left(\nu\right)$. The mission of the optical system is to modify the spectral characteristics of the sidebands so that at its output they are modified according to a specified requirement, as shown in point 3. Finally, an optical detector is employed to downconvert the processed sidebands again to the RF part of the spectrum by suitable beating with the optical carrier, so that the recovered RF signal, now processed (as shown in point 4), is ready to be sent to an RF receiver or to be re-radiated. The overall performance of the filter is characterized by an end-to-end electrical transfer function $H\left(f_{\mathrm{RF}}\right)$, which is shown in Figure \[MPF1\] and links the input and output RF signals. The most powerful and versatile approach for the implementation of MWP filters is that based on discrete-time signal processing [@CapmanyJLT2005], where a number of weighted and delayed samples of the RF signal are produced in the optical domain and combined upon detection. In particular, finite impulse response (FIR) filters [@CapmanyJLT2006] combine at their output a finite set of delayed and weighted replicas or taps of the input optical signal, while infinite impulse response (IIR) filters are based on recirculating cavities to provide an infinite number of weighted and delayed replicas of the input optical signal [@CapmanyJLT2006]. For instance, taking an FIR configuration as an example, the electrical transfer function is given by:
$$\label{eq:MPFeq1}
H\left(f_{\mathrm{RF}}\right) =
\sum_{k=0}^{N-1} a_{k}\,e^{-j 2 \pi k f_{\mathrm{RF}} T} \,,$$
where $a_{k}=\left|a_{k}\right|\,e^{-j k \phi}$ represents the weight of the $k$-th sample, and $T$ the time delay between consecutive samples. Note that Eq. (\[eq:MPFeq1\]) implies that the filter is periodic in the frequency domain. The period, known as free spectral range (FSR), is given by $f_{\mathrm{FSR}}=1/T$. The usual implementation of this concept in the context of microwave photonics can follow two approaches, as shown in Figure \[MPF2\]. In the first one (Figure \[MPF2\]a), the delays between consecutive samples are obtained, for instance, by means of a set of optical fibers or waveguides, where the length of the fiber/waveguide in the $k$-th tap is $c\left(k-1\right)T/n$, with $c$ and $n$ the speed of light in vacuum and the refractive index, respectively. This simple scheme does not allow tuning, as this would require changing the value of $T$. An alternative approach (Figure \[MPF2\]b) is based on the combination of a dispersive delay line and different optical carriers, where the value of the basic delay $T$ is changed by tuning the wavelength separation among the carriers, thereby allowing tunability [@CapmanyJLT2005]. While in the first case the weight of the $k$-th tap, represented by $a_{k}$, can be changed by inserting loss/gain devices in the fiber coils, with the second approach $a_{k}$ is readily adjusted by changing the optical power emitted by the optical sources [@CapmanyJLT2005].
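As a numerical illustration of Eq. (\[eq:MPFeq1\]), the following sketch evaluates the response of an assumed uniform 8-tap FIR filter with $T=100$ ps and verifies that the passbands repeat with $f_{\mathrm{FSR}}=1/T=10$ GHz:

```python
import numpy as np

# Numerical sketch of Eq. (eq:MPFeq1) for an illustrative uniform 8-tap FIR
# MWP filter with T = 100 ps, i.e. FSR = 1/T = 10 GHz (assumed values).
N, T = 8, 100e-12
a = np.ones(N) / N                              # uniform tap weights |a_k| = 1/N
f_rf = np.linspace(0, 25e9, 2501)               # RF frequency axis, 0-25 GHz

H = sum(a[k] * np.exp(-1j * 2 * np.pi * k * f_rf * T) for k in range(N))
H_dB = 20 * np.log10(np.abs(H) + 1e-12)

# Passband maxima appear at multiples of the FSR = 1/T:
passbands = f_rf[np.isclose(np.abs(H), 1.0, atol=1e-6)]
print(passbands / 1e9)                          # -> [ 0. 10. 20.] GHz
```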
![General schematics of a discrete-time FIR MWP. (a) Traditional approach based on a single optical source in combination with multiple delay lines. (b) A more compact approach based on a multi-wavelength optical source combined with a single dispersive element.[]{data-label="MPF2"}](MPF_2){width="\linewidth"}
Finally, MWP filters can operate either in the *incoherent regime*, where the sample coefficients in Eq. (\[eq:MPFeq1\]) correspond to optical intensities and are thus positive, or in the *coherent regime*, where the taps in Eq. (\[eq:MPFeq1\]) can in general be complex-valued. In the first case the basic delay $T$ is much greater than the coherence time [^6] of the optical source that feeds the filter, while in the second it is much smaller.
Requirements of microwave photonic filters {#subsec:filterrequirements}
------------------------------------------
MWP filter flexibility in terms of tunability, reconfigurability and selectivity is achieved by acting on the different parameters characterizing the samples in Eq. (\[eq:MPFeq1\]), with a variety of techniques having been reported in the literature [@MoraOL2003; @CapmanyOL2003; @CapmanyOpex2005; @SupradeepaNatPhotonics2012]. The effect of the relevant parameters in Eq. (\[eq:MPFeq1\]) on the filter response is illustrated in Figure \[MPF3\].
![Illustration of the requirements on sample parameters to achieve MWP filter tunability, reconfigurability, selectivity.[]{data-label="MPF3"}](MPF_3){width="\linewidth"}
The number of samples $N$ dictates whether the filter is a notch $\left(N=2\right)$ or a bandpass $\left(N>2\right)$ type. As mentioned above, $T$ fixes the spectral period, thus changing $T$ results in compressing or stretching the spectral response. This is a technique usually employed in the literature for tuning the notch or bandpass positions of an MWP filter. Fast tuning can be achieved by changing the wavelength separation between adjacent carriers in the scheme of Figure \[MPF2\]b, with a current record value in the range of 40 ns [@SupradeepaNatPhotonics2012]. The phases of the tap coefficients allow tuning of the spectral response without actually stretching or compressing it. The implementation of the phase values depends on the approach followed. For incoherent MWP filters, a photonic RF phase shifter is required, which can be implemented in a variety of technologies, including stimulated Brillouin scattering, coherent population oscillations in SOA devices and passive configurations based on ring cavities and resonators. All of the above can provide the required phase-shift dynamic range, but only the last two are amenable to integration and can provide switching speeds below one microsecond. For coherent filters, the phase shifts are provided optically by photonic components. The law followed by the tap coefficient moduli dictates the filter shape (reconfiguration). Filters featuring different windowing functions, both static and dynamically reconfigurable, have been reported in the literature, where the tap amplitudes have been set using different techniques, including spatial light modulators, SOAs, and also by fixing the output power of laser modes. Finally, the filter selectivity is dictated by the number of samples, which determines the quality factor and the main-to-secondary sidelobe (SSL) rejection ratio. FIR schemes using multiwavelength sources can provide from 40 to over 60 samples, with the current record featuring an SSL value of 61 dB.
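As a simple worked example of tuning by stretching or compressing the response, in the dispersive scheme of Figure \[MPF2\]b the basic delay is $T=D\,L\,\Delta\lambda$, so the FSR scales inversely with the wavelength spacing; the dispersion and fiber length below are assumptions typical of standard single-mode fiber:

```python
# Tuning the FSR of the dispersive-delay-line filter of Figure [MPF2](b):
# T = D * L * d_lambda, so the spectral period is f_FSR = 1 / T.
# D, L and the wavelength spacings below are illustrative assumptions.
D = 17e-12 / 1e-9 / 1e3        # fiber dispersion: 17 ps/(nm km), in s/(m m)
L = 5e3                        # dispersive fiber length: 5 km

for d_lambda_nm in (0.2, 0.4, 0.8):
    T = D * L * d_lambda_nm * 1e-9          # basic tap delay (s)
    print(f"d_lambda = {d_lambda_nm} nm -> T = {T*1e12:.0f} ps, "
          f"FSR = {1/T/1e9:.1f} GHz")
```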
Coherent filters {#subsec:coherent}
----------------
As far as integrated MWP coherent filtering is concerned, several groups have reported results [@RasrasJLT2007; @RasrasJLT2009; @TuJLT2010; @NorbergPTL2010; @ChenMTT2010; @GuzzonOpex2011; @NorbergJLT2011; @GuzzonJQE2012; @FengOpex2010; @DongOpex2010_filter; @DjordjevicPTL2011; @IbrahimOpex2011; @AlipourOpex2011]. Many of the preliminary approaches have been based mainly on single-cavity ring resonators. A few, however, have also focused on more elaborate designs involving more than one cavity and programmable features. These filters can be useful particularly when the RF information has already been modulated onto the lightwave carrier and it might be advisable to perform some prefiltering in the optical domain prior to the receiver. Representative results from one-cavity filters can be found in [@NorbergPTL2010; @ChenMTT2010; @NorbergJLT2011]. For instance, [@NorbergJLT2011] reports the results for a unit cell, shown in the upper part of Figure \[MPF4\], that could be an element of more complex lattice filters.
![Integrated InP-InGaAsP first-order MWP coherent filter providing one pole and one zero reported in [@NorbergJLT2011]. (Upper) schematic of a unit cell configuration and SEM images of the contacts and waveguides. (Lower) measured transfer functions for one pole (left), one zero (center) and one pole-one zero (right) configurations (courtesy of the IEEE).[]{data-label="MPF4"}](MPF_4){width="\linewidth"}
This unit cell, integrated in InP-InGaAsP, is composed of two forward paths and contains one ring. By selectively biasing one semiconductor optical amplifier (SOA) and the phase modulators placed in the arms of the unit cell, filters with a single pole, a single zero or a combination of both can be programmed, as shown in the lower part of Figure \[MPF4\]. In particular, for the design reported in [@NorbergJLT2011] the frequency tuning range spans around 100 GHz. A hybrid version incorporating silicon waveguides has also been reported [@ChenMTT2010] that combines III-V quantum well layers bonded with low-loss passive silicon waveguides. Low-loss waveguides allow for long loop delays, while III-V quantum devices provide active tuning capability. The same group involved in [@GuzzonJQE2012] is now reporting results of more complex designs involving second- and third-order filters as well as other unit cell configurations [@GuzzonOpex2011]. A more complex design, this time in silicon, has also been presented recently [@FengOpex2010; @DongOpex2010_filter], where 1-2 GHz-bandwidth filters with very high extinction ratios (around 50 dB) have been demonstrated. The silicon waveguides employed to construct these filters have propagation losses of $\sim$0.5 dB/cm. Each ring of a filter is thermally controlled by metal heaters situated on top of the ring. With a power dissipation of $\sim$72 mW, the ring resonance can be tuned by one free spectral range, resulting in wavelength-tunable optical filters. Both second-order and fifth-order ring resonators have been demonstrated, which can find ready application in microwave/radio frequency signal processing. The upper part of Figure \[MPF5\] shows the filter layouts while their spectral responses can be found in the lower part of the same figure.
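For reference, the kind of single-pole passband provided by one add-drop ring can be sketched from the standard drop-port expression; the coupling, loss and FSR values below are assumptions and do not correspond to any of the cited devices:

```python
import numpy as np

# Drop-port response of a single add-drop ring resonator (one optical pole).
# Coupling coefficients, round-trip loss and FSR are illustrative assumptions.
t1 = t2 = 0.95            # through-coupling (field) coefficients
a = 0.99                  # single-pass field transmission (round-trip loss)
FSR = 20e9                # ring free spectral range: 20 GHz

f = np.linspace(-FSR / 2, FSR / 2, 4001)    # optical detuning from resonance
phi = 2 * np.pi * f / FSR                   # round-trip phase

drop = ((1 - t1**2) * (1 - t2**2) * a) / (1 - 2 * t1 * t2 * a * np.cos(phi)
                                          + (t1 * t2 * a) ** 2)
drop_dB = 10 * np.log10(drop)

# 3-dB bandwidth of the passband around resonance:
bw = f[drop_dB > drop_dB.max() - 3]
print(f"3-dB bandwidth ~ {(bw.max() - bw.min()) / 1e9:.2f} GHz")
```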
![Integrated two (upper) and five (middle) cavity Silicon MWP coherent filters. Details of measured passbands (lower) for the two (left) and five (right) cavity structures as reported in [@FengOpex2010] (courtesy of the OSA).[]{data-label="MPF5"}](MPF_5){width="\linewidth"}
Incoherent filters {#subsec:incoherent}
------------------
Work on integrated MWP incoherent filters has been reported as well by various groups [@MunozJLT2002; @PastorOL2003; @PoloPTL2003; @XuePTL2009; @LloretOpex2011]. Initially, research efforts focused on the use of integrated arrayed waveguide grating (AWG) devices [@MunozJLT2002] to perform a variety of functions, including spectral slicing to provide low-cost multiple input optical carriers [@PastorOL2003] or selective true time delays [@PoloPTL2003]. More recent efforts have focused on the implementation of complex-valued sample filters by exploiting several techniques to integrate MWP phase shifters. For instance, in [@XuePTL2009] a two-tap tunable notch filter configuration is proposed where phase shifting is achieved by means of coherent population oscillations in SOA devices followed by optical filtering. In another approach [@LloretOpex2011], the periodic spectrum of an integrated SOI ring resonator is employed as a multicarrier tunable and independent phase shifter. The configuration for a 4-tap filter is shown in Figure \[MPF6\].
![Tunable incoherent MWP filter based on multiple phase shifters implemented by periodic resonances of an integrated SOI ring resonator. (Upper left) filter configuration. (Upper right) amplitude and phase response of the SOI ring resonator. (Lower left) spectral locations of the four optical carriers and subcarriers to achieve respectively 0 and $90^\circ$ phase shifts. (Lower right) Filter transfer functions corresponding to the two cases (0 and $90^\circ$ phase shifts) showing tunability.[]{data-label="MPF6"}](MPF_6){width="\linewidth"}
Here the basic differential delay between samples is implemented in the first stage, while an independent phase coefficient for each tap is selected in the ring resonator by fine tuning the wavelength of each carrier. A similar configuration based on a hybrid InP-SOI tunable phase shifter has also been reported recently [@LloretOpex2012]. In this case tuning is achieved not by changing the source wavelength but by carrier injection into the III-V microdisk. A more versatile configuration, which can provide both phase and optical delay line tuning, has recently been reported [@BurlaOpex2011]. It consists of a reconfigurable optical delay line (ODL) with a separate carrier tuning (SCT) [@MortonPTL2009] unit and an optical sideband filter on a single CMOS-compatible photonic chip. The processing functionalities are carried out with optical ring resonators as building blocks, demonstrating reconfigurable microwave photonic filter operation over a bandwidth of more than 1 GHz. Most of the incoherent MWP filters reported so far require a dispersive delay line, which is usually implemented by either a dispersive fiber link or a linearly chirped fiber Bragg grating; these, being bulky devices, prevent complete integration of the filter on a chip. The ultimate and most challenging limitation towards the full implementation of integrated microwave photonic signal processors is therefore the availability of a dispersive delay line with a footprint compatible with the chip size, providing at the same time the group delay variation required by high-frequency RF applications. A challenging and attractive approach is that based on a photonic crystal (PhC) waveguide which, if suitably designed, can fulfill the above requirements while introducing moderate losses. Researchers have recently demonstrated for the first time both notch and bandpass microwave filters based on such a component. Tuning over the 0-50 GHz spectral range is demonstrated by adjusting the optical delay. The underlying technological achievement is a low-loss 1.5 mm long photonic crystal waveguide capable of generating a controllable delay up to 170 ps with limited signal attenuation and degradation. Owing to its very small footprint, more complex and elaborate filter functions are potentially feasible with this technology.
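A quick check of what these delay-line figures imply for the slow-light waveguide (assuming the 170 ps refers to the total group delay accumulated over the 1.5 mm length):

```python
# Group index implied by the photonic-crystal delay-line figures quoted above
# (1.5 mm length, up to 170 ps of delay; assumed to be the total group delay).
c = 299792458.0          # speed of light in vacuum (m/s)
length = 1.5e-3          # waveguide length (m)
delay = 170e-12          # maximum controllable delay (s)

n_g = c * delay / length
print(f"group index n_g ~ {n_g:.0f}")   # ~34, far beyond conventional waveguides
```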
Optical delay line and beamforming {#sec:delay}
==================================
Delaying and phase shifting RF signals are the basic building blocks for more complicated signal processing functionalities. In this section we review the integrated MWP techniques that have been proposed for tunable optical delay, phase shifting and beamforming.
Time delay and phase shifter {#subsec:delaytechnique}
----------------------------
Reconfigurable optical delay lines (ODLs) and wideband tunable phase shifters are of primary importance in a number of MWP signal processing applications like optical beamforming and MWP filtering. The simplest way of generating delay in the optical domain is through the physical length of optical fibers. However, this can become rather bulky. For this reason, integrated photonic solutions are used. A number of approaches have been reported over the years. For example, optical switches can be used to provide discretely tunable delay by selecting waveguides with different propagation lengths. This approach has been demonstrated using devices in silica [@RasrasPTL2005] and in polymer [@HowleyPTL2005]. Others proposed tunable delay based on optical filters [@LenzJQE2006]. For example, cascaded ORRs have been demonstrated for tunable delays in silica [@RasrasPTL2005], TriPleX [@ZhuangPTL2007], silicon oxynitride (SiON) [@MelloniOL2008], and SOI [@CardenasOpex2010; @MortonPTL2012]. Others used integrated Bragg gratings in SOI that can be either electrically [@KhanOpex2011] or thermally tuned [@GiuntoniOpex2012].
Besides delay, phase shifting is also attractive for a number of signal processing applications. For narrowband phase shifts, SOI ring resonators have been used as widely tunable RF phase shifters [@ChangPTL2009; @LloretOpex2011]. Others used semiconductor waveguides in SOAs [@XueOL2009; @XueOpex2010], or a microdisk in a hybrid III-V/SOI platform [@LloretOpex2012].
![A signal processor based on separate carrier tuning scheme consisting of (a) an optical sideband filter (OSBF), (b) optical delay lines (ODL), and (c) carrier phase tuner (SCT). (d-f) denote the measured responses of the OSBF, ODL and SCT, respectively (from [@BurlaOpex2011], courtesy of the OSA).[]{data-label="SCT"}](SCT){width="\linewidth"}
To have a complete signal processing capabilities, it is attractive to obtain both delay and phase shift at different signal frequency components. This is widely known as the separate carrier tuning (SCT) scheme, proposed by Morton and Khurgin [@MortonPTL2009]. Chin et al. demonstrated this functionality in optical fiber using the stimulated Brillouin scattering (SBS) effect [@ChinOpex2010]. Burla et al. [@BurlaOpex2011] demonstrated the SCT scheme together with optical single sideband filtering monolithically integrated in a single chip. The processor consists of a reconfigurable optical delay line, a separate carrier tuning unit and an optical sideband filter. The optical sideband filter, a Mach-Zehnder interferometer loaded with an optical ring resonator in one of its arms, removes one of the radio frequency sidebands of a double-sideband intensity-modulated optical carrier. The ODL and separate carrier tuning unit are individually implemented using a pair of cascaded optical ring resonators. Varying the group delay of the signal sideband by tuning the resonance frequencies and the coupling factor of the optical ring resonators in ODL, while also applying a full $0-2\pi$ carrier phase shift in separate carrier tuning, allowed the demonstration of a two-tap microwave photonic filter whose notch position can be shifted by 360over a bandwidth of 1 GHz. The principle of this SCT scheme is depicted in Figure \[SCT\].
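A minimal sketch of why the separate carrier phase shift tunes such a two-tap filter: with a differential delay $T$ and a carrier phase $\phi_c$, the response is proportional to $1+e^{-j(2\pi f T+\phi_c)}$, so varying $\phi_c$ moves the notch without changing the FSR (the delay and phase values below are illustrative):

```python
import numpy as np

# Two-tap MWP filter with a separately tuned carrier phase, as in the SCT
# scheme: H(f) ~ 1 + exp(-j(2*pi*f*T + phi_c)). Values are illustrative.
T = 500e-12                                     # differential delay -> FSR = 2 GHz
f = np.linspace(0, 2e9, 2001)                   # one FSR

for phi_c in (0.0, np.pi / 2, 3 * np.pi / 2):
    H = 1 + np.exp(-1j * (2 * np.pi * f * T + phi_c))
    notch = f[np.argmin(np.abs(H))]             # notch where the two taps cancel
    print(f"phi_c = {phi_c:4.2f} rad -> notch at {notch / 1e9:.2f} GHz")
```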
Optical beamforming {#subsec:OBFN}
-------------------
In a phased-array antenna, the beam is formed by adjusting the phase relationship between a number of radiating elements [@SeedsMWP2002]. For wideband signals, a particular problem arises for phased arrays: if a constant phase shift is applied from element to element, the beam pointing is different for different frequency components, a phenomenon called beam squinting [@NgJLT1991; @FrigyesMTT1995]. It turns out that this squint can be compensated by using (variable) delay lines rather than phase shifters. The use of optical techniques to generate true time delay for phased-array applications has been the subject of extensive research over the past two decades. A compilation of these techniques can be found in [@CapmanyNatPhoton2007; @SeedsMWP2002; @SeedsMWP2006; @YaoMWP2009]. One of the most well-known concepts is the fiber-optic prism, where the dispersion of the fiber-optic link is used to create variable delays for variable source wavelengths [@Frankel1995]. As the laser wavelength is tuned, the differential delay between fiber paths changes, thus steering the antenna beam. The concept was later revisited and demonstrated by Zmuda et al. in 1997 using fiber Bragg gratings [@ZmudaPTL1997].
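The difference between phase-shifter and true-time-delay steering can be quantified with a short sketch for an assumed uniform linear array steered to 30$^\circ$ at 10 GHz with half-wavelength element spacing; with fixed phase shifts the pointing angle drifts with frequency, whereas with TTD it does not:

```python
import numpy as np

# Beam squint sketch for a uniform linear array steered to 30 deg at 10 GHz.
# sin(theta) = c*tau/d for TTD, and c*dpsi/(2*pi*f*d) for a fixed phase shift.
c = 299792458.0
f0, theta0 = 10e9, np.deg2rad(30.0)
d = c / (2 * f0)                         # half-wavelength element spacing at 10 GHz

tau = d * np.sin(theta0) / c             # progressive true time delay per element
dpsi = 2 * np.pi * f0 * tau              # equivalent fixed phase shift at f0

for f in (8e9, 10e9, 12e9):
    theta_ttd = np.degrees(np.arcsin(c * tau / d))                  # no squint
    theta_ps = np.degrees(np.arcsin(c * dpsi / (2 * np.pi * f * d)))
    print(f"{f/1e9:4.0f} GHz: TTD -> {theta_ttd:5.1f} deg, "
          f"phase shifter -> {theta_ps:5.1f} deg")
```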
Beamformers based on fibers can be rather bulky. To reduce the footprint of the beamformer, as well as to obtain precise time delays, many have turned to integrated photonic solutions [@AckermanIMS1992; @NgSPIE1994; @NgPTL1994; @HorikawaIMS1995; @HorikawaOFC1996; @StulemeijerPTL1999; @GrosskopfFIO2003; @HowleyJLT2007; @ZhuangPTL2007; @ZhuangJLT2010; @CardenasOpex2010; @MortonPTL2012; @BurlaAO2012]. In general, three categories of integrated photonic beamformers have been demonstrated over the years: wideband beamformers based on discretely tunable TTD [@AckermanIMS1992; @NgPTL1994; @NgSPIE1994; @HorikawaIMS1995; @HorikawaOFC1996; @HowleyJLT2007], wideband beamformers based on continuously tunable TTD [@ZhuangPTL2007; @ZhuangJLT2010; @CardenasOpex2010; @MortonPTL2012; @BurlaAO2012] and narrowband beamformers based on optical phase shifters [@HorikawaMTT1995; @StulemeijerPTL1999; @GrosskopfFIO2003; @GrosskopfAntenna2003].
As early as 1992, Ackerman et al. [@AckermanIMS1992] proposed integrated optical switches in LiNbO~3~ to form a 6-bit TTD unit for 2-6 GHz radar applications. However, in this approach the delays were induced by optical fibers. A monolithic integrated beamformer was proposed by Ng et al. [@NgPTL1994; @NgSPIE1994] using a PIC in GaAs. This approach used curved GaAlAs/GaAs rib waveguides with propagation losses of 1 dB/cm, integrated monolithically with GaAs-based detector switches to form 4 (2-bit) switched delay lines. The switched delay line approach was also demonstrated in silica PLC by Horikawa et al. [@HorikawaIMS1995; @HorikawaOFC1996]. Two architectures were proposed, a cascaded and a parallel switch configuration. The switching is done using $2\times2$ thermo-optic switches and the loss in the delay lines was 0.1 dB/cm. Finally, polymer materials have been considered for switched-delay beamformers. In [@HowleyJLT2007] the realization of a packaged 4-bit TTD device composed of monolithically integrated polymer waveguide delay lines and five $2\times2$ polymer total internal reflection (TIR) thermo-optic switches was reported. The polymer waveguides exhibited single-mode behavior with a measured propagation loss of 0.45 dB/cm at the wavelength of 1.55 $\mu$m. The delay lines used waveguides with a bend radius of 1.75 mm. The dimensions of the TTD device are 21.7 mm long by 13.7 mm wide. Beamforming in the X-band frequency range (8-12 GHz) was demonstrated.
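For completeness, the delay states offered by an $n$-bit switched (binary-weighted) delay unit such as those above are simply integer multiples of the unit delay; a two-line sketch with an assumed 10 ps unit delay:

```python
# Delay states of an n-bit switched (binary-weighted) TTD unit; the 10 ps
# unit delay is an illustrative assumption.
unit_delay_ps, n_bits = 10.0, 4
states = [sum(((code >> b) & 1) * unit_delay_ps * 2**b for b in range(n_bits))
          for code in range(2**n_bits)]
print(states)     # 0, 10, 20, ... 150 ps in 10 ps steps
```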
Narrowband beamforming can be achieved using optical phase shifters instead of TTDs. Using a coherent detection scheme, the phase and amplitude of an optical signal can be directly transferred to a microwave signal by mixing this signal with an optical local oscillator signal. In this way, modulation of the phase of a microwave signal can be performed using optical phase shifters. Initial works in this category include the self-heterodyning beamformer based on LiNbO~3~ proposed by Horikawa et al. in 1995 [@HorikawaMTT1995]. An InP-based PIC has also been considered [@StulemeijerPTL1999]. This beamformer controls the amplitude and phase of the RF signals using phase modulators and variable attenuators. Although the beamformer is narrowband, it featured an ultra-compact footprint of $8.5\times8$ mm$^2$ for a $16\times1$ beamformer. A similar concept of narrowband beamforming was also implemented in silica PLC, where the thermo-optic effect is used to achieve the desired phase shifting [@GrosskopfFIO2003]. Amplitude control was performed using PLC-type MZIs with two 3 dB multimode interference (MMI) couplers (50:50 ratio) and independent thermo-optic phase shifters on both waveguide arms. The chip size of the realized eight-channel OBFN is about 4 mm$\times$65 mm. Beamforming at 60 GHz was demonstrated [@GrosskopfAntenna2003].
For practical applications like satellite communications, a wideband continuously tunable beamformer is required. Such a beamformer based on cascaded tunable ORRs was proposed by researchers at the University of Twente [@ZhuangPTL2007; @MeijerinkJLT2010; @ZhuangJLT2010; @MarpaungEuCAP2011; @MarpaungMWP2011; @BurlaAO2012]. In [@ZhuangPTL2007], a state-of-the-art ring resonator-based $1\times8$ beamformer chip was proposed. A binary tree topology is used for the network such that a different number of ORRs is cascaded for delay generation at each output. The beamformer was fabricated in TriPleX waveguide technology. With this beamformer, a linearly increasing delay up to 1.2 ns over a 2.5 GHz bandwidth was demonstrated. In [@MeijerinkJLT2010; @ZhuangJLT2010], the suitability of the beamformer for satellite communications was investigated. The intended application was communications in the Ku-band frequency range (10-12.5 GHz). A coherent beamforming architecture using optical single sideband-suppressed carrier (OSSB-SC) modulation and balanced coherent detection was proposed and its performance was analyzed [@MeijerinkJLT2010]. The OSSB-SC signal was generated using a sideband filter integrated on the same chip. A $1\times8$ beamformer with 8 ORRs was fabricated using the box-shaped TriPleX waveguides with a propagation loss of 0.6 dB/cm [@ZhuangJLT2010]. The ORRs have a minimum bending radius of 700 $\mu$m. The total chip footprint measured 66.0 mm$\times$12.8 mm.
In [@BurlaAO2012], a $16\times1$ beamformer with a high degree of complexity was reported for a phased-array antenna in a radio astronomy application. The beamformer consists of 20 ORRs, more than 25 MZI tunable couplers and an OSBF. The chip was also fabricated in the box-shaped TriPleX waveguides, which exhibit a propagation loss of 0.2 dB/cm. The chip footprint is 70.0 mm$\times$13 mm. The schematic of the chip, showing the optical waveguide layout and the heaters for thermo-optic tuning, and the realized (packaged) chip are shown in Figures \[OBFN\] (a) and (b). The work focused on the system integration and the experimental demonstration of the beamformer with the phased-array antenna. The measurements show a wideband, continuous beamsteering operation over a steering angle of 23.5 degrees and an instantaneous bandwidth of 500 MHz, limited only by the measurement setup.
A novel $16\times1$ beamformer design was reported in [@MarpaungMWP2011]. The beamformer was designed to meet the requirements of a Ku-band PAA system with an instantaneous bandwidth of more than 4 GHz and a maximum time delay of 290 ps. For the total PAA system, 32 of these beamformers will be used to beamform a large-scale PAA of 2048 antenna elements. A complete system-level analysis of this PAA system using the optical beamformers was reported in [@MarpaungEuCAP2011]. The novel beamformer consists of 40 ORRs. From the system-level simulations, it was concluded that to obtain a good noise figure and SNR at the receiver output, the waveguide propagation loss should not exceed 0.2 dB/cm. For this reason, and for the sake of footprint reduction, the beamformer was fabricated using the double-stripe TriPleX waveguide technology. As reported in [@ZhuangOpex2011], these waveguides feature a propagation loss as low as 0.1 dB/cm while maintaining a tight bending radius down to 75 $\mu$m. For the beamformer reported in [@MarpaungMWP2011; @MarpaungEuCAP2011], the bend radius of the ORRs was chosen to be 125 $\mu$m. This enables a highly complex beamformer with a small total footprint of 22 mm$\times$7 mm, which is a size reduction of nearly a factor of 10 compared with the beamformer reported in [@BurlaAO2012]. The schematic, layout and photograph of this beamformer are shown in Figures \[OBFN\] (c)-(e). For implementation in an actual PAA system, the important aspects of system stability and reliability must be addressed. For this reason, work towards hybrid RF and photonic integration of the passive optical beamformer in TriPleX technology, an array of surface-normal electroabsorption modulators in an InP platform and an RF front-end is ongoing [@MarpaungMWP2011; @MarpaungEuCAP2011].
Microwave signal generation {#sec:generation}
===========================
Microwave signal generation techniques have enjoyed enormous progress over the past five years. The aim of such research activities is to generate ultra-broadband RF waveforms with arbitrary and reconfigurable phase and amplitude, or to generate extremely stable and pure microwave carriers. In this section we review the integrated MWP approaches to arbitrary waveform generation, ultrawideband (UWB) pulse shaping and stable carrier generation using optoelectronic oscillators (OEOs).
Arbitrary waveform generation {#subsec:arbitrary}
-----------------------------
Microwave arbitrary waveforms are very useful for pulsed radar, modern instrumentation systems and UWB communications. However, current electronic arbitrary waveform generation (AWG) is limited in frequency and bandwidth. The current state-of-the-art electronic AWG operates at a maximum bandwidth of 5.6 GHz and a frequency up to 9.6 GHz [^7]. Photonics, on the other hand, has become a promising solution for generating high-frequency microwave waveforms [@CapmanyNatPhoton2007; @YaoOptComm2011]. A variety of MWP techniques have been proposed during the past few years that can generate microwave waveforms in the gigahertz and multiple-gigahertz region. These include direct space-to-time mapping [@McKinneyOL2002], spectral shaping and wavelength-to-time mapping [@ChouPTL2003; @LinUWB2005] and temporal pulse shaping. With these techniques, frequency-chirped and phase-coded microwave waveforms have been demonstrated.
However, the above-mentioned techniques have been implemented using complex bulk-optic devices which can be expensive, complicated and bulky. Another option is to use all-fiber pulse shapers. For example, a linearly chirped microwave waveform can be generated by spectral shaping using chirped FBGs followed by frequency-to-time mapping in a dispersive device [@WangPTL2008]. However, fiber-based devices lack the programmability that is necessary for arbitrary waveform generation.
An on-chip integrated pulse shaper is thus a desirable solution to overcome the limitations commonly associated with conventional bulk-optics pulse shapers. Recently, researchers at Purdue University demonstrated an integrated ultrabroadband arbitrary microwave waveform generator that incorporates a fully programmable spectral shaper fabricated on a silicon photonic chip [@ShenOpex2010; @KhanNatPhotonics2010]. The spectral shaper is a reconfigurable filter consisting of eight add-drop microring resonators on a silicon photonics platform. Ultra-compact cross-section (500 nm$\times$250 nm) silicon nanowires have been used to fabricate the rings [@ShenOpex2010]. The typical bending radius used was 5 $\mu$m and the waveguide propagation loss was around 3.5 dB/cm [@XiaoOpex2007]. The programmability of this spectral shaper is achieved by thermally tuning both the resonant frequencies and the coupling strengths of the microring resonators. A cartoon of the spectral shaper and an optical image showing two channels of ring resonators with micro-heaters are shown in Figure \[AWG\] (a) and (b), respectively.
The principle of the photonic arbitrary microwave waveform generation system implemented here is shown in Figure \[AWG\] (a). The spectral shaper is used to modify the spectrum emitted by a mode-locked laser. The shaped spectrum then undergoes wavelength-to-time mapping in a dispersive device, which in this case is a length (5.5 km) of optical fiber, before being converted to the electrical domain as a microwave waveform using a high-speed photodetector. By reprogramming the spectral shaper, a variety of different waveforms are generated, including those with an apodized amplitude profile, multiple $\pi$ phase shifts (Figure \[AWG\] (c)), two-tone waveforms and frequency-chirped waveforms at a central frequency of 60 GHz.
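The wavelength-to-time mapping step can be estimated with a simple sketch; the 5.5 km fiber length is from the text, while the dispersion coefficient of standard single-mode fiber is an assumption:

```python
# Wavelength-to-time mapping in the dispersive fiber of the AWG scheme above.
# Fiber dispersion D is an assumption (standard SMF, ~17 ps/(nm km));
# the 5.5 km length is taken from the text.
D_ps_per_nm_km = 17.0
L_km = 5.5
mapping_ps_per_nm = D_ps_per_nm_km * L_km
print(f"mapping slope ~ {mapping_ps_per_nm:.0f} ps/nm")

# A shaped optical spectrum spanning, e.g., 10 nm is therefore stretched into a
# roughly 0.9 ns microwave waveform at the photodetector.
print(f"10 nm shaped bandwidth -> {mapping_ps_per_nm * 10 / 1e3:.2f} ns waveform")
```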
In another demonstration of arbitrary waveform generation using a PIC, a planar lightwave circuit (PLC) fabricated in silica-on-silicon is used to generate pulse trains at 40 GHz and 80 GHz with flat-top, Gaussian, and apodized profiles [@SamadiOptComm2011]. The pulse shaper is a 12-tap finite impulse response (FIR) filter that performs both phase and amplitude filtering. It is implemented as 12 stages of cascaded Mach-Zehnder interferometers, each with an FSR of 80 GHz, which corresponds to a temporal tap separation of 12.5 ps. The waveguide cross-section used for the PLC fabrication is designed to be 3.5 $\mu$m$\times$3.5 $\mu$m. The curved portions of the circuit have a radius of at least 2 mm in order to minimize bending losses. The propagation loss of the waveguides is 0.7 dB/cm.
Impulse radio UWB pulse shaping {#subsec:UWB}
-------------------------------
In the last five years numerous techniques have been proposed for the so-called photonic generation of impulse-radio UWB (IR-UWB) pulses. In this approach UWB signals are generated and later distributed in the optical domain to increase the reach of the UWB transmission, similar to the more general concept of radio over fiber. The generated pulses are usually variants of Gaussian monocycles, doublets or, on some occasions, higher-order derivatives of the basic Gaussian pulses. These techniques usually aim at generating pulses whose power spectral densities (PSDs) satisfy the regulation (i.e. spectral mask) specified by the U.S. Federal Communications Commission (FCC) for indoor UWB systems. Different techniques have been proposed for IR-UWB pulse generation, such as spectral shaping combined with frequency-to-time mapping, nonlinear biasing of an MZM, spectral filtering using MWP delay-line filters [@BoleaOpex2009], or using PM-IM conversion to achieve temporal differentiation of the input electrical signals. A comprehensive review of these techniques is given in [@YaoJLT2007].
Integrated photonics technologies have also been exploited for IR-UWB signal generation [@LiuElectLett2009; @YunhongIPC2011; @MarpaungOpex2011; @MirshafieiPTL2012; @YueOL2012; @WangOpex2009; @WangPTL2010]. Liu et al. reported the use of an SOI ORR as a temporal differentiator to shape input Gaussian electrical pulses into their first-order derivatives (i.e., monocycles) [@LiuElectLett2009]. Using a similar technique, a silicon add-drop ORR has been used to generate monocycles from a 12.5 Gb/s NRZ signal [@YunhongIPC2011]. However, Gaussian monocycles cannot fill the FCC spectral mask with high efficiency. For this reason, higher-order derivatives of the Gaussian pulses are often desired. Marpaung et al. recently reported the use of a cascade of two add-drop ORRs in TriPleX technology to generate the second-order derivatives (doublets and modified doublets) of the input Gaussian pulses used to modulate the phase of the optical carrier [@MarpaungOpex2011]. The PM-IM conversion using the cascaded ORRs forms an MWP bandpass filter that shapes the input Gaussian spectrum accordingly. This is the integrated photonics implementation of the MWP delay-line filtering technique. An example of the spectral filtering to generate the Gaussian doublet is shown in Figure \[UWB\].
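The pulse shapes involved can be sketched directly as successive derivatives of a Gaussian; the pulse width below is an illustrative assumption, and the printed spectral peaks indicate why the higher-order derivatives sit closer to the 3.1-10.6 GHz FCC band than the monocycle:

```python
import numpy as np

# Gaussian pulse and its first/second derivatives (monocycle and doublet).
# The pulse width is an illustrative assumption.
fs = 200e9
t = np.arange(-2e-9, 2e-9, 1 / fs)
sigma = 60e-12                                   # Gaussian rms width (assumed)
gauss = np.exp(-t**2 / (2 * sigma**2))
monocycle = np.gradient(gauss, 1 / fs)           # 1st derivative
doublet = np.gradient(monocycle, 1 / fs)         # 2nd derivative

f = np.fft.rfftfreq(len(t), 1 / fs)
for name, pulse in (("gaussian", gauss), ("monocycle", monocycle),
                    ("doublet", doublet)):
    spec = np.abs(np.fft.rfft(pulse))
    print(f"{name:9s}: spectral peak near {f[np.argmax(spec)] / 1e9:.1f} GHz")
```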
An important aspect in IR-UWB pulse shaping is to fill the spectral mask with a high power efficiency. Very recently, Mirshafiei et al. have demonstrated an output pulse obtained from a linear combination of a Gaussian pulse and its copy, filtered using a silicon ORR. By careful adjustment of the amplitude and relative time delay between the pulses, an output pulse with a power efficiency of 52% has been obtained [@MirshafieiPTL2012].
Recently, on-chip nonlinear optics has been used for IR-UWB generation. Examples include monocycle generation based on two-photon absorption in a silicon waveguide [@YueOL2012] and monocycle generation exploiting the parametric attenuation effect of sum-frequency generation (SFG) [@WangOpex2009] or the quadratic nonlinear interaction seeded by dark pulses [@WangPTL2010] in a periodically poled lithium niobate (PPLN) waveguide. We expect to see more techniques based on on-chip nonlinear optics for AWG and IR-UWB generation.
Optoelectronic oscillator and optical comb generation {#subsec:OEO}
-----------------------------------------------------
In 1996 Steve Yao and Lute Maleki, two researchers then at the Jet Propulsion Laboratory, proposed a new type of high-performance oscillator known as the optoelectronic oscillator (OEO) [@YaoJOSAB1996]. The typical configuration of an OEO, which is shown in Figure \[OEO\] (a), is based on the use of optical waveguides and resonators, which exhibit significantly lower loss than their electronic counterparts. Typically, light from a laser is modulated and passed through a long length of optical fiber before reaching a photodetector. The output of the photodetector is amplified, filtered, adjusted for phase and then fed back to the modulator, providing self-sustained oscillation if the overall round-trip gain is larger than the loss and the circulating waves combine in phase.
Optoelectronic oscillators (OEOs) are thus ultra-pure microwave generators based on optical energy storage instead of high-finesse radio-frequency (RF) resonators. These oscillators have many specific advantages, such as exceptionally low phase noise and versatility of the output frequency (only limited by the RF bandwidth of the optoelectronic components). Such ultra-pure microwaves are indeed needed in a wide range of applications, including time-frequency metrology, frequency synthesis, and aerospace engineering.
The spectral purity of the signal in the OEO is directly related to the Q-factor of the loop and, so far, most OEOs utilize a long length of fiber to achieve high spectral purity. A disadvantage of using a fiber loop is the production of *super modes* that appear in the phase noise spectrum, caused by the propagation of waves multiple times around the OEO loop. In addition, a fiber delay line is bulky, so that such oscillators cannot be considered an optimal solution for the implementation of a transportable microwave source. Along the same line, this bulky delay-line element has to be temperature-stabilized, a feedback control process which is energy consuming. One solution that circumvents all these disadvantages has been proposed, which also opens the possibility of full integration of OEOs. It consists of replacing the optical fiber loop by a high-Q-factor cavity implemented by means of a whispering gallery mode resonator (WGMR) [@MalekiNatPhotonics2011; @LiangOL2010; @DevganOEOPTL2010; @VolyanskiyOpex2010]. WGMRs ranging in size from a few hundred micrometers to a few millimeters can be fabricated from a wide variety of optically transparent materials, reaching Q-factors in the range of $3\times10^{11}$. OEOs based on high-Q WGMRs made from electro-optic materials can provide high performance in a miniaturized package smaller than a coin, as shown in Figure \[OEO\] (b) and (c) [@MalekiNatPhotonics2011], operating in frequency ranges from 10 to 40 GHz and featuring instantaneous linewidths below 200 Hz. Furthermore, in this configuration the resonator serves both as the high-Q element and as the modulator in the OEO loop.
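To put numbers on the supermode issue, the loop delay of a fiber-based OEO sets both the mode spacing and the equivalent quality factor; the fiber length, group index and oscillation frequency below are illustrative assumptions:

```python
import math

# Supermode spacing and equivalent Q of a fiber-delay OEO loop (assumed values).
c = 299792458.0
n_g = 1.468                  # group index of standard single-mode fiber (approx.)
L = 4e3                      # fiber delay-line length: 4 km
f_osc = 10e9                 # oscillation frequency: 10 GHz

tau = n_g * L / c                        # loop delay
mode_spacing = 1.0 / tau                 # spacing of the OEO supermodes
Q_eq = 2 * math.pi * f_osc * tau         # equivalent quality factor ~ omega * tau
print(f"loop delay ~ {tau*1e6:.1f} us, supermode spacing ~ {mode_spacing/1e3:.1f} kHz")
print(f"equivalent Q ~ {Q_eq:.1e}")
```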
Optical comb generation on a chip [@KippenbergScience2011] is also of great interest in microwave photonics, as it enables several applications such as the precise measurement of optical frequencies through direct referencing to microwave atomic clocks [@FosterOpex2011] and the production of multiple taps for high-sidelobe-rejection signal processors [@FerdousNatPhotonics2011]. Combs have traditionally been generated using mode-locked ultrafast laser sources, but recently the generation of optical frequency combs through the nonlinear process of continuous-wave optical parametric oscillation in micro-scale resonators has attracted significant interest, since these devices have the potential to yield highly compact and frequency-agile comb sources. In particular, [@FosterOpex2011] reports the generation of optical frequency combs from a highly robust CMOS-compatible integrated microresonator optical parametric oscillator where both the microresonator and the coupling waveguide are fabricated monolithically in a single silicon nitride layer using electron-beam lithography and subsequently clad with silica. This approach brings the advantage of providing a fully monolithic and sealed device with coupling and operation that is insensitive to the surrounding environment.
![(a) Block diagram of a generic OEO. (b) and (c) Miniature OEO based on a lithium niobate WGMR (from [@MalekiNatPhotonics2011], courtesy of the Macmillan Publishers Ltd).[]{data-label="OEO"}](OEO){width="\linewidth"}
Other emerging applications {#sec:other}
===========================
A number of emerging and exciting applications have surfaced in the past years. They are beyond the scope of this paper, but are worth mentioning here due to their potential. These approaches have taken full advantage of the availability of PIC technologies. The first application is fundamental computing functions like differentiation and integration. Recently, photonic differentiators using SOI integrated Bragg gratings [@RutkowskaOpex2011] and all-optical temporal integration using an SOI four-port microring resonator [@FerreraNatCommunications2010; @FerreraOpex2011] have been reported. These devices have applications in the real-time analysis of differential equations.
On-chip microwave frequency conversion is another technique that has recently received increasing interest. In this approach, RF frequency mixing for signal upconversion and downconversion is performed in a photonic integrated circuit. In [@GutierrezOL2012], the RF mixer is realized in a silicon electro-optic MZ modulator enhanced via slow-light propagation. An upconversion from 1 GHz to 10.25 GHz was demonstrated. In [@JinPTL2012], Jin et al. proposed a so-called RF-photonic-link-on-chip PIC which can operate in a linear mode (as an MWP link) or in a mixer mode for microwave frequency conversion. The device used for the mixing is the ACP-OPLL receiver reported earlier [@LiPTL2011]. Frequency downconversion from 1.05 GHz to 50 MHz was demonstrated.
A very exciting emerging application is photonic analog-to-digital conversion (ADC). Photonic ADCs have been actively investigated over the last decades; an overview and classification of photonic ADCs can be found in an excellent review by Valley [@ValleyOpex2007]. Recently, a chip incorporating the core optical components of a photonic ADC (a modulator, wavelength demultiplexers, and photodetectors) has been fabricated in silicon photonics and shown to produce 3.5 ENOB for a 10 GHz input [@GreinCLEO2011; @KhiloOpex2012].
The final application is frequency measurement and spectrum analysis. Recent years have seen an increase in techniques for instantaneous microwave frequency measurement (IFM). Wide bandwidth is the main attraction of implementing IFM systems with photonics, compared to purely electrical solutions; added values are fast reconfigurability, light weight, transparency to microwave frequencies and small form factor. Such systems can readily be applied in military and security applications [@JacobsOFC2007]. As for spectrum analysis, on-chip RF spectrum analyzers with THz bandwidth based on nonlinear optics have been demonstrated both in chalcogenide [@PelusiNatPhotonics2009] and SOI [@CorcoranOpex2010] waveguides. These devices will find application in high-speed optical communications.
Prospective: what’s next for integrated MWP? {#subsec:prospect}
============================================
We believe that integrated MWP has just started to bloom and is set to have a bright future. Continuing the current trend, we believe that MWP filters will remain the leading signal-processing application. We expect to see more demonstrations of PIC-based MWP filters, involving resonators and/or photonic crystals. We also expect to see more use of nonlinear optics for MWP signal processing. For example, four-wave mixing (FWM) for enhancing the gain of an MWP link [@WallCLEO2012] and for MWP filtering [@SupradeepaNatPhotonics2012; @VidalPJ2012] has been reported. Moreover, on-chip SBS in chalcogenide has just been reported [@PantOpex2011]; it has been demonstrated for delay lines [@PantOL2012] and MWP filters [@ByrnesCLEO2012]. It is expected that this phenomenon will also be used for beamforming and MWP links. Recently, SBS on a silicon chip has been predicted [@GaetaNatPhoton2012]; in the near future this will also be useful for integrated MWP. From the modulation-technique point of view, the use of phase or frequency modulation is predicted to show a significant rise in interest. Recently, a beamformer based on PM has been proposed [@XueOptComm2011], as well as a radio-over-fiber link [@GasullaOpex2012]. More systems will also take advantage of simultaneous phase and intensity modulation [@UrickAVFOP2011]. These systems will also use an FM discriminator to synchronize the two modulation schemes and to enhance the link gain and SFDR. At the technology level, efforts are expected to reduce the loss of optical waveguides even further, as well as the power consumption for tuning and reconfigurability, either for thermal tuning [@DongOpex2010_power] or for tuning using other cladding materials like liquid crystals [@DeCortOL2011] or chalcogenides [@MelloniSPIE2012]. Finally, the field will see an increase in the implementation of PICs for the generation and processing of THz signals and waveforms, as reported in [@SteedJSTQE2011; @BenYooTerahertz2012].
DM and CR acknowledge the support of the European Commission via the 7th Framework Program SANDRA project (Large Scale Integrating Project for the FP7 Topic AAT.2008.4.4.2). SS and JC acknowledge the GVA-2008-092 PROMETEO project Microwave Photonics.
DM and CR would like to thank Leimeng Zhuang, Maurizio Burla and Reza Khan for their contributions to the article. S.S and J.C would like to thank Ivana Gasulla, Juan Lloret and Juan Sancho for their constant support and collaboration.
[^1]: In this case the impedances of both the modulation device and the photodetector are regarded as purely resistive, and resistors are added in series or in parallel to match the input and output impedances to the $50\,\Omega$ source and load resistances. For in-depth discussion of impedance matching in MWP links the reader is referred to Chapter 2 of Ref [@CoxBook2004].
[^2]: The commercial aspects of MWP were recently covered in the Nature Photonics Technology Focus, vol. 5, issue 12, December 2011.
[^3]: See also Nature Photonics Technology Focus, vol. 4, issue 8, August 2010.
[^4]: The TriPleX waveguide technology is a proprietary technology of LioniX BV, Enschede, the Netherlands. See: http://www.lionixbv.nl
[^5]: The MPW service was initially offered as a part of a Dutch National project MEMPHIS. See: http://www.smartmix-memphis.nl
[^6]: The coherence time is defined as the measure of temporal coherence of a light source, expressed as the time over which the field correlation decays
[^7]: Tektronix AWG7122C. See: http://www.tek.com/signal-generator/awg7000-arbitrary-waveform-generator
---
address:
- |
Dipartimento di Fisica, Università di Trento, and\
Istituto Nazionale Fisica della Materia, I-38050 Povo, Italy
- |
Department of Physics, TECHNION, Haifa 32000, Israel, and\
Kapitza Institute for Physical Problems, ul. Kosygina 2, 117334 Moscow
author:
- 'F. Dalfovo, C. Minniti, S. Stringari'
- 'L. Pitaevskii'
date: 'December 18, 1996'
title: Nonlinear Dynamics of a Bose Condensed Gas
---
The dynamic behavior of Bose-condensed gases of alkali atoms in magnetic traps has been the object of recent experiments at Jila [@jila1; @jila2; @jila3] and MIT [@MIT1; @MIT2; @MIT3]. Absorption images of the atomic cloud provide quantitative information on the dynamics of the expansion after switching off the trap, as well as accurate data for the frequencies of the collective excitations. The experimental results reveal that the role played by the interatomic forces is important in these systems and cannot be ignored for a quantitative understanding of the data. This opens new challenging tasks for theoretical investigation.
The natural theory to investigate the dynamic behavior of a nonuniform Bose condensate, at $T=0$, is given by the time-dependent Gross-Pitaevskii (GP) equation [@GP] for the condensate wavefunction $\Psi({\bf r},t)$: $$i\hbar\frac{\partial \Psi}{\partial t} =
\left( -\frac{\hbar^2\nabla^2}{2m} + V_{ext}
+ g \mid \!\Psi\!\mid^2 \right) \Psi \, .
\label{GP}$$ The coupling constant $g$ is proportional to the $s$-wave scattering length $a$ through $g=4\pi \hbar^2 a/m$. In the following we will discuss the case of repulsive interactions ($a>0$). The anisotropic trap is represented by the confining potential $V_{ext}$, which is chosen in the form $V_{ext}({\bf r})= (m/2) \sum_i
\omega_{0i}^2 r_i^2$, where $r_i \equiv x,y,z$. So far the experimental traps have cylindrical symmetry and hence are characterized by the radial frequency $\omega_\perp \equiv \omega_x=\omega_y$ and the asymmetry parameter $\lambda=\omega_z/\omega_\perp$. The ground state configuration [@Baym; @Dalfovo; @Edwards1] as well as the properties of small oscillations near equilibrium [@Edwards2; @Singh; @Stringari] have been the object of systematic investigation starting from Eq. (\[GP\]). A few calculations in the nonlinear regime have been also carried out [@Holland1; @Holland2; @Castin; @Kagan; @Edwards3].
In the present paper we discuss several features of the nonlinear behavior of the system, by solving Eq. (\[GP\]) in the large $N$ limit, where $N=\int\!d{\bf r} |\Psi({\bf r},t)|^2$ is the number of atoms. In this limit, it is possible to derive almost analytic solutions of the GP equation, thereby simplifying the numerical analysis and allowing for a systematic investigation of important phenomena. These include the explicit time evolution of the condensate (shape of profiles, aspect ratio, etc.) during the expansion and the dependence of the collective frequencies on the amplitude of the oscillation.
The effective strength of the interatomic forces in the GP equation is fixed by the adimensional parameter $Na/a_{0i}$, where $a_{0i}= \sqrt{\hbar/(m\omega_{0i})}$ is the harmonic oscillator length. When this parameter is much larger than $1$, the repulsion makes the system much wider than the noninteracting configuration, yielding a rather smooth profile. In such conditions, the equilibrium results from a balance between the external potential and the repulsive interaction, the kinetic energy playing a minor role. For large values of $Na/a_{0i}$, one can then neglect the kinetic energy term in (\[GP\]). This yields the Thomas-Fermi (TF) approximation for the ground state: $$\rho_0^{TF}({\bf r}) = | \Psi_0^{TF} ({\bf r})|^2
= g^{-1} [ \mu - V_{ext}({\bf r}) ]
\label{tfgs}$$ when $\mu > V_{ext}({\bf r})$ and $\rho_0({\bf r}) =0$ elsewhere. The chemical potential $\mu$ is fixed by the normalization of the density to the number of particles $N$: $$\mu = {1\over 2} \left[ {15 \over 4\pi} g m^{3/2} \omega_{0x}
\omega_{0y } \omega_{0z } N
\right]^{2/5} \; .
\label{mu}$$ The Thomas-Fermi approximation (\[tfgs\]) works extremely well for the configurations realized at MIT, where $N$ is of the order of one million atoms or more. Conversely, in the Jila experiments of Refs. [@jila1; @jila2; @jila3] the number of atoms is smaller ($10^3$-$10^4$) and the TF approximation provides only a semi-quantitative description.
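As a quick numerical illustration of Eq. (\[mu\]), the minimal sketch below evaluates the Thomas-Fermi chemical potential for a few atom numbers; the atomic mass, scattering length and trap frequencies are illustrative assumptions (roughly appropriate for a rubidium condensate in a cigar-shaped trap), not parameters taken from the experiments discussed here.

```python
# Minimal sketch: Thomas-Fermi chemical potential of Eq. (mu),
#   mu = (1/2) [ (15/4pi) g m^{3/2} omega_x omega_y omega_z N ]^{2/5},
# with g = 4 pi hbar^2 a / m. The mass, scattering length and trap
# frequencies below are illustrative assumptions, not values quoted here.

import numpy as np

HBAR = 1.054571817e-34    # J s
KB   = 1.380649e-23       # J / K
AMU  = 1.66053906660e-27  # kg

def tf_chemical_potential(N, omega_x, omega_y, omega_z, mass, a_scatt):
    g = 4.0 * np.pi * HBAR**2 * a_scatt / mass
    return 0.5 * (15.0 / (4.0 * np.pi) * g * mass**1.5
                  * omega_x * omega_y * omega_z * N) ** 0.4

if __name__ == "__main__":
    mass    = 87 * AMU               # Rb-87 (assumed)
    a_scatt = 5.3e-9                 # s-wave scattering length ~ 100 Bohr radii (assumed)
    w_perp  = 2 * np.pi * 220.0      # radial trap frequency (assumed)
    w_z     = 2 * np.pi * 21.0       # axial trap frequency (assumed)
    for N in (1e4, 1e5, 1e6):
        mu = tf_chemical_potential(N, w_perp, w_perp, w_z, mass, a_scatt)
        print(f"N = {N:8.0e}:  mu/kB = {mu / KB * 1e9:7.1f} nK,"
              f"  mu/h = {mu / (2 * np.pi * HBAR) * 1e-3:6.2f} kHz")
```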
Neglecting the kinetic energy pressure term in the time dependent GP equation corresponds to taking into account the effect of the kinetic energy operator in (\[GP\]) only on the phase of the order parameter $\Psi$. This makes it possible to rewrite (\[GP\]) in the useful hydrodynamic form [@Stringari]: $$\begin{aligned}
\frac{\partial}{\partial t} \rho &+& {\bf \nabla} \cdot ({\bf v}\rho)
= 0
\label{continuity} \\
m \frac{\partial}{\partial t} {\bf v} &+&
{\bf \nabla} \left( V_{ext} + g\rho
+ { mv^2 \over 2 } \right) = 0 \; ,
\label{Euler} \end{aligned}$$ where density and velocity are defined by $\rho=|\Psi|^2$ and ${\bf v}= (\Psi^* {\bf \nabla}\Psi -\Psi{\bf \nabla}\Psi^*)
\hbar/(2mi\rho)$. Equation (\[continuity\]) is the usual equation of continuity, while (\[Euler\]) establishes the irrotational nature of the superfluid velocity. It is immediate to verify that the equilibrium configuration given by Eqs. (\[continuity\]-\[Euler\]) (${\bf v} =0$ and $\partial \rho / \partial t =0$) coincides with the TF result (\[tfgs\]).
The hydrodynamic equations (\[continuity\]-\[Euler\]) have recently been shown to provide the correct frequencies of the normal modes of the condensate in the large $N$ limit [@Stringari]. With respect to the full solution of the GP equation, which includes the effect of the kinetic energy pressure, the approach based on (\[continuity\]-\[Euler\]) has the major advantage of providing an algebraic expression for the dispersion relation of the elementary excitations. The resulting predictions compare quite well with both the Jila [@jila2; @jila3] and MIT [@MIT3] experiments.
For nonlinear time dependent motions, which are the object of the present work, almost analytic solutions can be found starting from equations (\[continuity\]-\[Euler\]). In fact they admit [*exact*]{} solutions of the form $$\begin{aligned}
\rho({\bf r},t) &=& a_x(t) x^2 + a_y(t) y^2 + a_z(t) z^2 + a_0(t)
\label{scalingrho} \\
{\bf v} &=& {1 \over 2} {\bf \nabla} [\alpha_x(t) x^2 + \alpha_y(t) y^2
+ \alpha_z(t) z^2] \; .
\label{scalingv}\end{aligned}$$ Equation (\[scalingrho\]) is restricted to the region where $\rho\ge 0$ and the coefficient $a_0$ is fixed by the normalization of the density: $a_0=-(15N/8\pi)^{2/5}(a_xa_ya_z)^{1/5}$. The time dependent coefficients $a_i$ and $\alpha_i$ obey the following coupled differential equations: $$\begin{aligned}
\dot{a}_i &+& 2a_i\alpha_i+a_i\sum_j\alpha_j =0
\label{ai} \\
\dot{\alpha}_i &+& \alpha_i^2+\omega_{0i}^2+(2g/m) a_i = 0 \; ,
\label{alphai}\end{aligned}$$ with $i,j=x,y,z$. One can use the three equations (\[ai\]) to express $\alpha_i$ in terms of $\dot{a}_i/a_i$; the solution is greatly simplified by introducing the new adimensional variables $b_i$ defined by $a_i=-m\omega_{0i}^2 (2g b_x b_y b_z
b_i^2)^{-1}$. With this choice, Eqs. (\[ai\]) reduce to $\alpha_i=\dot{b}_i/b_i$ and Eqs. (\[alphai\]) become $$\ddot{b}_i + \omega_{0i}^2 b_i - \omega_{0i}^2 /(b_i b_x b_y b_z)
= 0 \; .
\label{ddotb}$$ The second and third terms of (\[ddotb\]) give the effect of the external trap and of the interatomic forces, respectively. It is worth noticing that, using the new variables $b_i$, the equations of motion do not depend on the value of the coupling constant $g$. This is a typical feature characterizing the large $N$ behavior of the GP equation. The mean square radii and velocities of the atomic cloud can be easily expressed in terms of $b_i$: $$\begin{aligned}
\langle r^2_i \rangle & \equiv & {1\over N} \int \! d{\bf r} \
\rho({\bf r},t) \ r^2_i \ =
\left( {2 \mu \over 7 m \omega_{0i}^2 }\right) b_i^2 \\
\langle v^2_i \rangle & \equiv & {1\over N} \int \! d{\bf r} \
\rho({\bf r},t) \ v^2_i \ =
\left( {2 \mu \over 7 m \omega_{0i}^2 }\right) \dot{b}_i^2 \; ,
\label{rms} \end{aligned}$$ where $\mu$ is given by (\[mu\]).
The solutions (\[scalingrho\]-\[scalingv\]) are well suited to describe both the oscillations around the ground state [@note1] and the problem of the expansion of the gas after switching off the confining potential. Equations (\[ddotb\]) have already been derived by other authors [@Castin; @Kagan], using the formalism of scaling transformations, and applied to study several dynamic phenomena. In the present approach the same equations emerge as an exact solution of the hydrodynamic equations of superfluids (\[continuity\]-\[Euler\]) and will be used to investigate nonlinear oscillations and the expansion in both the Jila and MIT traps.
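A minimal numerical sketch of how Eqs. (\[ddotb\]) can be integrated is given below: the trap is kept on, a small $m=2$-like deformation of the equilibrium $b_i=1$ is imposed, and the resulting oscillation frequency is compared with the linear-limit value $\sqrt{2}\,\omega_\perp$. Only the structure of Eqs. (\[ddotb\]) is taken from the text; the trap frequencies are illustrative assumptions.

```python
# Minimal sketch: integrate the scaling equations (ddotb) in the trap,
#   d^2 b_i/dt^2 + omega_i^2 b_i - omega_i^2/(b_i b_x b_y b_z) = 0,
# starting from a small m=2-like deformation b = (1+a, 1-a, 1), and check
# that the oscillation frequency approaches sqrt(2)*omega_perp in the
# linear limit. Trap frequencies are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, omega):
    b, db = y[:3], y[3:]
    return np.concatenate([db, -omega**2 * b + omega**2 / (b * np.prod(b))])

if __name__ == "__main__":
    w_perp, lam = 2 * np.pi * 220.0, np.sqrt(8.0)        # assumed trap
    omega = np.array([w_perp, w_perp, lam * w_perp])
    a = 1e-3                                             # small m=2-like amplitude
    y0 = np.array([1 + a, 1 - a, 1.0, 0.0, 0.0, 0.0])
    t = np.linspace(0.0, 50e-3, 20001)
    sol = solve_ivp(rhs, (t[0], t[-1]), y0, t_eval=t, args=(omega,),
                    rtol=1e-10, atol=1e-12)
    sig = sol.y[0] - sol.y[1]                            # m=2-like coordinate b_x - b_y

    # period from the zero crossings of the signal (two crossings per period)
    s = np.sign(sig)
    i = np.where(s[:-1] * s[1:] < 0)[0]
    t_cross = t[i] - sig[i] * (t[i + 1] - t[i]) / (sig[i + 1] - sig[i])
    freq = 1.0 / (2.0 * np.mean(np.diff(t_cross)))
    print(f"numerical m=2 frequency:           {freq:7.2f} Hz")
    print(f"linear prediction sqrt(2)*nu_perp: {np.sqrt(2) * 220.0:7.2f} Hz")
```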
In order to apply the above formalism to study the expansion of the cloud, let us suppose that at $t=0$ the system is in its equilibrium configuration. Comparing Eq. (\[scalingrho\]) with the TF ground state density (\[tfgs\]) and substituting $a_i$ in terms of $b_i$, one finds $b_i=1$. One has also $\dot{b}_i=0$ at equilibrium. The external potential is then suddenly switched off and the system starts expanding. This corresponds to solving the equations (\[ddotb\]) with the second term set equal to zero: $$\ddot{b}_i - \omega_{0i}^2 /(b_i b_x b_y b_z)
= 0 \; .
\label{expansion}$$
For an initially spherical configuration the expansion will proceed isotropically. In the presence of anisotropy, the expansion (and consequently the asymptotic velocities) will be instead faster in the direction where the repulsive forces (proportional to the gradient of the density) are stronger. For an axially deformed trap with $\lambda
= \omega_z/\omega_\perp \ge 1$ this will occur in the axial direction, while if $\lambda \le 1$ it will occur in the radial direction. The ratio $$R_r(t) \equiv \sqrt{ \langle z^2 \rangle / \langle x^2 \rangle }
= \lambda^{-1} b_z(t)/b_x(t)$$ of the radii in the two different directions is called the aspect ratio in co-ordinate space. One can also define the aspect ratio of velocities $$R_v(t) \equiv \sqrt{ \langle v_z^2 \rangle / \langle v_x^2 \rangle }
= \lambda^{-1} \dot{b}_z(t) / \dot{b}_x(t) \; ,$$ whose deviations from unity reflect the anisotropy of the velocity distribution. This anisotropy represents a crucial feature of Bose condensates. Asymptotically ($t \to \infty$) the two aspect ratios $R_r$ and $R_v$ converge to the same value $R$. However, for finite values of the expansion time they can behave quite differently. In fact, the velocities in both radial and axial directions reach their asymptotic values very rapidly, since the time scale for acceleration is very short; vice versa, the radii approach their asymptotic behavior, $\propto v_i t$, much more slowly. This is an important feature to take into account in the analysis of experimental data.
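The minimal sketch below integrates the expansion equations (\[expansion\]) for a cylindrically symmetric trap and follows the two aspect ratios $R_r(t)$ and $R_v(t)$, monitoring at the same time the conserved combination that appears below as the release energy, Eq. (\[ereltf\]); the trap frequencies and the expansion time are illustrative assumptions.

```python
# Minimal sketch: free expansion from Eq. (expansion),
#   d^2 b_i/dt^2 = omega_i^2 / (b_i b_x b_y b_z),   b_i(0)=1, db_i/dt(0)=0,
# for a cylindrical trap with asymmetry lam = omega_z/omega_perp.
# Trap frequencies and expansion time are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

def expand(omega, t_final, n_steps=2001):
    def rhs(t, y):
        b, db = y[:3], y[3:]
        return np.concatenate([db, omega**2 / (b * np.prod(b))])
    t = np.linspace(0.0, t_final, n_steps)
    sol = solve_ivp(rhs, (0.0, t_final), [1, 1, 1, 0, 0, 0],
                    t_eval=t, rtol=1e-10, atol=1e-12)
    return sol.t, sol.y[:3].T, sol.y[3:].T

if __name__ == "__main__":
    w_perp, lam = 2 * np.pi * 220.0, np.sqrt(8.0)     # assumed
    omega = np.array([w_perp, w_perp, lam * w_perp])
    t, b, db = expand(omega, t_final=40e-3)

    R_r = b[:, 2] / (lam * b[:, 0])                   # aspect ratio in coordinate space
    R_v = db[-1, 2] / (lam * db[-1, 0])               # velocity aspect ratio at the end
    print(f"R_r(0) = {R_r[0]:.3f},  R_r(40 ms) = {R_r[-1]:.3f},  R_v(40 ms) = {R_v:.3f}")

    # release-energy first integral, Eq. (ereltf): the bracket should stay equal to 1
    invariant = 1.0 / np.prod(b, axis=1) + 0.5 * np.sum((db / omega) ** 2, axis=1)
    print(f"max deviation of the first integral from 1: {np.abs(invariant - 1).max():.1e}")
```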
In Figs. \[fig:jila\] and \[fig:mit\] we show the results of the two aspect ratios $R_r$ and $R_v$ obtained by solving numerically Eqs. (\[expansion\]) for two different sets of frequencies $\omega_{0i}$, corresponding to the Jila [@jila2; @jila3] and MIT [@MIT2] traps, respectively. The different behavior of the two aspect ratios $R_r$ and $R_v$ is evident in both cases. In the same figures we have also shown the predictions of the noninteracting harmonic oscillator model, which gives $\langle v_i^2 \rangle = \hbar \omega_{0i}/(2m)$ and $\langle r^2_i \rangle = (1/2) a_{0i}^2 + \langle v_i^2 \rangle t^2$. The figures point out very clearly the role of two-body interactions which modify both the timescale of the expansion process and the asymptotic value of the aspect ratio. The comparison with the experimental data for $R_r$ should be however taken with care. In the case of the Jila experiments, $N$ is of the order of $10^3\div 10^4$, so that the TF approximation is expected to be rather crude; moreover, the points in Fig. \[fig:jila\] are taken from a gaussian fit to the spatial distribution of the atoms and they can be significantly corrected by using different fitting functions [@Holland2; @Jin]. In the case of the MIT experiments, the points in Fig. \[fig:mit\] correspond to our estimate of the aspect ratio, extracted from the time-of-flight images in Fig. 1 of Ref. [@MIT2].
In Fig. \[fig:lambda\] we report the asymptotic aspect ratio $R$ as a function of $\lambda$. This curve has been recently calculated also in Ref. [@Kagan]. In the limit $\lambda \to 0$ the aspect ratio approaches the value $(\pi/2) \lambda$ [@Castin]. For comparison we also show the predictions of the noninteracting harmonic oscillator model, given by $R_{HO} = \sqrt{\lambda}$.
Another important quantity to discuss is the release energy $E_{rel}$, defined as the energy per particle of the system after the switch-off of the trap. This energy is given by the sum of the kinetic and interaction energy of the atoms and it is conserved during the expansion, being finally converted entirely into the kinetic energy of the expanding cloud. In the present formalism the release energy is a first integral of equation (\[expansion\]) and can be written as $$E_{rel} = {2 \mu \over 7} \left[ {1 \over b_xb_yb_z} +
{1\over2} \sum_i { \dot{b}_i^2 \over \omega_{0i}^2 }\right]
= {2 \mu \over 7} \; ,
\label{ereltf}$$ where $\mu$ is given by (\[mu\]). At the beginning of the expansion $E_{rel}$ coincides with the 2-body interaction energy of the TF ground state. The comparison between Eq. (\[ereltf\]) and the numerical results for $E_{kin}+E_{int}$, calculated with the exact ground state solution of the Gross-Pitaevskii equation, provides a test of the validity of the TF approximation. For $N \simeq 10^3$ the agreement with the exact result is only semiquantitative, becoming better and better as $N$ increases [@Dalfovo]. After long expansion time the release energy can be related to the value of the square radii of the system, through the equation $$E_{rel} = {m \over 2} \langle v^2 \rangle_{t \to \infty} =
{m \over 2 t^2} \langle r^2 \rangle_{t \to \infty} \; .
\label{erel}$$ The release energy has been measured both at Jila [@Holland2] and MIT [@MIT2]. It is worth noticing that in the MIT trap, where the initial configuration is a strongly anisotropic ellipsoid with the major axis along $z$, the release energy is almost entirely converted into kinetic energy of the radial motion, the velocity along $z$ being much smaller. This can be tested by solving equation (\[expansion\]) with the parameters appropriate for the trap of Ref. [@MIT2]. One finds that, after an expansion of $40$ ms, the 2-body interaction energy is a factor $10^{-4}$ smaller than the initial value and the ratio between the axial and radial kinetic energies is approximately $4 \times 10^{-3}$.
The same formalism can be used to investigate the oscillations of the trapped gas. One can easily check that, in the limit of small deformations, the solutions of (\[ddotb\]) yield the dispersion relation discussed in Ref. [@Stringari]. For an axially deformed trap the normal modes are classified in terms of the third component $m$ of the angular momentum. We will discuss here the $m=0$ and $m=2$ modes which are accounted for by the parametrization (\[scalingrho\]-\[scalingv\]). The $m=2$ mode, in the linear limit, has the frequency $\omega=\sqrt{2}
\omega_\perp$, while the low-lying $m=0$ mode, resulting from the coupling between [*monopole*]{} and [*quadrupole*]{} oscillations, has the frequency [@Stringari] $\omega^2 = \eta \omega_{\perp}^2$, with $\eta= (4 + 3 \lambda^2 - \sqrt{9\lambda^4-16\lambda^2+16} )/2$. The frequency of the collective modes is expected to change when the amplitude of the oscillations becomes large, due to nonlinear effects. In order to calculate such deviations, we solve (\[ddotb\]) using, as initial conditions, the ground state values $b_i(0)=1$, but with $\dot{b}(0) \ne 0$. We choose the values of the velocities $\dot{b}_i(0)$ in order to excite, in the linear limit, the two separate $m=2$ and $m=0$ modes. For the $m=2$ mode this implies $\dot{b}_x=-
\dot{b}_y=\epsilon$ and $\dot{b}_z=0$, where the parameter $\epsilon$ fixes the amplitude of the oscillations. For the $m=0$ mode, one has $\dot{b}_x=\dot{b}_y=\epsilon$ and $\dot{b}_z=\epsilon(\eta-4)$. In this case, the system oscillates also along $z$, the axial and radial oscillations having relative amplitude $\eta-4$. It is worth noticing that the occurrence of a simultaneous oscillation in both the radial and axial widths is a typical effect of the interaction between the atoms. In fact, in the absence of 2-body forces (noninteracting harmonic oscillator), the motion in the two directions would be exactly decoupled. The experiments of Ref. [@jila2] reveal not only a good agreement with the predicted frequencies of the two modes but also a clear evidence for the coupling between the axial and radial oscillations (see Fig. 2 of Ref. [@jila2]). These results are crucial signatures of the important role played by the interaction.
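For reference, the minimal sketch below evaluates the linear-limit dispersion relation quoted above for the two trap asymmetries considered in this work; the branch with the opposite sign of the square root is the high-lying $m=0$ mode mentioned later in the text.

```python
# Minimal sketch: linear (small-amplitude) frequencies of the m=2 and m=0
# modes from the dispersion law quoted above,
#   omega(m=2)^2     = 2 omega_perp^2,
#   omega(m=0,-/+)^2 = omega_perp^2 [4 + 3 lam^2 -/+ sqrt(9 lam^4 - 16 lam^2 + 16)]/2,
# evaluated for the two asymmetries lambda used in the text.

import numpy as np

def m0_frequencies(lam):
    """Low- and high-lying m=0 frequencies in units of omega_perp."""
    root = np.sqrt(9 * lam**4 - 16 * lam**2 + 16)
    return np.sqrt((4 + 3 * lam**2 - root) / 2), np.sqrt((4 + 3 * lam**2 + root) / 2)

if __name__ == "__main__":
    for label, lam in (("Jila, lambda = sqrt(8)", np.sqrt(8.0)),
                       ("MIT,  lambda = 0.077  ", 0.077)):
        low, high = m0_frequencies(lam)
        print(f"{label}:  m=2 -> {np.sqrt(2):.3f} w_perp,"
              f"  m=0 low -> {low:.3f} w_perp,  m=0 high -> {high:.3f} w_perp")
```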
By increasing the initial values $\dot{b}_i$, it is possible to explore the nonlinear regime. A major problem for the comparison with the experimental data is that the oscillations are measured by imaging the atomic cloud after switching off the trap and leaving the atoms to expand for a few ms. During the expansion the relative amplitude of the axial and radial motions can be significantly modified. This is especially true for the MIT trap, due to the strong asymmetry of the starting cigar-shaped configuration which makes the expansion in the radial and axial directions quite different. This nontrivial evolution of the oscillating cloud is not surprising if one thinks that, after an expansion time of a few ms, the size of the system can increase by more than a factor ten. In order to make a significant comparison with the experiments it is then important to simulate both the oscillations in the trap and the subsequent expansion.
In Fig. \[fig:frequencies\] we show our predictions for the frequencies of the $m=0$ and $m=2$ modes, in the case of the Jila trap ($\lambda=
\sqrt{8}$), as a function of the relative amplitude, defined as $(1/2)[\langle x^2 \rangle_{max}^{1/2} - \langle x^2
\rangle_{min}^{1/2}]/ \langle x^2 \rangle_{ave}^{1/2}$. The latter is calculated after letting the cloud expand for $7$ ms, as in the experiments of Refs. [@jila2; @jila3]. We find that the frequency of the $m=0$ mode does not exhibit any significant dependence on the relative amplitude, while the $m=2$ frequency increases. This agrees with the experimental findings [@jila2; @jila3], though the measured frequency shift of the $m=2$ mode is about a factor of two larger than our prediction.
As already pointed out, the relative amplitudes of the oscillations in the radial and axial directions behave differently during the expansion. For example, after exciting the $m=2$ mode in the trap, with a relative amplitude of $5$% and $4$% in the radial and axial directions respectively, the corresponding oscillations of the expanded cloud, after $7$ ms, have a relative amplitude of $9$% and $4$%. The aspect ratio $R_r$ after $7$ ms is found to be $1.65$. These predictions agree rather well with the experiments (see Fig. 2 of Ref. [@jila2]), where the aspect ratio is found to be $1.75$ and the radial and axial relative amplitudes are $10$% and $4$%, respectively.
We have repeated the same calculations for the cigar-shaped trap of the MIT experiments ($\lambda=0.077$ [@MIT3]). In this case we excite the low-lying $m=0$ mode in the trap and then we calculate the frequency and the amplitude of the oscillations after expanding the gas for $40$ ms. We do not observe any frequency shift of the low-lying $m=0$ mode as a function of the amplitude, in agreement with the experiments. One should note that the nonlinearities of the oscillations are amplified during the expansion much more than in the Jila trap, since the system is very anisotropic. As a consequence, the large axial amplitude observed after an expansion of $40$ ms corresponds to a rather small amplitude at $t=0$, and this explains in part the absence of a shift in the frequency. As an example, let us consider the aspect ratio plotted in Fig. 2 of Ref. [@MIT3], which oscillates between about $0.28$ and $0.38$. One can easily reproduce the same oscillations by solving (\[expansion\]) starting from a configuration which oscillates as a low-lying $m=0$ mode in the trap (see also Ref. [@Castin]). To get the measured aspect ratio one has to start with an oscillation having a relative amplitude of less than $1$% in the radial direction and about $3.3$% in the axial one. During the expansion the relative amplitude of the radial motion remains practically unchanged, while that of the axial motion increases up to $15$%. These large fluctuations of the axial width of the system are compatible with the conservation of energy, since almost all the energy is carried by the motion of the atoms in the radial expansion, which is much faster than the axial one.
It is also worth noticing that, when $\lambda \ll 1$, the frequency of the high-lying $m=0$ mode is exactly $2\omega_\perp$ even in the nonlinear regime. In this limit, this mode corresponds to a two-dimensional motion and the absence of a shift in its frequency reflects the occurrence of a hidden symmetry of a 2D Bose gas in a harmonic trap [@Pit].
In conclusion, we have shown that the hydrodynamic equations of superfluids provide a useful description of several nonlinear effects associated with the dynamic behavior of a trapped Bose gas at zero temperature. A reasonable agreement with the first available experimental data is found, though for a more quantitative comparison, when the number of atoms is relatively small, the complete solution of the Gross-Pitaevskii equation is expected to be relevant. Our analysis points out the crucial role played by the interatomic forces in the dynamics of the expansion as well as in the behavior of the collective excitations. A natural extension of this work should include thermal effects and, in particular, the interaction between the condensed and thermal components of these systems.
M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell, Science [**269**]{}, 198 (1995).
D. S. Jin, J. R. Ensher, M. R. Matthews, C. E. Wiemann, and E. A. Cornell, Phys. Rev. Lett. [**77**]{}, 420 (1996).
D. S. Jin, M. R. Matthews, J. R. Ensher, C. E. Wiemann, and E. A. Cornell, preprint 1996
K. B. Davis, M.-O. Mewes, M. R. Andrews, N. J. van Druten, D. S. Durfee, D. M. Kurn, and W. Ketterle, Phys. Rev. Lett. [**75**]{}, 3969 (1995).
M.-O. Mewes, M. R. Andrews, N. J. van Druten, D. M. Kurn, D. S. Durfee, and W. Ketterle, Phys. Rev. Lett. [**77**]{}, 416 (1996)
M.-O. Mewes, M. R. Andrews, N. J. van Druten, D. M. Kurn, D. S. Durfee, C. G. Townsend, and W. Ketterle, Phys. Rev. Lett. [**77**]{}, 988 (1996)
L.P. Pitaevskii, Zh. Eksp. Teor. Fiz. [**40**]{}, 646 (1961) \[Sov. Phys. JETP [**13**]{}, 451 (1961)\]; E.P. Gross, Nuovo Cimento [**20**]{}, 454 (1961); E.P. Gross, J. Math. Phys. [**4**]{}, 195 (1963).
G. Baym and C. Pethick, , 6 (1996).
F. Dalfovo and S. Stringari, Phys. Rev. A [**53**]{}, 4377 (1996).
Mark Edwards, R. J. Dodd, C. W. Clark, P. A. Ruprecht, and K. Burnett, Phys. Rev. A [**53**]{}, R1950 (1996).
Mark Edwards, P. A. Ruprecht, K. Burnett, R. J. Dodd, and C. W. Clark, , 1671 (1996).
K.G. Singh and D.S. Rokhsar, Phys. Rev. Lett. [**77**]{}, 1667 (1996)
S. Stringari, Phys. Rev. Lett. [**77**]{}, 2360 (1996)
M. J. Holland and J. Cooper, , R1954 (1996).
M.J. Holland, D. Jin, M.L. Chiofalo, and J. Cooper, preprint 1996
Y. Castin and R. Dum, preprint 1996
Yu. Kagan, E.L. Surkov and G.V. Shlyapnikov, Phys. Rev. A [**54**]{}, R1753 (1996); Yu. Kagan, E.L. Surkov and G.V. Shlyapnikov, preprint
P.A. Ruprecht, M. Edwards, K. Burnett, and C.W. Clark, Phys. Rev. A [**54**]{}, 4178 (1996)
Another exact solution of Eqs. (\[continuity\]-\[Euler\]) is obtained including terms linear in $x$ (or $y, z$) in the density and in the velocity potential. This solution corresponds to the motion of the center of mass of the cloud. Other solutions can be found including terms of the form $xy$ ($xz$, $yz$).
D. S. Jin and E. A. Cornell, private communication
L.P. Pitaevskii, Phys. Lett. [**A 221**]{}, 14 (1996); L.P. Pitaevskii and A. Rosch, preprint cond-mat/9608135
---
abstract: 'We present experimental results showing the diffuse reflection of a Bose-Einstein condensate from a rough mirror, consisting of a dielectric substrate supporting a blue-detuned evanescent wave. The scattering is anisotropic, more pronounced in the direction of the surface propagation of the evanescent wave. These results agree very well with theoretical predictions.'
address:
- |
$^1$ Laboratoire de Physique des Lasers, Universit[é]{} Paris 13\
99 avenue Jean-Baptiste Cl[é]{}ment, 93430 Villetaneuse, France
- |
$^2$ Institut für Physik, Universität Potsdam\
Am Neuen Palais 10, 14469 Potsdam, Germany
author:
- 'H[é]{}l[è]{}ne Perrin$^1$, Yves Colombe$^1$, Brigitte Mercier$^1$, Vincent Lorent$^1$ and Carsten Henkel$^2$'
title: 'Diffuse reflection of a Bose-Einstein condensate from a rough evanescent wave mirror'
---
Introduction
============
The study of the interactions between ultra cold atoms and surfaces is of major interest in the context of Bose-Einstein condensation on microchips [@Reichel01b; @Folman02]. One motivation is to understand the limitations on integrated matter wave devices due to imperfect surface fabrication or finite temperature. For example, it has been shown that the quality of the wires used in microfabricated chips is directly linked to the fragmentation effects observed in Bose-Einstein condensates (BECs) trapped near a metallic wire [@Zimmermann02a]. Moreover, the thermal fluctuations of the current in a metallic surface induce spin flip losses in an atomic cloud when the distance to the surface is smaller than $10~\mu$m typically [@Jones03].
Dielectric surfaces and evanescent waves have also been explored for producing strong confinement. They have the advantage of a strong suppression of the spin flip loss mechanism compared to metallic structures [@Henkel99c]. With such a system, one can realize mirrors [@Balykin88], diffraction gratings [@Dalibard96], 2D traps [@Ovchinnikov91] or waveguides [@Prentiss00]. Experiments involving ultra cold atoms from a BEC at the vicinity of a dielectric surface have recently made significant progress, leading for instance to the realization of a two dimensional BEC [@Grimm04a], to the study of atom-surface reflection in the quantum regime [@Pasquini04], and to sensitive measurements of adsorbate-induced surface polarization [@Cornell04a] and of the Van der Waals/Casimir-Polder surface interaction [@Vuletic04].
In this paper, we present experimental results and a related theoretical analysis of Bose condensed Rubidium atoms interacting with the light field of an evanescent wave above a dielectric slab. The evanescent wave is detuned to the blue of an atomic transition line and provides a mirror for a BEC that is released from a trap and falls freely in the gravity field of the earth. After the bounce off the mirror, we observe a strong scattering of the atomic cloud (diffuse mirror reflection) that is due to the roughness of the slab surface where the evanescent wave is formed [@Landragin96b]. In our case, the phase front of the reflected matter waves is significantly distorted because the effective corrugation of the mirror is comparable to $\lambda_{\rm dB}/4 \pi
\cos\theta$ where $\lambda_{\rm dB}$ is the incident de Broglie wavelength and $\theta$ the angle of incidence. This is similar to early experiments with evanescent waves [@Landragin96b] and with magnetic mirrors [@HindsHughes]. We mention that later experiments achieved a significantly reduced diffuse reflection (Arnold [@Arnold02]) and were even able to distinguish a specularly reflected matter wave (Savalli [@Savalli2002]). The key result of our experiment is that we can quantitatively confirm the theoretical analysis developed by Henkel [@Henkel97a], combining independent measurements of the dielectric surface and the bouncing atoms.
The paper starts with a presentation of the experiment and an analysis of the experimental results, following Ref.[@Perrin05a]. We then outline an improved theoretical analysis based on Ref.[@Henkel97a] and discuss the momentum distribution of the reflected atoms, in particular its diffuse spread and its isotropy.
Setup
=====
![Dielectric prism supporting the evanescent wave. The surface is coated by two layers of successively low and high refraction index to realize a wave guide and enhance the evanescent field. For each incident polarization, TE or TM, coupling is resonant for a given incident angle. The experiments were performed with TE polarization. One denotes $x$ as the propagation axis of the evanescent wave along the surface, $y$ as the other horizontal axis and $z$ as the vertical one.[]{data-label="prism"}](Figs/prism.eps){height="20mm"}
The evanescent wave is produced by total internal reflection of a Gaussian laser beam at the surface of a dielectric prism. As shown in Fig.\[prism\], the surface of the prism is coated by two dielectric layers, a TiO$_2$ layer on top of a SiO$_2$ spacer layer. This coating forms an optical waveguide that resonantly enhances the evanescent field above the top layer [@Kaiser94]; we have designed this configuration for the study of two-dimensional atom traps [@Perrin03]. The incident angle of the laser beam is fixed by the resonance condition for a waveguide mode; for the transverse electric (or s) polarization we use, the incident angle is $\theta_i = 46.1^{\circ}$ (at the TiO$_2$/vacuum interface, index $n_{\rm TiO_2} = 1.86$). The resulting exponential decay length of the light field is $\kappa^{-1}=93.8$ nm, and $I = I_0 \, e^{-2 \kappa z}$ is the light intensity.
The mirror light is produced by a laser diode of power 40 mW detuned 1.5 GHz above the atomic D2 line ($\lambda = 780\,{\rm nm}$ or $1/\lambda = k_{L}/2\pi = 12\,820\,{\rm cm}^{-1}$). The Gaussian beam is elliptical and produces on the surface a spot with $1/\sqrt{e}$ waist diameters of $220~\mu$m along $x$ and $85~\mu$m along $y$ (see coordinate axes in Fig.\[prism\]). A measurement of the reflection threshold for the atom beam, taking into account the van der Waals attraction toward the surface (Landragin *et al.* in Ref.[@Balykin88]), gives access to the light intensity at the surface in the spot center: $I_{0} = 210\,{\rm W/cm}^2$. This value is lower than expected from the design of the dielectric coating; we attribute this to the losses due to the roughness of the deposited TiO$_2$ layer (see Figure \[AFMpicture\] below and the discussion there).
Atom bounce
===========
Data
----
The experiment proceeds as follows: approximately $10^8$ atoms are confined in the hyperfine ground state $F=2, m_F=2$ in a Ioffe Pritchard (IP) type magnetic trap, 3.6 mm above the evanescent mirror [@Perrin03]. The magnetic trap is cigar shaped, $x$ being its long axis. Oscillation frequencies are, respectively, $\omega_x/2 \pi
= 21\,{\rm Hz}$ and $\omega_{\perp}/2 \pi = 220\,{\rm Hz}$ in the radial directions ($y$ and $z$). The atoms are evaporatively cooled to below the condensation threshold and about $N = 3\times10^5$ atoms are released at $t = 0$ by switching off the magnetic trapping fields. These atoms reach the mirror after free fall at $t_{\mbox{\scriptsize reb}}=27$ ms and bounce on it with a velocity $v_{i} = 265\,{\rm mm/s}$ (normal incidence $\theta = 0$, de Broglie wavelength $\lambda_{\rm dB} = 2\pi \hbar / m v_{i} = 17.3\,{\rm
nm}$). Around the bouncing time $t_{\mbox{\scriptsize reb}}$, the mirror laser is switched on for $\Delta t = 2.2$ ms. Limiting this time window $\Delta t$ prevents near-resonant photon scattering during free fall or after reflection.
The atoms are detected by absorption imaging either before or after reflection. During free fall, the cloud expands along the radial directions because potential and interaction energy is released, but its width along $x$ remains nearly constant. The analysis of pictures taken before reflection gives access to the following parameters: fraction of condensed atoms $N_0/N=0.4$, kinetic temperature of thermal cloud $T=285$ nK, initial Thomas-Fermi size along $x$ of the condensed fraction $R_x=90~\mu$m and Thomas-Fermi velocity width along $z$: $V_{\perp}=5.96\,{\rm mm/s}$. The condensate velocity width along $x$ is very small, and thus not directly measurable. However, it can be inferred from the knowledge of $V_{\perp}$ and the oscillation frequencies in the magnetic trap, using the solution for an expanding BEC [@Castin96]; we get $V_x = \frac{\pi}{2} \, \frac{\omega_x}{\omega_{\perp}} V_{\perp} =
0.89\,{\rm mm/s}$. The observation of the center of mass motion during free fall permits us to calibrate the pixel size knowing gravity’s acceleration and to infer the initial position and velocity of the cloud. The magnetic field switching process communicates a small acceleration to the atoms along $x$, resulting in a horizontal velocity $v_x = -30.7\,{\rm mm/s}$ (see Figure \[bounce\]).
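The kinematic numbers quoted above follow directly from free fall and from the Thomas-Fermi scaling of the released cloud; the minimal sketch below reproduces them from the parameters given in the text, assuming standard gravity.

```python
# Minimal sketch reproducing the kinematics quoted above: fall time,
# impact velocity, de Broglie wavelength, and the inferred axial velocity
# width V_x = (pi/2)(omega_x/omega_perp) V_perp. Only the drop height,
# trap frequencies and V_perp are taken from the text; g is assumed.

import numpy as np

H_PLANCK = 6.62607015e-34        # J s
AMU      = 1.66053906660e-27     # kg
G        = 9.81                  # m/s^2 (assumed)

mass   = 87 * AMU                # Rb-87
height = 3.6e-3                  # drop height above the mirror (m)

t_fall   = np.sqrt(2 * height / G)
v_impact = G * t_fall
lam_dB   = H_PLANCK / (mass * v_impact)

w_x, w_perp = 2 * np.pi * 21.0, 2 * np.pi * 220.0
V_perp = 5.96e-3                 # measured radial velocity width (m/s)
V_x = 0.5 * np.pi * (w_x / w_perp) * V_perp

print(f"fall time       = {t_fall * 1e3:5.1f} ms")       # ~27 ms
print(f"impact velocity = {v_impact * 1e3:5.0f} mm/s")    # ~265 mm/s
print(f"lambda_dB       = {lam_dB * 1e9:5.1f} nm")        # ~17.3 nm
print(f"V_x             = {V_x * 1e3:5.2f} mm/s")         # ~0.89 mm/s
```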
![Absorption imaging pictures of a bouncing BEC with $3\times
10^5$ atoms for different times of flight after reflection: 2 ms, 7 ms, 12 ms, 17 ms and 22 ms. The pictures, taken with a 200 $\mu$s long, resonant pulse, are merely superimposed. The wide black line in the bottom is due to the prism surface, slightly tilted from the imaging axis. Picture dimensions are 5.7 mm $\times$ 4.4 mm.[]{data-label="bounce"}](Figs/bounce_with_axes.eps){width="70mm"}
After reflection, the absorption images change dramatically (figure \[bounce\]). The atoms occupy the surface of a scattering sphere, hence an elastic, but strongly diffuse scattering occurs. For $t > t_{\rm reb}$, the cloud width along $x$ increases from its initial value due to an additional velocity spread $\sigma_{v_x}$. The velocity Gaussian radius at $1/\sqrt{e}$ deduced from the pictures is $39.4\,{\rm mm/s}$. Taking into account the initial velocity width before reflection, the spread due to diffuse reflection is $\sigma_{v_x} = 39\,{\rm mm/s}$, that is $6.6\pm0.2~v_{\mbox{\scriptsize
rec}}$ where $v_{\mbox{\scriptsize rec}} = \hbar k_{L} / m
= 5.89\,{\rm mm/s}$ is the recoil velocity for Rb. This corresponds to an angular (rms) spread $\Delta \theta \approx 8.4^\circ$.
The effect of diffuse reflection along $y$ is more subtle to analyze, as this axis is aligned with the direction of observation. However, it is possible to extract information about $\sigma_{v_y}$ from the picture. If, for instance, the scattering were totally isotropic, with $\sigma_{v_y} = \sigma_{v_x}$, the atomic cloud should extend asymmetrically towards $-z$ at a given position $x$, as the projection of a spherical shell onto a plane extends towards the inner part of the circle (see figure \[profiles\]). If, on the contrary, the scattering took place only along $x$, the cloud width along $z$ at a given position $x$ should be very small, with a symmetric shape.
Simulation
----------
To get some insight into what happens along $y$, we performed a numerical simulation of the atomic reflection. The simulation calculates $N = 3\times 10^5$ individual classical atomic trajectories. The initial positions and velocities are chosen to mimic the experimentally measured parameters: 40% of the atoms are “condensed” and are described by the initial 3D Thomas-Fermi velocity and position distribution. (We neglect the position spread along $y$ and $z$ because its contribution to the cloud size after a few ms of time of flight is very small.) The remaining 60% of the atoms are distributed according to gaussian profiles for velocity and position, with widths inferred from the knowledge of temperature and trap parameters. Position and velocity of the cloud centre are fixed to the experimental values as well.
![Simulation of a bouncing BEC with $3\times 10^5$ atoms for the same times of flight as the experimental ones, figure \[bounce\]. For this series, the velocity spread was chosen to be $\sigma_{v_y} = \sigma_{v_x} / 2 = 19.5\,{\rm mm/s}$. The position of the mirror surface is marked by a grey line.[]{data-label="simul"}](Figs/simul2.eps){width="70mm"}
The mirror is modelled as an instantaneous diffuse reflector. This assumption is reasonable as the typical time spent in the evanescent wave is small, $1 /\kappa v_i = 0.35~\mu$s. After reflection, the atomic velocity is modified to describe both specular reflection (inversion of vertical velocity) and scattering. A random horizontal velocity is added to the reflected velocity with a gaussian distribution. We take a $1/\sqrt{e}$ radius $\sigma_{v_{x}} = 39\,{\rm mm/s}$, as measured experimentally, and run simulations with varying $\sigma_{v_{y}}$. The $z$ component of the velocity is adjusted in order to preserve kinetic energy (the scattering process is elastic, total energy is conserved). The simulation also takes into account spontaneous emission. For our parameters, the atom spontaneously emits on average 0.13 photons per bounce [@note_spont]. We randomly draw the number of photons from a Poisson distribution and add a recoil of $1\,v_{\rm rec}$ in a random direction in velocity space for each emission event. After calculation of all atomic trajectories, the atomic density profile is integrated along $y$ as in the experimental pictures. We finally apply a Gaussian blur filter (width $\sigma_{\mbox{\scriptsize res}} = 9\,\mu$m along $x$ and $20~\mu$m along $z$) to mimic the finite resolution of the experimental imaging system that we calibrated independently.
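A stripped-down version of such a Monte Carlo simulation is sketched below: point-like sampling of the initial cloud, free fall, an instantaneous diffuse bounce that adds Gaussian horizontal kicks and rescales the vertical velocity to conserve kinetic energy, and ballistic flight afterwards. It is a schematic reconstruction for illustration only, not the code used for the figures; the thermal fraction, spontaneous emission and imaging blur are omitted, and the Thomas-Fermi widths are treated as Gaussian standard deviations.

```python
# Simplified Monte-Carlo sketch of the bounce described above (schematic
# reconstruction, not the authors' code). Thermal fraction, spontaneous
# emission and imaging blur are omitted; Thomas-Fermi widths are treated
# as Gaussian standard deviations for simplicity.

import numpy as np

G = 9.81                                   # m/s^2
rng = np.random.default_rng(0)

def bounce_cloud(n_atoms=30_000, height=3.6e-3,
                 sigma_vx=39e-3, sigma_vy=19.5e-3, t_after=22e-3):
    """Return (x, z) positions (m) of the atoms t_after seconds after the bounce."""
    # crude initial conditions (widths in m/s and m, roughly as in the text)
    vx = rng.normal(-30.7e-3, 0.89e-3, n_atoms)    # drift + axial velocity width
    vy = rng.normal(0.0, 5.96e-3, n_atoms)
    vz = rng.normal(0.0, 5.96e-3, n_atoms)
    x  = rng.normal(0.0, 90e-6, n_atoms)           # axial size only

    # free fall onto the mirror
    t_fall = np.sqrt(2 * height / G)
    x  = x + vx * t_fall
    vz = vz - G * t_fall

    # instantaneous diffuse, elastic reflection: add horizontal kicks and
    # choose the upward v_z that keeps the kinetic energy unchanged
    v2 = vx**2 + vy**2 + vz**2
    vx = vx + rng.normal(0.0, sigma_vx, n_atoms)
    vy = vy + rng.normal(0.0, sigma_vy, n_atoms)
    vz = np.sqrt(np.clip(v2 - vx**2 - vy**2, 0.0, None))

    # ballistic flight after the bounce (z = 0 at the mirror surface)
    x = x + vx * t_after
    z = vz * t_after - 0.5 * G * t_after**2
    return x, z

if __name__ == "__main__":
    x, z = bounce_cloud()
    print(f"rms horizontal size 22 ms after the bounce: {x.std() * 1e3:.2f} mm")
    print(f"mean height 22 ms after the bounce:         {z.mean() * 1e3:.2f} mm")
```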
Anisotropic scattering
----------------------
The qualitative agreement between the experimental and simulated pictures is very good as can be seen on figure \[simul\]. To be more quantitative for the possible values of the velocity spread $\sigma_{v_{y}}$, we analyze the central part of the cloud. For each time of flight, a region of size 0.8 mm $\times$ 1.5 mm along $x$ and $z$ respectively, centred on the maximum density of the cloud and identical for experimental and simulated pictures, is isolated and an integration of the signal is performed along $x$. We are left with a cut of the cloud along $z$, averaged over 0.8 mm along $x$. The experimental profile is compared to the simulated one, for different choices of $\sigma_{v_{y}}$ after the bounce. Results are shown on figure \[profiles\] for a time of flight 59 ms.
![Atomic density profiles integrated along $y$ and averaged along $x$, after 59 ms total time of flight, *i.e.* 32 ms after reflection. Closed circles: normalized experimental data. Lines: result of numerical calculation with a starting height 3.59 mm above the mirror, $N_0/N = 0.4$, $V_y = V_z = V_{\perp} =
5.96\,{\rm mm/s}$, $V_x = 0.89\,{\rm mm/s}$, $R_x = 90~\mu$m, $T =
285$ nK, $v_x = -30.67\,{\rm mm/s}$, $v_z = 0.3\,{\rm mm/s}$ and $\sigma_{v_x} = 39\,{\rm mm/s}$, values deduced from the experimental pictures. Thin line: totally anisotropic scattering ($\sigma_{v_y} =
0$); dashed line: anisotropic scattering with $\sigma_{v_y} =
\sigma_{v_x}/2$; bold line: isotropic scattering ($\sigma_{v_y} =
\sigma_{v_x}$). All curves are normalized to a maximum value of unity.[]{data-label="profiles"}](Figs/profiles.eps){width="85mm"}
The experimental data clearly exclude an isotropic diffuse reflection (figure \[profiles\], bold line). They also differ from the purely one-dimensional scattering case (thin line): the best fit is obtained with a model intermediate between these two extremes, *i.e.* the scattering is only half as strong along $y$ as along $x$. The atom mirror thus has an angular reflection characteristic that is elongated in the direction parallel to the (real part of the) wave vector of the evanescent wave. Spontaneous emission plays only a minor role for our parameters, but we found that the agreement with the experimental density profiles is improved by taking it into account, in particular on the lower left wing of the peak.
Mirror corrugation
------------------
For a theoretical prediction of the anisotropic mirror reflection, we use the theory of Ref.[@Henkel97a] where the diffuse scattering is attributed to the interference between the evanescent wave and light diffusely scattered from the rough glass surface. Within this theory, one can compute the width of the momentum distribution of the reflected atoms provided the power spectrum of the surface roughness is known. This power spectrum is a quantitative measure of the surface quality and has been measured with an atomic force microscope (AFM). A typical $4.5 \times 4.5 \mu$m$^2$ portion of the surface of the coated prism is shown in figure \[AFMpicture\]. One sees the top face of pillar-like structures which are typical for epitaxially grown TiO$_2$ on a substrate. The AFM data yield a surface roughness $\sigma = 3.34$ nm (the rms spread of the measured surface profile). A Fourier transform of the AFM image gives access to the power spectrum $P_S( {\bf Q} )$. (We use capitalized boldface letters for two-dimensional vectors in the mirror plane.) It is found to be isotropic (a function of $Q$ only) and well fitted in the wave vector range $1\,k_L \ldots 13\,k_L$ by a power law with a low-frequency cut-off (see figure \[Psfit\]) $$P_S( {\bf Q} ) = \frac{P_0}{\left(1 +
Q^2/ Q_0^2 \right)^{\alpha/2}}
\label{eq:PS-model}$$
![(left) Typical AFM picture of the prism surface. The dimensions are $4.5\,\mu{\rm m}\times 4.5\,\mu{\rm m}$. The grains are the top facets of pillar-like structures characteristic of an epitaxed TiO$_2$ surface.[]{data-label="AFMpicture"}](Figs/afm.eps){width="50mm"}
The fit gives access to the parameters $\alpha=4.8$, $P_0 = 5.3 \times 10^{-4}~k_L^{-4}$ and $Q_0 =
4.94~k_L$. In terms of this power spectrum, the rms surface roughness $\sigma$ is given by $$\sigma^2 = \int\!\frac{{\rm d}^2Q}{(2 \pi)^2} \, P_S( {\bf Q} ),$$ and the fitted parameters yield $\sigma = 3.36$ nm, in excellent agreement with the value directly deduced from the rms spread of the AFM data.
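As a consistency check of the fit, the minimal sketch below evaluates the integral above numerically with the quoted parameters $\alpha$, $P_0$ and $Q_0$ and recovers an rms roughness close to the value of 3.36 nm quoted above.

```python
# Minimal sketch: rms roughness from the fitted power spectrum,
#   P_S(Q) = P_0 / (1 + Q^2/Q_0^2)^(alpha/2),
#   sigma^2 = Integral d^2Q/(2 pi)^2 P_S(Q) = (1/2pi) Integral_0^inf Q P_S(Q) dQ,
# with the fit parameters quoted in the text (Q measured in units of k_L).

import numpy as np
from scipy.integrate import quad

lam_L = 780e-9                         # mirror light wavelength (m)
k_L   = 2 * np.pi / lam_L              # optical wave number (1/m)

alpha, P0, Q0 = 4.8, 5.3e-4, 4.94      # P0 in units of k_L^-4, Q0 in units of k_L

def P_S(Q):                            # Q in units of k_L
    return P0 / (1.0 + (Q / Q0) ** 2) ** (alpha / 2)

sigma2, _ = quad(lambda Q: Q * P_S(Q) / (2 * np.pi), 0.0, np.inf)   # in k_L^-2
sigma = np.sqrt(sigma2) / k_L
print(f"sigma = {sigma * 1e9:.2f} nm")   # ~3.4 nm, close to the AFM value
```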
Diffuse reflection theory and comparison to the data
----------------------------------------------------
We now show that the diffuse reflection we observe can be understood within the theory of Ref.[@Henkel97a]. We note first that the cloud is so dilute at the bounce that a single-atom picture is sufficient to capture the physics [@Bongs99]. For a fixed incident momentum ${\bf p}_{\rm inc} = {\bf P} - {\bf e}_{z} m v_{i}$ near normal incidence, the reflected wave function can be written in the form of a plane wave with a randomly modulated phase front: $$\psi_{\rm refl}( {\bf r} ) = N \exp {\rm i} \left[ {\bf p}_{\rm
spec} \cdot{\bf r} + \delta\phi( {\bf R} ) \right],
\label{eq:refl-psi}$$ where $N$ is a normalization factor and ${\bf p}_{\rm spec} = {\bf P} + {\bf e}_{z} m v_{i}$. (The dependence on the angle of incidence is actually negligible for our parameters [@Henkel97a].) The phase $\delta\phi( {\bf R} )$ depends on the ‘impact position’ ${\bf R}$ on the mirror, *i.e.,* the projection of [**r**]{} onto the mirror plane. We perform an ensemble average over the realizations of the rough surface and compute the atomic momentum distribution $P_A( {\bf P} + \hbar {\bf
Q} )$ from the (spatial) Fourier transform of the ‘atomic coherence function’ (Sec. 6 of Ref.[@Henkel97a]) $$\begin{aligned}
\langle
\psi^*_{\rm refl}( {\bf r} ) \psi_{\rm refl}( {\bf r}' )
\rangle
&=&
N^2 \exp {\rm i} \left[
{\bf p}_{\rm spec}\cdot({\bf r}' - {\bf r}) \right]
\label{eq:atomic-coh-func}
\\
&&
\times
\exp\left[ - \frac12 \langle \left( \delta\phi( {\bf R} )
- \delta\phi( {\bf R}' ) \right)^2 \rangle \right],
\nonumber\end{aligned}$$ (We take $\langle \delta\phi( {\bf R} ) \rangle = 0$, assuming the roughness to be statistically homogeneous.) The variance of the phase shift can be found from the following formula (Eqs.(6.15) and (5.16) of Ref.[@Henkel97a]) $$\langle \delta\phi( {\bf R} ) \delta\phi( {\bf R}' ) \rangle
= \int\!\frac{{\rm d}^2Q}{(2 \pi)^2} \,
P_S( {\bf Q} ) |B_{\rm at}( {\bf Q} )|^2
{\rm e}^{ {\rm i} {\bf Q} \cdot ( {\bf R} - {\bf R}' ) },
\label{eq:phase-corrs}$$ where $B_{\rm at}( {\bf Q} )$ is the “atomic response function” given in Eq.(5.15) of Ref.[@Henkel97a]. For the parameters of our experiment, we find that the phase shift has a variance $\langle \delta\phi^2( {\bf R} ) \rangle = 16.5$ large compared to unity. In this regime, Ref.[@Henkel97a] has shown that the reflected atomic velocity distribution approaches a Gaussian shape whose width along the $x$-direction, for example, is given by $$\frac{ \sigma_{v_x}^2 }{ v_{\rm rec}^2 } =
\frac{1}{k_L^2}
\int\!\frac{{\rm d}^2Q}{(2 \pi)^2} \, Q_x^2 P_S( {\bf Q} )
|B_{\rm at}( {\bf Q} )|^2
.
\label{eq:dQx-theory}$$ This expression gives the additional broadening of the incident velocity distribution due to the diffuse mirror reflection. We perform the integration of Eq.(\[eq:dQx-theory\]) numerically, with the roughness power spectrum determined previously from the AFM images (Eq.(\[eq:PS-model\])). For simplicity, we calculate the response function $B_{\rm at}( {\bf Q} )$ using scalar light scattering from the topmost interface only, ignoring the actual layered structure. We believe that this approximation is sufficient, at least for describing the scattering in the $x$-direction: as shown in Ref [@Henkel97a], the atom does not change its magnetic sublevel if it scatters in this direction and if the evanescent wave is linearly polarized. These conditions are met here so that both atom and light can be described by scalar wave fields.
Within the theoretical model outlined above, the velocity spread along the propagation direction of the evanescent wave is found to be $\sigma_{v_x} = 6.76~v_{\rm
rec}$. This value is in very good agreement with the experimental value $6.6\pm0.2~v_{\rm rec}$. This is a very satisfying result because the theory only contains, within the approximations we made, parameters that are based on independent measurements. We believe that this is the first quantitative demonstration of evanescent wave scattering in the diffuse regime.
Discussion of the anisotropy
----------------------------
We also compute the anisotropy of the reflected atoms and find a ratio $\sigma_{v_x} / \sigma_{v_y} = 2.6$, in good agreement with the value ($2 \pm 0.5$) extracted from the experimental data. As discussed in Ref.[@Henkel97a], this anisotropy arises from the fact that diffuse reflection occurs predominantly by Bragg transitions where a photon is absorbed from the evanescent wave (with wave vector $k_x = k_L n_{\rm TiO_2} \sin\theta_i$) and another photon is emitted into a diffusely scattered mode that emerges at grazing incidence into the vacuum half-space (or the inverse process). If these scattered modes are distributed isotropically in the mirror plane on a circle of radius $r_{\rm sc} k_{L}$, the ratio of the rms spreads would be $\sigma_{v_{x}} / \sigma_{v_{y}} =
(2 (n_{\rm TiO_2} \sin(\theta_i) / r_{\rm sc})^2 + 1)^{1/2}$. Taking $r_{\rm sc} = 1$, which corresponds to scattered modes emerging at grazing incidence, we again find an anisotropy ratio of $\approx 2.5$. This agreement is not very surprising since the rough surface has a power spectrum much broader than the photon wavenumber $k_{L}$ (Fig.\[Psfit\]). Within this simple calculation, however, we can also get a quick estimate of the impact of the dielectric coating. The choice $r_{\rm sc} = n_{\rm TiO_2} \sin(\theta_i)$ corresponds to resonant scattering into waveguide modes in the TiO$_{2}$ layer and leads to a ratio $\sigma_{v_{x}} / \sigma_{v_{y}} = \sqrt{3}$ which cannot be excluded experimentally.
Conclusion
==========
In conclusion, we have observed the diffuse reflection of an ultracold atomic beam from an evanescent wave. The wave propagates on the rough surface of a dielectric prism, and light scattering leads to an atom mirror showing a significantly nonspecular reflection. The angular broadening of the reflected atoms, as well as their anisotropic angular distribution in the mirror plane, are in good agreement with a theory developed by Henkel *et al.* [@Henkel97a]. It is remarkable that this agreement does not involve any free parameters since we independently measured the spectrum of the surface roughness with an AFM. In our experiment, using a BEC has mainly practical advantages. Indeed, as we mentioned above, everything can be understood within a single-atom picture, and after diffuse scattering, spatial coherence is seriously reduced, as is discussed in Ref.[@Henkel97a] and investigated in Ref.[@Esteve04b]. Nevertheless, the BEC provides crucial advantages because we achieve a very clean situation. Apart from a very low velocity spread $V_x \ll \sigma_{v_{x}}$, a BEC has a negligible size when impacting the evanescent wave surface. This removes the need to take into account the mirror curvature due to the gaussian spot profile; the contribution of the initial size to the cloud width after reflection is negligible; and the losses given the finite size of the mirror (the waist of the reflected laser beam) are minimal. In fact, with a freely falling, ultracold, but thermal gas, the finite mirror size would lead to a strongly reduced signal.
We gratefully acknowledge support by the Région Ile-de-France (contract number E1213) and by the European Community through the Research Training Network “FASTNet” under contract No. HPRN-CT-2002-00304 and the Marie Curie Research Network “Atom Chips” under contract No. MRTN-CT-2003-505032. Laboratoire de Physique des Lasers (LPL) is Unité Mixte de Recherche 7538 of Centre National de la Recherche Scientifique and Université Paris 13. The LPL group is a member of the Institut Francilien de Recherche des Atomes Froids.
Hänsel W, Hommelhoff P, Hänsch T W and Reichel J 2001 [*Nature*]{} [**413**]{} 498–501
Folman R, Kr[ü]{}ger P, Schmiedmayer J, Denschlag J H and Henkel C 2002 [ *Adv. At. Mol. Opt. Phys.*]{} [**48**]{} 263–356
Fort[á]{}gh J, Ott H, Kraft S and Zimmermann C 2002 [*Phys. Rev. A*]{} [ **66**]{} 041604(R) Leanhardt A E, Shin Y, Chikkatur A P, Kielpinski D, Ketterle W and Pritchard D E 2003 [*Phys. Rev. Lett.*]{} [**90**]{} 100404 Schumm T, Estève J, Figl C, Trebbia J B, Aussibal C, Nguyen H, Mailly D, Bouchoule I, Westbrook C and Aspect A 2005 [*Eur. Phys. J. D*]{} [**32**]{} 171–80
Jones M P A, Vale C J, Sahagun D, Hall B V and Hinds E A 2003 [*Phys. Rev. Lett.*]{} [**91**]{} 080401 Harber D M, McGuirk J M, Obrecht J M and Cornell E A 2003 [*J. Low Temp. Phys.*]{} [**133**]{} 229–38 Rekdal P K, Scheel S, Knight P L and Hinds E A 2004 [*Phys. Rev. A*]{} [ **70**]{} 013811
Henkel C and Wilkens M 1999 [*Europhys. Lett.*]{} [**47**]{} 414–20 Henkel C, P[ö]{}tting S and Wilkens M 1999 [*Appl. Phys. B*]{} [ **69**]{} 379–87
Balykin V I, Letokhov V S, Ovchinnikov Y B and Sidorov A I 1988 [ *Phys. Rev. Lett.*]{} [**60**]{} 2137–40 Kasevich M A, Weiss D S and Chu S 1990 [*Opt. Lett.*]{} [**15**]{} 607–609 Aminoff C G, Steane A M, Bouyer P, Desbiolles P, Dalibard J and Cohen-Tannoudji C 1993 [*Phys. Rev. Lett.*]{} [**71**]{} 3083–6 Landragin A, Courtois J Y, Labeyrie G, Vansteenkiste N, Westbrook C I and Aspect A 1996 [*Phys. Rev. Lett.*]{} [**77**]{} 1464–7
Christ M, Scholz A, Schiffer M, Deutschmann R and Ertmer W 1994 [ *Opt. Commun.*]{} [**107**]{} 211–7 Brouri R, Asimov R, Gorlicki M, Feron S, Reinhardt J, Lorent V and Haberland H 1996 [*Opt. Commun.*]{} [**124**]{} 448–51 Szriftgiser P, Guéry-Odelin D, Arndt M and Dalibard J 1996 [ *Phys. Rev. Lett.*]{} [**77**]{} 4–7 Cognet L, Savalli V, Horvath G Z K, Holleville D, Marani R, Westbrook N, Westbrook C I and Aspect A 1998 [*Phys. Rev. Lett.*]{} [**81**]{} 5044–5047
Ovchinnikov Y B, Shul’ga S V and Balykin V I 1991 [*J. Phys. B: Atom. Mol. Opt. Phys.*]{} [**24**]{} 3173–8 Gauck H, Hartl M, Schneble D, Schnitzler H, Pfau T and Mlynek J 1998 [*Phys. Rev. Lett.*]{} [**81**]{} 5298–301 Hammes M, Rychtarik D, Engeser B, Nägerl H C and Grimm R 2003 [*Phys. Rev. Lett.*]{} [**90**]{} 173001
Dekker N H, Lee C S, Lorent V, Thywissen J H, Smith S P, Drndi[ć]{} M, Westervelt R M and Prentiss M 2000 [*Phys. Rev. Lett.*]{} [**84**]{} 1124–7
Rychtarik D, Engeser B, Nägerl H C and Grimm R 2004 [*Phys. Rev. Lett.*]{} [**92**]{} 173003
Pasquini T A, Shin Y I, Sanner C, Saba M, Schirotzek A, Pritchard D E and Ketterle W 2004 [*Phys. Rev. Lett.*]{} [**93**]{} 223201 Pasquini T A, Saba M, Jo G, Shin Y, Ketterle K, Pritchard D E, Savas T A and Mulders N 2006, “Low velocity quantum reflection of Bose-Einstein condensates” *preprint* cond-mat/0603463.
McGuirk J M, Harber D M, Obrecht J M and Cornell E A 2004 [*Phys. Rev. A*]{} [**69**]{} 062905
Lin Y J, Teper I, Chin C and Vuleti[ć]{} V 2004 [*Phys. Rev. Lett.*]{} [ **92**]{} 050404 Harber D M, Obrecht J M, McGuirk J M and Cornell E A 2005 [*Phys. Rev. A*]{} [**72**]{} 033610 Obrecht J M, Wild R J, Antezza M, Pitaevskii L P, Stringari S and Cornell E A 2006, “Measurement of the Temperature Dependence of the Casimir-Polder Force” *preprint* physics/0608074.
Landragin A, Labeyrie G, Henkel C, Kaiser R, Vansteenkiste N, Westbrook C I and Aspect A 1996 [*Opt. Lett.*]{} [**21**]{} 1581–3 In these experiments, some indications for anisotropic scattering after reflection of thermal atoms on an evanescent wave mirror were observed (C. Westbrook and A. Landragin, private communication).
Hinds E A and Hughes I G 1999, [*J. Phys. D: Appl. Phys.*]{} [**32**]{} R119–46
Arnold A S, MacCormick C and Boshier M G 2002, [*Phys. Rev. A*]{} [**65**]{} 031601
Savalli V, Stevens D, Estève J, Featonby P D, Josse V, Westbrook N, Westbrook C I and Aspect A 2002 [*Phys. Rev. Lett.*]{} [**88**]{} 250404
Henkel C, M[ø]{}lmer K, Kaiser R, Vansteenkiste N, Westbrook C I and Aspect A 1997 [*Phys. Rev. A*]{} [**55**]{} 1160–78
Perrin H, Colombe Y, Mercier B, Lorent V and Henkel C 2005 [*J. Phys.: Conf. Ser.*]{} [**19**]{} 151–7; doi:10.1088/1742-6596/19/1/025, *preprint* quant-ph/0509200
Kaiser R, Lévy Y, Vansteenkiste N, Aspect A, Seifert W, Leipold D and Mlynek J 1994 [*Opt. Commun.*]{} [**104**]{} 234
Colombe Y, Kadio D, Olshanii M, Mercier B, Lorent V and Perrin H 2003 [*J. Opt. B: Quantum Semiclass. Opt.*]{} [**5**]{} S155–63
Castin Y and Dum R 1996 [*Phys. Rev. Lett.*]{} [**77**]{} 5315–9 Kagan Y, Surkov E L and Shlyapnikov G V 1996 [*Phys. Rev. A*]{} [**54**]{} R1753–6
This value is deduced from an integration of the number of scattered photons along the mean classical atomic trajectory, calculated from the known evanescent wave parameters. We neglect the variation of the spontaneous emission rate at the vicinity of the surface. This assumption is reasonable as the classical turning point is rather far from the surface ($k_L z_0=1.33$). See Henkel C and Courtois J-Y 1998 [*Eur. Phys. J. D*]{} [**3**]{} 129–153. Bongs K, Burger S, Birkl G, Sengstock K, Ertmer W, Rzazewski K, Sanpera A, and Lewenstein M 1999 [*Phys. Rev. Lett.*]{} [**83**]{} 3577; Busch T, private communication
Est[è]{}ve J, Stevens D, Aussibal C, Westbrook N, Aspect A and Westbrook C I 2004 [*Eur. Phys. J. D*]{} [**31**]{} 487–91
---
abstract: 'A deformed boson mapping of the Marumori type is derived for an underlying $su(2)$ algebra. As an example, we bosonize a pairing hamiltonian in a two-level space, for which an exact treatment is possible. Comparisons are then made between the exact result, our q-deformed boson expansion and the usual non-deformed expansion.'
---
[**q-Deformed Boson Expansions**]{}
S.S. Avancini$^1$,F.F. de Souza Cruz$^{1,2}$, J.R. Marinelli$^1$, D.P. Menezes$^1$ and M.M. Watanabe de Moraes$^1$
$^1$[*Departamento de Física, Universidade Federal de Santa Catarina, 88.040-900 Florianópolis - S.C., Brazil*]{}
$^2$[*Institute for Nuclear Theory, University of Washington, Seattle, WA 98195, USA*]{}
Nowadays increasing importance has been given to [*quantum algebraic*]{} applications in several fields of physics [@qa]. In many cases, when the usual Lie algebras do not suffice to explain certain physical behaviors, quantum algebras are found to be successful mainly due to a free deformation parameter. In these cases, it is expected that a physical meaning be attached to the deformation parameter, but this is still a very challenging question. For an extensive review article on the subject, refer to [@bon]. In this work we are concerned with possible improvements that quantum algebras may add to boson expansions (or boson mappings).
In the literature it is easy to find situations in which fermion pairs can be replaced by bosons. This is normally performed with the help of boson mappings, that link the fermionic Hilbert space to another Hilbert space constructed with bosons. Of course boson mapping techniques are only useful when the Pauli Principle effects are somehow minimized. Historically boson expansion theories were introduced from two different points of view. The first one is the Beliaev - Zelevinsky - Marshalek (BZM) method [@bzm], which focuses on the mapping of operators by requiring that the boson images satisfy the same commutation relations as the fermion operators. In principle, all important operators can be constructed from a set of basic operators whose commutation relations form an algebra. The mapping is achieved by preserving this algebra and mapping these basic operators. The second one is the Marumori method [@marumori], which focuses on the mapping of state vectors. This method defines the operator in such a way that the matrix elements are conserved by the mapping and the importance of the commutation rules is left as a consequence of the requirement that matrix elements coincide in both spaces. The BZM and the Marumori expansions are equivalent at infinite order, which means that just with the proper mathematics one can go from one expansion to the other.
In this letter we concentrate on this second boson mapping method. First of all, we briefly outline the main aspects of the mapping from a fermionic space to a quantum deformed bosonic space. Once the deformation parameter is set equal to one, the usual boson expansion is recovered. Then the simple pairing interaction model is used as an example for our calculations. The pairing hamiltonian is exactly diagonalized and the results are compared with the ones obtained from the traditional boson and from the q-deformed boson expansions. In both cases we analyse the results for the second and fourth order hamiltonians.
In what follows we show a Marumori type deformed boson mapping. We start from an arbitrary operator $\hat O$ acting on a finite fermionic space. This fermionic Hilbert space with dimension $N+1$ is spanned by a basis formed by the states $\{ |n> \}$, with $n=0,
1,...N$. Hence,
$$\hat O = \sum_{n,n'=0}^N <n'|\hat O|n>\; |n'><n| \; . \[opf\]$$ In order to obtain the boson operators, we map $\hat O \rightarrow \hat O_B$ : $$\hat O_B = \sum_{n,n'=0}^N <n'|\hat O|n>\; |n')(n| \; , \[opb\]$$ where $|n)={1\over \sqrt{[n]!}}\, (b^{\dag})^n\, |0)$ are the deformed boson states [@dbs] with $[n]={q^n-1\over q-1}$, $[n]!=[n][n-1]\cdots[1]$ (and $[0]!=1$), and $[b,b^{\dag}]_q=bb^{\dag}-qb^{\dag}b =1$. Note that the usual brackets $<|>$ stand for fermionic states and the round brackets $(|)$ stand for bosonic states. From the above considerations, it is straightforward to check that $<m|\hat O|m'>=(m|\hat O_B|m')$. Therefore, we notice that the mapping is achieved by the equality between the matrix elements in the fermionic space and their counterparts in the bosonic space. As examples, we show the expressions for the $su(2)$ operators in the deformed bosonic space: $$(J_z)_B= \sum_{n=0}^{2j} \sum_{l=0}^{\infty}\, (-j+n)\, { (-1)^l\, q^{l(l-1)/2} \over [n]!\, [l]!}\, (b^{\dag})^{n+l}\, b^{n+l}, \[maz\]$$
$$(J_+)_B= \sum_{n=0}^{2j} \sum_{l=0}^{\infty}\, \sqrt{(2j-n)(n+1)}\; { (-1)^l\, q^{l(l-1)/2} \over \sqrt{[n+1]!\,[n]!}\; [l]!}\, (b^{\dag})^{n+l+1}\, b^{n+l}, \[map\]$$
$$(J_+ J_-)_B= \sum_{n=0}^{2j} \sum_{l=0}^{\infty}\, n\,(2j-n+1)\, { (-1)^l\, q^{l(l-1)/2} \over [n]!\, [l]!}\, (b^{\dag})^{n+l}\, b^{n+l}, \[pm\]$$
$$(J_- J_+)_B= \sum_{n=0}^{2j} \sum_{l=0}^{\infty}\, (2j-n)(n+1)\, { (-1)^l\, q^{l(l-1)/2} \over [n]!\, [l]!}\, (b^{\dag})^{n+l}\, b^{n+l}, \[mp\]$$
and $(J_-)_B=(J_+)_B^{\dag}$. In deducing the above expressions we have used that [@fiv]
$$|0)(0|=\, :\exp_q(-b^{\dag} b):\, =\sum_{l=0}^{\infty} { (-1)^l\, q^{l(l-1)/2} \over [l]!}\, (b^{\dag})^l\, b^l, \[vacuo\]$$
and we define the $su(2)$ basis as usual, i.e., $|n>=|j m>$, with $m=-j+n$.
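As a quick numerical consistency check of these definitions (a sketch added here for illustration, not part of the original letter), one can represent $b$ and $b^{\dag}$ as matrices on a truncated deformed Fock basis, using $b^{\dag}|n)=\sqrt{[n+1]}\,|n+1)$ and $b|n)=\sqrt{[n]}\,|n-1)$, and verify that $bb^{\dag}-qb^{\dag}b=1$ holds on every basis state except the highest one kept, where the truncation is felt; the basis size and the value of $q$ below are arbitrary choices.

```python
import numpy as np

def qnum(n, q):
    # q-number [n] = (q^n - 1)/(q - 1); the q -> 1 limit gives the ordinary integer n
    return float(n) if abs(q - 1.0) < 1e-12 else (q**n - 1.0) / (q - 1.0)

def ladder_ops(dim, q):
    # matrices of b and b^dag on the truncated deformed Fock basis |0), ..., |dim-1)
    bdag = np.zeros((dim, dim))
    for n in range(dim - 1):
        bdag[n + 1, n] = np.sqrt(qnum(n + 1, q))
    return bdag.T.copy(), bdag

q, dim = 0.862, 8
b, bdag = ladder_ops(dim, q)
comm = b @ bdag - q * bdag @ b
print(np.round(np.diag(comm), 10))  # all entries 1 except the last, which feels the truncation
```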
Next, we apply the q-deformed boson expansions to the pairing interaction model [@krieger], which consists of two N-fold degenerate levels, whose energy difference is $\epsilon$. The lower level has energy $-\epsilon/2$ and its single-particle states are usually labelled $j_1 m_1$ and the upper level has energy $\epsilon/2$ and its single-particle states are labelled $j_2 m_2$. The pairing hamiltonian reads [@cambia]:
$$H={\epsilon \over 2} \sum_m \left(a^{\dag}_{j_1 m} a_{j_1 m} - a^{\dag}_{j_2 m} a_{j_2 m}\right) -{G \over 4} \left( \sum_j \sum_{m} a^{\dag}_{j m} a^{\dag}_{j \bar m} \sum_{j'} \sum_{m'} a_{j' \bar m'} a_{j'm'} + h.c. \right) \[hpair1\]$$ where $a^{\dag}_{j \bar m}= (-1)^{j-m} a^{\dag}_{j\, -m}$. In what follows, the number of particles (which are fermions) $N$ will be even and $2j=N/2$. Introducing the quasispin $su(2)$ generators : $$S_+=S_-^{\dag}={1 \over2} \sum_{m_1} a^{\dag}_{j_1 m_1}
a^{\dag}_{j_1 \bar m_1} = \sqrt{\Omega} A^{\dag}_1$$ $$S_z={1\over 2} \sum_{m_1} a^{\dag}_{j_1 m_1} a_{j_1 m_1} -
{N \over 4}$$ $$L_+=L_-^{\dag}={1\over2} \sum_{m_2} a^{\dag}_{j_2 m_2}
a^{\dag}_{j_2 \bar m_2} = \sqrt{\Omega} A^{\dag}_2$$ $$L_z={1\over 2} \sum_{m_2} a^{\dag}_{j_2 m_2} a_{j_2 m_2} -
{N \over 4}$$ one sees that the pairing interaction has an underlying $su(2) \otimes su(2)$ algebra. With the help of these operators, eq. (\[hpair1\]) can be rewritten as
$$H=\epsilon\,(S_z-L_z)-{G \Omega \over 2} \left( (A_1^{\dag}+A_2^{\dag})(A_1+A_2)+ (A_1+A_2)(A_1^{\dag}+A_2^{\dag}) \right). \[hpair2\]$$
The basis of states used for the diagonalization of the above hamiltonian is $|S={N\over4}~~S_z~,~L={N\over4}~~L_z=-S_z>$ [@krieger], [@ours].
Deformation can be straightforwardly introduced by deforming the $su(2) \otimes su(2)$ algebra and this problem has already been tackled in ref. [@ours]. To check the validity of the boson expansion method proposed in this letter, we substitute eqs. (\[maz\]), (\[map\]), (\[pm\]) and (\[mp\]) into eq. (\[hpair2\]) and obtain for the fourth order hamiltonian:
$${H_4 \over \epsilon} = -{x \over 2} +
\left(1-{x (\Omega-1) \over 2 \Omega}\right) b^{\dag}_1 b_1 +
\left(-1-{x (\Omega-1) \over 2 \Omega}\right)b^{\dag}_2 b_2
-{x \over2} (b^{\dag}_1 b_2 + b^{\dag}_2 b_1)$$ $$+\left({2 \over [2]}-1\right)(b^{\dag}_1 b^{\dag}_1 b_1 b_1
- b^{\dag}_2 b^{\dag}_2 b_2 b_2)$$ $${-x \over 4 \Omega} \left(2 - 3 \Omega -{8 \over [2]} +
{5 \Omega \over [2]} + {\Omega \over [2]}q \right)
(b^{\dag}_1 b^{\dag}_1 b_1 b_1
+ b^{\dag}_2 b^{\dag}_2 b_2 b_2)$$ -[x 2 ]{}( - ) (b\^\_1 b\^\_2 b\_2 b\_2 + b\^\_1 b\^\_1 b\_1 b\_2 + h.c.) \[h4\] where $x=2 G \Omega / \epsilon$. The second order hamiltonian is easily read off from the above equation by omitting all terms containing four boson operators. Diagonalizing eq. (\[h4\]) is a simple task and for this purpose the basis used is |n\_1 n\_2>= [1 ]{} (b\^\_1)\^[n\_1]{} (b\^\_2)\^[n\_2]{} |0> \[basis\] and $$b^{\dag}_1 |n_1> = \sqrt{[n_1 +1]} |n_1+1>~~~,~~~
b_1 |n_1> = \sqrt{[n_1]} |n_1-1>$$ with similar expressions for the $b_2$ and $b^{\dag}_2$ operators. We finally obtain: $${H_4 \over \epsilon} |n_1 n_2> = \left(
-{x \over 2} +
\left({2 \over [2]}-1\right)([n_1][n_1-1] - [n_2][n_2-1]) \right.$$ $$+ \left(1-{x (\Omega-1) \over 2 \Omega}\right) [n_1] +
\left(-1-{x (\Omega-1) \over 2 \Omega}\right) [n_2]$$ $$\left.{-x \over 4 \Omega} \left(2 - 3 \Omega -{8 \over [2]} +
{5 \Omega \over [2]} + {\Omega \over [2]}q\right)
([n_1][n_1-1] + [n_2][n_2-1]) \right)
|n_1 n_2>$$ $$+(-{x \over2} \sqrt{[n_2][n_1+1]} -
{x \over 2 \Omega} ( \sqrt{2 \Omega(\Omega-1) \over [2]}
- \Omega ) ([n_2-1]+[n_1]) \sqrt{[n_2][n_1+1]}) |n_1+1~ n_2-1>$$ +(-[x 2]{} - [x 2 ]{} ( - ) (\[n\_1-1\]+\[n\_2\]) ) |n\_1-1 n\_2+1> \[hfinal\]
Eq. (\[hfinal\]) yields the energy spectrum for the deformed Marumori-type boson expansion. When $q$ is set equal to unity, the non-deformed spectrum is obtained. In what follows, we have chosen $x=1.0$ and the degeneracy $\Omega=20$. In figure 1 we show the ground state energy resulting from the exact diagonalization of eq. (\[hpair2\]) and the ground state energies obtained from the second and fourth order hamiltonians defined in eq. (\[h4\]) as a function of the number of pairs for $q=1$. One can see that the fourth order curve lies closer to the exact result than the second order curve, as expected, since the full expansion converges to the exact result.
We then compare the exact result with the deformed second and fourth order expansions and the results are plotted in figure 2. Setting $q=0.862$, we find that the second order expansion converges to the exact result and for $q=0.810$ the fourth order expansion also converges. This implies that the deformation parameter plays the same rôle as the omitted higher-order terms of the truncated expansion. One does not have to go beyond the deformed second order boson expansion to obtain the exact result, while the fourth order non-deformed expansion still gives rather poor results, as seen in figure 1. Therefore, the use of quantum algebras in boson expansion theories can be a very useful way of reproducing the result of the complete series. In this respect, we believe that further investigations, like the consideration of the BZM method and also of other model hamiltonians, deserve some effort in the future.
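The diagonalization described above is straightforward to reproduce. The following sketch (not the authors' code) simply transcribes the matrix elements of eq. (\[hfinal\]) for a fixed number of pairs $n_1+n_2$, with $\Omega=20$ and $x=1.0$ as above, and returns the lowest eigenvalue of the fourth order hamiltonian for a given $q$, which is the quantity plotted (together with the exact result) in the figures.

```python
import numpy as np

def qn(n, q):
    # q-number [n]; the q -> 1 limit gives the ordinary integer n
    return float(n) if abs(q - 1.0) < 1e-12 else (q**n - 1.0) / (q - 1.0)

def h4_matrix(npair, omega=20.0, x=1.0, q=1.0):
    # H_4 / epsilon of eq. (hfinal) in the basis |n1, n2 = npair - n1>
    two = qn(2, q)
    c4 = -x / (4 * omega) * (2 - 3 * omega - 8 / two + 5 * omega / two + omega * q / two)
    coff = -x / (2 * omega) * (np.sqrt(2 * omega * (omega - 1) / two) - omega)
    dim = npair + 1
    h = np.zeros((dim, dim))
    for i in range(dim):
        n1, n2 = i, npair - i
        h[i, i] = (-x / 2
                   + (2 / two - 1) * (qn(n1, q) * qn(n1 - 1, q) - qn(n2, q) * qn(n2 - 1, q))
                   + (1 - x * (omega - 1) / (2 * omega)) * qn(n1, q)
                   + (-1 - x * (omega - 1) / (2 * omega)) * qn(n2, q)
                   + c4 * (qn(n1, q) * qn(n1 - 1, q) + qn(n2, q) * qn(n2 - 1, q)))
        if i + 1 < dim:  # coupling |n1 n2> <-> |n1+1 n2-1>; the matrix is symmetric
            amp = np.sqrt(qn(n2, q) * qn(n1 + 1, q))
            h[i + 1, i] = h[i, i + 1] = -x / 2 * amp + coff * (qn(n2 - 1, q) + qn(n1, q)) * amp
    return h

for q in (1.0, 0.810):
    e0 = np.linalg.eigvalsh(h4_matrix(npair=5, q=q))[0]
    print(q, e0)  # ground state energy E_0 / epsilon for 5 pairs
```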
This work has been partially supported by CNPq.
Figure Captions
===============
Figure 1) The ground state energy $E_0$ is plotted as a function of the number of pairs for the exact result (solid line), the second order expansion result (short-dashed line) and for the fourth order result (long-dashed line) for $q=1$, the interaction strength $x=1.0$ and the degeneracy $\Omega=20$.
Figure 2) The ground state energy $E_0$ is plotted as a function of the number of pairs for the exact result with $q=1$ (solid line), the second order expansion result with $q=0.862$ (dashed line) and for the fourth order result with $q=0.810$ (dot-dashed line) for the interaction strength $x=1.0$ and the degeneracy $\Omega=20$.
[99]{}
V.G. Drinfeld, in [*Proceedings of the International Congress of Mathematicians*]{}, ed. A.M. Gleason (American Mathematical Society, Providence, RI, 1987), p.798; M. Jimbo, Lett. Math. Phys. 11 (1986) 247; L.C. Biedenharn, J. Phys. A 22 (1989) L873; A.J. Macfarlane, J. Phys. A 22 (1989) 4581
D. Bonatsos, C. Daskaloyannis, P. Kolokotronis and D. Lenis, preprint [*Quantum Algebras in Nuclear Structure*]{}, in press
S.T. Beliaev and V.G. Zelevinsky, Nucl. Phys. 39 (1962) 582 ; E.R. Marshalek, Nucl. Phys. A161 (1971) 401, A224 (1974) 221, 245.
T. Marumori, M. Yamamura and A. Tokunaga, Progr. Theor. Phys. 31 (1964) 1009; T. Marumori, M. Yamamura, A. Tokunaga and T. Takada, Progr. Theor. Phys. 32 (1964) 726.
M. Arik and D.D. Coon, J. Math. Phys. 17 (1976) 524; M.R. Kibler, in [*Proceedings of the Second International School of Theoretical Physics*]{}, ed. W. Florek, D. Lipinski and J. Lulek (World Scientific, 1993)
D.I. Fivel, J. Phys. A 24 (1991) 3575
S.J. Krieger and K. Goeke, Nucl. Phys. A 234 (1974) 269
M.C. Cambiaggio, G.G. Dussel and M. Saraceno, Nucl. Phys. A415 (1984) 70
S.S. Avancini and D.P. Menezes, J. Phys. A 26 (1993) 6261
---
abstract: |
We study combinatorial and algorithmic questions around minimal feedback vertex sets in tournament graphs.
On the combinatorial side, we derive strong upper and lower bounds on the maximum number of minimal feedback vertex sets in an $n$-vertex tournament. We prove that every tournament on $n$ vertices has at most ${1.6740}^n$ minimal feedback vertex sets and that there is an infinite family of tournaments, all having at least $1.5448^n$ minimal feedback vertex sets. This improves and extends the bounds of Moon (1971).
On the algorithmic side, we design the first polynomial space algorithm that enumerates the minimal feedback vertex sets of a tournament with polynomial delay. The combination of our results yields the fastest known algorithm for finding a minimum size feedback vertex set in a tournament.
author:
- 'Serge Gaspers[^1]'
- 'Matthias Mnich[^2]'
bibliography:
- 'short.bib'
title: 'Feedback Vertex Sets in Tournaments[^3]'
---
#### Keywords.
Algorithms and data structures, tournaments, feedback vertex set, polynomial delay, combinatorial bounds.
Introduction {#sec:introduction}
============
A tournament $T=(V,A)$ is a directed graph with exactly one arc between every pair of vertices. A feedback vertex set ([FVS]{}) of $T$ is a subset of its vertices whose deletion makes $T$ acyclic. A minimal [FVS]{}of $T$ is a [FVS]{}of $T$ that is minimal with respect to vertex-inclusion. The complement of a minimal [FVS]{} $F$ induces a maximal acyclic subtournament whose unique vertex of in-degree zero is a “Banks winner” [@Banks1985]: identifying the vertices of $T$ with candidates in a voting scheme and arcs indicating preference of one candidate over another, the *Banks winner* of $T[V\setminus F]$ is the candidate collectively preferred to every other candidate in $V\setminus F$. Banks winners play an important role in social choice theory.
#### Extremal Combinatorics.
We denote the number of minimal [FVSs]{}in a tournament $T$ by $f(T)$, and the maximum $f(T)$ over all $n$-vertex tournaments by $M(n)$. The letter “M” was chosen in honor of Moon who in 1971 proved [@Moon1971] that $$1.4757^n \leq M(n) \leq 1.7170^n $$ for large $n$. Our combinatorial main result are the stronger bounds $$1.5448^n \leq M(n) \leq {1.6740}^n \enspace .$$ To prove our new lower bound on $M(n)$, we construct an infinite family of tournaments all having $21^{n/7} > 1.5448^n$ minimal [FVSs]{}. To prove our new upper bound on $M(n)$, we bound the maximum of a convex function bounding $M(n)$ from above, and otherwise rely on case distinctions and recurrence relations.
For general directed graphs, no non-trivial upper bound on the number of minimal [FVSs]{}is known. For undirected graphs, Fomin et al. [@FominEtAl2008] show that any undirected graph on $n$ vertices contains at most $1.8638^n$ minimal [FVSs]{}, and that infinitely many graphs have $105^{n/10} > 1.5926^n$ minimal [FVSs]{}. Lower bounds of roughly $\log n$ on the size of a maximum-size acyclic subtournament have been obtained by Reid and Parker [@ReidParker1970] and Neumann-Lara [@NeumannLara94]. Other bounds on minimal or maximal sets with respect to vertex-inclusion have been obtained for dominating sets [@FominGPS08], bicliques [@GaspersKL08], separators [@FominV08], potential maximal cliques [@FominV10], bipartite graphs [@ByskovMS05], $r$-regular subgraphs [@GuptaRS06], and, of course, independent sets [@MillerM60; @MoonM65]. The increased interest in exponential time algorithms over the last few years has given new importance to such bounds, as the enumeration of the corresponding objects may be used in exponential time algorithms to solve various problems; see, for example [@BjorklundHK09; @Byskov04; @Eppstein03; @FominV10; @Lawler76; @RamanSS07].
#### Enumeration.
An algorithm by Schwikowski and Speckenmeyer [@SchwikowskiEtAl2002] lists the minimal [FVSs]{} of a directed graph $G$ with polynomial delay, by traversing a hypergraph whose vertices are bijectively mapped to minimal [FVSs]{} of $G$. Unfortunately, the Schwikowski–Speckenmeyer algorithm may use exponential space, and it is not known whether the minimal FVS problem admits a polynomial-delay enumeration algorithm with polynomially bounded space complexity in directed graphs. Our main algorithmic result provides such an enumeration algorithm for the family of *tournaments*. Our algorithm is inspired by the algorithm of Tsukiyama et al. for the (conceptually simpler) enumeration of maximal independent sets [@TsukiyamaEtAl1977]. It is based on iterative compression, a technique for parameterized [@ReedSV04] and exact algorithms [@FominGKLS08]. We thereby positively answer the question of Fomin et al. [@FominGKLS08] whether the technique can be applied to other algorithmic areas.
#### Exact Algorithms.
In the third [@Woeginger2008] of a series [@Woeginger03; @Woeginger04; @Woeginger2008] of very influential surveys on exact exponential time algorithms, Woeginger observes that Moon's upper bound on $M(n)$ provides an upper bound on the overall running time of the enumeration algorithm of Schwikowski and Speckenmeyer. He explicitly asks for a faster algorithm for finding a minimum-size feedback vertex set of a tournament. Our new bound yields a time complexity of $O(1.6740^n)$. Unlike the upper bound proofs for other [@ByskovMS05; @FominEtAl2008; @FominGPS08; @FominV08; @FominV10; @GaspersKL08; @GuptaRS06; @MillerM60; @MoonM65] minimal or maximal sets with respect to vertex inclusion, for minimal [FVSs]{} in tournaments no known (non-trivial) proof readily translates into a polynomial-space branching algorithm. Due to its space complexity, which differs from its time complexity by only a polynomial factor, the Schwikowski–Speckenmeyer algorithm has only limited practicability [@Woeginger2008]. With our new enumeration algorithm, however, we obtain a polynomial-space $O(1.6740^n)$-time algorithm that finds a minimum-size feedback vertex set of a tournament, and even enumerates all minimal ones. Dom et al. [@DomEtAl2006] independently answered Woeginger's question by constructing an iterative-compression algorithm that solves only the optimization version of the problem. However, the running time of their algorithm grows at least as $1.708^n$, and hence their result is inherently weaker than ours.
#### Organization of the paper.
Preliminaries are provided in Section \[sec:preliminaries\]. In Section \[sec:minimum\_number\], we answer how many distinct minimal [FVSs]{}a (strong) tournament on $n$ vertices has *at least*. Section \[sec:lowerbound\] proves the lower bound on $M(n)$, and Section \[sec:upperbound\] gives the upper bound. We conclude with the polynomial-space polynomial-delay enumeration algorithm in Section \[sec:polydelaypolyspace\]. The main result of the paper is formulated in Corollary \[thm:minfvspolyspace\].
Preliminaries {#sec:preliminaries}
=============
Let $T = (V,A)$ be a tournament. For a vertex subset $V'\subseteq V$, the tournament $T[V']$ induced by $V'$ is called a *subtournament* of $T$. For each vertex $v\in V$, its *in-neighborhood* and *out-neighborhood* are defined as $N^-(v)=\{u\in V~|~(u,v)\in A\}$ and $N^+(v)=\{u\in V~|~(v,u)\in A\}$, respectively. If there is an arc $(u,v)\in A$ then we say that $u$ *beats* $v$ and write $u \rightarrow v$. A tournament $T$ is *strong* if there exists a directed path between any two vertices. A non-strong tournament $T$ has a unique factorization $T = S_1 + \hdots + S_r$ into strong subtournaments $S_1,\hdots,S_r$, where every vertex $u\in V(S_k)$ beats all vertices $v\in V(S_\ell)$, for $1\leq k < \ell\leq r$. For $n\in\mathbb N$ let $\mathcal T_n$ denote the set of tournaments with $n$ vertices and let $\mathcal T^*_n$ denote the set of strong tournaments on $n$ vertices.
The *score* of a vertex $v\in V$ is the size of its out-neighborhood, and denoted by $s_v(T)$ or $s_v$ for short. Consider a labeling $1,\hdots,n$ of the vertices of $T$ such that their scores are non-decreasing, and associate with $T$ the *score sequence* $s(T)=(s_1,\hdots,s_n)$. If $T$ is strong then $s(T)$ satisfies the *Landau inequalities* [@HararyMoser1966; @Landau1953]: $$\begin{aligned}
\sum_{v=1}^k s_v &\geq \binom{k}{2}+1~\mbox{ for all }~k=1,\hdots,n-1, \text{ and}
\label{eqn:sbound2}\displaybreak[0]\\
\sum_{v=1}^n s_v &= \binom{n}{2}
\label{eqn:sbound3} \enspace .\end{aligned}$$ For every non-decreasing sequence $s$ of positive integers satisfying conditions –, there exists a tournament whose score sequence is $s$ [@Landau1953].
Let $L$ be a set of non-zero elements from the ring $\mathbb Z_n$ of integers modulo $n$ such that for all $i\in \mathbb Z_n$ exactly one of $+i$ and $-i$ belongs to $L$. The tournament $T_L = (V_L,A_L)$ with $V_L = \{1,\hdots,n\}$ and $A_L = \{(i,j)\in V_L\times V_L ~|~(j-i)\bmod n~\in L\}$ is *the circular n-tournament induced by* $L$. A *triangle* is a tournament of order $3$. The cyclic triangle is denoted $C_3$.
A *[FVS]{}* $F$ of a tournament $T=(V,A)$ is a subset of vertices, such that $T[V\setminus F]$ has no directed cycle. It is *minimal* if it does not contain a [FVS]{}of $T$ as a proper subset. Let $\mathcal F(T)$ be the collection of minimal [FVSs]{}of $T$; its cardinality is denoted by $f(T)$. A *minimum [FVS]{}* is a [FVS]{}with a minimum number of vertices.
Acyclic tournaments are sometimes called *transitive*; the (up to isomorphism unique) transitive tournament on $n$ vertices is denoted $TT_n$. Let $\tau$ be the unique topological order of the vertices of $TT_n$ such that $\tau(u) < \tau(v)$ if and only if $u$ beats $v$. For such an order $\tau$ and integer $i\in\{1,\hdots,n\}$ the subsequence of the first $i$ values of $\tau$ is denoted $\tau_i(V(TT_n))=(\tau^{-1}(1),\hdots,\tau^{-1}(i))$; call $\tau_1(V(TT_n))$ the *source* of $TT_n$. For a minimal [FVS]{}$F$ of a tournament $T$ the subtournament $T[V\setminus F]$ is a *maximal transitive subtournament* of $T$ and $V \setminus F$ is a *maximal transitive vertex set*.
Minimum Number of Minimal FVSs {#sec:minimum_number}
==============================
In this section we analyze the minimum number of minimal [FVSs]{}in tournaments.
Let the function $m:\mathbb N\rightarrow\mathbb N, n\mapsto \min_{T\in\mathcal T_n}f(T)$ count the minimum number of minimal [FVSs]{}over all tournaments of order $n$. Since a minimal [FVS]{}always exists, $m(n)\geq 1$ for all positive integers $n$. This bound is attained by the transitive tournaments $TT_n$ of all orders $n$.
\[thm:fvsstrongdecomposition\] If $T = S_1 + \hdots + S_r$ is the factorization of a tournament $T$ into strong subtournaments $S_1,\hdots,S_r$, then $f(T)=f(S_1)\cdot\hdots\cdot f(S_r)$.
Hence from now on we consider only strong tournaments (on at least $3$ vertices) and define $m^*:\mathbb N\setminus\{1,2\}\rightarrow\mathbb N, n\mapsto \min_{T\in\mathcal T^*_n}f(T)$.
\[thm:constmstar\] The function $m^*$ is constant: $m^*(n) = 3$ for all $n\geq 3$.
Let $T\in\mathcal T^*_n$ be a strong tournament. We show that $f(T)\geq 3$. As $T$ is strong, it contains some cycle and thus some cyclic triangle $C$, with vertices $v_1,v_2,v_3$. For $i=1,2,3$, define the vertex sets $W_i=\{v_i,v_{(i+1)\mod 3}\}$. Every set $W_i$ can be extended to a maximal transitive vertex set $W_i'$ of $T$. Note that for $i=1,2,3$ and $j\in\{1,2,3\}\setminus\{i\}$, we have $v_{(i+2)\mod 3}\in W_j'\setminus W_i'$. Hence, there are three maximal transitive subtournaments of $T$ whose complements form three minimal [FVSs]{}of $T$. Consequently, $m^*(n)\geq 3$ for all $n\geq 3$.
To complete the proof, construct a family $\{U_n\in\mathcal T^*_n~|~n\geq 3\}$ of strong tournaments with exactly three minimal [FVSs]{}. Set $U_3$ equal to the cyclic triangle. For $n\geq 4$, build the tournament $U_n$ as follows: start with the transitive tournament $TT_{n-2}$, whose vertices are labeled $1,\hdots,n-2$ by decreasing scores. Then add two special vertices $u_1,u_2$ which are connected by an arbitrarily oriented arc. For $i\in\{1,2\}$, add arcs from all vertices $2,\hdots,n-2$ to $u_i$. Finally, connect vertex $1$ to $u_i$ by an arc $(u_i,1)$, for $i=1,2$. The resulting tournament $U_n$, depicted in Fig. \[fig:fewminimalfvs\], has exactly three minimal [FVSs]{}, namely $\{u_1,u_2\},\{1\}$ and $\{2,\hdots,n-2\}$.
![The tournament $U_n\in\mathcal T^*_n$ with exactly three minimal [FVSs]{}.[]{data-label="fig:fewminimalfvs"}](ufamily2.pdf){width="\textwidth"}
![A tournament $pq(T')\in\mathcal T^*_n$ with $f(pq(T'))=2f(T')+1$.[]{data-label="fig:lowerbeta"}](moonlower.pdf)
Lower Bound on the Maximum Number of Minimal FVSs {#sec:lowerbound}
=================================================
We prove a lower bound of $21^{n/7} > 1.5448^n$ on the maximum number of minimal [FVSs]{}of tournaments with $n$ vertices.
Formally, we will bound from below the values of the function $M(n)$ mapping integers $n$ to $\max_{T\in\mathcal T_n}f(T)$. By convention, set $M(0)=1$. Note that $M$ is monotonically non-decreasing on its domain: given any tournament $T\in\mathcal T_n$ and any vertex $v\in V(T)$, for every minimal [FVS]{}$F\in\mathcal F(T[V(T)\setminus\{v\}])$ either $F\in\mathcal F(T)$ or $F\cup\{v\}\in\mathcal F(T)$. As $T$ and $v$ are arbitrary it follows that $M(n)\geq M(n-1)$. We will now show that there is an infinite family of tournaments on $n=7k$ vertices, for any $k\in\mathbb N$, with $21^{n/7} > 1.5448^n$ minimal [FVSs]{}, improving upon Moon’s [@Moon1971] bound of $1.4757^n$. Let $ST_7$ denote the Paley digraph of order 7, i.e. the circular $7$-tournament induced by the set $L = \{1,2,4\}$ of quadratic residues modulo 7. All maximal transitive subtournaments of $ST_7$ are transitive triangles, of which there are exactly 21, as each vertex is the source of 3 distinct transitive triangles. Thus, all minimal [FVSs]{}for $ST_7$ are minimum [FVSs]{}. We remark that $ST_7$ is the unique $7$-vertex tournament without any $TT_4$ as subtournament [@ReidParker1970].
\[thm:manyminimalfvs\] There exists an infinite family of tournaments with $21^{n/7}$ minimal [FVSs]{}.
Let $k\in\mathbb N$ and form the tournament $T_0=ST_7+\hdots+ST_7$ from $k$ copies of $ST_7\in\mathcal T^*_7$. Then $T_0\in\mathcal T_n$ for $n=7k$, and the number of minimal [FVSs]{}in $T_0$ is $f(T_0)=f(ST_7)^k=21^k=21^{n/7}$.
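This count is small enough to verify by brute force. The sketch below (an illustration added here, not part of the paper) builds $ST_7$ from the quadratic residues $\{1,2,4\}$ modulo 7, enumerates all maximal transitive vertex sets, and confirms that there are exactly 21 of them, each of size 3; their complements are the 21 minimal [FVSs]{}, each of size 4.

```python
from itertools import combinations, permutations

def paley7():
    # arcs of ST_7: i beats j iff (j - i) mod 7 is a quadratic residue in {1, 2, 4}
    residues = {1, 2, 4}
    return {(i, j) for i in range(7) for j in range(7) if i != j and (j - i) % 7 in residues}

def is_transitive(vertex_set, arcs):
    # a set induces a transitive subtournament iff some ordering has all arcs pointing forward
    return any(all((seq[a], seq[b]) in arcs
                   for a in range(len(seq)) for b in range(a + 1, len(seq)))
               for seq in permutations(vertex_set))

def maximal_transitive_sets(n, arcs):
    transitive = [set(s) for k in range(1, n + 1)
                  for s in combinations(range(n), k) if is_transitive(s, arcs)]
    return [t for t in transitive if not any(t < u for u in transitive)]  # inclusion-maximal

mts = maximal_transitive_sets(7, paley7())
print(len(mts), {len(t) for t in mts})  # expected output: 21 {3}
```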
Upper Bound on the Maximum Number of Minimal FVSs {#sec:upperbound}
=================================================
We give an upper bound of $\beta^n$, where $\beta ={1.6740}$, on the maximum number of minimal [FVSs]{}in any tournament $T\in\mathcal T_n$, for any positive integer $n$. This improves the bound of $1.7170^n$ by Moon [@Moon1971]. Instead of minimal [FVSs]{}we count maximal transitive subtournaments, and with respect to Observation \[thm:fvsstrongdecomposition\] we count the maximal transitive subtournaments of *strong* tournaments.
We start with three properties of maximal transitive subtournaments. First, for a strong tournament $T = (V,A)$ with score sequence $s =(s_1,\hdots,s_n)$ the following holds: if $TT_k = (V',A')$ is a maximal transitive subtournament of $T$ with $\tau_1(V') = (t)$ then $T[V'\setminus\{t\}]$ is a maximal transitive subtournament of $T[N^+(t)]$. Hence $f(T)\leq\sum_{v=1}^nM(s_v)$, where $s_v\leq n-2$ for all $v\in V$. This allows us to effectively bound $f(T)$ via a recurrence relation.
Second, there cannot be too many vertices with large score.
\[thm:nummaxscore\] For $n\geq 8$ and $k\in\{0,1,2\}$, any strong tournament $T\in\mathcal T^*_n$ has at most $2(k+1)$ vertices of score at least $n-2-k$.
Fix some strong tournament $T\in\mathcal T^*_n$ and $k\in\{0,1,2\}$. Suppose for contradiction that $T$ contains $2k+3$ vertices with score at least $n-2-k$. Then the Landau inequalities and imply the contradiction $$\begin{aligned}
2\binom{n}{2} & = & 2\left(\sum_{v=1}^{n-(2k+3)}s_v+\sum_{v=n-(2k+2)}^ns_v\right)\\
& \geq & 2\left(\binom{n-(2k+3)}{2}+1+(2k+3)(n-2-k)\right) = n^2-n+2.\end{aligned}$$
For $n\leq 7$, we can explicitly list the strong $n$-vertex tournaments for which the Lemma fails: the cyclic triangle for $k=0$, the tournaments $RT_5,ST_6$ for $k=1$ and $ST_7$ for $k=2$. $RT_5$ is the regular tournament of order 5 and $ST_6$ is the tournament obtained by arbitrarily removing some vertex from $ST_7$ (defined in the previous section) and all incident arcs.
Third, let $T'$ be a tournament obtained from a tournament $T$ by reversing all arcs of $T$. Then, $f(T) = f(T')$, whereas the score $s_v(T)$ of each vertex $v$ turns into $s_v(T') = n - 1 - s_v(T)$. This implies that analyzing score sequences with maximum score $s_n\geq n-1-c$ for some constant $c$ is symmetric to analyzing score sequences with minimum score $s_1\leq c$.
Our proof that any tournament on $n$ vertices has at most $\beta^n$ maximal transitive subtournaments consists of several parts. We start by proving the bound for tournaments with few vertices. The inductive part of the proof first considers tournaments with large maximum score (and symmetrically small minimum score), and then all other tournaments.
We begin the proof by considering tournaments with up to 10 vertices. For $n\leq 4$ exact values for $M(n)$ were known before [@Moon1971]. For $n=5,\hdots,9$ we obtained exact values for $M(n)$ with the help of a computer. For these values the extremal tournaments obey the following structure: pick a strong tournament $T'\in\mathcal T^*_{n-2}$ and construct the strong tournament $pq(T') \in \mathcal T^*_n$ by attaching two vertices to $T'$ as in Fig. \[fig:lowerbeta\]; namely add vertices $p$ and $q$ to $T'$, and arcs $q \rightarrow p$, and $p \rightarrow t$, $t \rightarrow q$ for each vertex $t$ in $T'$. Then $f(pq(T'))=2f(T')+1$.
For $n = 5$, there are exactly two non-isomorphic strong tournaments attaining the maximum, namely $QT_5\cong pq(C_3)$ and $RT_5\in\mathcal T^*_5$. For these, $f(QT_5)=f(RT_5)=M(5)=2\cdot 3+1=7$. For $n = 6$, $ST_6$ is the unique tournament from $\mathcal T_6$ with $f(ST_6)=M(6)=12$ minimal [FVSs]{}. For $n = 7$ the previous section showed $f(ST_7)=21$, and in fact $ST_7$ is the unique $7$-vertex tournament with $M(7)=21$ minimal [FVSs]{}. For $n \in\{8,9\}$, $ST_n \cong pq(ST_{n-2})$; then $f(ST_n)=M(n)$. Table \[tab:smallextremal\] summarizes that for $n\leq 9$, $M(n)\leq\beta^n$.
------- -------------- --------------------------------- -------------------------------------
$n\;$ $M(n)\;$ $\quad M(n)^{1/n} \approx\quad$ $T\in\mathcal T_n:f(T)=M(n)$
1 $ 1\;$ $ 1.00000\quad$ $T\in\mathcal T_1$
2 $ 1\;$ $ 1.00000\quad$ $T\in\mathcal T_2$
3 $ 3\;$ $ 1.44225\quad$ $T\in\mathcal T_3\setminus\{TT_3\}$
4 $ 3\;$ $ 1.31607\quad$ $T\in\mathcal T_4\setminus\{TT_4\}$
5 $ 7\;$ $ 1.47577\quad$ $QT_5\cong pq(C_3),RT_5$
6 $ 12\;$ $ 1.51309\quad$ $ST_6\cong ST_7-\{1\}$
7 $ 21\;$ $ 1.54486\quad$ $ST_7$
8 $ 25\;$ $ 1.49535\quad$ $ST_8\cong pq(ST_6)$
9 $ 43\;$ $ 1.51879\quad$ $ST_9\cong pq(ST_7)$
------- -------------- --------------------------------- -------------------------------------
: Extremal tournaments of up to 9 vertices
\[tab:smallextremal\]
Next, we bound $M(10)$ by means of $M(n)$ for $n\leq 9$. Let $W$ be a maximal transitive vertex set of $T\in \mathcal T^*_{10}$. Then either $v^* \in W$ or $v^* \notin W$, where $v^*$ is a vertex with score $s_{10}$. There are at most $M(s_{10}) \le M(9)$ maximal transitive vertex sets $W$ such that $v^* \in W$ and at most $M(9)$ such sets $W$ for which $v^* \notin W$. As $(2M(9))^{1/10}=86^{1/10}< 1.5612$, the proof follows for all tournaments with at most 10 vertices.
For the rest of this section we consider tournaments with $n\geq 11$ vertices and give a complete proof of the upper bound for this range.
Let $T = (V,A)$ be a strong tournament on $n\geq 11$ vertices and let $s = (s_1,\hdots,s_n)$ be the score sequence of $T$. We will show that $f(T)\leq \beta^n$. The proof considers four main cases and several subcases with respect to the minimum and maximum score of the tournament. To avoid a cumbersome nesting of cases, within a given case we assume that none of the earlier cases applies. By $W$ we denote a maximal transitive vertex set of $T$.\
**Case 1: $\mathbf{s_n = n-2}$.** Let $b$ be the unique vertex beating vertex $n$.\
If $b \notin W$ then $\tau_1(W)=(n)$; there are at most $M(s_n)=M(n-2)$ such $W$.\
If $b\in W$ and $n\in W$, then $\tau_1(W \setminus \{b\})=(n)$ as no vertex except $b$ beats $n$. So, $\tau_2(W)=(b,n)$ and there are at most $M(s_b-1)$ such $W$. For the last possibility, where $b\in W$ and $n\notin W$, note that $W$ contains at least one in-neighbor of $b$, otherwise $W$ would not be maximal, as $n$ could be added. We consider four subcases depending on the score of $b$.
Case 1.1: $\mathbf{s_b = n-2}$.
: Let $c$ be the unique vertex beating $b$. As at most 2 vertices have score $n-2$ by Lemma \[thm:nummaxscore\], $s_c \le n-3$. We have that $c \in W$, otherwise $W$ would not be maximal as $W\cup \{n\}$ induces a transitive subtournament of $T$. As $b$ and its unique in-neighbor $c$ are in $W$, $\tau_2(W)=(c,b)$. There are at most $M(s_c-1) \le M(n-4)$ such $W$. In total, $f(T)\leq M(n-2) + M(n-3) + M(n-4)\leq \beta^{n-4} + \beta^{n-3} + \beta^{n-2}$ which is at most $\beta^n$ because $\beta \ge 1.4656$.
In the three remaining subcases, all in-neighbors of $b$ have score at most $n-3$: if $c_i\in N^-(b)$ had score $n-2$, then Case 1.1 would apply with $n:=c_i$ and $b:=n$.
Case 1.2: $\mathbf{s_b = n-3}$.
: Let $N^-(b):=\{c_1,c_2\}$ such that $c_1 \rightarrow c_2$. Then either $\tau_1(W) = (c_1)$ or $\tau_1(W) = (c_2)$; there are at most $2M(n-3)$ such $W$. It follows $f(T)\leq M(n-2) + M(n-4) + 2M(n-3)\leq \beta^{n-4} + 2\beta^{n-3} + \beta^{n-2} \leq \beta^n$ as $\beta \ge 1.6181$.
Case 1.3: $\mathbf{s_b = n-4}$.
: Let $N^-(b):=\{c_1,c_2,c_3\}$. Observe that at most $2$ vertices among $N^-(b)$ have score $n-3$, otherwise $T$ is not strong as $N^-(b)\cup \{b,n\}$ induce a strong component. Either $\tau_1(W) = (c_1)$ or $\tau_1(W) = (c_2)$ or $\tau_1(W)=(c_3)$; there are at most $2M(n-3)+M(n-4)$ such $W$. Thus, $f(T)\leq M(n-2) + M(n-5) + 2M(n-3) + M(n-4)\leq \beta^{n-5} + \beta^{n-4} + 2\beta^{n-3} + \beta^{n-2} \leq \beta^n$ as $\beta \ge 1.6664$.
Case 1.4: $\mathbf{s_b \leq n-5}$.
: Then there are at most $M(n-1)$ subtournaments not containing $n$. It follows $f(T)\leq M(n-2) + M(n-6) + M(n-1)\leq \beta^{n-6} + \beta^{n-2} + \beta^{n-1} \leq \beta^n$ as $\beta \ge 1.6737$.
**Case 2: $\mathbf{s_n = n-3}$.** Let $b_1,b_2$ be the two vertices beating $n$ such that $b_1\rightarrow b_2$. The tree in Fig. \[fig:searchtree\] pictures our case distinction. Its leaves correspond to six different cases, numbered (1)–(6), for membership or non-membership of $n$, $b_1$ and $b_2$ in some maximal transitive vertex set $W$ of $T$. The cases corresponding to leafs (2) and (4) will be considered later. Let us now bound the number of possible $W$ for the other cases (1), (3), (5) and (6).\
![\[fig:searchtree\]Different possibilities for a maximal transitive vertex set $W$.](searchtree.pdf)
\[cl:1356\] Among all maximal transitive vertex sets $W$ of $T$,
- at most $M(n-3)$ are such that $b_1 \notin W$ and $b_2 \notin W$,
- at most $M(s_{b_2}-1)$ are such that $b_1 \notin W$, $b_2 \in W$ and $n \in W$,
- at most $M(s_{b_1}-2)$ are such that $b_1 \in W$, $b_2 \notin W$ and $n\in W$, and
- at most $M(s_{b_1}-2)$ are such that $b_1 \in W$, $b_2 \in W$ and $n\in W$.
If (1) $b_1 \notin W$ and $b_2 \notin W$, then $n\in W$ by maximality of $W$ and $n$ is the source of $T[W]$ as no vertex in $W$ beats $n$. Thus, there are at most $M(s_n)=M(n-3)$ such $W$. If (3) $b_1 \notin W$, $b_2 \in W$ and $n \in W$, then $\tau_1(W\setminus \{b_2\})=(n)$. Therefore, $\tau_2(W)=(b_2,n)$ and there are at most $M(s_{b_2}-1)$ such $W$. If (5) $b_1 \in W$, $b_2 \notin W$ and $n\in W$, then $\tau_2(W)=(b_1,n)$, and as $b_1$ beats $b_2$, there are at most $M(s_{b_1}-2)$ such $W$. If (6) $b_1 \in W$, $b_2 \in W$ and $n\in W$, then $\tau_3(W)=(b_1,b_2,n)$, and there are at most $M(s_{b_1}-2)$ such $W$.
To bound the number of subtournaments corresponding to the conditions in leafs (2) and (4), we will consider five subcases depending on the scores of $b_1$ and $b_2$. If $b_1$ and $b_2$ have low scores (Cases 2.4 and 2.5), there are few maximal transitive subtournaments of $T$ corresponding to the conditions in the leafs (3), (5) and (6). Then, it will be sufficient to group the cases (2) and (4) into one case where $n \notin W$ and to note that there are at most $M(n-1)$ such subtournaments. Otherwise, if the scores of $b_1$ and $b_2$ are high (Cases 2.1 – 2.3), we use that in (2), some vertex of $N^-(b_2)$ is the source of $W$. If this were not the case, $W$ would not be maximal as $W \cup \{n\}$ would induce a transitive tournament. Similarly, in (4) some vertex of $N^-(b_1)$ is the source of $W$ if $b_2 \notin W$.
Let $c_1,\hdots,c_{|N^-(b_1)|}$ be the in-neighbors of $b_1$ such that $c_i \rightarrow c_{i+1}$ for all $i \in \{1,\hdots,\linebreak |N^-(b_1)|-1\}$ (every tournament has a Hamiltonian path [@Redei34]) and let $d_1,\hdots,d_{|N^-(b_2)|-1}$ be the in-neighbors of $b_2$ besides $b_1$ such that $d_i \rightarrow d_{i+1}$ for all $i \in \{1,\hdots,|N^-(b_2)|-2\}$.
Let us first bound the number of subtournaments satisfying the conditions of (2) depending on $s_{b_2}$.
\[cl:2.n-3\] If $s_{b_2}=n-3$, there are at most $M(s_{d_1}-1)$ maximal transitive vertex sets $W$ such that $b_1 \notin W$, $b_2 \in W$ and $n \notin W$.
As mentioned above, some in-neighbor of $b_2$ is the source of $W$. As $s_{b_2}=n-3$, $N^-(b_2)\setminus \{b_1\} = \{d_1\}$. Thus, $\tau_2(W)=(d_1,b_2)$ and there are at most $M(s_{d_1}-1)$ such tournaments.
\[cl:2.n-4\] If $s_{b_2}=n-4$, there are at most $M(n-5)+2M(s_{d_1}-2)$ maximal transitive vertex sets $W$ such that $b_1 \notin W$, $b_2 \in W$ and $n \notin W$.
If $d_1 \notin W$ then $\tau_2(W)=(d_2,b_2)$ and there are at most $M(s_{b_2}-1) = M(n-5)$ such $W$. Otherwise, $d_1 \in W$ and either $d_2 \notin W$ in which case $\tau_2(W)=(d_1,b_2)$, or $d_2 \in W$ in which case $\tau_3(W)=(d_1,d_2,b_2)$. There are at most $2M(s_{d_1}-2)$ such $W$.
The next step is to bound the number of subtournaments satisfying the conditions of (4) depending on $s_{b_1}$.
\[cl:4.n-3\] If $s_{b_1}=n-3$, the number of maximal transitive vertex sets $W$ such that $b_1 \in W$ and $n \notin W$ is at most $2M(n-5)+M(n-4)$ if $b_2$ beats no vertex of $N^-(b_1)$, and otherwise at most $2M(n-5)+M(n-4)+M(n-6)$ if $s_{b_2}=n-3$ and at most $2M(n-5)+M(n-4)+3M(n-7)$ if $s_{b_2}=n-4$.
If $N^-(b_1)\cap W \not = \emptyset$, then $c_1$ or $c_2$ is the source of $W$. The number of subsets $W$ such that $c_1 \notin W$, and thus $\tau_2(W)=(c_2,b_1)$, is at most $M(s_{c_2}-1) \le M(n-4)$. The number of subsets $W$ such that $c_1 \in W$, and thus $\tau_3(W)=(c_1,c_2,b_1)$ or $\tau_2(W)=(c_1,b_1)$, is at most $2M(s_{c_1}-2)\le 2M(n-5)$. If, on the other hand, $N^-(b_1) \cap W = \emptyset$, then $\tau_1(W)=(b_1)$ and some in-neighbor of $b_2$ is the source of $T[W\setminus \{b_1\}]$, otherwise $W$ is not maximal as $n$ can be added. Also note that $b_2$ beats some vertex of $N^-(b_1)$ (we have $N^-(b_2)\setminus N^-(b_1) \not = \emptyset$ as $N^-(b_1)\cap W = \emptyset$ but $N^-(b_2)\cap W \not = \emptyset$). If $s_{b_2}=n-3$, we upper bound the number of such subsets $W$ by $M(s_{b_1}-3)=M(n-6)$ as $\tau_3(W)=(b_1,d_1,b_2)$. If $s_{b_2}=n-4$, we have that $\tau_4(W)=(b_1,d_1,d_2,b_2)$, $\tau_3(W)=(b_1,d_2,b_2)$ or $\tau_3(W)=(b_1,d_1,b_2)$. Thus, there are at most $3M(s_{b_1}-4)=3M(n-7)$ possible $W$ such that $N^-(b_1) \cap W = \emptyset$ if $s_{b_1}=n-3$ and $s_{b_2}=n-4$. Summarizing, there are at most $2M(n-5)+M(n-4)$ subsets $W$ if $b_2$ beats no vertex of $N^-(b_1)$, and otherwise at most $2M(n-5)+M(n-4)+M(n-6)$ subsets $W$ if $s_{b_2}=n-3$ and at most $2M(n-5)+M(n-4)+3M(n-7)$ subsets $W$ if $s_{b_2}=n-4$.
\[cl:4.n-4\] If $s_{b_1}=n-4$ and $s_{b_2}=n-3$, the number of maximal transitive vertex sets $W$ such that $b_1 \in W$ and $n \notin W$ is
- at most $M(n-7)+\sum_{c\in N^-(b_1)}2M(s_c-2)$ if $T[N^-(b_1)]$ is a directed cycle,
- at most $\max\{M(n-3)+M(n-4)+M(n-5) ; M(n-5)+6M(n-6)\}$ if $T[N^-(b_1)]$ is transitive and $d_1 \in N^-(b_1)$, and
- at most $M(n-3)+M(n-4)+M(n-5)+M(n-7)$ if $T[N^-(b_1)]$ is transitive and $d_1 \notin N^-(b_1)$.
If $c_3 \rightarrow c_1$, then $W$ intersects $N^-(b_1)$ in at most $2^3-1=7$ possible ways ($N^-(b_1) \subseteq W$ would induce a cycle in $T[W]$). In one of them, $N^-(b_1) \cap W = \emptyset$, which implies $\tau_3(W)=(b_1,d_1,b_2)$; there are at most $M(s_{b_1}-3)=M(n-7)$ such $W$. For each $c \in N^-(b_1)$, there are 2 possibilities where $\tau_1(W)=(c)$; one where $\tau_2(W)=(c,b_1)$ and one where $\tau_3(W)=(c,y,b_1)$ where $y$ is the out-neighbor of $c$ in $N^-(b_1)$; there are $2M(s_c-2)$ such $W$ for each choice of $c$. In total, there are at most $M(n-7)+\sum_{c\in N^-(b_1)}2M(s_c-2)$ possible $W$. If, on the other hand, $c_1 \rightarrow c_3$, first assume that $s_{c_1} \le n-3$, $s_{c_2} \le n-4$, and $s_{c_3} \le n-5$. Then either some vertex of $N^-(b_1)$ is the source of $W$ (at most $M(n-3)+M(n-4)+M(n-5)$ possibilities for $W$), or $\tau_3(W)=(b_1,d_1,b_2)$ (at most $M(n-7)$ possibilities for $W$). Otherwise, it must be that $s_{c_1}\le n-3$, $s_{c_2}\le n-4$, $s_{c_3} = n-4$ and that $d_1=c_3$. Then, $\tau_2(W)=(c_3,b_1)$, $\tau_2(W)=(c_2,b_1)$, $\tau_3(W)=(c_2,c_3,b_1)$, $\tau_2(W)=(c_1,b_1)$, $\tau_3(W)=(c_1,c_2,b_1)$, $\tau_3(W)=(c_1,c_3,b_1)$, or $\tau_4(W)=(c_1,c_2,c_3,b_1)$; there are at most $M(n-5)+6M(n-6)$ such $W$. In total, if $d_1 \in N^-(b_1)$, the number of possible $W$ can be upper bounded by $\max\{M(n-3)+M(n-4)+M(n-5) ; M(n-5)+6M(n-6)\}$, and if $d_1 \notin N^-(b_1)$, the number of possible $W$ can be upper bounded by $M(n-3)+M(n-4)+M(n-5)+M(n-7)$.
Armed with Claims \[cl:2.n-3\]–\[cl:4.n-4\], we now analyze the five subcases of Case 2, depending on the scores of $b_1$ and $b_2$.\
**Case 2.1: $\mathbf{s_{b_1} = n-3,s_{b_2} = n-3}$.** By Claim \[cl:2.n-3\], the number of maximal transitive vertex sets $W$ such that $b_1,n \notin W$ and $b_2\in W$ (leaf (2) in Fig. \[fig:searchtree\]) is at most $M(n-4)$. By Claim \[cl:4.n-3\], the number of maximal transitive vertex sets $W$ such that $b_1,n \notin W$ and $b_2\in W$ (leaf (4) in Fig. \[fig:searchtree\]) is at most $2M(n-5)+M(n-4)$, at most $2M(n-5)+M(n-4)+M(n-6)$, or at most $2M(n-5)+M(n-4)+3M(n-7)$. Combined with Claim \[cl:1356\],
$$\begin{aligned}
f(T) &\leq \max
\begin{cases}
M(n-3)+M(n-4)+M(n-4)+(2M(n-5)\\
\quad +M(n-4))+M(n-5) +M(n-5)\\
\hfill \leq 4\beta^{n-5}+3\beta^{n-4}+\beta^{n-3}\leq\beta^n \text{ as } \beta \ge 1.6314 \enspace ,\\
M(n-3)+M(n-4)+M(n-4)+(2M(n-5)\\
\quad +M(n-4)+M(n-6)) +M(n-5)+M(n-5)\\
\hfill \leq \beta^{n-6}+4\beta^{n-5}+3\beta^{n-4}+\beta^{n-3}\leq\beta^n \text{ as } \beta \ge 1.6516\enspace ,\\
M(n-3)+M(n-4)+M(n-4)+(2M(n-5)\\
\quad +M(n-4)+3M(n-7)) +M(n-5)+M(n-5)\\
\hfill \leq 3\beta^{n-7}+4\beta^{n-5}+3\beta^{n-4}+\beta^{n-3}\leq\beta^n \text{ as } \beta \ge 1.6666 \enspace .
\end{cases}\end{aligned}$$
**Case 2.2: $s_{b_1} = n-3,s_{b_2} = n-4$.** If $c_1 \rightarrow b_2$ and $c_2 \rightarrow b_2$, then $b_1 \notin W$ and $b_2 \in W$ implies that some in-neighbor $c$ of $b_1$ is in $W$, otherwise $W\cup\{b_1\}$ would induce a transitive tournament. But then, $n \notin W$, otherwise $\{c,b_2,n\}$ induces a directed cycle. This means that no maximal transitive vertex set $W$ satisfies the conditions of leaf (3) in Fig. \[fig:searchtree\]. We bound the possible $W$ corresponding to leafs (2)+(4) by $M(n-1)$ and obtain $$\begin{aligned}
f(T) &\le M(n-3)+M(n-1)+M(n-5)+M(n-5)\\
& \le 2\beta^{n-5}+\beta^{n-3}+\beta^{n-1}\leq\beta^n \text{ as } \beta \ge 1.6440 \enspace .\end{aligned}$$
Otherwise, there is some vertex $c \in N^-(b_1)$ such that $b_2 \rightarrow c$. Then, the number of $W$ in leaf (6) of Fig. \[fig:searchtree\] is upper bounded by $M(s_{b_2}-2)=M(n-6)$, and by Claims \[cl:2.n-4\] and \[cl:4.n-3\] those in leafs (2) and (4) are upper bounded by $M(n-5)+2M(s_{d_1}-2)$ and $2M(n-5)+M(n-4)+3M(n-7)$, respectively. Thus, $$\begin{aligned}
f(T) &\le M(n-3)+(M(n-5)+2M(n-5))+M(n-5)+(2M(n-5)\\
& \quad +M(n-4)+3M(n-7))+M(n-5)+M(n-6)\\
& \le 3\beta^{n-7}+\beta^{n-6}+7\beta^{n-5}+\beta^{n-4}+\beta^{n-3}\leq\beta^n \text{ as } \beta \ge 1.6740 \enspace .\end{aligned}$$
**Case 2.3: $s_{b_1} = n-4, s_{b_2} = n-3$.** By Claim \[cl:2.n-3\], at most $M(n-4)$ subsets $W$ correspond to leaf (2) in Fig. \[fig:searchtree\]. If $N^-(b_1)$ induces a directed cycle, Claim \[cl:4.n-4\] upper bounds the number of subsets corresponding to leaf (4) by $M(n-7)+2M(n-6)+4M(n-5)$ as at most 2 vertices except $b_2$ and $n$ have score $n-3$ by Lemma \[thm:nummaxscore\]. Together with Claim \[cl:1356\], this gives $$\begin{aligned}
f(T) &\le M(n-3)+M(n-4)+M(n-4)+(M(n-7)+2M(n-6)\\
& \quad +4M(n-5))+M(n-6)+M(n-6)\\
& \le \beta^{n-7}+4\beta^{n-6}+4\beta^{n-5}+2\beta^{n-4}+\beta^{n-3}\leq\beta^n \text{ as } \beta \ge 1.6670 \enspace .\end{aligned}$$ Otherwise, $c_1 \rightarrow c_3$. If $d_1 \rightarrow b_1$, then Claim \[cl:4.n-4\] upper bounds the number of subsets corresponding to leaf (4) by $M(n-3)+M(n-4)+M(n-5)$ or $M(n-5)+6M(n-6)$. Then, $$\begin{aligned}
f(T) &\leq \max
\begin{cases}
M(n-3)+M(n-4)+M(n-4)+(M(n-3)\\
\quad +M(n-4)+M(n-5))+M(n-6)+M(n-6) \\
\hfill \leq 2\beta^{n-6}+\beta^{n-5}+3\beta^{n-4}+2\beta^{n-3}\leq\beta^n \text{ as } \beta \ge 1.6632,\\
M(n-3)+M(n-4)+M(n-4)+(M(n-5)\\
\quad +6M(n-6))+M(n-6) + M(n-6)\\
\hfill \leq 8\beta^{n-6}+\beta^{n-5}+2\beta^{n-4}+\beta^{n-3}\leq\beta^n \text{ as } \beta \ge 1.6396 \enspace .\\
\end{cases}\end{aligned}$$ Otherwise, $b_1 \rightarrow d_1$. For the possible $W$ with $b_1,b_2,n \in W$, none of $N^-(b_1)\cup \{d_1\}$ is in $W$ as these vertices all create cycles with $b_1,b_2,n$. Thus, the number of possible subsets $W$ corresponding to leaf (6) is upper bounded by $M(s_{b_1}-3)=M(n-7)$. Then, by Claims \[cl:1356\] and \[cl:4.n-4\], $$\begin{aligned}
f(T) &\le M(n-3)+M(n-4)+M(n-4)+(M(n-3)+M(n-4)\\
& \quad +M(n-5)+M(n-7))+M(n-6)+M(n-7)\\
& \le 2\beta^{n-7}+\beta^{n-6}+\beta^{n-5}+3\beta^{n-4}+2\beta^{n-3}\leq\beta^n \text{ as } \beta \ge 1.6672 \enspace .\end{aligned}$$
**Case 2.4: $s_{b_1} = n-4,s_{b_2} \leq n-4$.** By grouping leafs (2) and (4) into one possibility where $n \notin W$, Claim \[cl:1356\] upper bounds the number of such maximal transitive vertex sets by $$\begin{aligned}
f(T) &\le M(n-3)+M(n-1)+M(n-5)+M(n-6)+M(n-6)\\
& \le 2\beta^{n-6}+\beta^{n-5}+\beta^{n-3}+\beta^{n-1}\leq\beta^n \text{ as } \beta \ge 1.6570 \enspace .\end{aligned}$$
**Case 2.5: $s_{b_1} \le n-5$.** By grouping leafs (2) and (4) into one possibility where $n \notin W$, Claim \[cl:1356\] upper bounds the number of such maximal transitive vertex sets by $$\begin{aligned}
f(T) &\le M(n-3)+M(n-1)+M(n-4)+M(n-7)+M(n-7)\\
& \le 2\beta^{n-7}+\beta^{n-4}+\beta^{n-3}+\beta^{n-1}\leq\beta^n \text{ as } \beta \ge 1.6679 \enspace .\end{aligned}$$
**Case 3: $\mathbf{s_n \leq n-4}$.** We may assume that the score sequence $s=s(T)$ satisfies $$\label{eqn:sbound4}
3\leq s_1\leq \hdots\leq s_n\leq n-4.$$ Let $S_n$ be the set of all score sequences that are feasible for (\[eqn:sbound2\])–(\[eqn:sbound4\]). The set $S_n$ serves as domain of the linear map $G:S_n\rightarrow\mathbb R_+,s\mapsto\sum_{v=1}^ng(s_v)$ with the strictly convex terms $g:c\mapsto\beta^c$. Furthermore, for all $n\ge 11$, we define a special score sequence $\sigma(n)$, whose membership in $S_n$ is easy to verify: $$\begin{aligned}
\sigma(n):=
\begin{cases}
(3,3,3,3,3,5,7,7,7,7,7) & \text{if } n = 11\enspace ,\\
(3,3,3,3,3,3,8,8,8,8,8,8) & \text{if } n=12\enspace ,\\
(3,3,3,3,3,3,6,9,9,9,9,9,9) & \text{if } n = 13\enspace , \text{ and}\\
(3,3,3,3,3,3,4,7,8,\hdots,n-9,n-8,n-5,\\ \quad \quad ~n-4,n-4,n-4,n-4,n-4,n-4) & \text{if } n \ge 14\enspace .
\end{cases}\end{aligned}$$
\[thm:reclemma\] For $n\geq 11$, the sequence $\sigma(n)$ maximizes the value of $G$ over all sequences in $S_n$: $G(s)\leq G(\sigma(n))$ for all $s\in S_n.$
Once Lemma \[thm:reclemma\] is proved we can bound $f(T)$, for $s=s(T)\in S_n$, from above via $$\begin{aligned}
\label{eqn:uppernlarge}
f(T)\leq G(s)\leq G(\sigma(n))
=\begin{cases}
5\beta^3+\beta^5+5\beta^7,&\text{if } n=11\enspace ,\\
6\beta^3+6\beta^8,&\text{if } n=12\enspace ,\\
6\beta^3+\beta^6+6\beta^9,&\text{if } n=13\enspace ,\\
6\beta^3+\beta^4+\frac{\beta^{n-7}-\beta^7}{\beta-1}+\beta^{n-5}+6\beta^{n-4}\\ \quad\quad
\leq \frac{\beta^{n-7}}{\beta-1}+\beta^{n-5}+6\beta^{n-4}, &\text{if } n\geq 14 \enspace,
\end{cases}\end{aligned}$$ which is at most $\beta^n$ as $\beta \ge 1.6259$. To prove Lemma \[thm:reclemma\], we choose any sequence $s\in\mbox{argmax}_{s'\in S_n} G(s')$ and then show that $s=\sigma(n)$. Recall that $s_1\geq 3$ and $s_n\leq n-4$, and set $s_1^*=3,s_n^*=n-4$.
\[thm:claim1\] If some score $c$ appears more than once in $s$, then $c\in\{s_1^*,s_n^*\}$.
For contradiction, suppose that $s_1^* < s_u=s_v=c<s_n^*$ for two vertices $u$ and $v$ such that $1\leq u<v\leq n$. First, suppose there exists an integer $k\in\{u,\hdots,v-1\}$ satisfying (\[eqn:sbound2\]) with equality: $$\label{eqn:sboundequality}
\sum_{v=1}^ks_v=\binom{k}{2}+1 \enspace .$$ Then (\[eqn:sbound4\]), (\[eqn:sboundequality\]) and Lemma \[thm:nummaxscore\] imply $8\leq k\leq n-9$, so $k\notin \{s_1^*,s_n^*\}$. The choice of $k$ among vertices of equal score $c$ now yields $$\label{eqn:sboundequalscore}
s_{k+1}=s_k=\sum_{v=1}^ks_v-\sum_{v=1}^{k-1}s_v\leq\binom{k}{2}+1-\binom{k-1}{2}-1=k-1 \enspace .$$ This however contradicts (\[eqn:sbound2\]): $$\sum_{v=1}^{k+1}s_v\leq\binom{k}{2}+1+(k-1)=\binom{k+1}{2} \enspace .$$ It is thus asserted that no integer $k$ with property (\[eqn:sboundequality\]) exists. The score sequence $s'$, differing from $s$ only in $s_u'=s_u-1=c-1$ and $s_v'=s_v+1=c+1$, therefore belongs to $S_n$. So apply the function $G$ to it, and use the strict convexity of $g$: $$G(s')-G(s)=(g(c+1)-g(c))-(g(c)-g(c-1))>0 \enspace .$$ This contradicts the choice of $s$ as a maximizer of $G$, and establishes Claim \[thm:claim1\].
\[thm:claim2\] The values $s_1^*=3$ and $s_n^*=n-4$ each appear between two and six times as scores in the sequence $s$.
By Lemma \[thm:nummaxscore\], $s_n^*$ is the score of no more than $6$ vertices. By symmetry, $s_1^*$ is the score of no more than $6$ vertices. As a consequence of Claim \[thm:claim1\], together $s_1^*$ and $s_n^*$ appear at least eight times in $s$. Hence there are at least two vertices of score $s_1^*$ and at least two vertices of score $s_n^*$.
\[thm:claim3\] If $n \ge 12$, each of $s_1^*$ and $s_n^*$ is the score of exactly six of the vertices.
Assuming this were not the case for $s_1^*$, by Claim \[thm:claim2\] it would be the score of two to five vertices. Hence there exists a vertex $a\in\{3,\hdots,6\}$ with score $s_a>s_1^*$. It holds $s_n^*=n-4>a+1$, which is obvious if $n \ge 13$ and follows from $a\leq 6$ if $n=12$. So there must be two scores in $s$ larger than $s_a$, precisely $s_a<s_{a+1}<s_{a+2}$. Observe that the sequence $s'=(s_1,\hdots,s_{a-1},s_a-1,s_{a+1}+1,s_{a+2},\hdots,s_n)$ is a member of $S_n$. The same argument on strict convexity of $g$ as in Claim \[thm:claim1\] gives $$G(s')-G(s) = (g(y+1)-g(y))-(g(x)-g(x-1))>0$$ for $x=s_a<s_{a+1}=y$, again contradicting the choice of $s$ as a maximizer of $G$. Consequently, the sequence $s$ starts with six scores $s_1^*$. By symmetry, the same argumentation also applies for $s_n^*$, proving the claim.
\[thm:claim4\] If $n = 11$, each of $s_1^*$ and $s_n^*$ is the score of exactly five of the vertices.
As all scores are between $3$ and $7$, at most $5$ vertices have score $3$ and at most $5$ vertices have score $7$ by . Assume fewer than $5$ vertices have score $s_1^*$. By Claim \[thm:claim2\], $s_1^*$ is then the score of two to four vertices. Hence there exists a vertex $a\in\{3,4,5\}$ with score $s_a>s_1^*$. Thus, $s_n^*=7>a+1$. So there must be two scores in $s$ larger than $s_a$, precisely $s_a<s_{a+1}<s_{a+2}$. To conclude, we construct a sequence $s'$ with $G(s')>G(s)$ exactly as in the proof of Claim \[thm:claim3\].
We have $s=\sigma(n)$. \[thm:claim5\]
If $n=11$, $s$ has $5$ vertices of score $3$ and $5$ vertices of score $7$ by Claim \[thm:claim4\]. As $\sigma(11)$ is the only such sequence not contradicting , the claim holds for $n=11$. Similarly, $\sigma(n)$ is the only sequence not contradicting and Claim \[thm:claim3\] if $12 \le n \le 13$. Suppose now that $n \ge 14$. There are $n-12$ elements of $s$ that differ from both $s_1^*$ and $s_n^*$, each with a score equal to one of the $n-8$ numbers in the range $4,\hdots,n-5$. Symmetry of the map $d\mapsto\binom{n}{d}$ around $d=\frac{n}{2}$ together with means that only pairs $\{h_1,n-1-h_1\}$ with $4\leq h_1<\frac{n-1}{2}$ and $\{h_2,n-1-h_2\}$ with $5\leq h_2<\frac{n-1}{2}$ of scores are missing in $s$. Moreover, requires $h_1,h_2 < 7$, for otherwise $k=8$ violates this relation. Since $s$ was chosen to be a maximizer of $G$, this leaves $h_1=5$ and $h_2=6$. Thus $s=\sigma(n)$, completing the proof of the claim and of Lemma \[thm:reclemma\].
All cases taken together imply the following upper bound on the number of maximal transitive subtournaments.
\[thm:combupperbound\] Any strong tournament $T\in\mathcal T^*_n$ has at most ${1.6740}^n$ maximal transitive subtournaments.
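As a quick numerical sanity check of the bound (\[eqn:uppernlarge\]) underlying this theorem, the following Python sketch (our own, not part of the original argument) evaluates $G(\sigma(n))$ with $g(c)=\beta^{c}$ and $\beta=1.6259$, consistently with the closed forms displayed above, and confirms that it never exceeds $\beta^{n}$ for $11\le n\le 60$; the small cases $n<11$ are handled separately and are not covered here.

```python
beta = 1.6259  # the constant appearing in the bound above; we take g(c) = beta**c

def G_sigma(n):
    """G(sigma(n)) for n >= 11, using the closed forms / sequence displayed above."""
    if n == 11:
        return 5 * beta**3 + beta**5 + 5 * beta**7
    if n == 12:
        return 6 * beta**3 + 6 * beta**8
    if n == 13:
        return 6 * beta**3 + beta**6 + 6 * beta**9
    # sigma(n) = (3 x6, 4, 7, 8, ..., n-8, n-5, (n-4) x6) for n >= 14
    seq = [3] * 6 + [4] + list(range(7, n - 7)) + [n - 5] + [n - 4] * 6
    assert len(seq) == n
    return sum(beta**c for c in seq)

for n in range(11, 61):
    assert G_sigma(n) <= beta**n
print("G(sigma(n)) <= beta**n holds for 11 <= n <= 60")
```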
Moon [@Moon1971] already observed that the following limit exists.
It holds that $1.5448 \le \lim_{n \rightarrow \infty} (M(n))^{1/n} \le {1.6740}$.
We conjecture that the Paley digraph of order 7, $ST_7$, plays the same role for [FVSs]{} in tournaments as triangles play for independent sets in graphs, i.e. that the tournaments $T$ maximizing $(f(T))^{1/|V(T)|}$ are exactly those whose factors are copies of $ST_7$.
Polynomial-Delay Enumeration in Polynomial Space {#sec:polydelaypolyspace}
================================================
In this section, we give a polynomial-space algorithm for the enumeration of the minimal [FVSs]{} in a tournament with polynomial delay.
Let $T = (V,A)$ be a tournament with $V = \{v_1,\hdots,v_n\}$, and for each $i = 1,\hdots,n$ let $T_i = T[\{v_1,\hdots,v_i\}]$. For a vertex set $X$, we write $\chi_X(i)=1$ if $v_i\in X$ and $\chi_X(i)=0$ otherwise. Let $<$ denote the total order on $V$ induced by the labels of the vertices. For vertex sets $X,Y\subseteq V$, say that $X$ is *lexicographically smaller* than $Y$ and write $X\prec Y$ if for the minimum index $i$ for which $\chi_X(i)\not=\chi_Y(i)$ it holds that $v_i\in X$. Because $X$ and $Y$ are totally ordered by the restriction of $<$ to $X$ and $Y$, respectively, $\prec$ is also a total order and each collection of subsets of $V$ has a unique *lexicographically smallest* element.
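To make the order concrete, here is a minimal Python sketch (representing vertex $v_i$ by the integer $i$; the helper name is ours):

```python
def lex_smaller(X, Y, n):
    """X strictly precedes Y in the order defined above: at the first index i
    where the indicator vectors chi_X and chi_Y differ, v_i must belong to X."""
    for i in range(1, n + 1):
        in_x, in_y = i in X, i in Y
        if in_x != in_y:
            return in_x
    return False  # identical sets are not strictly smaller

# example with n = 5: {1, 3} precedes {2, 3, 4}, since they first differ at v_1
print(lex_smaller({1, 3}, {2, 3, 4}, 5))  # True
```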
The algorithm enumerates the maximal acyclic vertex sets of $T$. It performs a depth-first search in a tree $\mathcal T$ with the maximal acyclic vertex sets of $T$ as leaves, whose forward and backward edges are constructed “on the fly”. The depth of $\mathcal T$ is $|V|$, and we refer to the vertices of $\mathcal T$ as *nodes*. The algorithm only needs to keep in memory the path from the root to the current node in the tree and all the children of the nodes on this path. Each node at level $j$ is labeled by a maximal acyclic vertex set $J$ of $T_j$. As for its children, there are two cases. In case $J\cup\{v_{j+1}\}$ is acyclic then $J$’s only child is $J\cup\{v_{j+1}\}$. In case $J\cup\{v_{j+1}\}$ is not acyclic then $J$ has at least one and at most $\lfloor j/2 \rfloor + 1$ children. Let $L_J = (v^1,v^2,\hdots,v^{|J|})$ be a labeling of the vertices in $J$ such that $(v^r,v^s)\in A$ for all $1\leq r < s\leq j$; we view $L_J$ as a sequence of vertices. The children of $J$ are as follows. The first child $J^0$ is a copy of $J$, and is always present. The potential other children are, for $1\le z \le |J|+1$, $$J^z = \{v^i \in J \mid i<z \wedge v^i \rightarrow v_{j+1}\} \cup \{v_{j+1}\} \cup \{v^i \in J \mid i\ge z \wedge v_{j+1} \rightarrow v^i\}$$ where set $J^z$ is a potential child of $J$ only if $J^z$ is a maximal acyclic vertex set in $T_{j+1}$ (the maximality of $J^ z$ can clearly be checked in polynomial time). Note how we try to insert $v_{j+1}$ at every possible position in $J$. However, only at most $\lfloor j/2 \rfloor+1$ positions make sense for $v_{j+1}$: before $v^1$ if $v_{j+1} \rightarrow v^1$, between $v^i$ and $v^{i+1}$ if $v^i \rightarrow v_{j+1} \rightarrow v^{i+1}$, where $1\le i\le |J|-1$, and after $v^{|J|}$ if $v^{|J|}\rightarrow v_{j+1}$; all other positions do not give maximal acyclic vertex sets and should not be generated in an actual implementation. Note that $J^z$ may be a potential child of several sets on the same level in $\mathcal T$. Of all these sets, $J^z$ is made the child only of the lexicographically smallest such set. To determine whether $J$ is the lexicographically smallest such set, we compute by a greedy algorithm the lexicographically smallest maximal acyclic vertex set $H = H(J^z)$ of $T_j$ which contains $J^z\setminus\{v_{j+1}\}$ as a subset. That is, we iteratively build the set $H$ by setting $$\begin{aligned}
H_0 & = J^z\setminus\{v_{j+1}\},\displaybreak[0]\\
H_i & = \begin{cases}
H_{i-1}\cup\{v_i\},&\mbox{if}~H_{i-1}\cup\{v_i\}~\mbox{is acyclic},\\
H_{i-1},&\mbox{otherwise},
\end{cases}
\qquad i = 1,\hdots,j,\displaybreak[0]\\
H & = H_j \enspace.\end{aligned}$$ Then we make $J^z$ a child of the node labeled $J$ only if $H = J$. This completes the description of the algorithm.
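The child-generation step and the parent test $H(J^z)=J$ can be summarized in the following Python sketch. It is a naive illustration rather than the implementation analysed here: the tournament is given as a boolean adjacency structure `A` with `A[u][v]` truthy iff $(v_u,v_v)\in A$, vertices are the integers $1,\hdots,n$, and, unlike an efficient implementation, all insertion positions $z$ are tried and duplicates filtered afterwards instead of generating only the at most $\lfloor j/2\rfloor+1$ sensible positions.

```python
def is_acyclic(A, S):
    """A subtournament is acyclic iff it is transitive: order S by the number of
    wins inside S and check that every earlier vertex beats every later one."""
    order = sorted(S, key=lambda v: sum(A[v][w] for w in S if w != v), reverse=True)
    return all(A[order[r]][order[s]]
               for r in range(len(order)) for s in range(r + 1, len(order)))

def is_maximal_acyclic(A, S, j):
    """Maximal acyclic vertex set of T_j = T[{v_1, ..., v_j}]."""
    return is_acyclic(A, S) and all(not is_acyclic(A, S | {v})
                                    for v in range(1, j + 1) if v not in S)

def lex_smallest_extension(A, base, j):
    """Greedy H(J^z): the lexicographically smallest maximal acyclic vertex set
    of T_j containing base, built by trying v_1, ..., v_j in order."""
    H = set(base)
    for v in range(1, j + 1):
        if v not in H and is_acyclic(A, H | {v}):
            H.add(v)
    return H

def children(A, J, j):
    """Children of the node labeled J (maximal acyclic in T_j) at level j + 1."""
    v_new = j + 1
    if is_acyclic(A, J | {v_new}):
        return [J | {v_new}]                       # single child J u {v_{j+1}}
    L = sorted(J, key=lambda v: sum(A[v][w] for w in J if w != v), reverse=True)
    kids, seen = [set(J)], set()                   # J^0 is always present
    for z in range(len(L) + 1):                    # try inserting v_{j+1} at position z
        cand = ({L[i] for i in range(z) if A[L[i]][v_new]} | {v_new}
                | {L[i] for i in range(z, len(L)) if A[v_new][L[i]]})
        key = frozenset(cand)
        if (key not in seen and is_maximal_acyclic(A, cand, j + 1)
                and lex_smallest_extension(A, cand - {v_new}, j) == J):
            seen.add(key)
            kids.append(cand)
    return kids

# tiny demo: T_4 with the 3-cycle 1 -> 2 -> 3 -> 1 and vertex 4 losing to all
A = {u: {v: False for v in range(1, 5)} for u in range(1, 5)}
for (u, v) in [(1, 2), (2, 3), (3, 1), (1, 4), (2, 4), (3, 4)]:
    A[u][v] = True
print(children(A, {1, 2}, 3))  # -> [{1, 2, 4}], since {1, 2} u {4} is acyclic
```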
To show that the algorithm is correct, we prove that for every maximal acyclic vertex set $W$ of $T$ there is exactly one leaf in $\mathcal T$ labeled with $W$. By construction of the algorithm, it suffices to show that at least one leaf is labeled by $W$. The proof is by induction on the number $n = |V|$ of vertices in $T$. For $n = 1$ the claim clearly holds, so suppose that $n > 1$ and that the claim is true for all tournaments with fewer vertices. Then from the induction hypothesis we can conclude that for the induced subtournament $T' := T_{n-1}$ there is a tree $\mathcal T'$ constructed by the above algorithm and a bijection $f'$ from the maximal acyclic vertex sets of $T'$ to the leaves of $\mathcal T'$.
Let $W$ be a maximal acyclic vertex set of $T$. If $v_n\notin W$ then $W$ is an acyclic vertex set of $T'$ as removing a vertex from a digraph does not introduce cycles. In fact, $W$ is a maximal acyclic vertex set of $T'$: for any vertex $v_\ell \in V\setminus (W\cup \{v_n\})$, $T'[W\cup \{v_\ell \}]$ has a cycle as $W$ is a maximal acyclic vertex set for $T$ and $T'[W\cup \{v_\ell \}]=T[W\cup \{v_\ell \}]$. Hence there exists a leaf $f'(W)$ in $\mathcal T'$ labeled by $W$. Since $W\cup\{v_n\}$ is not acyclic, by maximality of $W$ for $T$, the algorithm constructs the child $W^0$ of $f'(W)$ labeled by $W$, and that child will be a leaf in the final tree constructed by the algorithm.
If $v_n\in W$, then let $W' = W\setminus\{v_n\}$. So, $W'$ is an acyclic vertex set of $T'$. In case $W'$ is maximal for $T'$, there is a leaf $f'(W')$ in $\mathcal T'$ that is labeled by $W'$. Since $W'\cup\{v_n\}$ is acyclic, the algorithm will create a single child of $f'(W')$ labeled by $W'\cup\{v_n\} = W$, and that child will be a leaf in the final tree constructed by the algorithm. In case $W'$ is not maximal for $T'$, let $N$ be the lexicographically smallest extension of $W'$ to a maximal acyclic vertex set of $T'$. Hence there exists a leaf $f'(N)$ in the tree $\mathcal T'$ labeled by $N$. Observe that the sequence $L_{W'}$ is a subsequence of $L_N$, and that $N\cup\{v_n\}$ is not acyclic. Hence the algorithm creates children $N^1,N^2,\hdots$, one of which will be labeled by $W$.
To see that the algorithm runs with polynomial delay, note that the children and parent of a given node in $\mathcal T$ can all be computed in polynomial time. It follows that $\mathcal T$ can be traversed in a depth-first manner with polynomial delay per step of the traversal, and thus the leaves of $\mathcal T$ can be output with only a polynomial delay.
We show that the algorithm requires only polynomial space. We already observed that each node in $\mathcal T$ at level $j$ has at most $\lfloor j/2 \rfloor+1$ children. For each node we store the maximal acyclic vertex set by which it is labeled. Because we are traversing $\mathcal T$ in a depth-first-search manner, in each step of the algorithm we only need to save data of $O(n^2)$ nodes: those of the $O(n)$ nodes on the path from the root to the currently active node labeled by $J$, and the $O(n)$ children for each node on this path.
The described algorithm enumerates all minimal [FVSs]{} of a tournament with polynomial delay and uses polynomial space.
\[thm:minfvspolyspace\] In a tournament with $n$ vertices a minimum directed feedback vertex set can be found in $O(1.6740^n)$ time and polynomial space.
#### Acknowledgment.
We thank Gerhard J. Woeginger for help with the presentation of the results.
[^1]: CMM, Universidad de Chile, Santiago de Chile. E-mail: `[email protected]`
[^2]: Technische Universiteit Eindhoven, Eindhoven, The Netherlands. E-mail: `[email protected]`
[^3]: Part of this research has been supported by the Netherlands Organisation for Scientific Research (NWO), grant 639.033.403.
---
abstract: 'Within the BCS framework a multiband model with d-wave symmetry is considered. Generalized Fermi surface topologies via band overlapping are introduced. The band overlap scale is of the order of the Debye energy. The order parameters and the pairing have d-wave symmetry. Experimental values reported for the critical temperatures $T_c(x)$ and the order parameters, $\Delta_0(x)$, in terms of doping $x$ are used. Numerical results for the coupling and the band overlapping parameters in terms of the doping are obtained for the cuprate superconductor $La_{2-x}Sr_xCuO_4$.'
author:
- Susana Orozco
- 'Rosa María Méndez-Moreno'
- María de los Angeles Ortiz
- Gabriela Murguía
title: 'D-wave overlapping band model for cuprate superconductors'
---
Introduction
============
Measurements of angle-resolved photoemission spectroscopy (ARPES)[@Zhou:05] and tunneling[@Lee:06], provide enough evidence for the relevant role of phonons in high-$T_c$ superconductivity (HTSC). Experimental data accumulated so far for the high-$T_c$ copper-oxide superconductors have given some useful clues to unravel the fundamental ingredients responsible for the high transition temperature $T_c$. However, the underlying physical process remains unknown. In this context, it seems crucial to study new ideas that use simplified schematic models to isolate the mechanism(s) that generate HTSC.
Pairing symmetry is an important element toward understanding the mechanism of high-$T_c$ superconductivity. Although early experiments were consistent with s-wave pairing symmetry, recent experiments suggest an anisotropic pairing behavior[@Deutscher:05]. It is generally accepted that the pairing symmetry is d-wave for hole-doped cuprate superconductors[@Tsuei:00] as well as for electron-doped cuprates[@Liu:07]. On the other hand, recent experiments with Raman scattering and ARPES[@Blumberg:02; @Qazilbash:05] have shown that the gap structure of high-$T_c$ cuprate superconductors, as a function of the angle, is similar to a d-wave gap[@Hawthorn:07; @Tacon:05]. The small but non-vanishing isotope effects in high-$T_c$ cuprates have been shown to be compatible with d-wave superconductivity[@franck:94]. A phonon-mediated d-wave BCS-like model has recently been presented to describe layered cuprate superconductors[@Xiao:07]. The latter model accounts well for the magnitudes of $T_c$ and the oxygen isotope exponent of the cuprate superconductors. Calculations within BCS theory and the van Hove scenario have also been done with d-wave pairing[@Hassan:02]. The validity of the d-wave BCS formalism in high-$T_c$ cuprate superconductors has been supported by measurements of transport properties and ARPES[@Matsui:05].
Numerous indications point to the multiband nature of the superconductivity in doped cuprates. The agreement of the multiband model with experimental findings, suggests that a multiband pairing is an essential aspect of cuprate superconductivity[@Kristoffel:08].
First-principles calculations show overlapping energy bands at the Fermi level[@to]. The short coherence length observed in high-$T_c$ superconductors has been related to the presence of overlapping energy bands[@okoye:99; @saleb:08]. A simple model with generalized Fermi surface topologies via band overlapping has been proposed on the basis of indirect experimental evidence. It supports the idea that the tendency toward superconductivity can be enhanced when the Fermi level lies at or close to the energy of a singularity in the density of states (DOS)[@Moreno:96]. This model, which can be regarded as introducing a minimal singularity in the density of states within the BCS framework, can lead to higher $T_c$ values than those expected from the traditional phonon barrier. In our model, the energy band overlapping modifies the DOS near the Fermi level, allowing the high $T_c$ values observed. A similar effect can be obtained with other mechanisms, such as a van Hove singularity in the density of states[@misho:2005].
The high-$T_c$ copper-oxide superconductors have a characteristic layered structure: the $Cu O_2$ planes. The charge carriers in these materials are confined to the two-dimensional (2D) $Cu O_2$ layers[@harshman:92]. This layered structure of the high-$T_c$ cuprates suggests that two-dimensional physics is important for these materials[@Xiao:07].
In this work, within the BCS framework, a phonon mediated d-wave model is proposed. The gap equation (with d-wave symmetry) and two-dimensional generalized Fermi surface topologies via band overlapping are used as a model for HTSC. A two overlapping band model is considered as a prototype of multiband superconductors. For physical consistency, an important requirement of the model is that the band overlapping parameter is not larger than the cutoff Debye energy, $E_D$. The model here proposed will be used to describe some properties of the cuprate superconductor $La_{2-x}Sr_xCuO_4$ in terms of the doping and the parameters of the model.
The model
=========
We begin with the famous gap equation $$\label{eq:az}
\Delta(k{^\prime})= {\sum_k} V(k,k{^\prime})
\Delta(k)\frac{\tanh( E_k/2 k_B T )}{2 E_k} ,$$ in the weak coupling limit, with $V(k,k{^\prime})$ the pairing interaction, $k_B$ the Boltzmann constant, and $E^2_k = \epsilon^2_k
+ \Delta^2_k$, where $\epsilon_k = \hbar^2 k^2/ 2 m$ are the self-consistent single-particle energies.
For the electron-phonon interaction, we have considered, with $V_0$ a constant, $V(k,k^{\prime}) = V_0 \psi(k) \psi(k^{\prime})$ when $|\epsilon_k|$ and $|\epsilon_{k^{\prime}}|~ \leq E_D ~=~k_B
T_D$ and $0$ elsewhere. As usual the attractive BCS interaction is nonzero only for unoccupied orbitals in the neighborhood of the Fermi level $E_F$. In the last equation, $\psi(k) = \cos ({2 ~\phi_k})$ for $d_{x^2 -y^2}$ pairing. Here $\phi_k = \tan^{-1}(k_y / k_x)$ is the angular direction of the momentum in the $ab$ plane. The superconducting order parameter, $\Delta(k) = \Delta(T)~\psi(k)$ if $|\epsilon_k| \leq E_D$ and $0$ elsewhere.
With these considerations we propose a generalized Fermi surface. The generalized Fermi sea proposed consists of two overlapping bands. As a particular distribution with anomalous occupancy in momentum space the following form for the generalized Fermi sea has been considered $$\label{eq:aa}
n_k = \Theta(\gamma k_F - k)
+ \Theta(\gamma k_F - k) \Theta(k - \beta k_F ),$$ with $k_F$ the Fermi momentum and $0 < \beta < \gamma < 1$. In order to keep the average number of electron states constant, the parameters are related in the 2D system by the equation $$\label{eq:gg}
2 \gamma^2 - \beta^2 = 1,$$ then only one of the relevant parameters is independent. The distribution in momentum induces one in energy, $E_{\beta} <
E_{\gamma}$ where $E_{\beta} = \beta^2 E_F$ and $E_{\gamma} = \gamma^2
E_F$ . We require that the band overlapping be of the order or smaller than the cutoff (Debye) energy, which means $(1 - \gamma^2) E_F \leq
E_D$. The last expression can be written as $$\label{eq:rr}
(1 - \gamma^2) E_F = \eta E_D,$$ where $\eta$ is in the range $0 < \eta < E_F/( 2 E_D)$. Equations (\[eq:gg\]) and (\[eq:rr\]) together will give the minimum $\gamma^2$ value consistent with our model.
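For illustration, the two relations (\[eq:gg\]) and (\[eq:rr\]) can be combined directly: for a given $\eta$ and ratio $E_D/E_F$, Eq. (\[eq:rr\]) fixes $\gamma^2$, and Eq. (\[eq:gg\]) then fixes $\beta^2 = 2\gamma^2 - 1$, whose positivity bounds the admissible overlap ($\gamma^2 > 1/2$). A minimal Python sketch (the numerical inputs are placeholders, not the values used in the fits below):

```python
def gamma2_from_eta(eta, ED_over_EF):
    """gamma^2 from Eq. (rr): (1 - gamma^2) E_F = eta E_D."""
    return 1.0 - eta * ED_over_EF

def beta2_from_gamma2(gamma2):
    """beta^2 from particle-number conservation, Eq. (gg): 2 gamma^2 - beta^2 = 1."""
    return 2.0 * gamma2 - 1.0

# placeholder ratio E_D/E_F; beta^2 > 0 is equivalent to gamma^2 > 1/2
for eta in (0.2, 0.5, 1.0):
    g2 = gamma2_from_eta(eta, ED_over_EF=0.3)
    print(eta, g2, beta2_from_gamma2(g2))
```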
In the last framework the summation in Eq. (\[eq:az\]) is changed to an integration which is done over the ([*symmetric*]{}) generalized Fermi surface defined above. One gets $$\label{eq:bb}
\begin{split}
1 = & ~\frac{\lambda}{4\pi} \int_{E_\gamma - E_D}^{E_\gamma + E_D}
\int_{0}^{ 2\pi} d\phi~\cos^2({2\phi})
\tanh \left(\frac{\sqrt{\Xi_k}}{2 k_B T}\right)
\frac{d\epsilon_k}{\sqrt{\Xi_k}} \\
& + \frac{\lambda}{4\pi} \int_{E_\beta}^{ E_F}
\int_{0}^{ 2\pi} d{\phi}~\cos^2({2\phi})
\tanh \left(\frac{\sqrt{\Xi_k}}{2 k_B T }\right)
\frac{d\epsilon_k}{\sqrt{\Xi_k}}.
\end{split}$$
In this equation $\Xi_k = (\epsilon_k - E_F)^2 + \Delta(T)^2
~\cos^2({2~\phi})$, the coupling parameter is $\lambda = V_0 D(E)$, with $D(E)$ the electronic density of states, which will be taken as a constant for the $2D$ system in the integration range. $E_F~=
\frac{{\hbar}^2\pi}{m}n_{2D}$, with $n_{2D}$ the carriers density per $CuO_2$ layer. The two integrals correspond to the bands proposed by Eq. (\[eq:aa\]).
The integration over the surface at $E_{\gamma}$ in the first band, is restricted to states in the interval $E_{\gamma} - E_D \leq E_k \leq
{E_{\gamma}+ E_D}$. In the second band, in order to conserve the particle number, the integration is restricted to the interval $E_{\beta} \leq E_k \leq {E_F}$, if $E_{\gamma}+ E_D>E_F$, with $E_{\beta} ~=~ (2~\gamma^2 ~-~1 ) E_F$, according to Eq. (\[eq:gg\]) in our model. While $E_F - E_{\gamma} \leq E_D$, implies that the energy difference between the anomalously occupied states must be provided by the material itself. Finally $ \Delta(T)~\psi(k) =
\Delta(T)~\cos({2~\phi})$ at the two bands.
The critical temperature is introduced via the Eq. (\[eq:bb\]) at $T
= T_c$, where the gap becomes $\Delta(T_c) = 0$. At this temperature Eq. (\[eq:bb\]) is reduced to $$\begin{split}
\label{eq:cc}
1 = & ~\frac{\lambda}{4}~ \int_{E_\gamma - E_D}^{E_\gamma + E_D}
\tanh \left(\frac{\epsilon_k - E_F}{2 k_B T_c}\right)
\frac{d\epsilon_k}{\epsilon_k - E_F} \\
& + \frac{\lambda}{4}~ \int_{E_\beta}^{E_F}
\tanh \left(\frac{\epsilon_k - E_F}{2 k_B T_c}\right)
\frac{d\epsilon_k}{\epsilon_k - E_F},
\end{split}$$ which will be numerically evaluated. The last equation relates $T_c$ to the coupling constant $\lambda$ and to the anomalous occupancy parameter $\gamma^2$. This relationship determines the $\gamma^2$ values which reproduces the critical temperature of several cuprates in the weak coupling region.
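Since Eq. (\[eq:cc\]) is linear in $\lambda$, one direct way to use it is to evaluate the two energy integrals numerically and read off $\lambda = 4/(I_1+I_2)$. The sketch below works in temperature units ($k_B=1$, $E_F=k_B T_F$, $E_D=k_B T_D$) and guards the removable point $\epsilon_k = E_F$; the parameter values in the final line are illustrative placeholders, not the fitted values of this work.

```python
import numpy as np
from scipy.integrate import quad

k_B = 1.0  # work in temperature units (energies in kelvin)

def coupling_lambda_at_Tc(Tc, gamma2, T_F, T_D):
    """Solve Eq. (cc) for the coupling lambda, given T_c, gamma^2, T_F and T_D."""
    E_F, E_D = T_F, T_D
    E_gamma = gamma2 * E_F
    E_beta = (2.0 * gamma2 - 1.0) * E_F

    def integrand(eps):
        x = (eps - E_F) / (2.0 * k_B * Tc)
        if abs(x) < 1e-12:               # tanh(x)/x -> 1 as x -> 0 (removable)
            return 1.0 / (2.0 * k_B * Tc)
        return np.tanh(x) / (eps - E_F)

    I1, _ = quad(integrand, E_gamma - E_D, E_gamma + E_D)
    I2, _ = quad(integrand, E_beta, E_F)
    return 4.0 / (I1 + I2)

# illustrative placeholder parameters (not taken from this work)
print(coupling_lambda_at_Tc(Tc=40.0, gamma2=0.95, T_F=2500.0, T_D=400.0))
```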
At $T = 0$K, Eq. (\[eq:bb\]) will also be evaluated and $\gamma^2$ values consistent with the numerical results of Eq. (\[eq:cc\]) will be obtained: $$\label{eq:ee}
\begin{split}
1~ = & ~\frac{\lambda}{4~\pi}~\int_{0}^{ 2\pi} d\phi~\cos^2({2~\phi}) \\
& \times ~\left[
\sinh^{-1}~ \frac{~k_B T_D~-~(1~-~\gamma^2)k_B T_F}
{\Delta_0~|\cos{(2~\phi)}|}~ \right. \\
& \qquad + ~\sinh^{-1}~ \frac{~(1~-~\gamma^2)k_B T_F ~+ ~k_B T_D}
{\Delta_0~|\cos({2~\phi})|}\\
& \qquad + ~\left. \sinh^{-1}~\frac{2k_B~(1~-~\gamma^2)T_F}
{\Delta_0~|\cos{(2~\phi)}|}
\right],
\end{split}$$ where $\Delta(0) = \Delta_0$.
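Eq. (\[eq:ee\]) can be handled in the same way: for given $\Delta_0$ and $\gamma^2$ the angular integral is evaluated numerically and $\lambda = 4\pi/I$. The following is a hedged sketch with our own variable names; $\Delta_0$ is converted to temperature units with $1\,\text{meV}\simeq 11.6\,\text{K}$, and the removable point $\cos(2\phi)=0$ is guarded explicitly.

```python
import numpy as np
from scipy.integrate import quad

k_B = 1.0           # temperature units
MEV_TO_K = 11.6     # 1 meV expressed in kelvin (k_B ~ 0.0862 meV/K)

def coupling_lambda_at_T0(Delta0_meV, gamma2, T_F, T_D):
    """Solve Eq. (ee) for lambda at T = 0 K."""
    D0 = Delta0_meV * MEV_TO_K
    a = (1.0 - gamma2) * k_B * T_F

    def integrand(phi):
        c = abs(np.cos(2.0 * phi))
        if c < 1e-12:   # cos^2(2 phi) * arcsinh(C / |cos(2 phi)|) -> 0
            return 0.0
        return c**2 * (np.arcsinh((k_B * T_D - a) / (D0 * c))
                       + np.arcsinh((a + k_B * T_D) / (D0 * c))
                       + np.arcsinh(2.0 * a / (D0 * c)))

    I, _ = quad(integrand, 0.0, 2.0 * np.pi, limit=200)
    return 4.0 * np.pi / I

# placeholder inputs, not the fitted values of this work
print(coupling_lambda_at_T0(Delta0_meV=10.85, gamma2=0.62, T_F=2500.0, T_D=400.0))
```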
The model presented in this section can be used to describe high-$T_c$ cuprate superconductors, the band overlapping $1~-~\gamma^2$ and relevant parameters are determined. In any case a specific material must be selected to introduce the available experimental data. Ranges for the coupling parameter $\lambda$ in the weak coupling region, and the overlapping parameter $\gamma^2$, consistent with the model and the experimental data, can be obtained for each material. The relationship between the characteristic parameters will be obtained for $La$-based compounds at several doping concentrations $x$, ranging from the underdoped to the overdoped regime. Different values of the coupling constant and the overlapping parameter consistent with the model, are obtained using the experimental values of $\Delta_0$ and $T_c$.
The single-layer cuprate superconductor $La_{2-x}Sr_xCuO_4$ ($La-214$) has one of the simplest crystal structures among the high-$T_c$ superconductors. This fact makes this cuprate very attractive for both theoretical and experimental studies. High-quality single crystals of this material are available with the several doping concentrations that are required for experimental studies. Even though the determination of the charge carrier concentration in the cuprate superconductors is quite difficult, $La-214$ is a system where the carrier concentration is nearly unambiguously determined. For this material, the hole concentration per $CuO_2$ plane, $n_{2D}$, is equal to the $x$ value, [*i.e.*]{} to the $Sr$ concentration, as long as the oxygen is stoichiometric[@Ando:00; @Ino:02]. Additionally, there are reliable data for $T_c$ and the superconducting gap $\Delta_0$ for several samples in the superconducting region.
Results and discussion
======================
In order to get numerical results with our overlapping band model with d-wave symmetry, the cuprate $La_{2-x} Sr_x Cu O_4$ was selected. The values for $\Delta_0$ are taken in the interval $2 \leq \Delta_0 \leq 12$ meV, which includes the experimental results[@Ino:02]. The behavior of $\lambda$ as a function of $x$ and $\gamma^2$ at $T = T_c$ is obtained from Eq. (\[eq:cc\]); and $\lambda$ as a function of $\Delta_0$, $x$ and $\gamma^2$ at $T = 0$K is given by Eq. (\[eq:ee\]). To obtain coupled solutions of these equations, the same $\lambda$ value for $T = T_c$ and $T = 0$K is proposed. These solutions correspond to different overlap values $1 - \gamma^2$ in each equation. With this model and s-wave symmetry, the band overlapping $1 - \gamma^2$ was higher at $T = 0$K than at $T = T_c$[@oro:07]. We assume the same behavior with d-wave symmetry. The maximum $T_c$ for cuprate superconductors is obtained at optimal doping. With the model, $\lambda(x)$ values are obtained, including $\lambda(x_{op})$ at optimal doping[@oro:08].
In Fig. \[fig1\], values of the coupling parameter $\lambda$ in terms of the overlapping parameter $\gamma^2$ are shown in the weak coupling region. The experimental results of $T_c$ and $\Delta_0$ from Refs. [@harshman:92] and [@Ino:02] were introduced. The curves at $T = 0$K (broken curve) and at $T = T_c = 40$K (continuous curve) for $La_{2-x} Sr_x Cu O_4$, with optimal doping $x_{op} = 0.16$, are shown. The minimum $\gamma^2$ value of $0.55$ was taken to be consistent with the model. In the whole range reported for the band overlapping, the coupling parameter required at each $\gamma^2$ is larger for $T = 0$K than for $T = T_c$. In order to use the same $\lambda$ for $T = T_c$ and $T = 0$K, the $\lambda$ values must be restricted, [*i.e.*]{}, the $\lambda$ value at each $\gamma^2$ must be larger than $\lambda_{min}= 0.57$ at $T = 0$K.
In the region $\gamma^2 \geq 0.7$ with a constant $\lambda$ value, a larger band overlapping $1 - \gamma^2$ is obtained for $T = 0$K than for $ T = T_c$ in agreement with our assumption. For instance, the maximum $\lambda$ for $T = T_c$ with $\gamma^2 = 0.95$, is shown by the horizontal line at $\lambda = 0.68$, and the intersection of this line and the $T = 0$K curve is at $\gamma^2 = 0.94$. The same restrictions over $\lambda$ are considered at any other doping in the superconducting phase. However, for any $x \neq x_{op}$, the $\lambda
$ value must be smaller than $\lambda = 0.68$.
In Fig. \[fig2\] the results for optimal doping $ x_{op}$, are compared with the underdoped $x=0.13$ and the overdoped $x=0.2$ cases. The experimental values of $\Delta_0$ and $T_c$ for each doping, were introduced. The continuous curves correspond to $ x_{op}$, the small dashed curves show the underdoped behavior and the large dashed ones the overdoped results. In the optimal doped and underdoped cases, the $T = 0$K curves are above the corresponding $T = T_c$ ones. In the overdoped case, the behavior is different [*i.e.*]{}, the $T_c$ curve is above the $T = 0$K one.
In the three cases, the values of the coupling parameter are in the weak coupling region for the $\gamma^2$ values which satisfy the conditions of our model. All the $\gamma^2$ values which satisfy the $\lambda$ restrictions are allowed. However, as an example, we have selected extreme $\lambda$ values in the three cases. The three horizontal lines show these $\lambda$ values.
As in Fig. \[fig1\] the maximum $\lambda $ value selected at optimal doping is $\lambda = 0.68$. In the underdoped case $\lambda= 0.65$ is selected. This value corresponds to the overlapping parameter $\gamma^2= 0.621$, the minimum of the $T = 0$K curve, and $\gamma^2=
0.941$ at the $T = T_c$ curve. In the overdoped case, the selected $\lambda$ value is $ 0.51$, [*i.e.*]{} the minimum of the curve $T =
T_c$. With this $\lambda$, the overlapping parameters are $\gamma^2=
0.599$ for $T = 0$K and $\gamma^2= 0.76$ for $T=T_c$.
With numerical solutions of Eq. (\[eq:ee\]) we may obtain the gap $\Delta_0$ in terms of the parameters of our model. The underdoped material is considered in Fig. \[fig3\] because it most clearly shows the advantage of our model. The gap $\Delta_0$ is shown in terms of the coupling parameter $\lambda$, and it always increases with $\lambda$. The curves are drawn for $\gamma^2 = 0.621, 0.5$ and $0.8$, from top to bottom respectively. For this sample, with $\gamma^2 = 0.621$, we obtain the minimum $\lambda$ value for any $\Delta_0$ and, for any $\lambda$, the maximum $\Delta_0$ value.
The continuous horizontal line shows the experimental $\Delta_0 =
10.85$meV value. The large dashed horizontal line shows the d-wave mean-field approximation $\Delta_{MF}= 6.68$meV result[@won:94], where the same d-wave symmetry was considered. However, introduction of the band overlapping allows to reproduce the experimental result with all the $\gamma^2$ values in the range considered. The band overlapping model also allows higher $\Delta_0$ values for the underdoped system and lower $\Delta_0$ for the overdoped one, than the $\Delta_{MF}= 2.145 k_B T_c$.
In Fig. \[fig4\] the behavior between $\Delta_0$ and $\lambda$ for optimal doping is compared with the underdoped and the overdoped cases. The $\gamma^2$ values introduced are those selected in Fig. \[fig2\] for $T = 0$K. The horizontal lines are the $\lambda$ values also selected in Fig. \[fig2\]. All the continuous curves correspond to optimal doping. The large and small dashed curves correspond to the overdoped and the underdoped systems respectively. The curves show the interesting relationship between these parameters. As for optimal doping, the coupling parameter increases with $\Delta_0$ for any doping. The vertical lines are the experimental $\Delta_0$ values. It is possible to reproduce the experimental $\Delta_0$ in the range $0.13 \leq x \leq 0.2$. The band overlapping introduced in this model allows the reproduction of the behavior of $\Delta_0$ with doping.
In conclusion, we presented an overlapping band model with d-wave symmetry, within the BCS framework, to describe high-$T_c$ cuprate superconductors. We have used a model with anomalous Fermi occupancy and d-wave pairing in the 2D fermion gas. The anomaly is introduced via a generalized Fermi surface with two bands as a prototype of band overlapping. We report the behavior of the coupling parameter $\lambda$ as a function of the gap $\Delta_0$ and the overlapping parameter $\gamma^2$, for samples with different doping. The $\lambda$ values consistent with the model are in the weak coupling region. The behavior of $\Delta_0$ as a function of $\lambda$ shows that for several band overlapping parameters it is possible to reproduce the experimental $\Delta_0$ values near optimal doping for the cuprate $La_{2-x} Sr_x Cu O_4$. The band overlapping improves on the results obtained with a d-wave mean-field approximation, in a scheme in which the electron-phonon interaction is the relevant high-$T_c$ mechanism. The energy scale of the anomaly, $(1 - \gamma^2)E_F$, is of the order of the Debye energy. The Debye energy is then the overall scale that determines the highest $T_c$, which gives credibility to the model because it requires an energy scale accessible to the lattice. The enhancement of the DOS in this model simulates quite well intermediate- and strong-coupling corrections to the BCS framework.
[10]{}
X. J. Zhou [*et al.*]{}, Phys. Rev. Lett. [**95**]{}, 117001 (2005).
J. Lee [*et al.*]{}, Nature [**442**]{}, 546 (2006).
G. Deutscher, Rev. Mod. Phys. [**77**]{}, 109 (2005).
C. C. Tsuei and J. R. Kirtley, Rev. Mod. Phys. [**72**]{}, 969 (2000).
C. S. Liu and W. C. Wu, Phys. Rev. B [**76**]{}, 014513 (2007).
G. Blumberg [*et al.*]{}, Phys. Rev. Lett. [**88**]{}, 107002 (2002).
M. M. Qazilbash [*et al.*]{}, Phys. Rev. B [**72**]{}, 214510 (2005).
D. G. Hawthorn [*et al.*]{}, Phys. Rev. B [**75**]{}, 104518 (2007).
M. Le Tacon, A. Sacuto, and D. Colson, Phys. Rev. B [**71**]{}, 100504 (2005).
J. P. Franck, (World Scientific Publishing Co., Singapore, 1994), p. 184.
X.-J. Chen [*et al.*]{}, Phys. Rev. B [**75**]{}, 134504 (2007).
Z. Hassan, R. Abd-Shukor, and H. A. Alwi, Int. J. Mod. Phys. B [**16**]{}, 4923 (2002).
K. Nakayama [*et al.*]{}, Phys. Rev. B [**75**]{}, 014513 (2007).
N. Kristoffel, P. Robin, and T. Ord, J. Phys.: Conf. Ser. [**108**]{}, 012034 (2008).
T. Thonhauser, H. Auer, E. Y. Sherman, and C. Ambrosch-Draxl, Phys. Rev. B [**69**]{}, 104508 (2004).
C. M. I. Okoye, Physica C [**313**]{}, 197 (1999).
S. A. Saleh, S. A. Ahmed, and E. M. M. Elsheikh, J. Supercond. Nov. Magn. [**21**]{}, 187 (2008).
M. Moreno, R. M. Méndez-Moreno, M. A. Ortíz, and S. Orozco, Mod. Phys. Lett. B [**10**]{}, 1483 (1996).
T. M. Mishonov, S. I. Klenov, and E. S. Penev, Phys. Rev. B [**71**]{}, 024520 (2005).
D. R. Harshman and A. P. Mills, Phys. Rev. B [**45**]{}, 10684 (1992).
Y. Ando [*et al.*]{}, Phys. Rev. B [**61**]{}, R14956 (2000).
A. Ino [*et al.*]{}, Phys. Rev. B [**65**]{}, 094504 (2002).
S. Orozco, M. Ortiz, R. Mendez-Moreno, and M. Moreno, Appl. Surf. Sci. [**254**]{}, 65 (2007).
S. Orozco, M. Ortiz, R. Méndez-Moreno, and M. Moreno, Physica B [**403**]{}, 4209 (2008).
H. Won and K. Maki, Phys. Rev. B [**49**]{}, 1397 (1994).
---
abstract: 'Previous studies have demonstrated that continental carbon-silicate weathering is important to the continued habitability of a terrestrial planet. Despite this, few studies have considered the influence of land on the climate of a tidally-locked planet. In this work we use the Met Office Unified Model, coupled to a land surface model, to investigate the climate effects of a continent located at the sub-stellar point. We choose to use the orbital and planetary parameters of Proxima Centauri B as a template, to allow comparison with the work of others. A region of the surface where $T_{\text{s}} > 273.15\,\text{K}$ is always retained, and previous conclusions on the habitability of Proxima Centauri B remain intact. We find that sub-stellar land causes global cooling, and increases day-night temperature contrasts by limiting heat redistribution. Furthermore, we find that sub-stellar land is able to introduce a regime change in the atmospheric circulation. Specifically, when a continent offset to the east of the sub-stellar point is introduced, we observe the formation of two mid-latitude counterrotating jets, and a substantially weakened equatorial superrotating jet.'
author:
- 'Neil T. Lewis'
- 'F. Hugo Lambert'
- 'Ian A. Boutle'
- 'Nathan J. Mayne'
- James Manners
- 'David M. Acreman'
title: 'The influence of a sub-stellar continent on the climate of a tidally-locked exoplanet'
---
Introduction {#sec:intro}
============
Beginning with the works of @1997Icar..129..450J and , 3D atmosphere general circulation models have been used to study the character of planets beyond our solar system. The output from 3D models has uncovered new dynamical regimes [see @2010exop.book..471S for a review], and has allowed characterization of the ‘habitability’ of recently discovered *terrestrial* exoplanets orbiting M-Dwarf stars such as Proxima Centauri B (, discovery: @2016Natur.536..437A) and the Trappist-1 planets (@2017ApJ...839L...1W [@2017arXiv170706927T], discovery: @2017Natur.542..456G)
M-dwarf stars are believed to make up approximately 75% of stars on the main sequence. Using a conservative definition for the habitable zone, @2015ApJ...807...45D estimate there are approximately $0.16$ Earth-size planets and $0.12$ super-Earth-size planets per M-dwarf habitable zone. As a result, habitable zone planets orbiting M-dwarf stars are expected to be numerous. M-dwarf stars are much smaller and cooler, and consequently dimmer, than the Sun, making the prospects for detecting habitable planets orbiting these stars much better than for Sun-like stars. The reduced stellar radii of M-dwarfs compared to those of G-dwarfs lead to a stronger per-transit signal. Moreover, in order to achieve potentially habitable temperatures, planets must orbit M-dwarfs much more closely than they would around a G-dwarf, leading to much shorter orbital periods and increased feasibility of repeat observations, further increasing the attraction of these targets. Due to strong tidal forces, planets orbiting at such short distances may fall into so-called ‘tidally-locked’ synchronous rotation, where the planet’s orbital period and rotation period are the same. This means that the planet has a permanent ‘day-side’ and a permanent ‘night-side’. 3D climate modeling studies of tidally-locked planets have shown that the day-night forcing difference experienced by these planets gives rise to atmospheric circulation and equilibrium climate different to that found on Earth .
Traditionally, the habitability of a planet has been assessed by whether it occupies the ‘habitable zone’ of its parent star. The habitable zone is defined as the area around a star within which a planet can orbit and host liquid water at its surface [@1993Icar..101..108K]. Its inner boundary is determined by a climate transition to either a ‘moist greenhouse’ or ‘runaway greenhouse’ state. In the former scenario, water in the stratosphere is lost by photolysis and hydrogen is lost to space. In the latter, the outgoing thermal radiation reaches an upper limit beyond which surface temperatures can rise unchecked, until a planet’s oceans are evaporated into the atmosphere [@2016ApJ...819...84K]. The outer boundary of the habitable zone is traditionally defined as the point after which surface temperatures are cool enough to allow $\text{CO}_{2}$ to condense onto the surface, removing the $\text{CO}_{2}$ greenhouse effect, thus causing global cooling and consequently global glaciation. The outer boundary can be extended if greenhouse gasses such as hydrogen are present in sufficient quantity to forgo the requirement for $\text{CO}_{2}$ greenhouse warming [@2011ApJ...734L..13P].
Recently, some studies have advanced the assessment of habitability by systematically assessing the *different* equilibrium states an individual planet could occupy. For example, @2011ApJ...726L...8P and investigate the effects of different possible atmospheric compositions on the climates of Gliese 518g [discovery: @2010ApJ...723..954V] and Proxima Centauri b (hereafter, ProC b), respectively. These studies are informed by discussion regarding the initial volatile inventory of the planet and water-loss as a result of interaction with the planet’s host star . discuss the effects of ProC b occupying a 3:2 resonant orbit with its host star, as opposed to a tidally-locked orbit on planetary climate and habitability.
To remain habitable on the order of gigayears it has been suggested that a planet requires a carbon-silicate weathering cycle, whereby it can sequester carbon in order to balance volcanic outgassing of $\text{CO}_{2}$ and ‘adapt’ to rising temperatures resulting from increased stellar irradiance as a star progresses through its main sequence life-time [@1993Icar..101..108K]. Continued habitability on the order of gigayears is important, as life may only emerge on a planet a few hundred million years subsequent to the planet’s formation. The earliest confirmed evidence for life on Earth is found to be from the Archaean, 3.5 Gyr ago, and it is thought therefore that life began at the end of the Hadean or in the early Archaean, roughly 500 Myr - 1 Gyr after the formation of the Earth [@2001Natur.409.1083N].
Silicate weathering can occur either on continents or at the sea floor. Continental weathering is facilitated by precipitation, which contains dissolved $\text{CO}_{2}$ that reacts with silicate rocks to create aqueous minerals. These minerals are then transported by surface water run-off to the ocean, where carbonates are created and subsequently buried. An increase in temperature would cause an increase in precipitation, thus increasing weathering and the $\text{CO}_{2}$ draw down rate. This, in turn, would reduce the $\text{CO}_{2}$ greenhouse effect, reducing the temperature. In this way, carbon-silicate weathering provides a stabilizing feedback that allows a planet to respond to increased insolation. For an extended description of the carbon-silicate weathering cycle the reader is invited to consult @2012ApJ...756..178A.
Sea floor weathering is thought to be weaker than continental weathering, and far less temperature dependent [@2012ApJ...756..178A]. Assuming sea floor weathering is temperature independent, @2012ApJ...756..178A find that ocean planets without a continental surface (hereafter, aquaplanets) will be unable to adapt to increased stellar irradiance, and therefore the habitable zone for such planets will be ‘dramatically narrower’ than previously thought. @2015MNRAS.452.3752K note that aquaplanets may host deep oceans, with depth in excess of $100\,\text{km}$. The pressure associated with such depth will cause the formation of ‘high-pressure water ice’ at the ocean floor, which could prevent carbon exchange with the continental crust below. This would compromise the ability of the carbon-silicate cycle to sequester carbon. With this in mind, and in light of the findings presented by @2012ApJ...756..178A, they suggest that the rate of atmospheric $\text{CO}_{2}$ draw down will be controlled by the ability of the ocean to dissolve atmospheric $\text{CO}_{2}$, as opposed to by the carbon-silicate cycle. In this scenario, where oceanic $\text{CO}_{2}$ dissolution is the primary mechanism for carbon exchange with the atmosphere, they find that the carbon cycle may actually provide a positive feedback, and thus a destabilizing effect on climate. This would serve to reduce the width of the habitable zone. It follows from the results of @2012ApJ...756..178A and @2015MNRAS.452.3752K that an exposed continental surface may be required for a planet to maintain an effective carbon-silicate cycle. Indeed, @2012ApJ...756..178A show that if a land mass is present then continental weathering can occur, with little dependence on land fraction, allowing us to retain previous habitable zone theory and limits.
It seems sensible to consider the climate dynamics of tidally-locked planets where a continent is present, yet to date few studies have done so, with the majority choosing instead to focus on aquaplanets or entirely land planets. @2003AsBio...3..415J and @2017arXiv170902051D present results for simulations where continents are introduced, but as part of broader studies into planetary climate, and so few results presented are continent specific. The aim of this study is to investigate the climate response to the introduction of a continent on a tidally-locked planet. We choose to focus on Proxima Centauri b, and assume it has an $\text{N}_{2}$ dominated atmosphere with a surface pressure of $p_{s} = 10^{5}\,\text{Pa}$ and trace $\text{CO}_{2}$ to allow easy comparison with and .
As discussed in , it has been suggested for tidally-locked planets that any large-scale gravitational anomaly, and, by extension, topographical anomaly, is likely to be aligned with the star-planet axis [@Wieczorek2007]. Therefore, one might expect a topographical basin to be located at either the anti-stellar point, or at the sub-stellar point, as is the case on the Moon [@1994Sci...266.1839Z]. This could favour the existence of any above-sea-level land at the sub- or anti-stellar point. Continental weathering requires precipitation, which falls largely near the sub-stellar point, and is only likely to be effective when land is not covered in ice, which requires $T_{\text{s}}>273.15\,\text{K}$ as is the case on the day-side of our planet . Indeed, @2012AsBio..12..562E demonstrate that carbon-silicate weathering is greatly enhanced when land is located on the day-side of a tidally-locked planet, as opposed to the night-side. Furthermore, it is at the sub-stellar point that we expect land to cause the largest change in climate from the aquaplanet scenario. With these points in mind, for this study we consider land located at the sub-stellar point.
A sub-stellar continent will throttle the supply of moisture to the atmosphere, reducing the atmospheric temperature and water vapor, and the occurrence of cloud and precipitation. Notwithstanding reduction in cloud, these effects are expected to reduce surface temperatures globally through a reduction in the water vapor greenhouse effect, and to reduce the efficiency of moist atmospheric heat transport. On a rapidly, non-synchronously rotating planet such as Earth, a drier atmosphere, and thus a reduced capacity for moist atmospheric heat transport, serves to reduce meridional heat transport, cooling polar regions [as presented in @2013JGRD..11810414C their Figure 4]. On a tidally-locked planet, we expect that reducing the efficiency of moist atmospheric heat transport will cool the night-side, and so increase the day-night temperature contrast. Even though global mean surface temperatures may be reduced, near the sub-stellar point we expect that a continent will have greater surface temperature than an ocean surface due to reduced cloud coverage and reduced evaporative cooling . @2010GeoRL..3718811S demonstrate that the temperature/pressure field on a tidally-locked planet is set by an atmospheric wave response to longitudinally asymmetric surface heating. For planets large enough to contain a planetary scale Rossby wave, this can induce equatorial superrotation , which can dominate the circulation on tidally-locked planets. The location and extent of any land surface on the day-side will modify the location and amplitude of the sub-stellar surface heat flux, and so may alter the large-scale circulation of the atmosphere.
To conduct our investigation into continent-climate interaction on a tidally-locked planet, we run climate simulations using the Met Office Unified Model, a three-dimensional atmospheric General Circulation Model, coupled to a land surface model, complete with fully interactive hydrology, and a single-layer slab ocean model. In Section \[sec:model\] we describe our model set-up. Our results are presented in Section \[sec:land\]. In Section \[sec:globalresponse\] we investigate the response of primary climate diagnostics to a box-continent centered at the sub-stellar point, where we discover the introduction of a sub-stellar continent generally results in a cooler climate. In Section \[sec:daynight\] we consider the influence of sub-stellar land on heat redistribution and day-night temperature contrasts. We find that continents can impede the ability of the atmosphere to redistribute heat to the night-side, resulting in increased day-night temperature contrasts. In Section \[sec:circulation\], we investigate the effect of sub-stellar land on the large-scale circulation. For continents offset to the east of the substellar point, we observe a regime change in the atmospheric circulation, namely a weaker superrotating jet and the appearance of two counterrotating mid-latitude jets. Discussion is presented in Section \[sec:discuss\], where we compare our work to other studies that have investigated heat redistribution on tidally-locked planets [e.g. @2014ApJ...784..155Y; @2015ApJ...806..180W]. We also discuss the implications of our findings for the potential habitability of tidally-locked terrestrial exoplanets, and make comments regarding the observability of a sub-stellar land mass, relevant to future work within the field. Finally, our conclusions are summarized in Section \[sec:conclusions\].
Model Framework {#sec:model}
===============
General Circulation Model {#sec:method}
-------------------------
We make use of the Global Atmosphere 7.0 [@Walters2017] configuration of the Met Office Unified Model (UM) to simulate the atmosphere. The UM is a 3D atmospheric General Circulation Model (GCM) that solves the fully compressible, deep-atmosphere, non-hydrostatic, Navier-Stokes equations using a semi-implicit, semi-Lagrangian approach.
Sub-grid scale processes are parametrized as follows; boundary layer turbulence, including non-local transport of heat and momentum, follows @2000MWRv..128.3187L and @2008BoLMe.128..117B; cumulus convection uses a mass-flux approach based on @1990MWRv..118.1483G that includes re-evaporation of falling precipitation; multi-phase $\text{H}_{2}\text{O}$ cloud condensate and fraction amounts are treated prognostically following @2008QJRMS.134.2093W and ice and liquid precipitation formation is based on @1999QJRMS.125.1607W and @2014MWRv..142.1655B respectively. Radiative transfer is handled by the SOCRATES[^1] scheme [described in @Walters2017 Section 2.3] which makes use of 6 “shortwave" bands (0.2 - 10$\,\mu$m) to treat incoming stellar radiation, and 9 “longwave" bands (3.3$\,\mu$m - 10mm) for thermal emission from the planet. A correlated-*k* technique is applied. All schemes are considerably improved from their original documentation, and the current incarnations are summarized in @2017GMD....10.1487W, @Walters2017, and references therein.
[l|l]{} &\
Semi-major axis (AU) & 0.0485\
Stellar irradiance, $S$ (W m$^{-2}$) & 881.7\
Orbital period (Earth days) & 11.186\
Rotation rate, $\Omega$ (rad s$^{-1}$) & $6.501\times10^{-6}$\
Eccentricity & 0.0\
Obliquity & 0.0\
&\
$r_{\text{p}}$ (km) & 7160\
$g$ (m s$^{-2}$) & 10.9\
*($\text{N}_{2}$-dominated)* &\
$R$ (J kg$^{-1}$ K$^{-1}$)& 297.0\
$c_{\text{p}} $ (J kg$^{-1}$ K$^{-1}$) & 1039.0\
CO$_{2}$ Mass mixing ratio (kg kg$^{-1}$) & $5.941\times10^{-4}$\
Mean surface pressure, $p_{0}$ (Pa) & $10^{5}$\
The UM has been adapted to simulate a wide range of planetary climates, including those of both gas giant planets and terrestrial planets . For this work, the model is configured to simulate the climate of a tidally-locked terrestrial exoplanet. We choose to use the orbital and planetary parameters of ProC b, following , based on the best estimates provided by @2016Natur.536..437A and , as a ‘template’. The stellar spectrum is from BT-Settl with $T_{\text{eff}} = 3000\,\text{K}$, $g = 1000\,\text{m\,s}^{-2}$ and $\text{metallicity} = 0.3\,\text{dex}$, following .
We choose a model resolution of 2.5$^{\circ}$ longitude by 2$^{\circ}$ latitude, with 38 levels in the vertical extending from the surface to the top-of-atmosphere (40km), quadratically stretched to enhance resolution near the surface. The model timestep is $1200\,\text{s}$. Model parameters are presented in Table \[tab:params\].
As far as the atmosphere is concerned, our setup is identical to that used for the nitrogen-dominated ProC b simulations presented in . Whilst we keep our discussion general to any tidally-locked terrestrial exoplanet, this choice allows the reader to make an easy comparison with the results of , and @2017arXiv170902051D.
The surface boundary condition
------------------------------
The principal aim of this study is to investigate the effect of a sub-stellar continent on the climate of tidally-locked terrestrial exoplanets. To do this, we use the Joint UK Land Environment Simulator [JULES, @2011GMD.....4..677B] to represent land covered surface, in conjunction with a single layer ‘slab’ model based on @2006JAtS...63.2548F to represent ocean covered surface.
For the slab ocean, we choose a heat capacity of $10^{8}\,\text{J}\,\text{K}^{-1}\,\text{m}^{-2}$, which is representative of an ocean surface with a 24m mixed layer. This choice, which differs from that made in (2.4m mixed layer), is designed to make a distinction between the heat capacity of land and ocean. In our simulations, ice-free ocean has an albedo of $\alpha_{\text{min}} = 0.07$. We decide to include a simple representation of sea-ice following the ‘HIRHAM’ parametrization presented in @2007IJCli..27...81L [based on @1996JGR...10123401D], which changes the surface albedo of the ocean if surface temperatures fall below a critical temperature, $T_{\text{c}} = 271\,\text{K}$. Below $T_{\text{c}}$, the ocean surface albedo is given by: $$\label{eq:albedo}
\alpha_{\text{o}} = \alpha_{\text{max}}-\exp(-(T_{\text{c}}-T_{\text{s}})/2)\cdot(\alpha_{\text{max}}-\alpha_{\text{min}}),$$ where $T_{\text{s}}$ is surface temperature, $\alpha_{\text{max}} = 0.27$ is the maximum ice-albedo, and $\alpha_{\text{min}} = 0.07$ is the ice-free ocean albedo. We choose $\alpha_{\text{max}} = 0.27$ based on the mean bolometric albedo for ice calculated in for a planet with ProC b’s incident stellar spectrum. For surface temperatures greater than $T_{\text{c}}$, $\alpha_{\text{o}}=\alpha_{\text{min}}=0.07$. Unlike in , ocean albedo is spectrally independent, to retain consistency with the simplicity of the sea-ice parametrization.
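A direct transcription of this sea-ice albedo parametrization, Eq. (\[eq:albedo\]) together with the ice-free branch, might look as follows; the function name and the vectorized form are our own.

```python
import numpy as np

def ocean_albedo(T_s, T_c=271.0, alpha_min=0.07, alpha_max=0.27):
    """Ocean surface albedo: ice-free value above T_c, and the exponential
    approach to alpha_max of Eq. (eq:albedo) below T_c."""
    T_s = np.asarray(T_s, dtype=float)
    cold = alpha_max - np.exp(-(T_c - T_s) / 2.0) * (alpha_max - alpha_min)
    return np.where(T_s < T_c, cold, alpha_min)

# example: the albedo ramps from 0.07 at T_c towards ~0.27 for much colder surfaces
print(ocean_albedo([280.0, 271.0, 265.0, 250.0]))
```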
[l|lll]{} Homogeneous & & &\
*Aqua* & N/A & N/A & 0.0\
Centred & & &\
*B1* & (-15,15) & (-28.75,28.75) & 4.6%\
*B2* & (-19,19) & (-36.25,36.25) & 7.1%\
*B3* & (-25,25) & (-46.25,46.25) & 11.6%\
*B4* & (-29,29) & (-56.25, 56.25) & 16.6%\
*B5* & (-35,35) & (-66.25,66.25) & 22.0%\
*B6* & (-39,39) & (-73.75,73.75) & 26.8%\
*B7* & (-45,45) & (-86.25,86.25) & 34.4%\
*B8* & (-49,49) & (-93.75,93.75) & 39.9%\
East-offset & & &\
*E2* & (-19,19) & (1.25,73.75) & 7.1%\
*E4* & (-29,29) & (1.25,113.75) & 16.6%\
*E6* & (-39,39) & (1.25,148.75) & 26.8%\
For each column in the UM where land is present, JULES is used to simulate the surface boundary. There are four soil layers (labelled 1 to 4, layer 1 closest to the surface) with a thickness of 0.1, 0.25, 0.65 and 2m, for layers 1 to 4 respectively, between which soil water and heat fluxes are calculated. JULES operates a tile approach, with the surface of each land point subdivided into five types of vegetation and four non-vegetated surface types. For our simulations, we set all grid points to have 100% bare soil coverage, where the soil is given a sandy composition. We initialize soil moisture to be 0.2, 0.5, 1.2 and 3.8kgm$^{-2}$ for layers 1 - 4 respectively. The maximum soil moisture that can be reached is defined by the volumetric soil moisture content at saturation, which is set to 0.363 based on Earth-like parameters for soil with a sandy composition. The maximum soil water content in a given layer is therefore $\rho_{\text{H}2\text{O}}\cdot0.363\cdot h_{\text{l}}\,\text{kg\,m}^{-2}$ where $\rho_{\text{H}2\text{O}}$ is the density of water and $h_{\text{l}}$ the layer depth. Evaporation from the soil is permitted; transpiration is not as there is no vegetation. Excess water is removed as surface run-off and is assumed to return to the ocean. For simplicity, land is assumed to be flat, with constant sea-level altitude. Therefore, whilst the roughness length ($10^{-3}\,\text{m}$) is higher than that of the ocean surface, and will exert an additional drag force on the near-surface flow, we do not introduce large perturbations to the flow associated with gravity wave drag, flow blocking or orographic roughness. A full description of JULES, documenting processes including surface exchange, soil fluxes and run-off, can be found in @2011GMD.....4..677B.
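For reference, the saturation water content implied by these numbers, $\rho_{\text{H}2\text{O}}\cdot0.363\cdot h_{\text{l}}$, can be tabulated directly; this is simple bookkeeping rather than model code.

```python
rho_water = 1000.0                         # kg m^-3
theta_sat = 0.363                          # volumetric soil moisture at saturation
layer_depths = [0.1, 0.25, 0.65, 2.0]      # m, JULES soil layers 1-4
initial_moisture = [0.2, 0.5, 1.2, 3.8]    # kg m^-2, the initialisation used here

for h, m0 in zip(layer_depths, initial_moisture):
    m_max = rho_water * theta_sat * h      # rho_H2O * 0.363 * h_l
    print(f"layer {h} m: saturation {m_max:.1f} kg m^-2, initialised at {m0} kg m^-2")
```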
Without observational constraints, our choice for the land surface albedo, $\alpha_{\text{ls}}$, is informed by values found for present-day Earth. On Earth, desert land has an albedo $\alpha_{\text{ls}} = 0.42$ in the 0.85 - 4$\mu$m wavelength range, and $\alpha_{\text{ls}} = 0.5$ in the 0.7 - 0.85$\mu$m range [@Coakley2003]. Most of the stellar radiation incident on ProC b falls within this range . In reality, much of Earth’s land surface has a lower albedo due to the presence of vegetation. Recognising that $0.5$ is a high value to choose for land surface albedo, we choose $\alpha_{\text{ls}} = 0.4$. We note that this choice is still significantly higher than the value of $\alpha_{\text{ls}} = 0.2$ used in the ‘dry planet’ simulation of and simulations with land in @2003AsBio...3..415J, however it is consistent with our assumption that the soil has a sandy composition.
In order to investigate the impact of sub-stellar land on the climate, we run simulations for three surface ‘configurations’. We use an aquaplanet simulation, *Aqua*, as our control. We run eight simulations where a box-continent of varying extent is centered at the sub-stellar point. These simulations are named *B1, B2, B3, B4, B5, B6, B7, B8*, hereafter *B*(1-8), to convey increasing size. Additionally, we run three simulations where a continent is introduced with its western coastline located at $\phi = 1.25^{\circ}$, so that the centre of the continent is offset to the east of the sub-stellar point. These simulations are named *E2, E4, E6*, to correspond with the sizes of the ‘*B*’ configurations, i.e. *B2* and *E2* are the same size. Continent bounds and fractional surface coverage for each configuration are presented in Table \[tab:continents\].
Results {#sec:land}
=======
As detailed in the introduction, the results of this study are presented in three parts. In Section \[sec:globalresponse\] we investigate the response of primary climate diagnostics such as evaporation, precipitation and surface temperature, to box continents symmetric about the sub-stellar point. In Section \[sec:daynight\] we describe the effect of variations in water vapor availability on the planetary energy budget and heat redistribution. In Section \[sec:circulation\] we examine the response of the large-scale circulation to sub-stellar land. In addition to considering continents symmetric about the sub-stellar point, we introduce a continent whose center is offset to the east of the sub-stellar point.
All simulations are run for 20 Earth years, and the results presented are temporal mean values over the final five years of simulation, unless stated otherwise.
Primary climate diagnostics {#sec:globalresponse}
---------------------------
Figure \[fig:globvals\] (a) presents global mean evaporation for the aquaplanet, *Aqua*, and box-continent, *B*(1-8), simulations. We find that with increasing continent extent, surface evaporation decreases. The *Aqua* simulation has a global mean evaporation $E = 0.78\text{\,mm\,day}^{-1}$. When the *B1* continent is introduced, which covers only 5% of the planet’s surface, mean evaporation is reduced relative to the *Aqua* simulation to $E = 0.65\text{\,mm\,day}^{-1}$. For the *B2* continent, which covers 7% of the total surface, mean evaporation falls further to $E = 0.53\text{mm day}^{-1}$. When land covers most of the day-side, global mean evaporation is close to zero. For example, $E = 0.08\text{\,mm\,day}^{-1}$ for the *B8* simulation where roughly 75% of the day-side surface is land. As our simulations are in equilibrium, reduced global mean evaporation is reflected by an equivalent reduction in global mean precipitation.
Reduced evaporation is associated with a reduction in atmospheric water vapor and cloud water content. In Figure \[fig:globvals\] (d) we present global mean column-integrated water vapor, $\hat{q}_{\text{v}}$, cloud liquid water, $\hat{q}_{\text{cl}}$, and cloud frozen water, $\hat{q}_{\text{cf}}$, where $\hat{q}$ is defined as: $$\hat{q} =\int_{0}^{\infty} \rho qdz,$$ where $z$ is height in m, and $\rho$ is density in $\text{kg\,m}^{-3}$, so that $\hat{q}\in\{\hat{q}_{\text{v}},\hat{q}_{\text{cl}},\hat{q}_{\text{cf}}\}$ has units of $\text{kg\,m}^{-2}$. Through comparison with @2013JGRD..11810414C we find that a sub-stellar box-continent on a tidally-locked planet has a far greater effect on atmospheric water vapor content than equatorial land on a rapidly rotating planet. @2013JGRD..11810414C find the introduction of an equatorial continent in simulations of the Archean Earth ($(\lambda_{\text{min}},\lambda_{\text{max}}) = (-38,38)$, $(\phi_{\text{min}},\phi_{\text{max}}) = (-56,56)$) results in a 17.5% reduction in $\hat{q}_{\text{v}}$ with respect to an otherwise identical aquaplanet simulation. The continents introduced in the *B4* and *B5* simulations are the most comparable in size to the @2013JGRD..11810414C equatorial super-continent. These simulations see reductions in $\hat{q}_{\text{v}}$ with respect to the *Aqua* simulation of 63% and 67% respectively, far greater than the reduction found for the Archean Earth supercontinent. This is because, as discussed in , evaporation on a tidally-locked planet, with which reduced $\hat{q}_{\text{v}}$ is associated, is largely restricted to a region local to the sub-stellar point where surface temperature is great enough to permit it. To achieve a similar reduction in $\hat{q}_{\text{v}}$ on a rapidly rotating planet, we would need to introduce land to a significant portion of the equator (the evaporating region), which would require a significantly larger land mass. Our smallest box-continent (*B1* simulation), which covers just 4.6% of the surface, sees a similar reduction (20%) in $\hat{q}_{\text{v}}$ to the @2013JGRD..11810414C super-continent that has a surface coverage roughly four times greater.
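The column-integrated quantities $\hat{q}$ defined above are straightforward to diagnose from model output; a minimal sketch using the trapezoidal rule on the model's height levels is given below (the GCM's own diagnostic may differ in detail, and the profiles used here are purely illustrative).

```python
import numpy as np

def column_integral(rho, q, z):
    """q_hat = integral of rho * q dz  (kg m^-2), approximated with the
    trapezoidal rule on the vertical levels z (all arrays of equal length)."""
    return np.trapz(np.asarray(rho) * np.asarray(q), np.asarray(z))

# toy example: exponentially decaying density and humidity profiles on 38 levels
z = np.linspace(0.0, 4.0e4, 38)           # heights up to 40 km
rho = 1.2 * np.exp(-z / 8.0e3)            # kg m^-3 (illustrative scale height)
q_v = 5.0e-3 * np.exp(-z / 2.0e3)         # kg kg^-1 (illustrative)
print(column_integral(rho, q_v, z))       # column water vapor in kg m^-2
```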
Maps of cloud coverage and surface precipitation for the *Aqua*, *B2* and *B8* simulations are presented in Figure \[fig:precip\]. For simulations with a box continent, precipitation is focussed on a narrow band near the center of the continent. On a tidally-locked planet, water vapor is transported towards the sub-stellar point, where precipitation is maximal, from a surrounding evaporative ring by strong boundary layer convergence. Once near the sub-stellar point, strong surface heating induces deep convection that forces water vapor upwards resulting in it being precipitated out . We have found that this process is insensitive to the introduction of a sub-stellar continent, meaning that the center of a continent can be very wet, as rain always preferentially falls near the sub-stellar point. This is important as continental silicate weathering requires precipitation to fall over land.
We note that on a rapidly, non-synchronously, rotating planet such as Earth, where there is no preferred longitude for stellar heating and so no preferred longitude for convection and precipitation, the distribution of precipitation is not insensitive to the introduction of a land surface. This is in contrast with our tidally-locked simulations. On Earth, high land surface temperatures lead to low relative humidity (RH). Tropical rain preferentially falls over regions of highest RH [@2017JCli...30.4527L], which can result in large equatorial continents with low RH becoming desert regions which see minimal rainfall [@2013JGRD..11810414C]. Reductions in rainfall over continental regions are then compensated for by increases elsewhere, such as over ocean regions.
The eastern and western flanks of a sub-stellar continent, which fall outside of the precipitating band, are dry desert-like regions, which see precipitation of less than $0.125\text{\,mm\,day}^{-1}$. As continent extent increases, precipitation reduces in intensity and becomes increasingly focussed on the sub-stellar point, significantly increasing the extent of the desert regions and reducing the extent of cloud coverage overhead. We note that in the *B1* and *B2* simulations, night-side cloud coverage is increased with respect to the *Aqua* simulation. This is associated with an increase in night-side specific humidity which occurs in spite of reduced global mean specific humidity.
Global mean surface temperatures are presented in Figure \[fig:globvals\] (c). Generally, the presence of a continent implies a reduction in global mean surface temperatures. The simulation with the largest box continent, *B8*, has a mean surface temperature $8\,\text{K}$ lower than the *Aqua* simulation. Despite globally cooler temperatures, a ‘habitable region’ (where $T > 273.15\,\text{K}$) is always retained. In all of our simulations $\approx 20\%$ of the surface area meets this criterion, with a maximum of 21% for the aquaplanet simulation, and minimum values of 17% and 18% for the *B1* and *B2* simulations, respectively. We find that global mean surface temperature falls due to a reduction in longwave ($LW$) absorption by the atmosphere (i.e. the greenhouse effect), $LW_{\text{abs}} = LW_{\uparrow,\text{surface}} - LW_{\uparrow,\text{TOA}}$, due to reductions in both atmospheric water vapor and cloud coverage (TOA denotes top-of-atmosphere). $LW_{\text{abs}}$ is presented in Figure \[fig:globvals\] (b). Increased land surface albedo ($\alpha_{\text{ls}} = 0.4$, $\alpha_{\text{o}} = 0.07$) does little to cool the planet, as it is offset by a reduction in the albedo of the atmosphere due to reduced cloud coverage. Planetary albedo remains roughly constant ($0.33\pm0.01$ for all simulations). Reduced cloud coverage serves to increase continental surface temperatures, as less radiation is absorbed and reflected in the atmosphere allowing more to be absorbed at the surface. This means that, in contrast with reduced global mean surface temperatures, surface temperatures local to the sub-stellar point increase when compared to the aquaplanet simulation. To measure the trend in sub-stellar surface temperature, in Figure \[fig:globvals\] (f) we present temperatures averaged over the area of the *B1* continent.
We note that for simulations with smaller continents (surface coverage $< 15\%$) there is little change in global mean surface temperature. However, once continent extent increases further, surface temperature falls monotonically. To understand variation in global mean temperature it is useful to consider day-side and night-side mean temperatures separately.
Heat redistribution and day-night contrasts {#sec:daynight}
-------------------------------------------
Figure \[fig:globvals\] (e,g,h) presents surface and tropospheric temperatures for both the day-side and the night-side. Here tropospheric temperature is defined as the mass-weighted average temperature in the troposphere: $$\label{eq:Ttrop}
T_{\text{trop}} = \frac{\int^{z_{\text{trop}}}_{0}\rho T \, dz}{\int^{z_{\text{trop}}}_{0}\rho \, dz},$$ where $z_{\text{trop}} = 15000\,\text{m}$ is the tropopause height and $T$ is temperature. We find that day-side surface temperatures for simulations with land remain roughly unchanged from the aquaplanet case. There is strong coupling between day-side surface and tropospheric temperature resulting from the efficient maintenance of a moist adiabat by convection. This means that, similar to day-side surface temperatures, day-side tropospheric temperature, $T_{\text{trop,ds}}$, exhibits little deviation from the aquaplanet case upon introduction of a continent. On the night-side, whilst variation in $T_{\text{trop,ns}}$ is minimal and closely follows $T_{\text{trop,ds}}$, surface temperatures exhibit larger variation and generally fall. There is a $13\,\text{K}$ drop in mean night-side surface temperature from the *Aqua* simulation to the *B8* simulation. It is apparent that the variation in global mean surface temperatures is dominated by changes in night-side surface temperature.
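Equation \[eq:Ttrop\] is a density-weighted vertical average, which can be approximated on discrete model levels as in the short Python sketch below (the density and temperature profiles are idealised placeholders rather than actual model fields, with the tropopause fixed at 15 km as in the text):

```python
import numpy as np

z_trop = 15000.0                     # assumed tropopause height [m]
z   = np.linspace(0.0, z_trop, 31)   # model levels [m] (illustrative)
rho = 1.2 * np.exp(-z / 8000.0)      # idealised density profile [kg m^-3]
T   = 260.0 - 6.0e-3 * z             # idealised temperature profile [K]

# Mass-weighted tropospheric temperature (Equation eq:Ttrop)
T_trop = np.trapz(rho * T, z) / np.trapz(rho, z)
print(f"T_trop = {T_trop:.1f} K")
```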
The close-coupling between day-side and night-side tropospheric temperatures must be maintained by the rapid transport of heat from the day-side to the night-side. To investigate this, we consider radiative and advective timescales. We define a radiative timescale, $\tau_{\text{rad}}$: $$\label{eq:taurad}
\tau_{\text{rad}} = \frac{c_{\text{p}}p}{g\sigma T^{3}},$$ following @1989artb.book.....G and @2013cctp.book..277S. $p = 300\,\text{hPa}$ is the pressure at the jet-height, $T = 230\,\text{K}$ is a temperature typical of the night-side troposphere (see Figure \[fig:globvals\] e), $g$ is acceleration due to gravity, and $c_{\text{p}}$ is atmospheric specific heat capacity at constant pressure (see Table \[tab:params\]). If we additionally define an advective timescale, $\tau_{\text{adv}}$: $$\label{eq:tauadv}
\tau_{\text{adv}} = \frac{\pi(r_{\text{p}}+h_{\text{jet}})}{U},$$ where $r_{\text{p}}$ is the planetary radius (see Table \[tab:params\]), $h_{\text{jet}} = 8000\,\text{m}$ is the height of the equatorial jet, and $U = 30\,\text{m\,s}^{-1}$ is taken as the jet-speed, then comparison between the two timescales yields $\tau_{\text{rad}}/\tau_{\text{adv}} = 5.52 > 1$. This means that heat is transported to the night-side faster than it is radiated away to space, and a weak day-night atmospheric temperature gradient is maintained by advection. We note that latent heat transport to the night-side is negligible for all simulations. Night-side advective and latent heating temperature increments are presented in Figure \[fig:ns-heating\].
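The timescale comparison follows directly from Equations \[eq:taurad\] and \[eq:tauadv\]. In the minimal sketch below, the planetary constants ($c_{\text{p}}$, $g$, $r_{\text{p}}$) are assumed values representative of the configuration in Table \[tab:params\]; under these assumptions the ratio comes out close to the quoted 5.5:

```python
import numpy as np

sigma = 5.67e-8     # Stefan-Boltzmann constant [W m^-2 K^-4]
c_p   = 1039.0      # specific heat capacity of N2-dominated air [J kg^-1 K^-1] (assumed)
g     = 10.9        # surface gravity [m s^-2] (assumed)
r_p   = 7.160e6     # planetary radius [m] (assumed)

p     = 300.0e2     # jet-level pressure [Pa]
T     = 230.0       # night-side tropospheric temperature [K]
h_jet = 8000.0      # equatorial jet height [m]
U     = 30.0        # jet speed [m s^-1]

tau_rad = c_p * p / (g * sigma * T**3)     # Equation eq:taurad
tau_adv = np.pi * (r_p + h_jet) / U        # Equation eq:tauadv
print(f"tau_rad / tau_adv = {tau_rad / tau_adv:.2f}")   # ~5.5
```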
On the night-side of the planet, there is no incident stellar radiation and so the only heat source is atmospheric heat transport from the day-side. The night-side surface receives a portion of this as radiation from the night-side atmosphere. Energy balance, therefore, dictates that night-side surface temperatures are given by: $$\label{eq:nsT}
\epsilon_{\text{s}}\sigma T^{4}_{\text{ns}} = \epsilon_{\text{na}}\sigma T^{4}_{\text{na}}+F_{\text{n,turb}},$$ where $\sigma$ is the Stefan-Boltzmann constant, $T_{\text{ns}}$ is night-side surface temperature, $\epsilon_{\text{s}}$ is surface emissivity and is a model parameter (0.985 for ocean, 0.9 for land), $T_{\text{na}}$ is night-side atmospheric temperature, $\epsilon_{\text{na}}$ is night-side atmospheric emissivity, a quantity that describes the atmosphere’s ability to radiate heat, and $F_{\text{n,turb}}$ is the sum of the turbulent latent and sensible heat fluxes, which remains small in all of our simulations ($F_{\text{n,turb}} = 1.4\pm0.6\,\text{W\,m}^{-2}$) when compared to the radiative fluxes ($\epsilon_{\text{s}}\sigma T^{4}_{\text{ns}},\ \epsilon_{\text{na}}\sigma T^{4}_{\text{na}} \approx 70\,\text{W\,m}^{-2}$).
In Figure \[fig:eps\] we present solutions to Equation \[eq:nsT\] for $T_{\text{ns}}$ on lines of constant $\epsilon_{\text{na}}$, where we use a constant value of $1.4\,\text{W\,m}^{-2}$ for $F_{\text{n,turb}}$. We then over-lay the results from our simulations using night-side mean surface temperature as an estimate for $T_{\text{ns}}$ and night-side mean tropospheric temperature as an estimate for $T_{\text{na}}$, and use Equation \[eq:nsT\] to find $\epsilon_{\text{na}}$.
Equation \[eq:nsT\] tells us that night-side surface temperatures are dependent on night-side atmospheric temperatures and night-side emissivity. In our simulations, night-side tropospheric temperatures are strongly coupled to day-side tropospheric temperatures (see Figure \[fig:globvals\] e). Day-side tropospheric temperatures stay roughly constant, so whilst there is some cooling in the night-side, tropospheric temperatures do not deviate far from the aquaplanet case, as a result of rapid advective heat transport. Solving Equation \[eq:nsT\], using $T_{\text{na}} = T_{\text{trop,ns}}$, we find $\epsilon_{\text{na}}$ is not constant for our simulations (see Figure \[fig:eps\]). Night-side atmospheric temperature variation alone is not substantial enough to result in the surface temperature variation observed in our simulations. Instead, it is apparent that variation in $\epsilon_{\text{na}}$ is largely responsible.
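The diagnosed $\epsilon_{\text{na}}$ values are obtained by a simple rearrangement of Equation \[eq:nsT\]. A minimal sketch of this inversion is given below; the temperature estimates are placeholder values of the same order as those in Figure \[fig:globvals\], not the actual simulation means:

```python
sigma  = 5.67e-8    # Stefan-Boltzmann constant [W m^-2 K^-4]
eps_s  = 0.985      # surface emissivity (ocean value)
F_turb = 1.4        # night-side turbulent heat flux [W m^-2]

# Placeholder estimates of night-side mean surface and tropospheric temperature [K]
T_ns, T_na = 190.0, 215.0

# Rearranged Equation eq:nsT: eps_na = (eps_s*sigma*T_ns**4 - F_turb) / (sigma*T_na**4)
eps_na = (eps_s * sigma * T_ns**4 - F_turb) / (sigma * T_na**4)
print(f"night-side atmospheric emissivity ~ {eps_na:.2f}")
```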
Atmospheric emissivity appears to be positively correlated with atmospheric water content. For our simulations with ‘large’ continents (surface fraction $> 15\%$), reduced day-side evaporation is associated with the reduced transport of moisture to the night-side. As a result, the night-side atmosphere becomes drier, and is less able to absorb and radiate heat. This means that the night-side atmosphere receives a smaller fraction of the energy transported from the day-side, and a correspondingly smaller fraction is passed to the night-side surface through longwave radiation from the atmosphere. Consequently, night-side surface temperatures are reduced. For the smaller *B1* and *B2* simulations, where the atmosphere remains relatively wet whilst maintaining increased sub-stellar surface temperatures with respect to the *Aqua* simulation, more water vapor and cloud water are transported to the night-side, increasing night-side emissivity. As a result, night-side surface temperatures are increased.
In Figure \[fig:merplot\] we present meridionally averaged surface temperature and specific humidity (SH). On the night-side there is a clear correlation between surface temperature and SH. For the larger continents investigated (*B7* provides an example) we find that reduced night-side SH results in reduced surface temperatures, particularly in the region $180^{\circ}\le\phi\le270^{\circ}$ where the reduction in atmospheric water vapor content between the continental and aquaplanet cases is at its maximum. Meanwhile, for the *B1* and *B2* simulations (*B2* is presented in Figure \[fig:merplot\]), we observe that where night-side specific humidity is increased with respect to the *Aqua* simulation, so is night-side surface temperature. In the case of both ‘larger’ and ‘smaller’ continents, variation in night-side surface temperature is dominated by variation in night-side atmospheric emissivity, $\epsilon_{\text{na}}$, caused by variation in atmospheric water vapor and cloud water content.
On the day-side, variation in surface temperature is governed by several processes. The introduction of a continent reduces evaporation and so cloud coverage overhead, which increases the radiation incident on the surface below, allowing continental surface temperatures to rise. However, reduced cloud coverage and atmospheric water vapor content also reduce the day-side greenhouse effect, allowing more radiation from the surface to escape to space. For each of our simulations, whether the day-side temperature increases or decreases is determined by the competition between these two effects and by the amount of heat lost to the night-side. For the majority of our experiments, the day-side surface temperature is reduced, although the change is always small.
The day-side surface energy balance is given by $$\label{eq:dsT}
\epsilon_{\text{s}}\sigma T^{4}_{\text{ds}} = \epsilon_{\text{da}}\sigma T^{4}_{\text{da}}+F_{\text{turb,d}} + \frac{S}{2}(1-\alpha_{\text{p}}),$$ where the terms follow the same notation convention as used in Equation \[eq:nsT\], with the subscript $d$ denoting a day-side quantity. $S$ is the top of the atmosphere stellar flux, and $\alpha_{\text{p}}$ is the planetary albedo. Unlike on the night-side, the day-side surface is heated by both stellar radiation *and* longwave radiation from the atmosphere. Consulting Figure \[fig:merplot\], on the day-side we observe that SH, and so atmospheric emissivity, has little influence on surface temperatures. To first-order, this is because the stellar radiation absorbed by the surface, $\frac{S}{2}(1-\alpha_{\text{p}})$, is comparatively much larger than the radiation received by the surface from the atmosphere. Therefore any variation in longwave radiation emitted to, and absorbed by, the surface comprises a much smaller fraction of the surface energy balance than on the night-side, where longwave radiation from the atmosphere is the only significant source of surface heating. Towards the terminators on the day-side, where $S$ provides a smaller contribution and heating from the atmosphere is more important, the drier atmosphere means that surface temperatures for simulations with continents fall below aquaplanet surface temperatures.
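To first order, the relative size of the terms in Equation \[eq:dsT\] can be checked with representative numbers. In the sketch below the stellar flux, emissivity and temperature values are assumptions of the right order of magnitude rather than diagnosed model values; they illustrate why the stellar term dominates the day-side balance:

```python
sigma   = 5.67e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
S       = 881.7     # top-of-atmosphere stellar flux [W m^-2] (assumed)
alpha_p = 0.33      # planetary albedo (roughly constant across simulations)

T_da    = 235.0     # day-side atmospheric temperature [K] (placeholder)
eps_da  = 0.7       # day-side atmospheric emissivity (placeholder)

stellar_term    = 0.5 * S * (1.0 - alpha_p)    # absorbed stellar radiation
atmosphere_term = eps_da * sigma * T_da**4     # downwelling longwave from the atmosphere
print(f"stellar: {stellar_term:.0f} W m^-2, atmospheric LW: {atmosphere_term:.0f} W m^-2")
```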
We have found that night-side atmospheric water content is important in determining the redistribution of heat from the day-side to the night-side. A useful metric to quantify this is the day-night heat redistribution efficiency, defined as the ratio of mean night-side outgoing longwave radiation (OLR) to mean day-side OLR, $\eta$, following . $\eta$ is presented in Figure \[fig:eta\], both as a function of continent extent (top panel) and night-side specific humidity (bottom panel). As specific humidity is reduced, so is redistribution efficiency. The one exception to this is the *B2* simulation; however, this simulation has more night-side cloud than the *B1* simulation. This contributes to increased night-side emissivity, which results in a slightly higher redistribution efficiency. The two smallest continental configurations, *B1* and *B2*, increase the redistribution efficiency. Increased $\eta$ for these simulations may explain why they have slightly cooler day-side surface temperatures (see Figure \[fig:globvals\] g). In this scenario, the night-side atmosphere is better at radiating energy to space, which requires that more heat is transported to it from the day-side, resulting in cooler day-side temperatures [@2014ApJ...784..155Y]. Similarly, we can see that reduced $\eta$ for larger continents may assist in stabilizing day-side temperatures and prevent them from deviating further from the *Aqua* simulation, as less heat is transported to the night-side atmosphere.
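For reference, $\eta$ is diagnosed as an area-weighted hemispheric ratio of OLR. The short sketch below uses a synthetic OLR field on an illustrative lat-lon grid (not model output) purely to show the calculation:

```python
import numpy as np

# Illustrative lat-lon grid; sub-stellar point at lon = 0, night-side |lon| > 90 deg
lat = np.deg2rad(np.linspace(-89.0, 89.0, 90))
lon = np.deg2rad(np.linspace(-179.0, 179.0, 180))
LON, LAT = np.meshgrid(lon, lat)

# Synthetic OLR field [W m^-2]: higher on the day-side, lower on the night-side
olr = 200.0 + 60.0 * np.cos(LON) * np.cos(LAT)

w     = np.cos(LAT)                 # area weights
day   = np.abs(LON) <= np.pi / 2.0
night = ~day

eta = np.average(olr[night], weights=w[night]) / np.average(olr[day], weights=w[day])
print(f"eta = {eta:.2f}")
```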
In spite of reduced night-side temperatures, no night-side location ever experiences temperatures less than $125\,\text{K}$, the temperature where $\text{CO}_{2}$ could condense onto the surface . This means that the $\text{CO}_{2}$ greenhouse effect is retained, which helps the planet to avoid transition into a ‘snowball’ state where surface temperatures are below $273.15\,\text{K}$ everywhere.
Large-scale circulation {#sec:circulation}
-----------------------
Finally, we consider the effect of a sub-stellar land surface on the large-scale circulation. Zonally averaged zonal winds for the *Aqua*, *B8*, and *E4* simulations are presented in Figure \[fig:zonal\]. In all of our simulations, equatorial superrotation is induced. The emergence of this phenomenon can be understood by considering the dimensionless equatorial Rossby deformation length, $\mathcal{L}$: $$\label{eq:rossby}
\mathcal{L} = \frac{L_{\text{Ro}}}{r_{\text{p}}} = \sqrt{\frac{R}{c_{\text{p}}}\frac{\sqrt{c_{\text{p}}T_{\text{e}}}}{2\Omega r_{\text{p}}}},$$ where $L_{\text{Ro}}$ is the equatorial Rossby deformation radius and $r_{\text{p}}$ is the planetary radius [@2015ApJ...806..180W; @2015MNRAS.453.2412C]. $R$ is the specific gas constant for dry air, $c_{\text{p}}$ is atmospheric specific heat capacity at constant pressure, and $\Omega$ is the planetary rotation rate (see Table \[tab:params\]). $T_{\text{e}}=\left[\left(1-\alpha_{\text{p}}\right)S/4\sigma\right]^{\frac{1}{4}}$ is the effective radiative temperature of the planet. For $\mathcal{L}$ approximately equal to (or less than) 1, a planetary scale Rossby wave can exist. For our planet, $\mathcal{L} = 1.22$, so the planet is large enough to contain a planetary scale Rossby wave . In this regime, @2010GeoRL..3718811S demonstrate that strong day-night radiative heating contrasts present on tidally-locked planets trigger the formation of standing, planetary-scale equatorial Rossby and Kelvin waves. These are similar in form to those found in shallow-water solutions for the ‘Matsuno-Gill’ model [@Mat66; @1980QJRMS.106..447G]. The solutions show that when a longitudinally asymmetric heating is applied at the equator, equatorial Kelvin waves exhibit group propagation away from the heating to the east, while Rossby waves located polewards of the heating exhibit group propagation away to the west. As these waves propagate away from the heating, they transport energy. @2011ApJ...738...71S study the interaction of these waves with the mean-flow using a hierarchy of one-layer shallow water models and 3D GCM simulations of hot Jovian planets. They find that the latitudinally varying phase shift induced by the alternate propagation of the Rossby and Kelvin waves ‘tilts’ the wind vectors northwest-to-southeast in the northern hemisphere, and southwest-to-northeast in the southern hemisphere. This serves to pump eastward momentum to the equator, inducing superrotation.
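Equation \[eq:rossby\] can be evaluated directly. In the sketch below, $R$, $c_{\text{p}}$, $\Omega$, $r_{\text{p}}$ and $S$ are assumed values representative of the Proxima Centauri b configuration in Table \[tab:params\]; with these assumptions the result is close to the quoted $\mathcal{L} = 1.22$:

```python
import numpy as np

sigma   = 5.67e-8    # Stefan-Boltzmann constant [W m^-2 K^-4]
R       = 297.0      # specific gas constant, N2-dominated air [J kg^-1 K^-1] (assumed)
c_p     = 1039.0     # specific heat capacity [J kg^-1 K^-1] (assumed)
Omega   = 6.501e-6   # rotation rate [rad s^-1] (assumed, tidally-locked 11.186 d orbit)
r_p     = 7.160e6    # planetary radius [m] (assumed)
S       = 881.7      # top-of-atmosphere stellar flux [W m^-2] (assumed)
alpha_p = 0.33       # planetary albedo (from the simulations)

T_e = ((1.0 - alpha_p) * S / (4.0 * sigma))**0.25    # effective radiative temperature [K]
L   = np.sqrt((R / c_p) * np.sqrt(c_p * T_e) / (2.0 * Omega * r_p))
print(f"T_e = {T_e:.0f} K, L = {L:.2f}")             # L ~ 1.2
```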
For increasing continent extent, the thermal forcing at the substellar point is increased as a result of increased surface temperatures over land, thus increasing the forcing amplitude. @2011ApJ...738...71S find that increased forcing amplitude should increase the strength of equatorial superrotation. We find that with increasing continent extent, the jet becomes more ‘focussed’ on the equator. This is observed in when the equatorial jet *increases* in strength. However, in our simulations, zonal wind speeds *decrease* with increased forcing amplitude (see Figure \[fig:zonal\]). We speculate that whilst increased forcing is acting to strengthen the equatorial jet, reduced moisture convergence at the sub-stellar point weakens the strength of convection, reducing vertical wind speeds. In turn, this results in a reduction in the jet speed. To add to our discussion on heat redistribution, we note that as flow becomes increasingly focussed on the equator, the Coriolis force is likely to hamper effective heat redistribution to mid-latitudes on the night-side . This may be contributing to reductions in $\eta$ found in our simulations.
In addition to considering box-continents centred at the sub-stellar point, we also consult the results of simulations *E2, E4* and *E6*, where a box-continent is offset to the east of the sub-stellar point. This serves to ‘shift’ the surface thermal forcing maximum eastwards. For the *E2* simulation, the large-scale circulation takes the same form as in the *Aqua* and *B* cases; namely, there is a superrotating jet at the equator. For the *E4* and *E6* simulations we observe regime change. The equatorial jet is substantially weakened and two counterrotating mid-latitude jets are introduced (see Figure \[fig:zonal\], the *E4* simulation).
Atmospheric temperature and wind vectors for the *Aqua* and *E4* runs are presented in Figure \[fig:slice\], where for both of the simulations we can see the temperature field somewhat resembles the form of the Matsuno-Gill standing wave response. This is indicated by the presence of two off-equatorial Rossby nodes found to the west of the sub-stellar point, and an equatorial Kelvin node to the east of the sub-stellar point. The *Aqua* simulation exhibits a superrotating jet that benefits from momentum convergence from the mid-latitudes, as proposed in @2011ApJ...738...71S. At lower altitudes, the *E4* and *E6* simulations do not see eastward momentum imported to the equator from the mid-latitudes. A similar momentum ‘pumping’ mechanism is observed; however, it is reversed. Wind vectors are now tilted northeast-to-southwest in the northern hemisphere, and southeast-to-northwest in the southern hemisphere, meaning that westward momentum converges towards the equator. As a result, the equatorial jet is reduced in strength and latitudinal extent.
By introducing the continent offset to the east of the sub-stellar point, we ‘shift’ the surface heat source to the east. This is the case as whilst the latent heating associated with deep convection still occurs over ocean regions to the west of the sub-stellar point, lower altitude turbulent boundary layer heating, communicated via dry convection, is introduced over the continent to the east of the sub-stellar point. If the surface heating were the only heat source for the atmosphere, we expect this would only serve to shift the entire pattern by the same distance, in such a way that its structure remained the same. Instead, we observe that the Rossby gyres move closer to the sub-stellar point and decrease in extent, and that the structure of the temperature field surrounding the Kelvin node is altered so that isotherms are now orientated in such a way that they direct eastward momentum away from the equator (see Figure \[fig:slice\], particularly in the 4500m panel). We suspect this arises because the surface is not the only source of heating for the atmosphere. The atmosphere is also directly heated by incident shortwave radiation[^2] and latent heating from deep convection in the upper troposphere . By offsetting the surface heating to the east, we take the ‘surface’ and ‘direct’ heating sources ‘out of alignment’. This implies less resonance between atmospheric waves at different altitudes, weakening the Kelvin and Rossby wave response. As a result, the temperature field is altered and geostrophic balance now requires that flow is ‘tilted’ in such a way that it supplies westward momentum to the equator. Evidence for the cause being longitudinally offset surface and direct stellar heating can be observed by noting that the 8000m panels for the *Aqua* and *E4* simulations, where equatorial superrotation is still observed for the *E4* simulation, display more similarity than the 4500m panels (Figure \[fig:slice\]), and that the mid-latitude counterrotating jets are most prominent at lower altitude (Figure \[fig:zonal\]). This is because further from the surface, offset surface heating has less of an effect, and the circulation is more similar to the regime where the heating sources are in alignment.
A consequence of this regime change is increased night-side atmospheric water content, which allows increased heat redistribution to the night-side. For the *Aqua* simulation we have $\eta = 0.57$, while for the *E4* simulation $\eta = 0.65$. As a result, we see an increase in night-side mean temperatures ($T_{\text{ns, \emph{Aqua}}} = 188.4\,\text{K}$, $T_{\text{ns, \emph{E4}}} = 197.2\,\text{K}$) and a decrease in day-side mean temperatures ($T_{\text{ds, \emph{Aqua}}} = 262.0\,\text{K}$, $T_{\text{ds, \emph{E4}}} = 257.6\,\text{K}$). Overall there is a slight ($\approx2$K) increase in global mean temperatures for the *E4* simulation, compared to the *Aqua* simulation.
Discussion {#sec:discuss}
==========
Heat transport
--------------
A thorough investigation of atmospheric heat redistribution on tidally-locked planets is presented in @2014ApJ...784..155Y, where a two-column model is applied to investigate the effects of water vapor and clouds on day-night contrasts in thermal emission. Varying day-side emissivity, $\epsilon_{\text{da}}$, with fixed night-side emissivity, $\epsilon_{\text{na}} = 0.5$, they find that the magnitude of variation in $T_{\text{ds}}$ is relatively small, with just a $\approx 7$K increase for $\epsilon_{\text{da}} = 0 \rightarrow \epsilon_{\text{da}} = 1$. Performing the same experiment but for fixed $\epsilon_{\text{da}} = 0.5$ and varying $\epsilon_{\text{na}}$ between zero and 1, they find that reducing $\epsilon_{\text{na}}$ substantially warms the day-side surface, with the temperature, $T_{\text{ds}}$, increasing by $\approx 45$K as $\epsilon_{\text{na}}$ is reduced from 1 to zero. Further, they report that global mean temperatures decrease, implying a substantial decrease in night-side surface temperature, $T_{\text{ns}}$. The sensitivity to night-side emissivity is explained via an analogy whereby the night-side is described as behaving like a “radiator fin”, similar to that presented in @1995JAtS...52.1784P for the Earth’s tropics, which is able to radiate to space easily. When $\epsilon_{\text{na}}$ is reduced, night-side heat loss to space is reduced, which *necessarily requires* a reduction in heat transport to the night-side [@2014ApJ...784..155Y]. Similarly, when $\epsilon_{\text{na}}$ is increased, an increase in heat transport from the day-side to the night-side is *required*. Our results clearly show the theory presented in @2014ApJ...784..155Y in operation in a 3D GCM. Furthermore, our findings, and those of @2014ApJ...784..155Y, demonstrate that not only does this mechanism play an important role in setting the day-night thermal emission contrast (quantified in our results as $\eta$), but also in controlling night-side surface temperatures. This arises because, as presented in Section \[sec:daynight\], the night-side atmosphere is the only source of heat for the surface, and so $T_{\text{ns}}$ is directly dependent on $\epsilon_{\text{na}}$. A slight difference between our results and those of @2014ApJ...784..155Y is that for decreasing $\epsilon_{\text{na}}$ we do not see a substantial increase in day-side surface temperature. This is because in our simulations $\epsilon_{\text{da}}$ and $\epsilon_{\text{na}}$ decrease in tandem, unlike in their two-column model where one is varied while the other is fixed. This means that on the day-side, both the greenhouse effect and the direct absorption of stellar radiation by the atmosphere are reduced, thus offsetting any increase in day-side surface temperatures that would result from the reduced export of heat to the night-side.
We have studied a climate regime where equatorial superrotation is present. As discussed in Section \[sec:circulation\], this is made possible as the planet is able to ‘contain’ a planetary scale Rossby wave . This was quantified by considering the equatorial Rossby deformation length, $\mathcal{L}$ (Equation \[eq:rossby\]). Planets where $\mathcal{L}$ is significantly greater than 1 are too small to contain a planetary Rossby wave. In this scenario, stellar-to-antistellar point circulation occurs, and no jets appear . @2015ApJ...806..180W finds in this regime that the circulation takes the form of a single planetary-sized convection cell on the day-side, with a ‘slower residual circulation’ on the night-side. In this regime, the author suggests that the planetary boundary layer, and not the large-scale circulation, is the key term in deciphering the planetary energy balance. We expect that a reduction in day-night heat redistribution induced by the introduction of a sub-stellar continent would also be observed in the stellar-antistellar circulation regime, as the reduction is largely associated with reduced day-night transport of water vapor. As there is still a circulation that connects the two hemispheres, we expect that water vapor transport to the night-side would still be reduced, resulting in a lower $\epsilon_{\text{na}}$ (as we have in our simulations), which would serve to hamper the efficiency of heat redistribution in the atmosphere. To ascertain whether the effect of sub-stellar land, as a function of land fraction, would be more or less pronounced in the stellar-antistellar regime would require further study, and is left for future work.
Habitability
------------
We have presented results for simulations of Proxima Centauri b with a $1\,\text{bar N}_{2}$, $5.941\times10^{-4}\,\text{kg\,kg}^{-1}\ \text{CO}_{2}$ atmosphere. For this configuration, we have found that a region where surface temperatures are above freezing is always retained. Thus the conclusion that stable surface liquid water may be present on ProC b, presented in and , is robust to the introduction of a sub-stellar land mass. Furthermore, surface temperatures never fall below $125$K, and so condensation of $\text{CO}_{2}$ onto the surface is avoided, and the $\text{CO}_{2}$ greenhouse effect is retained. Our results, therefore, suggest that sub-stellar land should not preclude ProC b from being a potential environment for life. This is in agreement with the ‘*Day-land*’ simulation presented in @2017arXiv170902051D. The fact that introducing a continent does not necessarily compromise habitability is important; as an exposed continental surface is required to facilitate an effective carbon-silicate weathering cycle [@2012ApJ...756..178A].
We can, however, conceive of situations where sub-stellar land could compromise the prospective habitability of a planet. For example, consider the case of a planet identical to the one in this study, but that is located further from its host star in such a way that it is cooler whilst remaining habitable. Whilst our simulations have not exhibited night-side cooling to below $125$K, a cooler planet located near the outer edge of the habitable zone might have minimum night-side temperatures below $125$K if sub-stellar land were present. For such a planet, our simulations suggest that the presence of sub-stellar land moves the outer limit of the habitable zone inwards. @2013ApJ...771L..45Y propose that clouds can provide a stabilizing feedback that expands the habitable zone at its inner boundary. They identify that as surface temperatures rise, so will the intensity of convection which will produce more thick cloud at the sub-stellar point, thus increasing the albedo and so reducing surface temperature. It is clear from our results that the presence of a large sub-stellar land mass will reduce the effectiveness of this mechanism, as cloud coverage is substantially reduced (see Figure \[fig:precip\]), and thus makes a reduced contribution to the planetary albedo. Furthermore, whilst continental carbon-silicate weathering should balance outgassing of CO2 and increasing stellar irradiance, the maintenance of this process is vulnerable to small changes in atmospheric composition and pressure, via the ‘enhanced sub-stellar weathering instability’ described in @2011ApJ...743...41K. @2011ApJ...743...41K find that if a decrease in day-night temperature gradient induced by an increase in atmospheric pressure, and hence temperature, requires a reduction in sub-stellar temperatures, then this in turn will reduce the weathering rate which will further increase atmospheric pressure and temperature. In this way, a tidally-locked planet with an active weathering cycle may be vulnerable to a runaway feedback where the weathering cycle can dramatically increase or decrease in efficiency; leading the planet towards either atmospheric collapse or a runaway greenhouse scenario. @2011ApJ...743...41K suggest that such a feedback could be induced by changes in pressure resultant from volcanism or ‘mountain-building’.
Future work
-----------
Several questions have been raised by our results that require further study, beyond the scope of this work. Firstly, our model incorporates sophisticated parametrizations of convection and cloud processes. This may lead to more accurate capturing of the effects of these processes on the large-scale climate than more simple approaches . However, sophisticated treatments such as those used in this study have been developed to represent these processes accurately for Earth, and hence require more extensive study at higher resolution, to ensure the robust capturing of these processes across different planetary environments.
We have demonstrated that the climate is sensitive to changes in the surface boundary condition. This suggests that the climate may well be sensitive to further characteristics such as land orography and friction not considered in this work, which would alter the dynamics of the atmosphere, and the surface evaporation efficiency. Additionally, there may be scope to capture the overall trends of the climate sensitivity to a land surface via more simple, faster parametrizations. As part of our early preparation for this work we found that representing a land surface by means of locally restricting surface evaporation and changing surface heat capacity yielded similar results to those retrieved using the full land-surface model.
Of course, the inclusion of dynamic ocean and sea-ice treatments would enable a more complete, and consistent, exploration of the possible climates harbored by terrestrial planets. In such studies, if and where a continent is located will be of particular importance, as continents have been demonstrated to affect oceanic flows [@2017arXiv170902051D].
Furthermore, a more complete analysis of the acceleration mechanism responsible for maintaining the large-scale flow in the atmosphere, and of its response to changes in land surface configuration, is required, but is well beyond the scope of this work.
Finally, the prediction of potential observable signatures of key transitions between climate states is a natural follow-up to this work. In particular, if such signatures are predicted and observed, further study may reveal whether it will be possible to infer the presence of a land surface on an exoplanet by observing its effects on the climate. An in-depth description of the prospects for remotely estimating land fraction and location on extrasolar planets is presented in @2012ApJ...756..178A; here we choose to summarise the main opportunities. A number of studies have now been conducted to investigate the possibility of estimating land fraction and location on exoplanets directly from observations [@2009ApJ...700..915C; @2010ApJ...715..866F; @2010ApJ...720.1333K]. It has been suggested that this can be achieved using measurements of reflected visible light, obtained via disc-integrated, time-resolved broadband photometry [@2009ApJ...700..915C], and via a planet’s thermal emission spectra [@2012ApJ...752....7H]. We comment a little on the former method. This has already been attempted, with relative success, for Earth using data obtained as part of the EPOXI mission [@2009ApJ...700..915C]. However, @2010ApJ...723.1168Z suggest that this technique may not be feasible for the fainter signals received from exoplanets. Another problem is cloud coverage over land, which obscures observations of the surface below. Indeed, the @2010ApJ...720.1333K surface reconstruction method requires cloudless skies. This is at odds with permanent cloud coverage near the sub-stellar point on a tidally-locked planet, meaning that sub-stellar land may be difficult to characterize. In particular, our smaller continents, *B1* and *B2*, retained thick cloud coverage over the entire land surface. The characterization of larger continents provides a slightly more promising target, as the desert climate of these continents promotes reduced cloud coverage.
Conclusions {#sec:conclusions}
===========
The key conclusions of this study are:
1. The introduction of a sub-stellar land mass serves to reduce the availability of moisture at the sub-stellar point, which decreases both the water vapor greenhouse effect and the cloud radiative effect. This can lead to reduced global mean surface temperatures, chiefly through cooling on the night-side. Day-side surface temperatures exhibit minimal variation because changes in the absorption and reflection of radiation by the day-side atmosphere are roughly balanced by changes in the export of heat from the day-side to the night-side.
2. Reduced atmospheric water vapor content reduces heat redistribution to the night-side by reducing night-side emissivity. This reduces the night-side top-of-atmosphere infra-red flux and night-side surface temperatures, and exaggerates day-night contrasts in both of these quantities.
3. The introduction of land offset to the east of the sub-stellar point can induce a regime change in the large-scale circulation by altering the response of atmospheric waves to the heating perturbation at the sub-stellar point. In our east-offset continent simulations, two counterrotating mid-latitude jets are introduced, and the superrotating jet at the equator is weakened.
4. Specific to Proxima Centauri b: should the planet reside in a tidally-locked orbit, our results extend previous conclusions regarding its likely habitability to a scenario where both oceans and a land mass located at its sub-stellar point are present.
*Acknowledgements*. We thank Geoffrey K. Vallis for engaging in discussion on atmospheric waves, which greatly benefited this manuscript. N.T.L. and F.H.L are grateful to the London Mathematical Society for financial support by means of an undergraduate research bursary. I.A.B. and J.M. acknowledge the support of a Met Office Academic Partnership secondment. N.J.M.’s contributions were supported by a Leverhulme Trust Research Project Grant. We acknowledge the use of the MONSooN system, a collaborative facility supplied under the Joint Weather and Climate Research Programme, a strategic partnership between the Met Office and the Natural Environment Research Council. This study contains material produced using Met Office Software. We are grateful to an anonymous referee whose comments were of great value when revising this work.
Abbot, D. S., Cowan, N. B., & Ciesla, F. J. 2012, , 756, 178
Amundsen, D. S., Mayne, N. J., Baraffe, I., et al. 2016, , 595, A36
Anglada-Escud[é]{}, G., Amado, P. J., Barnes, J., et al. 2016, , 536, 437
Best, M. J., Pryor, M., Clark, D. B., et al. 2011, Geosci. Model Dev., 4, 677
Boutle, I. A., Eyre, J. E. J., & Lock, A. P. 2014, Mon. Weather Rev., 142, 1655
Boutle, I. A., Mayne, N. J., Drummond, B., et al. 2017, , 601, A120
Brown, A. R., Beare, R. J., Edwards, J. M., et al. 2008, Bound.-Lay. Meteorol., 128, 117
Carone, L., Keppens, R., & Decin, L. 2015, , 453, 2412
Charnay, B., Forget, F., Wordsworth, R., et al. 2013, (Atmospheres), 118, 10
Coakley, J. A. 2014, in Encyclopedia of Atmospheric Sciences, ed. J. R. Holton (Oxford: Academic Press), 1914
Cowan, N. B., Agol, E., Meadows, V. S., et al. 2009, , 700, 915
Del Genio, A. D., Way, M. J., Amundsen, D. S., et al. 2017, arXiv:1709.02051
Dethloff, K., Rinke, A., Lehmann, R., et al. 1996, , 101, 23401
Dressing, C. D., & Charbonneau, D. 2015, , 807, 45
Edson, A. R., Kasting, J. F., Pollard, D., Lee, S., & Bannon, P. R. 2012, Astrobiology, 12, 562
Frierson, D. M. W., Held, I. M., & Zurita-Gotor, P. 2006, J. Atmos. Sci., 63, 2548
Fujii, Y., Kawahara, H., Suto, Y., et al. 2010, , 715, 866
Gill, A. E. 1980, Q. J. R. Meteorol. Soc., 106, 447
Gillon, M., Triaud, A. H. M. J., Demory, B.-O., et al. 2017, , 542, 456
Goody, R. M., & Yung, Y. L. 1989, Atmospheric Radiation: Theoretical Basis, 2nd ed. (New York: Oxford Univ. Press)
Gregory, D., & Rowntree, P. R. 1990, Mon. Weather Rev., 118, 1483
Hu, R., Ehlmann, B. L., & Seager, S. 2012, , 752, 7
Joshi, M. M., Haberle, R. M., & Reynolds, R. T. 1997, , 129, 450
Joshi, M. 2003, Astrobiology, 3, 415
Kaspi, Y., & Showman, A. P. 2015, , 804, 60
Kasting, J. F., Whitmire, D. P., & Reynolds, R. T. 1993, , 101, 108
Kawahara, H., & Fujii, Y. 2010, , 720, 1333
Kite, E. S., Gaidos, E., & Manga, M. 2011, , 743, 41
Kitzmann, D., Alibert, Y., Godolt, M., et al. 2015, , 452, 3752
Kopparapu, R. K., Wolf, E. T., Haqq-Misra, J., et al. 2016, , 819, 84
Lambert, F. H., Ferraro, A. J., & Chadwick, R. 2017, J. Clim., 30, 4527
Leconte, J., Forget, F., Charnay, B., et al. 2013, , 554, A69
Liu, J., Zhang, Z., Inoue, J., & Horton, R. M. 2007, Int. J. Climatol., 27, 81
Lock, A. P., Brown, A. R., Bush, M. R., Martin, G. M., & Smith, R. N. B. 2000, Mon. Weather Rev., 128, 3187
Matsuno, T. 1966, J. Meteorol. Soc. Japan, 44, 25
Mayne, N. J., Baraffe, I., Acreman, D. M., et al. 2014a, , 561, A1
Mayne, N. J., Baraffe, I., Acreman, D. M., et al. 2014b, Geosci. Model Dev., 7, 3059
Merlis, T. M., & Schneider, T. 2010, JAMES, 2, 13
Nisbet, E. G., & Sleep, N. H. 2001, , 409, 1083
Pierrehumbert, R. T. 1995, J. Atmos. Sci., 52, 1784
Pierrehumbert, R. T. 2011, , 726, L8
Pierrehumbert, R. T., & Gaidos, E. 2011, , 734, L13
Rajpurohit, A. S., Reyl[é]{}, C., Allard, F., et al. 2013, , 556, A15
Ribas, I., Bolmont, E., Selsis, F., et al. 2016, , 596, A111
Schlaufman, K. C., & Laughlin, G. 2010, , 519, A105
Showman, A. P., & Guillot, T. 2002, , 385, 166
Showman, A. P., Cho, J. Y.-K., & Menou, K. 2010, in Exoplanets, ed. S. Seager (Tucson: Univ. Arizona Press), 471
Showman, A. P., & Polvani, L. M. 2010, , 37, L18811
Showman, A. P., & Polvani, L. M. 2011, , 738, 71
Showman, A. P., Wordsworth, R. D., Merlis, T. M., & Kaspi, Y. 2013, in Comparative Climatology of Terrestrial Planets, eds. S. J. Mackwell et al. (Tucson: Univ. Arizona Press), 277
Turbet, M., Leconte, J., Selsis, F., et al. 2016, , 596, A112
Turbet, M., Bolmont, E., Leconte, J., et al. 2017, arXiv:1707.06927
Vogt, S. S., Butler, R. P., Rivera, E. J., et al. 2010, , 723, 954
Walters, D., Boutle, I., Brooks, M., et al. 2017a, Geosci. Model Dev., 10, 1487
Walters, D. N., Baran, A., Boutle, I. A., et al. 2017b, Geosci. Model Dev., *submitted*
Wieczorek, M. A. 2007, Treatise on Geophysics, 10, 165
Wilson, D. R., & Ballard, S. P. 1999, Q. J. R. Meteorol. Soc., 125, 1607
Wilson, D. R., Bushell, A. C., Kerr-Munslow, A. M., Price, J. D., & Morcrette, C. J. 2008, Q. J. R. Meteorol. Soc., 134, 2093
Wolf, E. T. 2017, , 839, L1
Wordsworth, R. 2015, , 806, 180
Yang, J., Cowan, N. B., & Abbot, D. S. 2013, , 771, L45
Yang, J., & Abbot, D. S. 2014, , 784, 155
Zuber, M. T., Smith, D. E., Lemoine, F. G., & Neumann, G. A. 1994, Science, 266, 1839
Zugger, M. E., Kasting, J. F., Williams, D. M., Kane, T. J., & Philbrick, C. R. 2010, , 723, 1168-1179
[^1]: To calculate radiative heating rates we use the Suite of Community Radiative Transfer codes based on Edwards and Slingo (SOCRATES), available at https://code.metoffice.gov.uk/trac/socrates
[^2]: Shortwave heating of the atmosphere is increased for planets orbiting M-type stars, as the stellar spectrum is shifted towards the longwave, making it easier for water vapour to absorb the incident radiation .
---
abstract: 'We summarize the results of a survey on reproducibility in parallel computing, which was conducted during the [Euro-Par]{}conference in August 2015. The survey form was handed out to all participants of the conference and the workshops. The questionnaire, which specifically targeted the parallel computing community, contained questions in four different categories: general questions on reproducibility, the current state of reproducibility, the reproducibility of the participants’ own papers, and questions about the participants’ familiarity with tools, software, or open-source software licenses used for reproducible research.'
author:
- |
Sascha Hunold\
Vienna University of Technology\
Faculty of Informatics\
Research Group for Parallel Computing\
Favoritenstraße 16/184-5\
1040 Vienna, Austria\
`[email protected]`
bibliography:
- 'reppar\_survey.bib'
title: A Survey on Reproducibility in Parallel Computing
---
Introduction
============
Conducting sound and reproducible experiments in parallel computing is not easy, as hardware and software architectures of current parallel computers are most often very complex. This high complexity makes it difficult and often impossible for scientists to model such systems mathematically. Thus, scientists often rely on experiments to study new parallel algorithms, different software solutions ([e.g.]{}, operating systems), or novel hardware architectures. The situation in parallel computing is made even more difficult as parallel systems are in a constant state of flux, [e.g.]{}, the total core count is rapidly growing and many programming paradigms for parallel machines have emerged and are actively being used. We established the first edition of the International Workshop on Reproducibility in Parallel Computing ([REPPAR]{}[^1]) in conjunction with the [Euro-Par]{}conference in 2014. The workshop is concerned with experimental practices in parallel computing research. It should be a forum for discussing and exchanging ideas to improve reproducibility matters in our research domain. We solicit research papers and experience reports on a number of relevant topics, particularly: methods for analysis and visualization of experimental data, best practice recommendations, results of attempts to replicate previously published experiments, and tools for experimental computational sciences. Some examples of the latter include workflow management systems, experimental test-beds, and systems for archiving and querying large data files.
In 2015, the [REPPAR]{}workshop was hosted for the second time in conjunction with the [Euro-Par]{}conference. This year we wanted to spark a fruitful discussion by conducting a survey on reproducible research and by evaluating the results directly during the workshop. In the present paper, we will take a closer look at the results of the survey and discuss some of the findings.
After summarizing related work in [Section]{} \[sec:rel\_work\], we explain the context of the survey in [Section]{} \[sec:survey\]. [Section]{} \[sec:survey\_results\] presents the survey results, and we draw conclusions in [Section]{} \[sec:conclusions\].
Related Work {#sec:rel_work}
============
Improving the reproducibility of results that get published in today’s scientific journals is one of the big challenges of the current research landscape, not only because the problem has lately been brought into the spotlight by journals like Science or Nature ([cf.]{} [@Nature2013; @Buck_Science]). Thus, many researchers across disciplines are trying to tackle the problem of the irreproducibility of scientific findings.
From a computer-science standpoint, we are foremost interested in the state of reproducibility of computational results. The reproduction of scientific findings in computational sciences has other challenges than, say, medicine, as here we study abstract objects, [e.g.]{}, a computer program or an algorithm (rather than the human body). Questions that arise in this context are, for example, how to share source code (technically) or which license to apply to a piece of software? Stodden, Leisch, and Peng addressed these issues and published a collection of articles, in which several solutions to the dilemma are proposed [@stodden:implementing].
Here, we are not only interested in computational sciences, but specifically in parallel and distributed computing, where we are facing additional challenges in terms of reproducibility [@HunoldT13]. For example, in the high performance computing community, scientists are primarily interested in optimizing performance, [e.g.]{}, trying to minimize the [run-time]{}or to maximize the throughput of a system. Thus, a reproducible analysis does not only need to be able to solve the computational problem with the same outcome, but also in the same—or at least comparable—time as shown in the original paper.
Therefore, we conducted a survey on reproducible research among the Euro-Par participants to gain insights about how the reproducibility problem is perceived in our community. In the USA, several initiatives or workshops exist that address the reproducibility problem for large-scale computing. One example is the XSEDE workshop on reproducibility [@JamesWS14].
Several surveys have been undertaken in the broader context of reproducible research. For us, most related to our work are the survey of Stodden [@stodden:reproducibility] and the survey of Prabhu [et al.]{}[@Prabhu:2011]. Stodden’s survey sheds light on the incentives for scientists to share or not to share their work (code or data). The survey by Prabhu [et al.]{}is more concerned with best practices in computational sciences, for example, the authors try to answer questions like “do scientists know about parallelization techniques for speeding up their applications” or “what languages do scientists use for their daily compute tasks”. The survey results by Prabhu [et al.]{}reveal that “\[s\]cientists should release code to their peers” in order to “allow other scientists to reproduce prior work” [@Prabhu:2011].
Context of the Survey {#sec:survey}
=====================
To conduct our survey on reproducibility in parallel computing, we prepared a questionnaire containing 24 questions ([cf.]{}[Appendix]{} \[sec:questionnaire\]), which we grouped into four different categories ([cf.]{}[Section]{} \[sec:general\]–[Section]{} \[sec:tools\]). All participants of the [Euro-Par]{}conference received one survey flyer, which advertised and introduced the survey to them. The flyer contained a unique token that enabled each participant to vote exactly once. The survey was completely anonymous and since flyers (and their tokens) were handed out in the order in which participants arrived at the conference registration desk, the identity of the voters was additionally protected. The survey was implemented using the LimeSurvey software[^2]. We printed 300 survey flyers, each containing one token, and handed out one flyer to each of the 232 participants of the [Euro-Par]{}conference. Unfortunately, only 31 persons (13%) completed the online questionnaire.
Survey Results {#sec:survey_results}
==============
Now, we present the survey results and comment on the outcome of individual questions.
General Questions on Reproducibility {#sec:general}
------------------------------------
The first question ([Q2.1.1]{}) directly asked whether the survey participant is interested in reproducible research. Rather surprisingly, the majority of the participants ($>90\%$) declared their interest in reproducibility. Considering the fact that only 31 persons completed the survey, we conclude that most of the participants also attended the [REPPAR]{}workshop. Therefore, we should keep in mind that our results are highly biased towards a small group of people sharing similar interests.
We also assumed that only few scientists know what “reproducible research” means and also what the difference is between the terms “replicability”, “repeatability”, and “reproducibility”. Our assumption was based on the fact that many articles use different definitions of “reproducibility”. Since we had posed a vague question, the poll results of questions [Q2.1.2]{} and [Q2.1.3]{} were surprising. For example, only 13% of the voters stated that they do not know the difference between replicability, repeatability, and reproducibility.
It is also noteworthy that all survey participants think that the reproduction of already published results is worth another publication. However, 65% of them demand that articles reproducing the work of others contain new insights.
### Do you care (in general) about the reproducibility of scientific results (your own, others)?
{width=".8\linewidth"}
### Do you know what people mean when speaking about “reproducible” results?
{width=".8\linewidth"}
### Do you know the difference between replicability, repeatability, and reproducibility?
{width=".8\linewidth"}
### Do you think that the reproduction of already published results is worth another publication?
{width=".8\linewidth"}
### Have you tried to reproduce the results of others?
{width=".8\linewidth"}
Current State of Reproducibility in Parallel Computing {#sec:current_state}
------------------------------------------------------
With the second block of questions, we intended to learn more about how scientists see the domain of parallel or high performance computing (HPC) in terms of reproducibility.
The results of question [Q2.2.1]{} show a clear picture: almost all participants think that the reproducibility of articles in our research domain needs to be improved. We again note that our results are biased, as many of the survey participants also attended the [REPPAR]{}workshop on reproducible research.
It is also remarkable that only a small percentage (6%) of the voters believed that articles from top conferences such as PPoPP or IPDPS are easier to reproduce than papers of other conferences ([Q2.2.3]{}). In contrast, many people (>50% in sum) do not trust the reproducibility of the results when they review scientific articles ([Q2.2.4]{}).
### Do you think the state of reproducibility for articles in our research domain (Parallel Computing/HPC) needs to be improved?
{width=".8\linewidth"}
### Do you think current research articles in the domain of Parallel Computing/HPC are reproducible by other independent researchers?
{width=".8\linewidth"}
### Do you think that results published in top conferences (e.g., PPoPP, IPDPS) are generally easier to reproduce than those published in lower-tier conferences in parallel computing (in the last 5 years)?
{width=".8\linewidth"}
### How often do you question the reproducibility of results when you review other scientific articles?
{width=".8\linewidth"}
Reproducibility of Your Articles {#sec:your_articles}
--------------------------------
The third block of questions was concerned with what the participants think about the reproducibility of their own articles. The poll results for question [Q2.3.1]{} show that a significant fraction of the voters (19%) believe that the results published in their articles are reproducible by others. Surprisingly, only 3% stated that they know that their papers are not reproducible. We had expected a higher percentage of people that would admit that their papers are hard to reproduce, especially when taking into account that the poll was anonymous.
90% of the participants consider freely accessible HPC systems a necessity for reproducible results.
We also asked how scientists provide the source code, the raw experimental data, and the data analysis procedures to others. Again, it was surprising that a large percentage (23%) of scientists stated that they publish the source code along with their papers ([Q2.3.2]{}). From our personal experience we had expected much less (around 10%). The poll results also show that more than half of the scientists use a public revision control system, such as GitHub, to share their code ([Q2.3.4]{}).
However, when we look at the percentage of scientists that do not provide the source code, the raw experimental data, or the data analysis procedures, we can observe that the data analysis procedures get shared less often compared to the other two. One explanation could be that the data analysis procedures applied are very simple ([e.g.]{}, computing the arithmetic mean). Another explanation could be that researchers simply do not give them a high priority, and perhaps do not see the importance for others to have these procedures.
We also asked the survey participants about their main reasons for not sharing code, data, or data analysis procedures ([Q2.3.8]{}). Here, no clear line can be drawn, as no answer was mentioned significantly more often than others. Similarly, we did not obtain a clear picture when asking the participants what they believe are the major obstacles to reproduce their papers ([Q2.3.9]{}).
### Do you think the results (contribution) published in YOUR papers are reproducible by others (in the last 5 years)?
{width=".8\linewidth"}
### How often have you published the source code along with YOUR paper (in the last 5 years)?
{width=".8\linewidth"}
### Do you consider freely accessible HPC systems a necessity for getting reproducible performance figures?
{width=".8\linewidth"}
### How do you provide “source code” for others?
{width=".8\linewidth"}
### How do you provide the “raw data (of experiments)” for others?
{width=".8\linewidth"}
### How do you provide the “data analysis procedure (R scripts, etc)” for others?
{width=".8\linewidth"}
### How do you document how to use your source code / data analysis scripts for others?
{width=".8\linewidth"}
### What are the main reasons for NOT making the source code/raw data/data analysis procedure available?
{width=".8\linewidth"}
### What do you think will be the main difficulties/obstacles when other independent researchers try to reproduce YOUR experiments?
{width=".8\linewidth"}
Tools, Software, and Licenses for Reproducible Research {#sec:tools}
-------------------------------------------------------
Last, we wanted to examine which software and licenses scientists use for making their experiments reproducible.
In question [Q2.4.1]{}, we asked the participants whether they use statistical software packages, such as R or SPSS, for performing data analysis tasks. It turns out that only a third of the voters use such tools on a regular basis. It is also remarkable that most of the voters (71% and 84%, respectively) had never used software for literate programming ([e.g.]{}, knitr or org-mode) or tools for managing and executing scientific workflows ([e.g.]{}, VisTrails or Kepler).
Researchers often debate which open-source software license is best for their purposes. We therefore asked whether the participants know the license policy of their research institutions ([Q2.4.6]{}). Only 19% of the voters know this policy, whereas 26% stated that their institution has no explicit policy.
### Have you used statistical software packages (e.g., R, SAS, SPSS) for analyzing your experimental results?
{width=".8\linewidth"}
### How would you rate YOUR knowledge of the programming language “R”?
{width=".8\linewidth"}
### Do you use/have you used tools for literate programming (e.g., knitr, org-mode) for publishing articles?
{width=".8\linewidth"}
### Do you have practical experiences with workflow tools to support reproducible research (e.g., VisTrails, Kepler, DataMill, etc.)?
{width=".8\linewidth"}
### Do you know the differences between the available common open source licenses?
{width=".8\linewidth"}
### Do you know the policy used by YOUR research institution concerning the choice of open source licenses?
{width=".8\linewidth"}
Conclusions {#sec:conclusions}
===========
We presented the results of a survey on reproducible research, which was conducted during the [Euro-Par]{} conference 2015. Despite the fact that only 31 persons completed the survey, the results give us some evidence that reproducibility is a problem in our domain. In fact, the survey revealed that the majority of the voters believe that the state of reproducibility needs to be improved in the domain of parallel and high performance computing. The survey participants also think that the majority of the results presented in papers they receive for review are unlikely to be reproducible. The survey also showed that scientists need to be better informed about what the different open-source licenses actually mean and which licenses their research institutions allow them to apply. Last, we found evidence that many scientists are not familiar with software for literate programming or with scientific workflow tools, which could potentially help to improve the reproducibility of articles.
Original Questionnaire {#sec:questionnaire}
======================
General Questions on Reproducibility {#sec-1}
------------------------------------
1. Do you care (in general) about the reproducibility of scientific results (your own, others)?
1. no
2. yes
2. Do you think you know what people mean when speaking about “reproducible” results?
1. no
2. not sure, but I guess so
3. sure, I know what that means
3. Do you know the difference between replicability, repeatability, and reproducibility?
1. no
2. I am not sure, but I guess so
3. sure, I know what the differences are
4. Do you think that the reproduction of already published results is worth another publication?
1. yes, the reproduction alone is worth another publication
2. yes, but only if the publication contains new insights
3. no
5. Have you tried to reproduce the results of others?
1. no
2. I tried once or twice
3. a couple of times (> 2 and <=10)
4. many times (>10)
Current State of Reproducibility in Parallel Computing {#sec-2}
------------------------------------------------------
1. Do you think the state of reproducibility for articles in our research domain (Parallel Computing/HPC) needs to be improved?
1. no
2. yes
2. Do you think current research articles in the domain of parallel computing/HPC are reproducible by other independent researchers?
1. yes, all of them
2. yes, except a few papers (90% reproducibility)
3. 50/50, some are, some are not
4. no, but a few might be reproducible (10% reproducibility)
5. no article is reproducible (<1% reproducible)
6. I really do not know
3. Do you think that results published in top conferences (e.g., PPoPP, IPDPS) are generally easier to reproduce than those published in lower-tier conferences in parallel computing (in the last 5 years)?
1. yes, I know from my experience
2. probably, I can imagine that
    3. I am not sure, but I guess not
4. no, not at all (all equally reproducible or non-reproducible)
4. How often do you question the reproducibility of results when you review other scientific articles?
1. for more than 90% of the articles
2. for more than 50% of the articles
3. for more than 10% of the articles
4. never
Reproducibility of Your Articles {#sec-3}
--------------------------------
1. Do you think the results (contribution) published in YOUR papers are reproducible by others (in the last 5 years)?
1. yes, 100%
2. most of them should be (>50%)
3. I am not sure, I guess most results will not be (<= 50% reproducibility)
4. honestly, I know that they are not! (<5% reproducibility)
2. How often have you published the source code along with YOUR paper (in the last 5 years)?
1. never
2. very few times (<25%)
3. >= 25 % and < 50%
4. >= 50 %
5. 100% (for each article)
3. Do you consider freely accessible HPC systems a necessity for getting reproducible performance figures?
1. yes, for all studies
2. yes, but only for some studies
3. no
4. How do you provide “source code” for others?
1. I do not provide the source code
2. as an email attachment in response to a direct request
3. as an archive (zip, tar) on a personal webpage
4. I use public revision control system (e.g., GitHub)
5. How do you provide the “raw data (of experiments)” for others?
1. I do not provide the raw data
2. as an email attachment in response to a direct request
3. as an archive (zip, tar) on a personal webpage
4. I use public revision control system (e.g., GitHub)
6. How do you provide the “data analysis procedure (R scripts, etc)” for others?
1. I do not provide the data analysis procedure
2. as an email attachment in response to a direct request
3. as an archive (zip, tar) on a personal webpage
4. I use public revision control system (e.g., GitHub)
7. How do you document how to use your source code / data analysis scripts for others?
1. I do not document them
2. simple README files
3. standard documentation system (e.g., doxygen)
4. electronic laboratory notebook
8. What are the main reasons for NOT making the source code/raw data/data analysis procedure available? (multiple options)
1. it does not apply to me (as I make them available)
2. Technical difficulties. Lack of suited tools or hosting infrastructure
3. Institution policy or legal aspects
4. I want to retain a competitive advantage
5. it is too time consuming
6. it is not rewarding
7. it is irrelevant because evolution is too fast
8. other
9. What do you think will be the main difficulties/obstacles when other independent researchers try to reproduce your experiments? (multiple options)
1. the lack of access to specific machines
2. the lack of a specific software setup
3. the lack of documentation
4. the lack of time to reproduce our results
5. the lack of scientific credits (others will not get many credits for reproducing our results)
6. other
Tools/Software/Licenses for Reproducible Research {#sec-4}
-------------------------------------------------
1. Have you used statistical software packages (e.g., R, SAS, SPSS, ..) for analyzing your experimental results?
1. no, not at all
2. not on a regular basis
3. yes, I always use them
2. How would you rate YOUR knowledge of the programming language “R”?
1. I have never heard of R
2. I am a novice
3. I can code if needed, but I would not call myself an expert
4. I am an advanced user (expert)
3. Do you use/have you used tools for literate programming (e.g., knitr, org-mode, ..) for publishing articles?
1. never
2. I used them, but was not convinced
3. I used them and will do so in the future
4. I always use them
4. Do you have practical experiences with workflow tools to support reproducible research (e.g., VisTrails, Kepler, DataMill, etc.)?
1. I never used them
2. I used them, but they were not convincing
3. I used them several times and I plan to use them in the future
4. I now use them all the time
5. Do you know the differences between the available common open source licenses?
1. no, I have no clue
2. I know the basic differences
3. yes, I have a solid background on licenses
6. Do you know the policy used by YOUR research institution concerning the choice of open source licenses?
1. I have no idea
2. not really, but I know where to look/who to ask
3. yes, I know the policy
4. yes, and it is that there is no explicit policy
[^1]: <http://reppar.org/>
[^2]: <http://www.limesurvey.org/>
---
author:
- |
Branko Dragovich\
Institute of Physics, University of Belgrade, Belgrade, Serbia
title: On Nonlocal Modified Gravity and Cosmology
---
Introduction {#sec:1}
============
Recall that General Relativity is the Einstein theory of gravity, based on the tensorial equation of motion for the gravitational (metric) field $g_{\mu\nu}: \quad R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu} = {8 \pi G} T_{\mu\nu},$ where $R_{\mu\nu}$ is the Ricci curvature tensor, $R$ is the Ricci scalar, $T_{\mu\nu}$ is the energy-momentum tensor, and the speed of light is $c = 1$. This Einstein equation follows from the Einstein-Hilbert action $S = \frac{1}{16\pi G} \int \sqrt{-g}\, R \, d^4x + \int \sqrt{-g}\, \mathcal{L}_m \, d^4x,$ where $g = \det(g_{\mu\nu})$ and $\mathcal{L}_m$ is the Lagrangian of matter.
Motivations for modification of general relativity are usually related to problems in quantum gravity, string theory, astrophysics and cosmology (for a review, see [@clifton; @nojiri; @faraoni]). We are here mainly interested in cosmological reasons to modify the Einstein theory of gravity. If general relativity is the gravity theory for the universe as a whole and the universe has the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, then the universe contains about $68\%$ of [*dark energy*]{}, $27\%$ of [*dark matter*]{}, and only $5\%$ of [*visible matter*]{} [@planck]. The visible matter is described by the Standard Model of particle physics. However, the existence of this $95\%$ dark energy-matter content of the universe is still hypothetical, because it has not been verified in the laboratory. Another cosmological problem is related to the Big Bang singularity. Namely, under rather general conditions, general relativity yields cosmological solutions with zero size of the universe at its beginning, which means an infinite matter density. Note that when a physical theory contains a singularity, it is not valid in the vicinity of that singularity and must be appropriately modified.
In this article, we briefly review nonlocal modification of general relativity, emphasizing cosmological solutions without the Big Bang singularity. We consider two nonlocal models and present their nonsingular bounce cosmological solutions. To give a more complete view of these models, we also write down other exact solutions, which are power-law singular ones of the form $a(t) = a_0 \, |t|^\alpha .$
In Section 2 we describe some general characteristics of nonlocal gravity which are useful for understanding what follows. Section 3 contains a review of both nonsingular bounce and singular cosmological solutions for two nonlocal gravity models without matter. The last section is devoted to the discussion and some concluding remarks.
Nonlocal Gravity {#sec:2}
================
A well-founded modification of the Einstein theory of gravity has to contain general relativity and to be verified against the dynamics of the Solar system. Mathematically, it should be formulated within pseudo-Riemannian geometry in terms of covariant quantities and the equivalence of inertial and gravitational mass. Consequently, the Ricci scalar $R$ in the gravity Lagrangian $\mathcal{L}_g$ of the Einstein-Hilbert action has to be replaced by a function which, in general, may contain not only $R$ but also any covariant construction possible in Riemannian geometry. Unfortunately, there are infinitely many such possibilities, and so far no profound theoretical principle that could make a definite choice. The Einstein-Hilbert action can be viewed as a result of the principle of simplicity in the construction of $\mathcal{L}_g$.
We consider here nonlocal modified gravity. In general, a nonlocal modified gravity model involves an infinite number of spacetime derivatives in the form of a power expansion of the d’Alembert operator $\Box = \frac{1}{\sqrt{-g}} \partial_{\mu}\sqrt{-g} g^{\mu\nu} \partial_{\nu}$, of its inverse $\Box^{-1},$ or some combination of both. We are mainly interested in nonlocality expressed in the form of an analytic function $ \mathcal{F}(\Box)= \sum_{n =0}^{\infty} f_{n}\Box^{n}.$ However, some models with $\Box^{-1} R$ have also been considered (see, e.g. [@woodard; @woodard-d; @woodard1; @nojiri1; @nojiri2; @sasaki; @vernov0; @vernov1; @koivisto; @koivisto1] and references therein). For nonlocal gravity with $\Box^{-1}$ see also [@barvinsky; @modesto]. Many aspects of nonlocal gravity models have been considered; see e.g. [@modesto1; @modesto2; @moffat; @calcagni; @maggiore] and references therein.
Motivation to modify gravity in a nonlocal way comes mainly from string theory. Namely, strings are one-dimensional extended objects and their field theory description contains spacetime nonlocality. We will discuss it in the framework of $p$-adic string theory in Section 4.
In order to better understand nonlocal modified gravity itself, we investigate it without matter. Models of nonlocal gravity which we mainly consider are given by the action $$S = \int d^{4}x \sqrt{-g}\Big(\frac{R - 2 \Lambda}{16 \pi G} + R^{q}
\mathcal{F}(\Box) R \Big), \quad q = +1, -1, \label{eq:2.1}$$ where $\Lambda$ is cosmological constant, which is for the first time introduced by Einstein in 1917. Thus this nonlocality is given by the term $R^{q}
\mathcal{F}(\Box) R , $ where $q= \pm 1$ and $ \mathcal{F}(\Box)= \sum_{n =0}^{\infty} f_{n}\Box^{n},$ i.e. we investigate two nonlocal gravity models: the first one with $q = + 1$ and the second one with $q = - 1.$
Before proceeding, it is worth mentioning that the analytic function $ \mathcal{F}(\Box)= \sum_{n =0}^{\infty} f_{n}\Box^{n}$ has to satisfy some conditions in order to escape unphysical degrees of freedom, such as ghosts and tachyons, and to be asymptotically free in the ultraviolet region (see the discussion in [@biswas3; @biswas4]).
Models and Their Cosmological Solutions
=======================================
In the sequel we shall consider the above mentioned two nonlocal models separately for $q = +1$ and $q = - 1 .$
We use the FLRW metric $ds^2 = - dt^2 + a^2(t)\big(\frac{dr^2}{1-k r^2} + r^2 d\theta^2 +
r^2 \sin^2 \theta d\phi^2\big)$ and investigate all three possibilities for curvature parameter $k =0,\pm 1$. In the FLRW metric scalar curvature is $R = 6 \left (\frac{\ddot{a}}{a} +
\frac{\dot{a}^{2}}{a^{2}} + \frac{k}{a^{2}}\right )$ and $\Box =
- \partial_t^2 - 3 H \partial_t ,$ where $H =
\frac{\dot{a}}{a}$ is the Hubble parameter. Note that we use the natural system of units in which the speed of light is $c = 1.$
Nonlocal Model Quadratic in $R$
-------------------------------
Nonlocal gravity model which is quadratic in $R$ is given by the action [@biswas1; @biswas2] $$S = \int d^{4}x \sqrt{-g}\Big(\frac{R - 2 \Lambda}{16 \pi G} + R
\mathcal{F}(\Box) R \Big). \label{eq:3.1}$$ This model is attractive because it is ghost free and has some nonsingular bounce solutions, which can solve the Big Bang cosmological singularity problem.
The corresponding equation of motion follows from the variation of the action (\[eq:3.1\]) with respect to metric $g_{\mu\nu}$ and it is $$\begin{aligned}
& 2 R_{\mu\nu} \mathcal{F}(\Box) R - 2({\nabla_{\mu}}{\nabla_{\nu}}-
g_{\mu\nu} \Box)( \mathcal{F}(\Box) R) - \frac{1}{2}
g_{\mu\nu} R \mathcal{F}(\Box) R \nonumber \\
&+ \sum_{n=1}^{\infty} \frac{f_n}{2} \sum_{l=0}^{n-1} \big(
g_{\mu\nu} \left( g^{\alpha\beta}\partial_{\alpha} \Box^l R
\partial_{\beta} \Box^{n-1-l} R + \Box^l R \Box^{n-l} R
\right) \nonumber \\
&- 2 {\partial_{\mu}}\Box^l R {\partial_{\nu}}\Box^{n-1-l} R\big) = \frac{-1}{8
\pi G} (G_{\mu\nu} + \Lambda g_{\mu\nu}). \label{eq:3.2}\end{aligned}$$
When the metric is of the FLRW form, there are only two independent equations. It is practical to use the trace and the $00$ component of the equations of motion, which are respectively: $$\begin{aligned}
&6\Box ( \mathcal{F}(\Box) R) + \sum_{n=1}^{\infty} f_n
\sum_{l=0}^{n-1} \left(
\partial_{\mu} \Box^l R
\partial^{\mu} \Box^{n-1-l} R + 2 \Box^l R \Box^{n-l} R
\right)\nonumber \\ &= \frac{1}{8 \pi G} R - \frac{\Lambda}{2 \pi G}, \label{eq:3.3}\end{aligned}$$ $$\begin{aligned}
& 2 R_{00} \mathcal{F}(\Box) R - 2(\nabla_0
\nabla_0 - g_{00} \Box)( \mathcal{F}(\Box) R) - \frac{1}{2}
g_{00} R \mathcal{F}(\Box) R \nonumber \\
&+ \sum_{n=1}^{\infty} \frac{f_n}{2} \sum_{l=0}^{n-1} \big( g_{00}
\left( g^{\alpha\beta}\partial_{\alpha} \Box^l R
\partial_{\beta} \Box^{n-1-l} R + \Box^l R \Box^{n-l} R
\right) \nonumber \\
&- 2 \partial_0 \Box^l R \partial_0 \Box^{n-1-l} R\big) =
\frac{-1}{8 \pi G}( G_{00} + \Lambda g_{00}). \label{eq:3.4}\end{aligned}$$
We are interested in cosmological solutions for a universe with the FLRW metric, and even in such a simplified case it is rather difficult to find solutions of the above equations. To evaluate the above equations, the following Ansätze were used:
- Linear Ansatz: $\Box R = r R + s, $ where $r$ and $s$ are constants.
- Quadratic Ansatz: $\Box R = q R^2, $ where $q$ is a constant.
- Cubic Ansatz: $\Box R = q R^3, $ where $q$ is a constant.
- Ansatz $\Box^n R = c_n R^{n+1}, \,\, n\geq 1, $ where $c_n$ are constants.
In fact, these Ansätze impose some constraints on possible solutions, but on the other hand they simplify the formalism needed to find a particular solution.
### Linear Ansatz and Nonsingular Bounce Cosmological Solutions
Using the Ansatz $\Box R = r R + s$, a few nonsingular bounce solutions for the scale factor have been found: $a(t) = a_0 \cosh{\left(\sqrt\frac{\Lambda}{3}t\right)}$ (see [@biswas1; @biswas2]), $\, a(t) = a_0 e^{\frac{1}{2}\sqrt{\frac{\Lambda}{3}}t^2}$ (see [@koshelev]) and $a(t) = a_0 (\sigma e^{\lambda t} + \tau e^{-\lambda t} )$ [@dragovich2]. The first two consequences of this Ansatz are $$\label{nth degree}
\Box^{n} R = r^{n}(R +\frac sr ) , \, \, n\geq 1 , \, \qquad \, \mathcal{F}(\Box)
R = \mathcal{F}(r) R + \frac sr({\mathcal{F}}(r)-f_0) ,$$ which considerably simplifies the nonlocal term.
Now we can search for a solution of the scale factor $a(t)$ in the form of a linear combination of $e^{\lambda t}$ and $e^{-\lambda t}$, i.e. $$\label{sol:a}
a(t) = a_0 (\sigma e^{\lambda t} + \tau e^{-\lambda t} ), \quad
0< a_0, \lambda,\sigma,\tau \in {\mathbb{R}}.$$ Then the corresponding expressions for the Hubble parameter $H(t) = \frac{\dot{a}}{a},$ scalar curvature $R(t) = \frac{6}{a^2} (a \ddot{a} + \dot{a}^2 + k) $ and $\Box R$ are: $$\begin{aligned} \label{sol:all}
H(t) &= \frac{\lambda (\sigma e^{\lambda t} - \tau e^{-
\lambda t}) } {\sigma e^{\lambda t} + \tau e^{- \lambda t}}, \\
R(t) &= \frac{6 \left(2 a_0^2 \lambda ^2 \left(\sigma^2 e^{4 t
\lambda }+\tau ^2\right)+k e^{2 t \lambda }\right)}{a_0^2 \left(\sigma
e^{2 t \lambda }+\tau \right)^2},\\
\Box R &= -\frac{12 \lambda ^2 e^{2 t \lambda } \left(4 a_0^2
\lambda ^2 \sigma \tau -k\right)}{a_0^2 \left(\sigma e^{2 t \lambda
}+\tau \right)^2}.
\end{aligned}$$ We can rewrite $\Box R$ as $$\label{ansatz:1}
\Box R = 2\lambda^2 R - 24\lambda ^ 4 , \qquad r = 2\lambda^2 , \, \, s = - 24\lambda ^ 4.$$
Substituting the parameters $r$ and $s$ obtained above into the general consequences of the linear Ansatz, one obtains
$$\begin{aligned} \label{sol-all}
\Box^n R &= (2\lambda^2)^n (R - 12\lambda ^2) , \, \, n \geq 1 , \\
\mathcal{F}(\Box) R &= \mathcal{F}(2 \lambda^2)R - 12
\lambda^2(\mathcal{F}(2 \lambda^2) - f_0).
\end{aligned}$$
Using this in the trace and $00$-component equations, we obtain $$\begin{aligned}
\label{trace:11}
&36\lambda^2 \mathcal{F}(2 \lambda^2) (R - 12\lambda ^2) +
\mathcal{F}'(2 \lambda^2) \left( 4 \lambda^2 (R - 12\lambda ^2)^2
- \dot R^2 \right) \nonumber \\
&-24 \lambda^2 f_0(R - 12 \lambda^2) = \frac{R - 4\Lambda}{8 \pi G} ,\end{aligned}$$ $$\begin{aligned}
\label{eom:2}
& (2 R_{00} + \frac{1}{2} R)\left( \mathcal{F}(2 \lambda^2)R - 12
\lambda^2(\mathcal{F}(2 \lambda^2) - f_0) \right)\nonumber \\ &-\frac{1}{2}\mathcal{F}' (2 \lambda^2) \left( \dot R^2 + 2 \lambda^2 (R - 12 \lambda^2)^2 \right)
-6\lambda^2({\mathcal{F}}(2\lambda^2)-f_0) (R-12\lambda^2)\nonumber \\ &+6 H {\mathcal{F}}(2 \lambda^2) \dot R= - \frac{1}{8\pi G}( G_{00} - \Lambda) .\end{aligned}$$
Substituting $a(t)$ of the form given above into these two equations, one obtains two equations that are polynomials in $e^{2\lambda t}$. Setting the coefficients of these polynomials to zero, one obtains a system of equations whose solution determines the parameters $a_0, \lambda,\sigma,\tau$ and yields some conditions on the function $\mathcal{F}(2\lambda^2).$ For details see [@dragovich2].
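As an aside, the linear Ansatz relation $\Box R = 2\lambda^2 R - 24\lambda^4$ used above is easy to verify symbolically. The following SymPy sketch is our own illustration (not part of the original derivation) and assumes only the FLRW expressions for $R$ and $\Box$ quoted earlier.

```python
# Minimal SymPy check (illustrative, not from the paper) that the scale factor
# a(t) = a0*(sigma*exp(lambda*t) + tau*exp(-lambda*t)) satisfies the linear Ansatz
# Box R = 2*lambda^2 * R - 24*lambda^4 for any curvature parameter k.
import sympy as sp

t, k = sp.symbols('t k')
a0, lam, sigma, tau = sp.symbols('a_0 lambda sigma tau', positive=True)

a = a0 * (sigma * sp.exp(lam * t) + tau * sp.exp(-lam * t))   # scale factor
H = sp.diff(a, t) / a                                         # Hubble parameter
R = 6 * (sp.diff(a, t, 2) / a + H**2 + k / a**2)              # FLRW scalar curvature

def box(f):
    """d'Alembertian acting on a function of cosmic time only: -f'' - 3*H*f'."""
    return -sp.diff(f, t, 2) - 3 * H * sp.diff(f, t)

# r = 2*lambda^2, s = -24*lambda^4
assert sp.simplify(box(R) - (2 * lam**2 * R - 24 * lam**4)) == 0
print("Linear Ansatz verified for arbitrary k.")
```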
### Quadratic Ansatz and Power-Law Cosmological Solutions
New Ansätze $\Box R = r R, \,\, \Box R = q R^2$ and $\Box^n R = c_n R^{n+1},$ were introduced in [@dragovich1] and they contain solution for $R =0$ which satisfies also equations of motion. When $k =0$ there is only static solution $a= constant,$ and for $k=-1$ solution is $a(t) = |t|.$
In particular, Ansatz $\Box R = q R^2$ is very interesting. The corresponding differential equation for the Hubble parameter, if $k =0,$ is $$\dddot{H} + 4\dot{H}^{2} + 7H \ddot{H} + 12 H^{2}\dot{H} + 6 q (
\dot{H}^2 + 4 H^2 \dot{H} + 4 H^4) = 0$$ with solutions
$$\label{Hn}
H_\eta(t) = \frac{2\eta+1}{3}\frac{1}{t + C_1}, \quad q_\eta
=\frac{6(\eta-1)}{(2\eta+1)(4\eta-1)}, \, \, \, \eta \in
\mathbb{R}$$
and $H =\frac{1}{2}\frac{1}{t + C_1}$ with arbitrary coefficient $q$, which is equivalent to the Ansatz $\Box R = r R$ with $R = 0$.
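Before turning to the corresponding curvature, a quick symbolic check (again ours, not from the paper) that $H_\eta$ with the quoted $q_\eta$ indeed solves the third-order equation above:

```python
# Verify that H(t) = (2*eta+1)/(3*(t+C1)) solves
# H''' + 4*H'^2 + 7*H*H'' + 12*H^2*H' + 6*q*(H'^2 + 4*H^2*H' + 4*H^4) = 0
# with q = 6*(eta-1)/((2*eta+1)*(4*eta-1)).
import sympy as sp

t, C1, eta = sp.symbols('t C_1 eta')
H = (2 * eta + 1) / (3 * (t + C1))
q = 6 * (eta - 1) / ((2 * eta + 1) * (4 * eta - 1))

dH, d2H, d3H = (sp.diff(H, t, n) for n in (1, 2, 3))
ode = d3H + 4 * dH**2 + 7 * H * d2H + 12 * H**2 * dH \
      + 6 * q * (dH**2 + 4 * H**2 * dH + 4 * H**4)

assert sp.simplify(ode) == 0
print("H_eta solves the k = 0 equation of the quadratic Ansatz.")
```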
The corresponding scalar curvature is given by $$\label{Rn}
R_\eta = \frac{2}{3}\frac{(2\eta+1)(4\eta-1)}{ (t+C_1)^{2}}, \, \,
\, \eta \in \mathbb{R}.$$ By straightforward calculation one can show that $\Box^n R_n = 0$ when $n \in \mathbb{N}$, i.e. for the solution with integer value $\eta = n$. This simplifies the equations considerably. For this particular case of solutions, the operator $\mathcal{F}$ and the trace equation effectively become $$\begin{aligned}
\label{operatorF:n} &\mathcal{F}(\Box) = \sum_{k=0}^{n-1} f_{k}\Box^k ,\\
\label{trace:2}
&\sum_{k=1}^{n+1} f_{k} \sum_{l=0}^{k-1} (\partial_{\mu}\Box^{l}R \partial^{\mu}\Box^{k-1-l}R + 2\Box^{l}R \Box^{k-l}R )
+ 6 \Box \mathcal{F}(\Box) R = \frac{R}{8 \pi G}.\end{aligned}$$
In the particular case $n=2$ the trace formula becomes $$\begin{aligned}
\nonumber
& \frac{36}{35}f_{0} R^{2} + f_{1}(- \dot{R}^{2}+
\frac{12}{35}R^{3}) + f_{2}(-\frac{24}{35}R \dot{R}^{2} +
\frac{72}{1225}R^{4}) + f_{3} (-\frac{144}{1225}R^{2}\dot{R}^{2})\\
& = \frac{R}{8 \pi G}. \label{trace:3}\end{aligned}$$
Some details on all the above three Ansätze can be found in [@dragovich1].
Nonlocal Model with Term $ R^{-1}
\mathcal{F}(\Box) R $
---------------------------------
This model was introduced recently [@dragovich3] and its action may be written in the form $$\label{eq-3.2.1}
S = \int d^{4}x \sqrt{-g}\Big(\frac{R}{16 \pi G} + R^{-1} \mathcal{F}(\Box) R \Big),$$ where $\mathcal{F}(\Box) = \sum_{n=0}^\infty f_n \Box^n $ and, when $f_0 = -\frac{\Lambda}{8\pi G}$, this constant term plays the role of the cosmological constant. For example, $\mathcal{F}(\Box)$ can be of the form $\mathcal{F}(\Box) = - \frac{\Lambda}{8\pi G} e^{-\beta \Box} .$
The nonlocal term $R^{-1} \mathcal{F}(\Box) R$ in this action is invariant under the transformation $R \to C R.$ This means that the effect of nonlocality does not depend on the magnitude of the scalar curvature $R,$ but only on its spacetime dependence; in the FLRW case it is sensitive only to the dependence of $R$ on time $t$. When $R= constant$ there is no effect of nonlocality, only of $f_0$, which corresponds to the cosmological constant.
By variation of the action with respect to the metric $g^{\mu\nu}$ one obtains the equations of motion for $g_{\mu\nu}$ $$\begin{aligned}
\label{eq-3.2.2}
&R_{\mu\nu} V - ({\nabla_{\mu}}{\nabla_{\nu}}- g_{\mu\nu} \Box)V -
\frac{1}{2} g_{\mu\nu} R^{-1} \mathcal{F}(\Box) R \nonumber \\
&+ \sum_{n=1}^{\infty} \frac{f_n}{2} \sum_{l=0}^{n-1} \big(
g_{\mu\nu} \left( \partial_{\alpha} \Box^l(R^{-1})
\partial^{\alpha} \Box^{n-1-l} R + \Box^l(R^{-1}) \Box^{n-l} R
\right) \nonumber \\
&- 2 {\partial_{\mu}}\Box^l(R^{-1}) {\partial_{\nu}}\Box^{n-1-l} R\big) = - \frac{G_{\mu\nu}}{16 \pi G} , \label{eq-3.2.2} \\
&V = {\mathcal{F}}(\Box) R^{-1} - R^{-2} {\mathcal{F}}(\Box) R . \nonumber\end{aligned}$$ Note that the operator $\Box$ acts not only on $R$ but also on $R^{-1}$. There are only two independent equations when the metric is of the FLRW type.
The trace of the above equation of motion is $$\begin{aligned}
\label{eq-3.2.3}
&R V + 3 \Box V + \sum_{n=1}^{\infty} f_n \sum_{l=0}^{n-1} \left( \partial_{\alpha} \Box^l(R^{-1}) \partial^{\alpha} \Box^{n-1-l} R + 2 \Box^l(R^{-1}) \Box^{n-l} R \right) \nonumber \\
&-2 R^{-1} {\mathcal{F}}(\Box) R = \frac{R}{16 \pi G}. \label{eq-3.2.3}\end{aligned}$$
The $00$-component of the equation of motion is $$\begin{aligned}
&R_{00} V - (\nabla_0\nabla_0 - g_{00} \Box)V -
\frac{1}{2} g_{00} R^{-1} \mathcal{F}(\Box) R \nonumber \\
&+ \sum_{n=1}^{\infty} \frac{f_n}{2} \sum_{l=0}^{n-1} \big(
g_{00} \left( \partial_{\alpha} \Box^l(R^{-1})
\partial^{\alpha} \Box^{n-1-l} R + \Box^l(R^{-1}) \Box^{n-l} R \right) \nonumber \\
&- 2 \partial_0 \Box^l(R^{-1}) \partial_0 \Box^{n-1-l} R\big) = - \frac{G_{00}}{16 \pi G}. \label{eq-3.2.4}\end{aligned}$$ For the FLRW universe, these trace and $00$-component equations are equivalent to the full equation of motion, but they are more suitable for practical use.
### Some Cosmological Solutions for Constant $R$
We are interested in some exact nonsingular cosmological solutions for the scale factor $a(t)$ of this model. The Ricci curvature $R$ in the above equations of motion can be calculated from the expression $$R = 6 \left(\frac{\ddot{a}}{a} + \frac{\dot{a}^2}{a^2} + \frac{k}{a^2}\right).$$
**Case $k=0$, $a(t) = a_0 e^{\lambda t}$.**
We have $a(t) = a_0 e^{\lambda t}, \quad \dot{a} = \lambda a, \quad \ddot{a} = \lambda^2 a, \quad H = \frac{\dot{a}}{a} = \lambda$ and $R = 6 \left(\frac{\ddot{a}}{a} + \frac{\dot{a}^2}{a^2} \right) = 12 \lambda^2.$ Putting $a(t) = a_0 e^{\lambda t}$ into the above trace and $00$-component equations, one finds that they are satisfied for $\lambda = \pm \sqrt{\frac{\Lambda}{3}},$ where $\Lambda = - 8\pi G \, f_0 $ with $f_0 < 0.$
**Case $k=+1$, $a(t) = \frac{1}{\lambda} \cosh{\lambda t}$.**
Starting with $a(t) = a_0 \cosh{\lambda t}, $ we have $\dot{a} = \lambda a_0 \sinh{\lambda t}, \quad H = \frac{\dot{a}}{a} = \lambda \tanh{\lambda t}$ and $R = 6 \left(\frac{\ddot{a}}{a} + \frac{\dot{a}^2}{a^2} + \frac{1}{a^2}\right) = 12 \lambda^2$ if $a_0 = \frac{1}{\lambda} .$ Hence the trace and $00$-component equations are satisfied for the cosmic scale factor $a(t) = \frac{1}{\lambda} \cosh{\lambda t} .$
In a similar way, one can obtain another solution:
**Case $k=-1$, $a(t) = \frac{1}{\lambda} |\sinh{\lambda t}|$.**
Thus we have the following three cosmological solutions for $R = 12 \lambda^2$:
1. $k=0$, $a(t) = a_0 \, e^{\lambda t},$ nonsingular bounce solution.
2. $k=+1$, $a(t) = \frac 1\lambda \, \cosh{\lambda t},$ nonsingular bounce solution.
3. $k=-1$, $a(t) = \frac 1\lambda \, |\sinh{\lambda t}|,$ singular cosmic solution.
All of these solutions have exponential behavior for large values of time $t$.
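These statements are easy to verify; the short SymPy sketch below (an illustration of ours, not taken from the paper) checks that all three scale factors indeed give $R = 12\lambda^2$.

```python
# Check R = 6*(addot/a + adot^2/a^2 + k/a^2) = 12*lambda^2 for the three cases.
import sympy as sp

t = sp.Symbol('t', positive=True)          # t > 0 avoids the |sinh| branch issue
lam = sp.Symbol('lambda', positive=True)
a0 = sp.Symbol('a_0', positive=True)

def ricci(a, k):
    H = sp.diff(a, t) / a
    return sp.simplify(6 * (sp.diff(a, t, 2) / a + H**2 + k / a**2))

cases = [(a0 * sp.exp(lam * t), 0),         # k = 0
         (sp.cosh(lam * t) / lam, +1),      # k = +1
         (sp.sinh(lam * t) / lam, -1)]      # k = -1 (t > 0 branch)
for a, k in cases:
    assert sp.simplify(ricci(a, k) - 12 * lam**2) == 0
print("R = 12*lambda^2 in all three cases.")
```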
Note that in all the above three cases the following two tensors have also the same expressions: $$R_{\mu\nu} = \frac 14 R g_{\mu\nu}, \quad \quad
G_{\mu\nu} = - \frac 14 R g_{\mu\nu}.$$
Minkowski background space follows from the de Sitter solution $k=0$, $a(t) = a_0 e^{\lambda t}.$ Namely, when $\lambda \to 0$ then $a(t) \to a_0$ and $H =R=0.$
In all the above cases $\Box R = 0$ and thus coefficients $f_n ,\,\, n\geq 1$ may be arbitrary. As a consequence, in these cases nonlocality does not play a role.
### Some Power-Law Cosmological Solutions
Power-law solutions of the form $a(t)= a_0 |t-t_0|^\alpha$ have been investigated with some Ansätze in [@dragovich3] and without Ansätze in [@dragovich4]. The corresponding Ricci scalar and Hubble parameter are: $$R(t) = 6 \left(\frac{\ddot{a}}{a} + \frac{\dot{a}^2}{a^2} +
\frac{k}{a^2}\right) = 6\big(\alpha(2\alpha-1) (t-t_{0})^{-2} + \frac k{a_0^2}(t-t_{0})^{-2\alpha}\big)$$ $$H(t) = \frac{\dot{a}}{a} = \frac{\alpha}{|t-t_0|}.$$ Now $\Box = - \partial_t^2 - \frac{3\alpha}{|t - t_0|} \partial_t .$ An analysis has been performed for $\alpha \neq 0, \, \frac{1}{2},$ and also for the limits $ \, \alpha \to 0, \quad \alpha \to \frac{1}{2}$, with $ \, k=+1, -1, 0.$ For details, the reader is referred to [@dragovich3; @dragovich4].
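The quoted expressions are straightforward to reproduce; a small SymPy sketch of ours (for the $t > t_0$ branch, writing $x = t - t_0$):

```python
# Reproduce R and H for the power-law scale factor a = a0*x**alpha, x = t - t0 > 0.
import sympy as sp

x = sp.Symbol('x', positive=True)           # x = t - t0
alpha, k = sp.symbols('alpha k', real=True)
a0 = sp.Symbol('a_0', positive=True)

a = a0 * x**alpha
H = sp.diff(a, x) / a
R = 6 * (sp.diff(a, x, 2) / a + H**2 + k / a**2)

expected_R = 6 * (alpha * (2 * alpha - 1) / x**2 + k / (a0**2 * x**(2 * alpha)))
assert sp.simplify(R - expected_R) == 0
assert sp.simplify(H - alpha / x) == 0
print("Power-law R(t) and H(t) reproduced.")
```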
Discussion and Concluding Remarks
=================================
To illustrate the form of the above nonlocality, it is worth starting from the exact effective Lagrangian at the tree level for $p$-adic closed and open scalar strings. This Lagrangian is as follows (see, e.g. [@freund]): $$\begin{aligned}
L_p = &- \frac{m^D}{2g^2}\frac{p^2}{p-1} \varphi p^{-\frac{\Box}{2m^2}} \varphi - \frac{m^D}{2h^2}\frac{p^4}{p^2-1} \phi p^{-\frac{\Box}{4m^2}} \phi
+\frac{m^D}{h^2}\frac{p^4}{p^4-1} \phi^{p^2 +1} \nonumber \\ & - \frac{m^D}{g^2}\frac{p^2}{p^2-1} \phi^{\frac{p(p-1)}{2}} +
\frac{m^D}{g^2}\frac{p^2}{p^2-1} \varphi^{p+1} \phi^{\frac{p(p-1)}{2}}, \label{0.1}\end{aligned}$$ where $\varphi$ denotes the open string field, $D$ is the spacetime dimensionality (in the sequel we take $D=4$), and $g$ and $h$ are coupling constants for open and closed strings, respectively. The scalar field $\phi(x)$ corresponds to closed $p$-adic strings and could be related to the gravity scalar curvature as $\phi =f(R)$, where $f$ is an appropriate function. The corresponding equations of motion are: $$\begin{aligned}
p^{-\frac{\Box}{2m^2}} \varphi = \varphi^{p} \phi^{\frac{p(p-1)}{2}} , \quad p^{-\frac{\Box}{4m^2}} \phi = \phi^{p^2} + \frac{h^2}{2 g^2} \frac{p-1}{p}
\phi^{\frac{p(p-1)}{2}-1} \left( \varphi^{p+1} - 1 \right). \label{0.2}\end{aligned}$$ There are the following constant vacuum solutions: $(i)\, \varphi = \phi = 0 $, $(ii)\, \varphi = \phi = 1 $ and $(iii)\, \varphi = \phi^{-\frac{p}{2}}
= constant.$
In the case that the open string field $\varphi = 0,$ one obtains an equation of motion only for the closed string $\phi .$ One can now construct a toy nonlocal gravity model by supposing that the closed scalar string is related to the Ricci scalar curvature as $\phi = - \frac{1}{m^2} R = - \frac{4}{3 g^2} (16\pi G) R. $ Taking $p = 2,$ we obtain the following Lagrangian for the gravity sector: $$\begin{aligned}
\mathcal{L}_g = \frac{1}{16\pi G}\, R - \frac{8}{3} \frac{C^2}{h^2} R\, e^{-\frac{\ln 2\, \Box}{4m^2}}\, R
-\frac{1024}{405 g^6 h^2} (16\pi G)^3 R^{5} . \label{0.3}\end{aligned}$$ To compare the third term with the first one in this Lagrangian, let us note that $(16\pi G)^3 R^{5} = (16\pi G R)^4 \frac{R}{16\pi G}. $ It follows that $(G R)^4$ has to be dimensionless after rewriting it using the constants $c$ and $\hbar.$ Since the Ricci scalar $R$ has dimension $Time^{-2}$, $G$ has to be replaced by the Planck time as $t_P^2 = \frac{\hbar G}{c^5} \sim 10^{-88} s^2. $ Hence $(G R)^4 \to (\frac{\hbar G}{c^5} R)^4 \sim 10^{-352} R^4 $ and the third term can be neglected with respect to the first one, except when $R \sim t_P^{-2}.$ The nonlocal model with only the first two terms corresponds to the case considered above in this article. We shall consider the model including the $R^5$ term elsewhere.
It is worth noting that the above two models, with nonlocal terms $R \mathcal{F}(\Box) R$ and $R^{-1} \mathcal{F}(\Box) R$, are equivalent in the case $R = constant,$ because their equations of motion then have the same solutions. These solutions do not depend on $\mathcal{F}(\Box) - f_0 .$ It would be useful to find cosmological solutions which have a definite connection with the explicit form of the nonlocal operator $\mathcal{F}(\Box).$
Let us mention that many properties of the quadratic model and its extended versions have been considered; see [@biswas3; @biswas4; @biswas3+; @koshelev1; @koshelev2].
The nonlocal model with the $R^{-1} \mathcal{F}(\Box) R$ term is a new one that had not been considered before [@dragovich3]; it seems to be important and deserves further investigation. There are some gravity models modified by an $R^{-1}$ term, but they are not nonlocal and do not pass Solar system tests; see e.g. [@kamionkowski].
Note that nonlocal cosmology is related also to cosmological models in which matter sector contains nonlocality (see, e.g. [@arefeva0; @arefeva; @calcagni1; @barnaby; @koshelev-v; @arefeva-volovich; @dragovich; @dragovich-d]). String field theory and $p$-adic string theory models have played significant role in motivation and construction of such models.
Nonsingular bounce cosmological solutions are very important (for reviews on bouncing cosmology, see e.g. [@novello; @brandenberger]), and their progress in nonlocal gravity may be a further step towards the cosmology of the cyclic universe [@steinhardt].
Work on this paper was supported by Ministry of Education, Science and Technological Development of the Republic of Serbia, grant No 174012. The author thanks Prof. Vladimir Dobrev for invitation to participate and give a talk, as well as for hospitality, at the X International Workshop “Lie Theory and its Applications in Physics”, 17–23 June 2013, Varna, Bulgaria. The author also thanks organizers of the Balkan Workshop BW2013 “Beyond Standard Models” (25-29.04.2013, Vrnjačka Banja, Serbia), Six Petrov International Symposium on High Energy Physics, Cosmology and Gravity (5-8.09.2013, Kiev, Ukraine) and Physics Conference TIM2013 (21-23.11.2013, Timisoara, Romania), where some results on modified gravity and its cosmological solutions were presented. Many thanks also to my collaborators Zoran Rakic, Jelena Grujic and Ivan Dimitrijevic, as well as to Alexey Koshelev and Sergey Vernov for useful discussions.
[99.]{}
Clifton,T., Ferreira, P.G., Padilla, A., Skordis, C.: Modified gravity and cosmology. Phys. Rep. **513**, 1–189 (2012) \[arXiv:1106.2476v2 \[astro-ph.CO\]\]
Nojiri, S., Odintsov, S.D.: Unified cosmic history in modified gravity: from $F(R)$ theory to Lorentz non-invariant models. Phys. Rep. **505**, 59–144 (2011) \[arXiv:1011.0544v4 \[gr-qc\]\]
Sotiriou, T.P., Faraoni, V.: $f(R)$ theories of gravity. Rev. Mod. Phys. **82**, 451–497 (2010) \[arXiv:0805.1726v4 \[gr-qc\]\]
Ade, P. A. R., Aghanim, N., Armitage-Caplan, C., et al. (Planck Collaboration): Planck 2013 results. XVI. Cosmological parameters. \[arXiv:1303.5076v3\]
Deser, S., Woodard, R.P.: Nonlocal cosmology. Phys. Rev. Lett. **99**, 111301 (2007) \[ arXiv:0706.2151 \[astro-ph\]\]
Deffayet, C., Woodard, R.P.: Reconstructing the distortion function for nonlocal cosmology. JCAP **0908**, 023 (2009) \[arXiv:0904.0961 \[gr-qc\]\]
Woodard, R.P.: Nonlocal models of cosmic acceleration. \[arXiv:1401.0254 \[astro-ph.CO\]\]
Nojiri, S., Odintsov, S.D.: Modified non-local-F(R) gravity as the key for inflation and dark energy. Phys. Lett. B **659**, 821–826 (2008) \[arXiv:0708.0924v3 \[hep-th\]\]
Jhingan, S., Nojiri, S., Odintsov, S.D., Sami, M., Thongkool, I., Zerbini, S.: Phantom and non-phantom dark energy: The Cosmological relevance of non-locally corrected gravity. Phys. Lett. B **663**, 424–428 (2008) \[arXiv:0803.2613 \[hep-th\]\]
Zhang, Y.-li., Sasaki, M.: Screening of cosmological constant in non-local cosmology. Int. J. Mod. Phys. D **21**, 1250006 (2012) \[arXiv:1108.2112 \[gr-qc\]\]
Elizalde, E., Pozdeeva, E.O., Vernov, S.Yu.: Stability of de Sitter solutions in non-local cosmological models. PoS(QFTHEP2011) 038, (2012) \[arXiv:1202.0178 \[gr-qc\]\]
Elizalde, E., Pozdeeva, E.O., Vernov, S.Yu., Zhang, Y.: Cosmological solutions of a nonlocal model with a perfect fluid. J. Cosmology Astropart. Phys. **1307**, 034 (2013) \[arXiv:1302.4330v2 \[hep-th\]\]
Koivisto, T.S.: Dynamics of nonlocal cosmology. Phys. Rev. D **77**, 123513 (2008) \[arXiv:0803.3399 \[gr-qc\]\]
Koivisto, T.S.: Newtonian limit of nonlocal cosmology. Phys. Rev. D **78**, 123505 (2008) \[arXiv:0807.3778 \[gr-qc\]\]
Barvinsky, A.O.: Dark energy and dark matter from nonlocal ghost-free gravity theory. Phys. Lett. B **710**, 12–16 (2012) \[arXiv:1107.1463 \[hep-th\]\]
Modesto, L., Tsujikawa, S.: Non-local massive gravity. Phys. Lett. B **727**, 48–56 (2013) \[arXiv:1307.6968 \[hep-th\]\]
Briscese, F., Marciano, A., Modesto, L., Saridakis, E.N.: Inflation in (super-)renormalizable gravity. Phys. Rev. D **87**, 083507 (2013) \[arXiv:1212.3611v2 \[hep-th\]\]
Calcagni, G., Modesto, L., Nicolini, P.: Super-accelerating bouncing cosmology in asymptotically-free non-local gravity. \[arXiv:1306.5332 \[gr-qc\]\]
Moffat, J.M.: Ultraviolet complete quantum gravity. Eur. Phys. J. Plus **126**, 43 (2011) \[arXiv:1008.2482 \[gr-qc\]\]
Calcagni, G., Nardelli, G.: Nonlocal gravity and the diffusion equation. Phys. Rev. D **82**, 123518 (2010) \[arXiv:1004.5144 \[hep-th\]\]
Dirian, Y., Foffa, S., Khosravi, N., Kunz, M., Maggiore, M.: Cosmological perturbations and structure formation in nonlocal infrared modifications of general relativity. \[arXiv:1403.6068 \[astro-ph.CO\]\]
Biswas, T., Gerwick, E., Koivisto, T., Mazumdar, A.: Towards singularity and ghost free theories of gravity. Phys. Rev. Lett. **108**, 031101 (2012) \[arXiv:1110.5249v2 \[gr-qc\]\]
Biswas, T., Conroy, A., Koshelev, A.S., Mazumdar, A.: Generalized ghost-free quadratic curvature gravity. \[arXiv:1308.2319 \[hep-th\]\]
Biswas, T., Mazumdar, A., Siegel, W: Bouncing universes in string-inspired gravity. J. Cosmology Astropart. Phys. **0603**, 009 (2006) \[arXiv:hep-th/0508194\]
Biswas, T., Koivisto, T., Mazumdar, A.: Towards a resolution of the cosmological singularity in non-local higher derivative theories of gravity. J. Cosmology Astropart. Phys. **1011**, 008 (2010) \[arXiv:1005.0590v2 \[hep-th\]\].
Biswas, T., Koshelev, A.S., Mazumdar, A., Vernov, S.Yu.: Stable bounce and inflation in non-local higher derivative cosmology. J. Cosmology Astropart. Phys. **08**, 024 (2012) \[arXiv:1206.6374 \[astro-ph.CO\]\]
Koshelev, A.S., Vernov, S.Yu.: On bouncing solutions in non-local gravity. \[arXiv:1202.1289v1 \[hep-th\]\]
Dimitrijevic, I., Dragovich, B., Grujic J., Rakic, Z.: On modified gravity. Springer Proceedings in Mathematics and Statistics **36**, 251–259 (2013) \[arXiv:1202.2352 \[hep-th\]\]
Dimitrijevic, I., Dragovich, B., Grujic J., Rakic, Z.: New cosmological solutions in nonlocal modified gravity. Rom. Journ. Phys. **58** (5-6), 550–559 (2013) \[arXiv:1302.2794 \[gr-qc\]\]
Dimitrijevic, I., Dragovich, B., Grujic J., Rakic, Z.: A new model of nonlocal modified gravity. Publications de l’Institut Mathematique **94** (108), 187–196 (2013)
Dimitrijevic, I., Dragovich, B., Grujic J., Rakic, Z.: Some power-law cosmological solutions in nonlocal modified gravity. In these proceedings
Brekke, L., Freund, P.G.O.: $p$-Adic numbers in physics. Phys. Rep. **233**, 1–66 (1993).
Erickcek, A.L., Smith, T.L., Kamionkowski M.: Solar system tests do rule out 1/R gravity. Phys. Rev. D **74**, 121501 (2006) \[arXiv:astro-ph/0610483\]
Koshelev, A.S.: Modified non-local gravity. \[arXiv:1112.6410v1 \[hep-th\]\]
Koshelev, A.S.: Stable analytic bounce in non-local Einstein-Gauss-Bonnet cosmology. \[arXiv:1302.2140 \[astro-ph.CO\]\]
Aref’eva, I.Ya.: Nonlocal string tachyon as a model for cosmological dark energy. AIP Conference Proceedings **826**, 301–311 (2006) \[astro-ph/0410443\]
Aref’eva, I.Ya., Joukovskaya, L.V., Vernov, S.Yu.: Bouncing and accelerating solutions in nonlocal stringy models. JHEP **0707**, 087 (2007) \[hep-th/0701184\]
Calcagni, G., Montobbio, M., Nardelli, G.: A route to nonlocal cosmology. Phys. Rev. D **76**, 126001 (2007) \[arXiv:0705.3043v3 \[hep-th\]\]
Barnaby, N., Biswas, T., Cline, J.M.: $p$-Adic inflation. JHEP **0704**, 056 (2007) \[hep-th/0612230\]
Koshelev, A.S., Vernov, S.Yu.: Analysis of scalar perturbations in cosmological models with a non-local scalar field. Class. Quant. Grav. **28**, 085019 (2011) \[arXiv:1009.0746v2 \[hep-th\]\]
Aref’eva, I.Ya., Volovich, I.V.: Cosmological daemon. JHEP **1108**, 102 (2011) \[arXiv:1103.0273 \[hep-th\]\]
Dragovich, B.: Nonlocal dynamics of $p$-adic strings. Theor. Math. Phys. **164** (3), 1151–115 (2010) \[arXiv:1011.0912v1 \[hep-th\]\]
Dragovich, B.: Towards $p$-adic matter in the universe. Springer Proceedings in Mathematics and Statistics **36**, 13–24 (2013) \[arXiv:1205.4409 \[hep-th\]\]
Novello, M., Bergliaffa, S.E.P.: Bouncing cosmologies. Phys. Rep. **463**, 127–213 (2008) \[arXiv:0802.1634 \[astro-ph\]\]
Brandenberger, R.H.: The matter bounce alternative to inflationary cosmology. \[arXiv:1206.4196 \[astro-ph.CO\]\]
Lehners, J.-L., Steinhardt, P.J.: Planck 2013 results support the cyclic universe. arXiv:1304.3122 \[astro-ph.CO\]
---
abstract: 'The task of action recognition or action detection involves analyzing videos and determining what action or motion is being performed. The primary subjects of these videos are predominantly humans performing some action. However, this requirement can be relaxed to generalize over other subjects such as animals or robots. The applications range from human-computer interaction to automated video editing proposals. When we consider spatio-temporal action recognition, we deal with **action localization**. This task involves determining not only what action is being performed, but also when and where it is being performed in the video. This paper aims to survey the plethora of approaches and algorithms proposed to solve this task, give a comprehensive comparison between them, explore the various datasets available for the problem, and determine the most promising approaches.'
author:
- |
Amlaan Bhoi\
Department of Computer Science\
University of Illinois at Chicago\
[[email protected]]{}
bibliography:
- 'egbib.bib'
title: 'Spatio-temporal Action Recognition: A Survey'
---
Introduction
============
**Spatio-temporal action recognition**, or action localization [@tian2013spatiotemporal], is the task of classifying what action is being performed in a sequence of frames (or video) as well as localizing each detection both in space and time. The localization can be visualized using bounding boxes or masks. There has been increased interest in this task in recent years due to the greater availability of computing resources as well as new advances in convolutional neural network architectures.
There are several ways to tackle this task. Most of them revolve around the following approaches: discriminative parts [@ma2013action; @ke2007event], figure-centric models [@klaser2010human; @lan2011discriminative; @prest2013explicit], deformable parts [@tian2013spatiotemporal], action proposals [@gkioxari2015finding; @yu2015fast; @jain2014action; @weinzaepfel2015learning], graph-based methods [@yan2018spatial; @soomro2015action], 3D convolutional neural networks [@diba2017deep], and more. We examine each approach, list its advantages and disadvantages, and explore the subset of techniques shared by many approaches. We then explore the various datasets available for this task and whether they are sufficient for evaluating this problem. Finally, we comment on which methods are promising going forward. The terms spatio-temporal action recognition and action localization will be used interchangeably in this text, as they refer to the same task. We should not confuse action localization with the similarly framed problem of **temporal action detection**, which deals with determining only *when* an action occurs in a long video.
Problem Definition
==================
We can broadly define the problem as: given a video $X = \{x_1, x_2, ..., x_n\}$, where $x_i$ is the $i^{th}$ frame in the video, determine the action label $a_i \in A$ for that $i^{th}$ frame, where $A$ is the set of action labels in the dataset, as well as a set of $\{x_1, x_2, y_1, y_2\}$ coordinates of the bounding box of the classified action $a_i$.
An alternative formulation given by @weinzaepfel2015learning defines action localization as follows: given a video of $T$ frames $\{I_t\}_{t=1..T}$ and a class $c \in C$, where $C$ is the set of classes, the task involves detecting if action $c$ occurs in the video and, if yes, when and where. A successful algorithm should output $\{R_t\}_{t=t_b..t_e}$, with $t_b$ the beginning and $t_e$ the end of the predicted temporal extent of action $c$, and $R_t$ the detected region in frame $I_t$.
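To make the expected output concrete, here is a minimal sketch of a container such a detector could return; the class name, fields, and example values below are our own and purely illustrative.

```python
# A hypothetical container for one spatio-temporal detection ("action tube"):
# a class label, a score, and one bounding box per frame between t_b and t_e.
from dataclasses import dataclass, field
from typing import Dict, Tuple

Box = Tuple[float, float, float, float]     # (x1, y1, x2, y2) in pixel coordinates

@dataclass
class ActionTube:
    label: str                              # action class c
    score: float                            # detector confidence
    t_begin: int                            # first frame index t_b
    t_end: int                              # last frame index t_e
    boxes: Dict[int, Box] = field(default_factory=dict)  # frame index -> region R_t

    def spans(self, frame_idx: int) -> bool:
        """True if the tube is active at the given frame."""
        return self.t_begin <= frame_idx <= self.t_end

# Example: a 'diving' action detected from frame 12 to 15.
tube = ActionTube("diving", 0.87, 12, 15,
                  {t: (40.0, 30.0, 120.0, 200.0) for t in range(12, 16)})
assert tube.spans(13) and not tube.spans(20)
```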
Every paper contains a slightly different formulation of the problem depending on the approach taken. We shall briefly explore those definitions to see how this one task can be approached from different viewpoints (graphs, optical flow, etc.).
Challenges
==========
Spatio-temporal action recognition faces the usual challenges of action recognition, such as tracking the action throughout the video, localizing the time frame when the action occurs, and more. However, there is an additional set of challenges such as, but not limited to:
- Background clutter or object occlusion in video
- Spatial complexity in scene with respect to number of candidate objects
- Linking actions between frames in presence of irregular camera motion
- Predicting optical flow of action
However, there is a more fundamental problem to consider with the traditional approach to action localization. We cannot treat the problem in a linear way of just classifying an action. Even object detection algorithms require region proposals to classify [@ren2015faster]. This is made worse by the introduction of the temporal dimension, which would cause an exponential increase in the number of proposals and render any such approach impractical.
Action Proposal Models
======================
Action localization with tubelets from motion
---------------------------------------------
@jain2014action propose a method for spatio-temporal action recognition based on a selective search sampling strategy for videos. Their approach uses **super-voxels** instead of super-pixels. In this way, they directly obtain the $2D+t$ sequences of bounding boxes, which are called *tubelets*. This removes the issue of linking bounding boxes between frames in a video. In addition, their method explicitly incorporates motion information by introducing *independent motion evidence* as a feature to characterize how the action’s motion deviates from the background motion.
The pipeline of this method starts with super-voxel segmentation, done through a graph-based method [@xu2012evaluation], followed by iterative generation of additional tubelets, descriptor generation (BOW representation), and finally classification using the BOW histograms of tubelets.
**Super-voxel generation.** Initial super-voxels are agglomeratively merged together based on similarity measures. We can imagine the merging as a tree with the individual super-voxels being the leaves, merged all the way up to the root. This procedure produces $n-1$ additional super-voxels.
**Tubelets.** Wherever super-voxels appear, they are tightly bounded by a rectangular bounding box. A sequence of these bounding boxes produces what is known as a *tubelet*. The algorithm thus produces $2n-1$ tubelets.
### Merging
Merging of super-voxels is based on five criteria that are divided into two parts: **color, texture, motion:** $$\label{eq: 1}
h_t = \frac{\Gamma (r_i) \times h_i + \Gamma (r_j) \times h_j}{\Gamma (r_i) + \Gamma (r_j)}$$ where $h_i$ is the $\ell_1$-normalized histogram for super-voxel $r_i$ (similarly for $r_j$) and $\Gamma (r_i)$ is the number of super-voxels in $r_i$. The second part is **size, fill:** $$\label{eq: 2}
s_{\Gamma}(r_i, r_j) = 1 - \frac{\Gamma (r_i) + \Gamma (r_j)}{\Gamma (video)}$$ where $\Gamma (video)$ is the size of the video (in pixels). The merging strategies can vary, using any combination of these criteria. An example of the merge operations is illustrated in Figure \[fig:4-1\].
{width="0.95\linewidth"}
### Motion features
The authors defined *independent motion evidence* (IME) as: $$\label{eq: 3}
\xi (p,t) = 1 - \varpi (p)$$ where $\varpi(p)$ is the ratio $\frac{\phi(r_{\hat{\theta}}(p, t))}{r_{\hat{\theta}}(p, t)}$ normalized between \[0, 1\]. More details are available in their paper.
### Results
The authors evaluated their model using ROC curve comparisons against other methods; the curves can be found in their paper. They also reported average precision on the MSR-II dataset [@zhang2016large], with results shown in Table \[table:4-1\].
\[tubelets\]
Method Boxing Handclapping Handwaving
-------------- ---------- -------------- ------------
Cao *et al.* 17.5 13.2 26.7
SDPM 38.9 23.9 44.7
Tubelets **46.0** **31.4** **85.8**
: Results for @jain2014action. Average precisions for MSR-II.[]{data-label="table:4-1"}
Tube Convolutional Neural Network (T-CNN) for Action Detection in Videos
------------------------------------------------------------------------
@hou2017tube introduce a new architecture called the **tube convolutional neural network**, or T-CNN, which is a generalization of R-CNN [@girshick2014rich] from 2D to 3D. Their approach first divides the video into clips of 8 frames each. This allows them to use a fixed-size ConvNet architecture to process clips while mitigating GPU memory cost. As an input video is processed clip by clip, action tube proposals with various spatial and temporal sizes are generated for the different clips; these then need to be linked into a tube proposal sequence.
The authors introduce a new layer called the **Tube-of-Interest** (ToI) pooling layer. This is a 3D generalization of the Region of Interest (RoI) pooling layer of R-CNN. ToI layers are used to produce fixed-length feature vectors, which solves the problem of variable-length inputs. More details about the ToI layer can be found in their paper.
{width="1.0\linewidth"}
### Tube Proposal Network
The TPN consists of 8 3D convolutional (3DConv) layers, 4 max-pooling layers, 2 ToI layers, 1 point-wise convolutional layer, and 2 fully-connected layers. The authors use a pre-trained C3D model [@tran2015learning] and fine-tune it for each dataset used in the experiments. When generalizing R-CNN, instead of using the 9 hand-picked anchor boxes, they use K-Means clustering on the training set to select 12 anchor boxes, which adapts better to different datasets. Each bounding box is assigned an “actionness” probability determining whether the box corresponds to an action or not (binary label). A bounding box proposal is labeled positive if its Intersection-over-Union (IoU) overlap exceeds 0.7.
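For concreteness, below is a small sketch of IoU-based labelling of box proposals; it is our own simplified helper (we assume the overlap is measured against ground-truth boxes, and only the 0.7 positive threshold comes from the text).

```python
# Label candidate boxes as positive "actionness" examples if their IoU with a
# ground-truth box exceeds 0.7 (illustrative helper, not the authors' code).
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def label_proposals(proposals, gt_boxes, pos_thresh=0.7):
    """Return 1 for proposals overlapping any ground-truth box by > pos_thresh, else 0."""
    return [int(any(iou(p, g) > pos_thresh for g in gt_boxes)) for p in proposals]

gt = [(10, 10, 110, 210)]
props = [(12, 15, 108, 205), (150, 30, 250, 130)]
print(label_proposals(props, gt))   # -> [1, 0]
```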
### Linking Tube Proposals
The primary problem in linking tube proposals is that not every consecutive tube proposal may capture the entire action (think of occlusion or noise in middle clips). To solve this, the authors use two metrics when linking tube proposals: actionness and overlap scores. Each video proposal (a link of tube proposals) is assigned a score defined as: $$\label{eq: 4}
S = \frac{1}{m}\sum_{i=1}^{m}Actionness_{i}+\frac{1}{m-1}\sum_{j=1}^{m-1}Overlap_{j, j+1}$$ where $Actionness_{i}$ denotes the actionness score of tube proposal from $i$-th clip, $Overlap_{j, j+1}$ measures the overlap between two linked proposals from clips $j$ and ($j$ + 1), and $m$ is the total number of clips.
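A direct transcription of this scoring rule into code might look as follows (illustrative only; the proposal linking itself, e.g. searching over candidate links across clips, is not shown).

```python
# Score of a linked tube-proposal sequence: mean actionness over the m clips
# plus mean overlap between consecutive proposals.
def sequence_score(actionness, overlaps):
    """actionness: m per-clip scores; overlaps: m-1 scores between consecutive clips."""
    m = len(actionness)
    assert len(overlaps) == m - 1, "need one overlap per pair of consecutive clips"
    return sum(actionness) / m + sum(overlaps) / (m - 1)

# Toy example with m = 4 clips.
print(sequence_score([0.9, 0.8, 0.85, 0.7], [0.75, 0.6, 0.8]))
```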
### Action Detection
The input to the action detection module is a set of linked tube proposal sequences of varying lengths. This is where the ToI layer comes into action. The output of the ToI layer is attached to two fully-connected layers and a dropout layer. The dimension of the last fully-connected layer is $N$ + 1 ($N$ action classes and 1 background class).
### Results
@hou2017tube evaluated and verified their approach on three trimmed video datasets (UCF-Sports [@rodriguez2008action], J-HMDB [@jhuang2013towards], UCF-101 [@soomro2012ucf101]) and one untrimmed video dataset, THUMOS’14 [@jiang2014thumos]. The results for the UCF-Sports dataset are outlined in Table \[table:4-2\].

There are many more approaches related to action proposal models that we do not cover in detail. These include **Finding Action Tubes** by @gkioxari2015finding, **Fast action proposals for human action detection and search** by @yu2015fast, **Learning to track for spatio-temporal action localization** by @weinzaepfel2015learning, and **Human Action Localization with Sparse Spatial Supervision** by @weinzaepfel2017human.
Diving Golf Kicking Lifting Riding Run SkateB. Swing SwingB. Walk mAP
-------------------------- ----------- ----------- ----------- ----------- ------------ ----------- ----------- ----------- ----------- ----------- ----------
@weinzaepfel2015learning 60.71 77.55 65.26 **100.0** 99.53 52.60 47.14 **88.88** 62.86 64.44 71.9
@peng2016multi **96.12** 80.47 73.78 99.17 97.56 82.37 57.43 83.64 98.54 75.99 84.51
@hou2017tube 84.38 **90.79** **86.48** 99.77 **100.00** **83.65** **68.72** 65.75 **99.62** **87.79** **86.7**
Figure-Centric Models
=====================
Discriminative figure-centric models for joint action localization and recognition
----------------------------------------------------------------------------------
@lan2011discriminative approach spatio-temporal action recognition by combining a bag-of-words style statistical representation with a figure-centric structural representation that works mainly like template matching. They treat the position of the human as a latent variable in a **discriminative latent variable model** and infer it while simultaneously recognizing the action. In addition, instead of simple bounding boxes, they learn discriminative cells within the boxes for more robust detection. Because of the latent variables, exact learning and inference are intractable, so efficient approximate learning and inference algorithms are developed.
Method
-------------------------------------------- --
global bag-of-words
local bag-of-words
spatial bag-of-words with $\Delta_{0/1}$
spatial bag-of-words with $\Delta_{joint}$
latent model with $\Delta_{0/1}$
@lan2011discriminative
: Results for @lan2011discriminative. Mean per-class action recognition accuracies.[]{data-label="Table:5-1"}
### Figure-Centric Video Sequence Model
The model jointly learns the relationship between the action label and the location of the person performing the action in each frame. The standard bounding box is divided into $R$ cells, where each cell is either turned “on” or “off” depending on whether it contains the action.
Each video **I** has an associated label $y$. Suppose video **I** contains $\tau$ frames represented as $\textbf{I} = (I_1, I_2,...,I_\tau)$, where $I_i$ denotes the $i$-th frame of the video. Furthermore, the authors define the bounding boxes for each video as $L = (l_1, l_2,...,l_\tau)$. The $i$-th bounding box $l_i$ is a 4-dimensional vector representing the $(x, y)$ coordinates, height, and width of the bounding box. The extracted feature vector $\lambda(l_i;I_i)$ is a concatenation of three vectors, i.e. $\lambda(l_i;I_i) = [\textbf{x}_i;\textbf{g}_i;c_i]$. $\textbf{x}_i$ and $\textbf{g}_i$ represent the appearance feature, defined as the k-means quantized HOG3D descriptors [@klaser2010learning], and the spatial locations of interest within the bounding box $l_i$, respectively. $c_i$ denotes the color histogram.
The authors use a scoring function inspired by the latent SVM model [@yu2009learning] which is defined as:
$$\label{eq: 5}
\begin{split}
\theta^{\top}\Phi(\textbf{z}, L, y, \textbf{I}) = & \sum_{i \in \nu}\alpha^{\top}\phi(l_i, \textbf{z}_i, y, I_i) \\
& + \sum_{i, i+1 \in \varepsilon}\beta^{\top}\psi (l_i, l_{i+1}, \textbf{z}_i, \textbf{z}_{i+1}, I_i, I_{i+1}) \\
& + \gamma^{\top}\eta (y, \textbf{I})
\end{split}$$
The definition of the **unary** potential function, **pairwise** potential function, and **global action** potential function can be found in the paper.
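To make the structure of this scoring function concrete, the following Python sketch evaluates it as a sum of per-frame unary terms, pairwise terms over consecutive frames, and one global action term. The potential functions are placeholders standing in for the definitions given in the paper, so this is only an illustration of the decomposition, not the authors' implementation.

```python
import numpy as np

def score(alpha, beta, gamma, video, boxes, cells, label,
          unary, pairwise, global_action):
    """Evaluate the scoring function above as a sum of unary, pairwise,
    and global potentials; `unary`, `pairwise`, and `global_action` are
    placeholder callables returning feature vectors."""
    total = 0.0
    n = len(video)
    for i in range(n):                        # unary potentials, one per frame
        total += float(np.dot(alpha, unary(boxes[i], cells[i], label, video[i])))
    for i in range(n - 1):                    # pairwise potentials over consecutive frames
        total += float(np.dot(beta, pairwise(boxes[i], boxes[i + 1],
                                             cells[i], cells[i + 1],
                                             video[i], video[i + 1])))
    total += float(np.dot(gamma, global_action(label, video)))  # global action term
    return total
```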
### Learning
Given N training samples $\langle\textbf{I}^n, L^n, y^n\rangle(n = 1, 2,...,N)$, the authors optimize over model parameter $\theta$. They adopt the SVM framework for learning as:
$$\begin{aligned}
\min_{\theta, \xi \geq 0} \frac{1}{2}\left \| \theta \right \|^{2}+C \sum_{n=1}^{N}\xi^n \\
\textup{s.t.} \quad f_{\theta}(y^{n}, L^n, \textbf{I}^n) - f_{\theta}(y, L, \textbf{I}^n) \geq \\
\Delta(y, y^n, L, L^n) - \xi^n, \forall n, \forall y, \forall L
\end{aligned}$$
### Inference
The inference is simply solving the following optimization problem:
$$\max_y \max_L f_{\theta}(L, y, \textbf{I}) = \max_{y} \max_{L} \max_{\textbf{z}} \theta^\top \Phi (\textbf{z}, L, y, \textbf{I})$$
For a fixed $y \in \mathcal{Y}$, we can maximize $L$ and **z** as:
$$\max_L \max_{\textbf{z}} \theta^\top \Phi(\textbf{z}, L, y, \textbf{I})$$
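For intuition, the nested maximization can be written as a brute-force search. This is only tractable for tiny candidate sets, which is exactly why the authors resort to approximate inference; the `score_fn` argument below is a placeholder for $f_\theta$.

```python
def infer(score_fn, video, labels, candidate_tracks, candidate_cells):
    """Brute-force max over y, L, and z of the scoring function.
    Feasible only for very small search spaces."""
    best_score, best = float("-inf"), None
    for y in labels:
        for L in candidate_tracks:
            for z in candidate_cells:
                s = score_fn(video, L, z, y)
                if s > best_score:
                    best_score, best = s, (y, L, z)
    return best, best_score
```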
### Results
The authors evaluate their model on the UCF-Sports dataset [@rodriguez2008action]. They achieved an **83.7**% accuracy, beating previous methods. The complete results, with mean per-class action recognition accuracies, can be found in Table \[Table:5-1\].
The other two closely related Figure-Centric approaches are **Human focused action localization in video** by @klaser2010human and **Explicit modeling of human-object interactions in realistic videos** by @prest2013explicit.
Deformable Parts Models
=======================
Action recognition and localization by hierarchical space-time segments
-----------------------------------------------------------------------
{width="1.0\linewidth"}
@ma2013action introduce a new representation called *hierarchical space-time segments*, in which the space-time segments of a video are organized into a two-level hierarchy. The first level comprises the root space-time segments that may contain the whole human body. The second level comprises space-time segments that contain parts of the root. They present an unsupervised algorithm designed to extract segments that preserve both static *and* non-static relevant space-time segments as well as their hierarchical and temporal relationships. Their algorithm consists of three major steps:
1. Apply hierarchical segmentation on each video frame to get a set of segment trees, each of which is considered as a candidate segment tree of the human body.
2. Prune the candidates by exploring several cues such as shape, motion, articulated objects’ structure and global foreground color.
3. Track each segment of the remaining segment trees in time both forward and backward.
Finally, using a simple linear SVM on the *bag of hierarchical space-time segments* representation, they achieved better or comparable performance compared to previous methods.
{width="1.0\linewidth"}
### Video Frame Hierarchical Segmentation
On each video frame, the authors compute the boundary map as described by [@leordeanu2012efficient] using three color channels and five motion channels, including optical flow, unit-normalized optical flow, and optical flow magnitude. The boundary map is then used to compute the Ultrametric Contour Map (UCM) as described by [@arbelaez2009contours]. By traversing the UCM, certain segments are removed to reduce redundancy. Then, the authors remove the root of the segment tree and obtain a set of segment trees $\tau^{t}$, where $t$ is the frame index. Each $T_{j}^{t} \in \tau^{t}$ is considered a candidate segment tree of a human body, and we denote $T_{j}^{t} = \{s_{ij}^{t}\}$, where each $s_{ij}^{t}$ is a segment and $s_{0j}^{t}$ is the root segment.
### Pruning Candidate Segment Trees
The pruning step should only prune irrelevant static segments. The decision to prune a candidate segment is made using information from all segments of the tree, not just local information; thus, pruning is done at the candidate level. The two methods used to perform tree pruning are based on **shape and color cues** and on a **foreground map**. The detailed formulations are available in the original paper.
### Extracting Hierarchical Space-Time Segments
After pruning, we have a set $\hat\tau^{t}$ of remaining candidate segment trees. To capture temporal information, for each $T_{j}^{t} \in \hat\tau^{t}$, the authors track every segment $s_{ij}^{t} \in T_{j}^{t}$ to construct a space-time segment. To this end, they propose a non-rigid region tracking method. The method predicts the region in the next frame by optical flow and computes a flow prediction map $M_f$ as well as a color prediction map $M_c$. If a point $b' \in B$ (where $B$ is the bounding box in the next frame around region $R'$) has color $c_{b'}$, then $M_{c}(b') = \textbf{h}(c_{b'})$ and $M_f(b')$ is given by:
$$M_{f}(b') = \begin{cases}
2 & \text{$b' \in R'$} \\
1 & \text{$b' \in \hat{B} \wedge b' \notin R'$} \\
0 & \text{otherwise}
\end{cases}$$
The combined map $M$ is then scaled and quantized to integer values in the range $[0, 20]$. By setting thresholds $\delta_m$ at integer values between 1 and 20, we obtain 20 binary maps. The size of every connected component is computed, and the one whose size is most similar to that of $R$ is selected as the candidate. These space-time segments may contain the same objects. To exploit this dense representation, space-time segments are grouped together if they overlap above some threshold (the authors used 0.7). Finally, for each track (group of segments), bounding boxes are computed on all spanned frames.
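The flow prediction map is simple to state in code; the sketch below assumes the two membership tests are available as boolean masks and ignores the color map $M_c$.

```python
import numpy as np

def flow_prediction_map(inside_region, inside_expanded_box):
    """Flow prediction map M_f over the points of the bounding box B:
    2 for points inside the flow-predicted region R', 1 for points inside
    the expanded box but outside R', 0 elsewhere.  Both inputs are boolean
    masks of the same shape."""
    m = np.zeros(inside_region.shape, dtype=np.int8)
    m[inside_expanded_box & ~inside_region] = 1
    m[inside_region] = 2
    return m
```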
### Action Recognition and Localization
For action recognition, a one-vs-all linear SVM is trained on all training videos’ BoW representations for multiclass classification resulting in the following rule:
$$y = \operatorname*{argmax}_{y \in \mathcal{Y}}
\left( \begin{array}{c}
\textbf{w}_{y}^{r} \\
\textbf{w}_{y}^{p}
\end{array} \right)^{\!\top}
\left( \begin{array}{c}
\textbf{x}^{r} \\
\textbf{x}^{p}
\end{array} \right)+b_y$$
where $\textbf{x}^{r}$ and $\textbf{x}^{p}$ are the BoW representations of the root and part space-time segments of the test video respectively, $\textbf{w}_{y}^{r}$ and $\textbf{w}_{y}^{p}$ are the entries of the trained separating hyperplane for roots and parts respectively, $b_y$ is the bias term, and $\mathcal{Y}$ is the set of action class labels.
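The classification rule amounts to a single matrix-vector product per representation; a minimal sketch, assuming the per-class weight vectors are stacked row-wise, is:

```python
import numpy as np

def classify(x_root, x_part, W_root, W_part, b):
    """One-vs-all linear rule: per-class weights for the root and part BoW
    histograms are stacked row-wise in W_root / W_part; the predicted
    label is the argmax of the class scores."""
    scores = W_root @ x_root + W_part @ x_part + b
    return int(np.argmax(scores))
```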
For action localization, we find the space-time segments that contribute positively to the classification of the video. Given a test video with a set of root space-time segments $S^{r} = \{\textbf{s}_{a}^{r}\}$ and a set of part space-time segments $S^{p} = \{\textbf{s}_{b}^{p}\}$, denote by $C^r = \{\textbf{c}_{k}^{r}\}$ and $C^p = \{\textbf{c}_{k}^{p}\}$ the sets of code words corresponding to positive entries of $\textbf{w}_{y}^{r}$ and $\textbf{w}_{y}^{p}$ respectively. We compute the set $U$ as:
$$\begin{aligned}
U = \{ \hat{s}^{r} : \hat{s}^{r} = \operatorname*{argmax}_{s_{a}^{r} \in S^{r}} h(\textbf{s}_{a}^{r}, \textbf{c}_{k}^{r}), \forall \textbf{c}_{k}^{r} \in C^{r} \} \\
\cup \{ \hat{s}^{p} : \hat{s}^{p} = \operatorname*{argmax}_{s_{b}^{p} \in S^{p}} h(\textbf{s}_{b}^{p}, \textbf{c}_{k}^{p}), \forall \textbf{c}_{k}^{p} \in C^{p} \}
\end{aligned}$$
where the function $h$ measures the similarity between two space-time segments. Finally, the tracks that have at least one space-time segment in the set $U$ are output as the action localization results.
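A compact way to read the definition of $U$ is as one argmax per positive-weight codeword; the sketch below returns the selected root and part segment indices, with `h` left abstract.

```python
def localization_set(root_segs, part_segs, root_words, part_words, h):
    """Indices of the segments forming the set U: for every codeword with a
    positive classifier weight, keep the most similar root (or part)
    space-time segment, where h(segment, codeword) measures similarity."""
    root_idx = {max(range(len(root_segs)), key=lambda a: h(root_segs[a], c))
                for c in root_words}
    part_idx = {max(range(len(part_segs)), key=lambda b: h(part_segs[b], c))
                for c in part_words}
    return root_idx, part_idx
```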
### Results
@ma2013action experimented on the UCF-Sports [@rodriguez2008action] and High Five [@patron2010high] datasets. For action localization, they achieved roughly a 10% increase in average IOU compared to previous methods.
------------ ------------------------ -------------------- ------------------------------ ---------- ------------------------ -------------------- ------------------------------ ----------
             [[@tran2011optimal]]{} [[@tran2012max]]{} [[@lan2011discriminative]]{} Ma [[@tran2011optimal]]{} [[@tran2012max]]{} [[@lan2011discriminative]]{} Ma
dive 16.4 36.5 43.4 **46.7** 22.6 37.0 - **44.3**
golf - - 37.1 **51.3** - - - **50.5**
kick - - 36.8 **50.6** - - - **48.3**
lift - - **68.8** 55.0 - - - **51.4**
ride 62.2 **68.1** 21.9 29.5 63.1 **64.0** - 30.6
run 50.2 **61.4** 20.1 34.3 48.1 **61.9** - 33.1
skate - - 13.0 **40.0** - - - **38.5**
swing-b - - 32.7 **54.8** - - - **54.3**
swing-s - - 16.4 **19.3** - - - **20.6**
walk - - 28.3 **39.5** - - - **39.0**
**Avg.** - - 31.8 **42.1** - - - **41.0**
------------ ------------------------ -------------------- ------------------------------ ---------- ------------------------ -------------------- ------------------------------ ----------
: Results for @ma2013action. Action localization results measured as average IOU on UCF Sports dataset.
Spatiotemporal Deformable Part Models for Action Detection
----------------------------------------------------------
@tian2013spatiotemporal extend the concept of deformable parts models from 2D to 3D, similar to @ma2013action but with some differences. The main difference is that this approach searches for a 3D subvolume, considering parts both in space and time. SDPM also includes an explicit model that captures intra-class variation as a deformable configuration of parts. Finally, this approach shows effective results on action detection within a DPM framework without resorting to global BoW information, trajectories, or video segmentation.
The primary problem in generalizing DPM to 3D is that an action in a video may move spatially as frames progress. This is not a difficult problem in 2D, where a static bounding box covers most of the action parts; in videos, however, actions move, and a static learned bounding box will fail to cover the action across time. A naive approach would be to encapsulate the action in a large spatiotemporal box, but that would drastically decrease the IOU of the prediction. The secondary problem is the difference between space and time. As the authors rightly point out, if the apparent size of an action changes due to distance from the camera, that does not mean the duration of the action changes as well. Thus, their feature pyramids employ multiple levels in space but not in time. Finally, they employ HOG3D features [@klaser2010learning] for their effectiveness. The HOG3D descriptor is based on a histogram of oriented spatiotemporal gradients, a volumetric generalization of the HOG [@dalal2005histograms] descriptor.
{width="0.90\linewidth"}
### Root filter
Following the DPM paradigm, the authors select a single bounding box for each video enclosing one cycle of the given action. Volumes of other actions are treated as negative examples. Random volumes drawn from different scales of the video are also added to the negative samples to better discriminate action from background. The root filter captures the overall information of the action cycle by applying an SVM to the HOG3D features. An important aspect is deciding how to divide an action volume: too few cells decrease the overall discriminative power of the features, while too many cells prevent each cell from containing enough information to be useful. The size of the spatial dimensions in the root filter can be determined empirically; the authors used a 3x3xT grid. This cannot be done for the temporal dimension, as an action may last anywhere from 5 to 30 seconds. Thus, the temporal size of the filter must be determined automatically depending on the distribution of the action in the training set.
### Deformable parts
The authors observed that extracting HOG3D features for the part models at twice the resolution, with more cells in space, enables the learned parts to capture important details. Note that parts selected by this model are **allowed** to overlap in space. After the SVM, subvolumes with higher weights (more discriminative power for a given action type) are selected as parts. The authors divided the action volume into 12x12xT cells to extract HOG3D features, and each part occupies 3x3x1 cells. Then, they greedily selected the N parts with the highest weights that fill 50% of the action cycle volume. Part weights are initialized from the weights of the cells contained inside them. An anchor position $(x_i, y_i, t_i)$ for the $i$-th part is also determined. To address intra-class variation, the authors use a quadratic function to allow parts to shift within a certain spatiotemporal region.
### Action detection with SDPM
Given a test video, SDPM builds a spatiotemporal feature pyramid by computing HOG3D features at different scales. Template matching during detection is done using a sliding window approach. Score maps for root and part filters are computed at every level of the pyramid using template matching. For level $l$, the score map $S(l)$ of each filter can be obtained by correlation of filter $F$ with features of test video volume $\phi(l)$,
$$S(l, i, j, k) = \sum_{m, n, p} F(m, n, p)\, \phi(i+m, j+n, k+p, l)$$
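The score map is a plain 3D cross-correlation; a naive (and slow) sketch, assuming a single-channel feature volume and ignoring the HOG3D channel dimension, makes the indexing explicit:

```python
import numpy as np

def score_map(filt, feats):
    """Valid cross-correlation of a 3D filter with a 3D feature volume
    (one pyramid level); no padding, single feature channel assumed."""
    fi, fj, fk = filt.shape
    oi = feats.shape[0] - fi + 1
    oj = feats.shape[1] - fj + 1
    ok = feats.shape[2] - fk + 1
    out = np.zeros((oi, oj, ok))
    for i in range(oi):
        for j in range(oj):
            for k in range(ok):
                out[i, j, k] = np.sum(filt * feats[i:i+fi, j:j+fj, k:k+fk])
    return out
```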
At level $l$ of the feature pyramid, the score of a detection volume centered at $(x, y, t)$ is the sum of the root filter score on this volume and the scores from each part filter on its best possible subvolume:
$$\begin{aligned}
score(x, y, t, l) = F_{0} \cdot \alpha(x, y, t, l) + \\
\sum_{1 \le i \le n} \max_{(x', y', t') \in Z} [F_{i} \cdot \beta(x'_{i}, y'_{i}, t'_{i}, l) - \varepsilon (i, X_{i})]
\end{aligned}$$
where $F_{0}$ is the root filter and $F_{i}$ are the part filters. $\alpha(x, y, t, l)$ and $\beta(x', y', t', l)$ are the features of a 3x3xT volume centered at $(x, y, t)$ and of a 3x3x1 volume centered at part location $(x', y', t')$, respectively, at level $l$ of the feature pyramid. $Z$ is the set of all possible part locations and $\varepsilon (i, X_{i})$ is the corresponding deformation cost. The highest-scoring detections are kept based on a threshold, and a scanning search algorithm is employed instead of exhaustive search.
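Putting the pieces together, the detection score at a window center combines the root response with the best displaced part responses minus their deformation costs. The sketch below is schematic: the displacement ranges are arbitrary placeholders and boundary checks are omitted.

```python
def detection_score(root_scores, part_score_maps, anchors, deform, center,
                    max_dx=2, max_dy=2, max_dt=1):
    """Root response at `center` plus, for each part, the best response
    within a small displacement set after subtracting the deformation
    cost `deform(dx, dy, dt)`.  Boundary handling omitted for brevity."""
    x, y, t = center
    total = root_scores[x, y, t]
    for part_map, (ax, ay, at) in zip(part_score_maps, anchors):
        best = max(
            part_map[ax + dx, ay + dy, at + dt] - deform(dx, dy, dt)
            for dx in range(-max_dx, max_dx + 1)
            for dy in range(-max_dy, max_dy + 1)
            for dt in range(-max_dt, max_dt + 1)
        )
        total += best
    return total
```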
### Results
The authors present their results on the Weizmann [@ActionsAsSpaceTimeShapes_iccv05], UCF Sports [@rodriguez2008action], and MSR-II [@zhang2016large] datasets. Unsurprisingly, SDPM achieves 100% accuracy on the Weizmann dataset, as the task is easy (9 actions on a static background). On the UCF-Sports dataset, the authors achieved an average classification accuracy of **75.2%**, which is higher than @ma2013action (73.1%) but lower than @raptis2012discovering (79.4%). On the MSR-II dataset, they outperformed the model without parts as well as the baselines.
Graph-Based Models
==================
Action localization in videos through context walk
--------------------------------------------------
@soomro2015action take a different approach to action localization. In brief, they over-segment videos into supervoxels, learn context relationships (background-background and background-foreground), estimate the probability of each supervoxel belonging to an action to form a conditional distribution over all supervoxels, use a **Conditional Random Field** (CRF) to find action proposals in the video, and use an **SVM** to obtain confidence scores. This *context walk* eliminates the need for a *sliding window* approach and an exhaustive search over the entire video, which is useful because in most videos fewer than 20% of the frames contain actions.
### Context Graphs for Training Videos
Assuming the training videos for action $c = 1...C$ are indexed by $n = 1...N_{c}$, where $N_{c}$ is the number of training videos for action $c$, the $i$-th supervoxel in the $n$-th video is represented by $\textbf{u}_{n}^{i}, i = 1...I_{n}$, where $I_{n}$ is the number of supervoxels in video $n$. Each supervoxel belongs either to the foreground action or to the background. The authors then construct a directed graph $\textbf{G}_{n}(\textbf{V}_{n}, \textbf{E}_{n})$ for each training video across all action classes. Nodes in the graph are represented by supervoxels, while edges $\textbf{e}^{ij}$ emanate from all nodes belonging to the foreground.
Let each supervoxel **u** be represented by its spatiotemporal centroid, $\textbf{u}_{n}^{i} = (x_{n}^{i}, y_{n}^{i}, t_{n}^{i})$. The features associated with $\textbf{u}_{n}^{i}$ are given by $\mathbf{\Phi}_{n}^{i} = ( _{1}\phi_{n}^{i}, _{2}\phi_{n}^{i},...,_{F}\phi_{n}^{i})$, where $F$ is total number of features. Graphs $\textbf{G}_{n}$ and $\mathbf{\Phi}_{n}^{i} \forall n=1...N_{c}$ are represented by composite graph $H_{c}$ which contains all information necessary for action localization.
### Context Walk in Testing Video
The model obtains about 200-300 supervoxels per video. The goal is to visit each supervoxel in sequence, referred to as a *context walk*. The initial supervoxel is selected randomly and similar supervoxels are found by nearest neighbor algorithm. The following function $\psi(\cdot)$ generates a conditional distribution over all supervoxels in testing video given only current supervoxel $\mathbf{v}^{\tau}$, features $\Phi^{\tau}$, and composite graph $\mathbf{H}$:
$$\begin{aligned}
\psi(\mathbf{v}|\mathbf{v}^{\tau}, \mathbf{\Phi}^{\tau}, \mathbf{H}, \mathbf{w}_{\psi}) = \\
Z^{-1} \sum_{n=1}^{N_{c}} \sum_{i=1}^{I_{n}} \sum_{j|e_{ij} \in \mathbf{E}_{n}} H_{\sigma}(\mathbf{\Phi}^{\tau}, \mathbf{\Phi}_{n}^{i}; \mathbf{w}_{\sigma}) \\
\cdot H_{\sigma}(\mathbf{v}, \mathbf{v}^{\tau}, \mathbf{u}_{n}^{i}, \mathbf{u}_{n}^{j}; w_{\delta})
\end{aligned}$$
where $H_{\sigma}$ computes the similarity between the features of the current supervoxel in the testing video and those of the training supervoxels. Skipping ahead to inference, the supervoxel with the highest probability is selected as the next step of the context walk:
$$\mathbf{v}^{\tau + 1} = \operatorname*{argmax}_{\mathbf{v}} \Psi^{\tau}(v|\mathbf{S}_{\mathbf{v}}^{\tau}, \mathbf{S}_{\mathbf{\Phi}}^{\tau}, \mathbf{H}, \mathbf{w})$$
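The walk itself is short to write down; in the sketch below, `psi` stands for the conditional distribution built from the training graphs $\mathbf{H}$, and the step count and stopping rule are placeholders rather than the authors' choices.

```python
import numpy as np

def context_walk(num_supervoxels, psi, steps=25, rng=None):
    """One context walk: start at a random supervoxel, accumulate the
    conditional distribution psi(. | current) over all supervoxels at
    each step, and move to the current argmax."""
    rng = rng or np.random.default_rng()
    current = int(rng.integers(num_supervoxels))
    accumulated = np.zeros(num_supervoxels)
    for _ in range(steps):
        accumulated += psi(current)
        nxt = int(np.argmax(accumulated))
        if nxt == current:          # the walk has settled
            break
        current = nxt
    return accumulated              # distribution handed to the CRF stage
```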
### Measuring Supervoxel Action Specificity
The authors quantify discriminative supervoxels using an action specificity score. Let $\xi(k_c)$ be the ratio of the number of supervoxels from the foreground of action $c$ in cluster $k_c$ to all supervoxels from action $c$ in that cluster. Then, given appearance/motion descriptors **d**, if a supervoxel belongs to cluster $k_c$, its action specificity $H_{\chi}(\mathbf{v}^i)$ is:
$$H_{\chi}(\mathbf{v}^i) = \xi(k_c) \cdot \exp(\frac{\left \| \mathbf{d}^i - \mathbf{d}_{k_c} \right \|}{r_{k_c}})$$
where $\textbf{d}_{k_c}$ and $r_{k_c}$ are the center and radius of cluster $k_c$, respectively.
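Transcribed literally, the score looks as follows; note that, as written above, the exponent is positive, whereas one would normally expect a negative sign for a similarity-style weighting.

```python
import numpy as np

def action_specificity(desc, cluster_center, cluster_radius, purity):
    """Action specificity of a supervoxel assigned to cluster k_c: the
    cluster's foreground purity xi(k_c) modulated by the descriptor's
    distance from the cluster centre (sign kept as in the text above)."""
    dist = np.linalg.norm(np.asarray(desc) - np.asarray(cluster_center))
    return purity * np.exp(dist / cluster_radius)
```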
### Inferring Action Locations using 3D-CRF
Once we have conditional distribution $\mathbf{\Psi^T(\cdot)}$, we can merge supervoxels belonging to actions to create a continuous flow of supervoxels without any gaps. The authors use CRFs for this purpose. They minimize the negative log likelihood over all supervoxel labels **a** in the video:
$$\begin{aligned}
-\log(Pr(\mathbf{a}|\mathbf{G}, \mathbf{\Phi}, \mathbf{\Psi}^T; w_\gamma )) = \sum_{\mathbf{v}^i \in \mathbf{V}} (\Theta(a^i|\mathbf{v}^i, \mathbf{\Psi}^T) + \\
\sum_{v^j|e^{ij} \in \mathbf{E}} \gamma(a^i, a^j | \mathbf{v}^i, \mathbf{v}^j, \mathbf{\Phi}^i, \mathbf{\Phi}^j; w_\gamma))
\end{aligned}$$
where $\Theta(\cdot)$ captures the unary potential and depends on the conditional distribution after $T$ steps and the action specificity measured above. Both are normalized between 0 and 1.
{width="0.90\linewidth"}
### Results
The approach is evaluated on the UCF-Sports [@rodriguez2008action], sub-JHMDB [@jhuang2013towards], and THUMOS’13 [@jiang2014thumos] datasets. The biggest advantage of this method is its computational complexity: while SDPM by @tian2013spatiotemporal and Tubelets by @jain2014action have complexities $\mathcal{O}(n^4)$ and $\mathcal{O}(n^2)$ respectively, this work has complexity $\mathcal{O}(c)$, where $c$ is the number of classifier evaluations.
Method UCF-Sports sub-JHMDB
-------------------------- ------------ -----------
@wang2014video 47% 36%
@wang2014video (iDTF+FV) - 34%
@jain2014action 53% -
@tian2013spatiotemporal 42% -
@lan2011discriminative 38% -
**@soomro2015action** **55**% **42**%
: @soomro2015action. Comparison of methods at 20% overlap.[]{data-label="Table:7-1"}
Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition
-----------------------------------------------------------------------------------
Besides optical flow and traditional pixel-level information, there is a class of representations based on the human skeleton and joints. These form conceptual graphs that can be used to classify actions. The paper by @yan2018spatial uses these features to classify and localize actions.
**Graph neural networks** are a recent paradigm that generalizes convolutional neural networks to graphs of arbitrary structure. They have been shown to perform well on tasks such as image classification, document classification, and semi-supervised learning. [@yan2018spatial] extend the idea of graph neural networks to *Spatial-Temporal Graph Convolutional Networks (ST-GCN)*, which attempt to model, localize, and classify actions. The graph representation contains *spatial edges* that conform to the natural connectivity of joints and *temporal edges* that connect the same joints across consecutive time steps. Besides ST-GCN, the authors also introduce several principles for designing convolution kernels in ST-GCN. Finally, they evaluate their models on large-scale datasets to demonstrate the approach’s effectiveness, as we shall see.
### Spatial Temporal Graph ConvNet
The overall pipeline expects skeleton-based data obtained from a motion-capture device or from a pose estimation algorithm applied to videos. For each frame, there is a set of joint coordinates. Given these sequences of body joints, the model constructs a spatial-temporal graph with the joints as graph nodes and the natural connectivities in both human body structure and time as graph edges.
### Skeleton Graph Construction
The authors create an undirected spatial-temporal graph $G = (V, E)$ on a skeleton sequence with $N$ joints and $T$ frames, featuring both intra-body and inter-frame connections. The node set $V = \{v_{ti} \mid t=1,...,T,\ i=1,...,N\}$ includes all joints in the skeleton sequence. As ST-GCN’s input, the feature vector on node $F(v_{ti})$ consists of the coordinate vector as well as the estimation confidence of the $i$-th joint in frame $t$. The construction of the graph is divided into two steps: the joints within one frame are connected with edges according to the human body structure, and then each joint is connected to the same joint in the consecutive frame. Connections are made naturally, without manual intervention, which also provides generalization across different datasets. Formally, the edge set $E$ is composed of two subsets: $E_{S} = \{v_{ti}v_{tj} \mid (i, j) \in H\}$, consisting of intra-skeleton connections at each frame, where $H$ is the set of naturally connected human body joints; and the inter-frame edges connecting the same joints in consecutive frames, expressed as $E_F = \{v_{ti}v_{(t+1)i}\}$.
### Spatial Graph Convolutional Neural Network
Let us first consider the graph CNN model within a single frame. At a frame at time $\tau$, there are $N$ joint nodes $V_t$ along with the skeleton edges $E_S(\tau) = \{v_{ti}v_{tj} \mid (i, j) \in H\}$. Given a convolution operator with kernel size $K \times K$ and an input feature map $f_{in}$ with $c$ channels, the output value of a single channel at spatial location $\mathbf{x}$ can be written as:
$$f_{out}(\mathbf{x}) = \sum_{h=1}^{K}\sum_{w=1}^{K}f_{in}(\mathbf{p}(\mathbf{x}, h, w)) \cdot \mathbf{w}(h, w)$$
where the **sampling function** $\mathbf{p}:Z^2 \times Z^2 \rightarrow Z^2$ enumerates the neighbors of location $\mathbf{x}$. The **weight function** $\mathbf{w}: Z^2 \rightarrow \mathbb{R}^c$ provides a weight vector in $c$-dimensional real space for computing the inner product with the sampled input feature vector of dimension $c$. Standard convolution on the image domain is achieved by encoding a rectangular grid in **p(x)**. Please refer to the original paper for the reformulation of the sampling and weight functions on 2D image domains. We can now write a graph convolution as:
$$f_{out}(v_{ti}) = \sum_{v_{tj} \in B(v_{ti})} \frac{1}{Z_{ti}(v_{tj})} f_{in}(v_{tj}) \cdot \mathbf{w}(l_{ti}(v_{tj}))$$
where the normalizing term $Z_{ti}(v_{tj}) = |\{v_{tk} \mid l_{ti}(v_{tk}) = l_{ti}(v_{tj})\}|$ equals the cardinality of the corresponding subset. To model the temporal aspect of the graph, we simply use the same sampling function and a labeling map $l_{ST}$. Because the temporal axis is well-ordered, we directly modify the label map for the spatial-temporal neighborhood rooted at $v_{ti}$ to be:
$$l_{ST}(v_{qj}) = l_{ti}(v_{tj})+(q-t+ \floor{\Upgamma / 2}) \times K$$
where $l_{ti}(v_{tj})$ is label map for single frame case at $v_{ti}$.
### Implementing ST-GCN
The implementation of the graph convolution is the same as in @kipf2016semi. The intra-body connections are represented by an adjacency matrix **A** and an identity matrix **I**. Thus, in the single-frame case,
$$f_{out} = \mathbf{\Lambda}^{-\frac{1}{2}}(\mathbf{A} + \mathbf{I}) \mathbf{\Lambda}^{-\frac{1}{2}}\mathbf{f}_{in}\mathbf{W}$$
where $\Lambda^{ii} = \sum_{j}(A^{ij}+I^{ij})$.
In the multiple subset case,
$$f_{out} = \sum_{j} \mathbf{\Lambda}_{j}^{-\frac{1}{2}}\mathbf{A}_{j}\mathbf{\Lambda}_{j}^{-\frac{1}{2}}f_{in}\mathbf{W}_j$$
where, similarly, $\Lambda_{j}^{ii} = \sum_{k}(A_{j}^{ik}) + \alpha$. Here, the authors set $\alpha = 0.001$ to avoid empty rows in $\mathbf{A}_j$.
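The single-frame propagation rule is only a few lines of numpy; the sketch below assumes $f_{in}$ has shape (joints, channels) and is meant to illustrate the normalization, not to reproduce the authors' implementation.

```python
import numpy as np

def st_gcn_single_frame(f_in, A, W):
    """Single-frame rule Lambda^{-1/2} (A + I) Lambda^{-1/2} f_in W,
    with f_in of shape (num_joints, c_in) and W of shape (c_in, c_out)."""
    A_hat = A + np.eye(A.shape[0])          # add self-connections
    deg = A_hat.sum(axis=1)                 # Lambda^{ii}
    norm = np.diag(1.0 / np.sqrt(deg))
    return norm @ A_hat @ norm @ f_in @ W
```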
The input is first fed to a batch normalization layer to normalize the data. The ST-GCN model is composed of 9 layers of spatial-temporal graph convolution operations. The first three layers have 64 output channels, the next three have 128, and the last three have 256. All layers use a temporal kernel size of 9. The ResNet mechanism is applied to each of these layers, and a dropout of 0.5 is applied after each layer to prevent overfitting. The 4th and 7th layers have stride 2 for pooling. A global pooling is then performed to obtain a 256-dimensional feature vector for each sequence, which is finally fed to a Softmax layer for classification. The model is optimized using stochastic gradient descent with a learning rate of 0.01, decayed by 0.1 every 10 epochs.
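For reference, the block plan just described can be transcribed as a simple configuration list; this is a reading of the text above, not the released code.

```python
# (output_channels, temporal_stride) for the nine ST-GCN blocks
ST_GCN_BLOCKS = [
    (64, 1), (64, 1), (64, 1),
    (128, 2), (128, 1), (128, 1),   # 4th block strided for pooling
    (256, 2), (256, 1), (256, 1),   # 7th block strided for pooling
]
```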
### Results
The authors tested the model on **Kinetics human action dataset** [@kay2017kinetics] and **NTU-RGB+D** [@shahroudy2016ntu] dataset. On the Kinetics dataset, ST-GCN achieved a 10.4% and 12.8% increase in Top-1 and Top-5 accuracies when compared to frame based methods. On the NTU-RGB+D dataset, they achieved a 1.9% and 3.5% increase on X-Sub and X-View accuracies when compared to all previous methods.
Top-1 Top-5
-------------------------------------- ---------- ----------
RGB @kay2017kinetics 57.0 77.3
Optical Flow @kay2017kinetics 49.5 71.9
Feature Enc. @fernando2015modeling 14.9 25.8
Deep LSTM @liu2016spatio 16.4 35.3
Temporal Conv. @kim2017interpretable 20.3 40.0
ST-GCN **30.7** **52.8**
: Results for @yan2018spatial. Action recognition performance of skeleton-based models on the Kinetics dataset. The first two methods are frame-based.
3D Convolutional Neural Networks
================================
A Closer Look at Spatiotemporal Convolutions for Action Recognition
-------------------------------------------------------------------
Let us now look at an approach that relies entirely on convolutional neural networks without any special feature representations. The paper by @tran2017closer introduces an even more advanced approach to action recognition, demonstrating a new form of convolution. The method targets action recognition only; however, it can be supplemented with additional features to enable *action localization*.
The authors mainly focus on residual learning for action recognition. They explore existing types of 3D convolutions and introduce two new variants. The first is a mixed convolution, where early layers of the model perform 3D convolutions while later layers perform spatial (2D) convolutions over the learned features; this is called *MC*, or mixed convolution. The second is a complete decomposition of the 3D convolution into a 2D spatial convolution followed by a 1D temporal convolution, called the *R(2+1)D* convolution. This decomposition brings two advantages. First, it introduces an additional nonlinear rectification between the two operations, doubling the number of nonlinearities compared to a network using full 3D convolutions with the same number of parameters. Second, it facilitates optimization, leading to lower training loss and lower testing loss. Let us now explore the various types of convolutions for videos.
### Convolutional residual blocks for video
Within the framework of residual learning, there are several spatiotemporal convolution variants available. Let **x** denote an input clip of size $3 \times L \times H \times W$, where $L$ is the number of frames in the clip, $H$ and $W$ are the frame height and width, and 3 refers to the RGB channels. Let $\textbf{z}_i$ be the tensor computed by the $i$-th convolutional block. Then, the output of that block is:
$$\mathbf{z}_i = \mathbf{z}_{i-1} + \mathcal{F}(\mathbf{z}_{i-1}; \theta_i)$$
where $\mathcal{F}(\cdot;\theta_i)$ implements the composition of two convolutions parameterized by weights $\theta_{i}$ and the application of ReLU functions.
**R2D: 2D convolutions over the entire clip.** 2D CNNs for video ignore temporal ordering and treat the $L$ frames as channels. This amounts to reshaping the input 4D tensor **x** into a 3D tensor of size $3L \times H \times W$. The output $\mathbf{z}_i$ of the $i$-th block is also a 3D tensor. Each filter is 3D and has size $N_{i-1} \times d \times d$, where $d$ denotes the spatial width and height. Even though the filter is 3D, it only convolves in 2D over the *spatial* dimensions. All temporal information of the video is collapsed into single-channel feature maps, which prevents any sort of temporal reasoning.
**f-R2D: 2D convolutions over frames.** Another 2D CNN approach processes the $L$ frames independently via a series of 2D convolutional residual blocks. The same filters are applied to all $L$ frames. No temporal modeling is performed in the convolutional layers, and the global spatiotemporal pooling layer at the end simply fuses the information extracted independently from the $L$ frames. This architecture variant is referred to as f-R2D (frame-based R2D).
**R3D: 3D convolutions.** 3D CNNs [@tran2015learning] preserve temporal information and propagate it through the layers of the network. The tensor $\mathbf{z}_i$ is 4D in shape and has size $N_i \times L \times H_i \times W_i$, where $N_i$ is number of filters used in $i$-th block. Each filter is 4-dimensional and has size $N_{i-1} \times t \times d \times d$ where $t$ denotes the temporal extent of the filter (the authors used $t = 3$).
**M$C_x$ and rM$C_x$: mixed 3D-2D convolutions.** The intuition behind **MC** layers is that motion modeling may be important in early layers, while in later layers temporal modeling is not necessary. In the authors’ experiments with a 5-block residual network, one variant performs 3D convolutions in the first three blocks and 2D convolutions in the last two; the other variant is just the opposite.
### R(2+1)D: (2+1)D convolutions
Another hypothesis proposed by the authors is that a full 3D convolution can be more conveniently approximated by a 2D convolution followed by a 1D convolution. Thus, they design an R(2+1)D architecture where the $N_i$ 3D convolutional filters of size $N_{i-1} \times t \times d \times d$ are replaced with a (2+1)D block consisting of $M_i$ 2D convolutional filters of size $N_{i-1} \times 1 \times d \times d$ and $N_i$ temporal convolutional filters of size $M_i \times t \times 1 \times 1$. The hyperparameter $M_i$ determines the dimensionality of the intermediate subspace where the signal is projected between the spatial and temporal convolutions. The authors choose $M_i = \floor{\frac{td^2N_{i-1}N_{i}}{d^2N_{i-1}+tN_{i}}}$ so that the number of parameters in the block is approximately equal to that of the 3D variant. This spatiotemporal decomposition can be applied to any 3D convolutional layer.
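The choice of $M_i$ is easy to verify numerically; the sketch below computes $M_i$ and compares the parameter counts of the two blocks.

```python
from math import floor

def mid_channels(n_in, n_out, t, d):
    """M_i chosen so the (2+1)D block has roughly the same number of
    parameters as the full t x d x d 3D convolution it replaces."""
    return floor(t * d * d * n_in * n_out / (d * d * n_in + t * n_out))

def parameter_counts(n_in, n_out, t, d):
    """Compare the 3D block with its (2+1)D decomposition."""
    m = mid_channels(n_in, n_out, t, d)
    full_3d = n_out * n_in * t * d * d          # N_i filters of size N_{i-1} x t x d x d
    two_plus_one_d = m * n_in * d * d + n_out * m * t
    return full_3d, two_plus_one_d
```

For example, with $N_{i-1} = N_i = 64$, $t = 3$, and $d = 3$, both counts come out to 110,592.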
{width="0.8\linewidth"}
### Results
The authors experimented their new architecture on the Kinetics [@kay2017kinetics] and Sports-1M [@karpathy2014large] datasets. They also pre-trained the models on these two datasets and then finetuned them to UCF-101 [@soomro2012ucf101] and HMDB51 [@kuehne2013hmdb51] datasets.
The networks experimented with are the ResNet-18 and ResNet-34 architectures. The frame input size is $112 \times 112$. The authors use one spatial downsampling of $1 \times 2 \times 2$ and three spatiotemporal downsamplings with convolutional striding of $2 \times 2 \times 2$. For training, $L$ consecutive frames are randomly sampled. Batch normalization is applied to all convolutional layers, and the batch size is set to 32 per GPU. The initial learning rate is set to 0.01 and is decayed by 0.1 every 10 epochs. The R(2+1)D architecture reported an average 3% improvement over previous methods.
Action Localization Datasets
============================
Let us explore common datasets used for action localization. This can help us understand what sort of features and data are available for developing algorithms and models. We shall explore eight datasets of different sizes and domains that contain different features.
### Kinetics Dataset
The first dataset we explore is the Kinetics dataset by @kay2017kinetics. The dataset is sourced from YouTube videos to encourage variation between videos within the same action class and across different classes. There are **400** actions, with a minimum of 400 video clips per action and 306,245 videos in total. The dataset was built by first curating the action classes, merging classes from previous datasets, and then sourcing the videos from the YouTube corpus. To collect the best videos, relevance feedback scores were aggregated over multiple queries. Finally, human tagging was used to manually annotate the videos for accuracy and consistency.
### Weizmann Dataset
The Weizmann dataset by @ActionsAsSpaceTimeShapes_iccv05 contains **90** low-resolution video sequences ($180 \times 144$, 50 fps) covering 10 different action classes. Some of the actions are: “running”, “walking”, “jumping-jack”, “jumping-forward-on-two-legs”, “jumping-in-place-on-two-legs”, “galloping-sideways”, “waving-two-hands”, “waving-one-hand”, and “bending”.
### UCF-101 Dataset
The UCF-101 dataset by @soomro2012ucf101 is arguably one of the most famous datasets for action recognition and action localization. As its name implies, it consists of **101** action classes, with 13,320 clips, 4-7 clips per group, a mean clip length of 7.21 seconds, a total duration of 1600 minutes, a frame rate of 25 fps, and a resolution of $320 \times 240$. The videos are sourced from the YouTube corpus, and the dataset is an extension of the UCF-50 dataset.
### UCF-Sports Dataset
The UCF-Sports dataset by @rodriguez2008action is a video dataset containing **10** actions, primarily from the sports domain. There are 150 clips with a mean clip length of 6.39 s, a frame rate of 10 fps, a total duration of 958 s, and a resolution of $720 \times 480$. The maximum and minimum numbers of clips per class are 22 and 6, respectively. The videos are sourced from the BBC and ESPN video corpora.
### THUMOS’14 Dataset
@jiang2014thumos released the THUMOS’14 dataset which contains **101** actions, 13,000 temporally trimmed videos, over 1000 temporally untrimmed videos, over 2500 negative sample videos, and bounding boxes for 24 action classes.
### HMDB Dataset
@jhuang2013towards introduce the HMDB dataset containing **51** actions and 6,849 clips, with each class containing at least 101 clips. Actions include laughing, talking, eating, drinking, pull up, sit down, ride bike, etc.
### Activity Net Dataset
@caba2015activitynet curated the Activity Net dataset which contains **200** action classes, 100 untrimmed videos per class, 1.54 activity instances per video on average, and a total of 38,880 minutes of videos. This dataset was hosted as a challenge at CVPR 2018.
### NTURGB-D Dataset
Finally, we look at the NTURGB-D dataset by @shahroudy2016ntu. The dataset contains 56,880 action samples distributed across **60** actions with each video containing the following data:
1. RGB videos
2. depth map sequences
3. 3D skeletal data
4. infrared videos
Video samples have a resolution of $1920 \times 1080$, depth maps and IR videos have a resolution of $512 \times 424$, and the 3D skeletal data contain the three-dimensional locations of 25 major body joints.
Conclusion
==========
In conclusion, we explored eight approaches to action localization. There are many more methods and techniques used to solve this problem; however, the majority of them rely on clever usage of the same types of features, including RGB pixel values, optical flow, skeleton graphs, etc. Action proposal networks are effective, but they are expensive and usually require an exhaustive search of the entire video. If the video is long (e.g., CCTV footage spanning several hours), these approaches become computationally infeasible. Figure-centric models try to solve this problem, but they require too much manual feature construction to be automated at a large scale. Deformable parts models address the problem of selective sampling and extracting segments for action localization, though there is room for improvement there too. Graph-based models also lower the number of search and classification operations through optimizations; however, they require skeletal data, which means a pose estimation algorithm is needed as a pre-processing step, and the accuracy of that algorithm can bias the training and inference of the model. Finally, spatiotemporal convolutions provide an interesting proposition, and this technique could be extended, incorporating more features, to solve the problem of action localization.
---
abstract: 'We prove that the genus two surface admits a cw-expansive homeomorphism with a fixed point whose local stable set is not locally connected.'
author:
- Alfonso Artigue
title: 'Anomalous cw-expansive surface homeomorphisms'
---
Introduction
============
In [@L; @Hi] Lewowicz and Hiraide proved that every expansive homeomorphism of a compact surface $S$ is conjugate with a pseudo-Anosov diffeomorphism. Recall that a homeomorphism $f\colon S\to S$ is *expansive* if there is ${\eta}>0$ such that if $\operatorname{dist}(f^n(x),f^n(y))\leq{\eta}$ for all $n\in{\mathbb Z}$ then $x=y$. In [@Ka93] Kato introduced a generalization of expansivity called *continuum-wise expansivity*. We say that $f$ is *cw-expansive* if there is ${\eta}>0$ such that if $C\subset S$ is a continuum (compact connected) and $\operatorname{diam}(f^n(C))\leq{\eta}$ for all $n\in{\mathbb Z}$ then $C$ is a singleton. In the works of Kato on cw-expansivity we find several generalizations of results holding for expansive homeomorphisms. He also found new phenomena, such as a cw-expansive homeomorphism with infinite topological entropy. In this paper we investigate the possibility of extending results from [@L; @Hi] to cw-expansive surface homeomorphisms.
A key concept in dynamical systems is that of the stable set of a point. Given a homeomorphism $f\colon S\to S$ and ${\varepsilon}>0$ we define the ${\varepsilon}$-*stable set* of a point $x\in S$ as $$W^s_{\varepsilon}(x)=\{y\in S:\operatorname{dist}(f^n(x),f^n(y))\leq{\varepsilon}\hbox{ for all }n\geq 0\}.$$ For a hyperbolic set it is well known that local stable sets are embedded submanifolds (the invariant manifold theorem). In the papers [@L; @Hi] they prove that if $f\colon S\to S$ is expansive then the connected component of $x$ in $W^s_{\varepsilon}(x)$ is a locally connected set. This implies the arc-connectedness of these components and allows them to prove that each local stable set is a finite union of arcs. In some sense it is an invariant manifold theorem for expansive homeomorphisms of surfaces. After this, they prove the conjugacy with a pseudo-Anosov diffeomorphism, giving a complete classification of such dynamics.
Some cw-expansive homeomorphisms of surfaces are not expansive, see [@ArNexp; @APV; @PPV; @PaVi]. In these examples the components of local stable sets are locally connected. The purpose of this paper is to construct a cw-expansive homeomorphism of a compact surface with a point whose local stable set is connected but it is not locally connected.
The example {#secAno}
===========
The example is a variation of those in [@ArNexp; @APV]. We start by defining a homeomorphism of ${\mathbb R}^2$ with $(0,0)$ as a fixed point whose stable set is not locally connected. Then, this anomalous saddle is *inserted* in a derived from Anosov diffeomorphism of the torus. Finally, this anomalous derived from Anosov system is connected via a wandering tube with a usual derived from Anosov to obtain our example.
An anomalous saddle point {#secIrregular}
-------------------------
First, we will construct a plane homeomorphism $f$ with a fixed point at the origin whose local stable set is connected but not locally connected. The homeomorphism will be defined as the composition of a piece-wise linear transformation $T$ and a time-one map of a flow $\phi$. This flow will have a non-locally connected set $E$ of fixed points.
We start with the linear part of the construction. Let $T_i\colon {\mathbb R}^2\to{\mathbb R}^2$, for $i=1,2,3$, be the linear transformations defined by $T_1(x,y)=(\frac x2,\frac y2)$, $T_2(x,y)=(\frac x2,2y)$, $T_3(1,1)=(\frac12,\frac12)$, $T_3(0,1)=(0,2)$. Define the piece-wise linear transformation $T\colon {\mathbb R}^2\to{\mathbb R}^2$ as $$T(x,y)=\left\{
\begin{array}{l}
T_1(x,y)\hbox{ if } x\geq y\geq 0,\\
T_2(x,y)\hbox{ if } x\leq 0 \hbox{ or } y\leq 0,\\
T_3(x,y)\hbox{ if } y\geq x\geq 0.
\end{array}
\right.$$ In Figure \[sillaLoca\] we illustrate the definition of $T$.
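For concreteness, a small numerical sketch of $T$ follows; the matrix for $T_3$ is obtained from its values on $(1,1)$ and $(0,1)$, and one can check that the three pieces agree on the boundaries of their regions.

```python
import numpy as np

# T_3 is linear with T_3(1,1) = (1/2, 1/2) and T_3(0,1) = (0, 2),
# hence T_3(1,0) = (1/2, -3/2) and the matrix below.
T3 = np.array([[0.5, 0.0],
               [-1.5, 2.0]])

def T(p):
    """Piecewise linear map T of the plane."""
    x, y = p
    if x >= y >= 0:                  # region of T_1
        return np.array([0.5 * x, 0.5 * y])
    if x <= 0 or y <= 0:             # region of T_2
        return np.array([0.5 * x, 2.0 * y])
    return T3 @ np.array([x, y])     # region y >= x >= 0, map T_3
```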
Now we define the non-locally connected plane continuum $E$. Some care is needed in order to be able of relate this set with the transformation $T$. Define the sets: $$\begin{array}{l}
C(a)=\{(a,y)\in{\mathbb R}^2:0\leq y\leq a\}\hbox{ for }a>0, \\
D_1=\cup_{i=1}^\infty C(\frac12+\frac1{2^i}), \\
D_{n+1}=T_1(D_n)\hbox{ for all }n\geq 1, \\
D=\cup_{n\geq 1} D_n.
\end{array}$$ Also consider the non-locally connected continuum $E=D\cup([0,1]\times\{0\})$ shown in Figure \[conjuntoE\].
Now we will define a flow related with the set $E$. Consider the continuous function $\rho\colon {\mathbb R}^2\to{\mathbb R}$ defined by $$\rho(p)=\operatorname{dist}(p,E)=\min\{\operatorname{dist}(p,q):q\in E\}$$ and the vertical vector field $X\colon{\mathbb R}^2\to{\mathbb R}^2$ defined as $$X(p)=(0,\rho(p)).$$ Since $$|\operatorname{dist}(p,E)-\operatorname{dist}(q,E)|\leq\operatorname{dist}(p,q)$$ for all $p,q\in{\mathbb R}^2$, we have that $\rho$ is Lipschitz. Therefore, by Picard’s theorem, $X$ has unique solutions and we can consider the flow $\phi\colon{\mathbb R}\times{\mathbb R}^2\to{\mathbb R}^2$ induced by $X$. Since $\|X(p)\|\leq \|p\|$ for all $p\in {\mathbb R}^2$ we have that every solution is defined for all $t\in{\mathbb R}$.
Let $f\colon{\mathbb R}^2\to{\mathbb R}^2$ be the homeomorphism $$f=\phi_1\circ T,$$ where $\phi_1\colon{\mathbb R}^2\to{\mathbb R}^2$ is the time-one homeomorphism associated to the vector field $X$.
The homeomorphism $f$ preserves the vertical foliation on ${\mathbb R}^2$.
It follows because $\phi_t$ and $T$ preserve the vertical foliation.
Consider the region $$\label{eqR1}
R_1=\{(x,y)\in[0,1]\times[0,1]:x\geq y\}.$$
\[lemaR1\] For all $p\in R_1$ it holds that $\rho(T(p))=\frac 12 \rho(p)$ and $$\phi_t(T(p))=T(\phi_t(p))$$ if $\phi_t(p)\in R_1$ and $t\geq 0$.
By the definition of $T$ we have that $T(p)=T_1(p)=\frac12p$ for all $p\in R_1$. Given $p\in R_1$ consider $q\in E$ such that $\rho(p)=\operatorname{dist}(p,q)$. Then $\rho(T(p))=\operatorname{dist}(T(p),T(q))$ and $\rho(T(p))=\frac 12 \rho(p)$.
Consider $t\geq 0$ such that $\phi_t(p)\in R_1$. Since $X$ is a vertical vector field we have that $\phi_{[0,t]}(p)\subset R_1$. For $s\in(0,t)$, if $q=\phi_s(p)$ then $$X(T(q))=(0,\rho(T(q)))=\left(0,\frac12\rho(q)\right)=d_qT(X(q)).$$ Therefore, $\phi_s(T(p))=T(\phi_s(p))$ for $s\in(0,t)$ and consequently for $s=t$.
Define the stable set of the origin as usual by $$W^s_f(0)=\{p\in{\mathbb R}^2:\lim_{n\to+\infty}\|f^n(p)\|=0\}.$$
For the homeomorphism $f\colon{\mathbb R}^2\to{\mathbb R}^2$ defined above it holds that $$W^s_f(0)\cap([0,1]\times[0,1])=E.$$
First notice that $E\subset W^s_f(0)$ because for all $p\in E$ and $t\in{\mathbb R}$ we have that $\phi_t(p)=p$ and $T(p)=\frac12p$. Then $f(p)=\frac12p$ for all $p\in E$.
Now take a point $p\in [0,1]\times[0,1]$. For $p\notin R_1$, the set defined in (\[eqR1\]), it is easy to see that $f^n(p)\to\infty$ as $n\to+\infty$. Assume that $p\in R_1\setminus E$. We will show that $p\notin W^s_f(0)$. It is sufficient to show that for some $n>0$ the point $f^n(p)$ is not in $R_1$. By contradiction, assume that $f^n(p)\in R_1$ for all $n\geq 0$. Then, by Lemma \[lemaR1\] we know that $$f^n(p)=(\phi_1\circ T)^n(p)=T^n(\phi_n(p)).$$ Notice that $\phi_1^n=\phi_n$. Then, it only remains to prove that $\phi_n(p)\notin R_1$ for some $n>0$. But this is easy because the velocity of $\phi_t(p)$ is $\rho(\phi_t(p))$ and this velocity increases with $t$.
A variation of a derived from Anosov {#AnomCwexp}
------------------------------------
We start by recalling some properties of what is known as a derived from Anosov diffeomorphism. The interested reader should consult [@Robinson Section 8.8] for a construction of such a map and detailed proofs of its properties. A derived from Anosov is a $C^\infty$ diffeomorphism $f\colon T^2\to T^2$ of the two-dimensional torus such that it satisfies Smale’s axiom A and its non-wandering set consists of an expanding attractor and a repelling fixed point $p\in T^2$. The expanding attractor is locally a Cantor set times an arc, and it has two hyperbolic fixed points of saddle type $q$ and $q'$ as in Figure \[figDA\].
![The derived from Anosov diffeomorphism on the two-dimensional torus.[]{data-label="figDA"}](figDA.pdf)
We will assume that there is a local chart $\varphi\colon D\to T^2$, defined on the disc $D=\{x\in{\mathbb R}^2:\|x\|\leq 2\}$, such that
1. $\varphi(0)=p$,
2. the pull-back of the stable foliation by $\varphi$ is the vertical foliation on $D$ and
3. $\varphi^{-1}\circ f\circ \varphi(x)=4x$ for all $x\in D$ with $\|x\|\leq 1/2$.
Now we will *insert* the anomalous saddle in the derived from Anosov. Let $q$ be the hyperbolic fixed point shown in Figure \[figDA\]. Consider a topological rectangle $R_q$ covering a half-neighborhood of $q$ as in Figure \[figDARect\].
![Topological rectangles on the derived from Anosov (left) and on the anomalous saddle (right).[]{data-label="figDARect"}](figDARect.pdf)
Consider the homeomorphism with an anomalous saddle fixed point defined in Section \[secIrregular\]. Call this homeomorphism $g$ (to avoid confusion with the derived from Anosov $f$). Denote by $o$ its fixed point (the origin of ${\mathbb R}^2$) and take a rectangle $Q_o\subset {\mathbb R}^2$, similar to $R_q$, as in Figure \[figDARect\]. Now we can *replace* $R_q$ with $Q_o$ and define what we call a *derived from Anosov with an anomalous saddle* as in Figure \[figAnoDA\].
![Derived from Anosov with an anomalous saddle fixed point $q$.[]{data-label="figAnoDA"}](figAnoDA.pdf)
Anomalous cw-expansive surface homeomorphism
--------------------------------------------
In this section we finish the construction with ideas from [@ArNexp; @APV]. Consider $S_1$ and $S_2$ two disjoint copies of the torus ${\mathbb R}^2/{\mathbb Z}^2$. Let $f_i\colon S_i\to S_i$, $i=1,2$, be two homeomorphisms such that:
- $f_1$ is the derived from Anosov with an anomalous saddle from the previous section, denote by $p_1\in S_1$ the source fixed point of $f_1$,
- $f_2$ is the inverse of the derived from Anosov (the usual one) with a sink fixed point at $p_2\in S_2$.
Consider local charts $\varphi_i\colon D_2\to S_i$, $i=1,2$, where $D_2$ is the compact disk $$D_2=\{x\in{\mathbb R}^2:\|x\|\leq 2\},$$ such that:
1. $\varphi_i(0)=p_i$,
2. the pull-back of the unstable foliation by $\varphi_2$ is the vertical foliation on $D_2$ and
3. $\varphi_1^{-1}\circ f^{-1}_1\circ \varphi_1(x)=\varphi_2^{-1}\circ f_2\circ \varphi_2(x)=x/4$ for all $x\in D$.
Consider the open disk $$D_{1/2}=\{x\in{\mathbb R}^2:\|x\|<1/2\}$$ and the compact annulus $$A=D_2\setminus D_{1/2}.$$ Define $\psi\colon A\to A$ as the inversion $\psi(x)=x/\|x\|^2$. The pull-back of the unstable foliation on $S_2$ by $\varphi_2\circ\psi$ on the annulus $A$ is shown in Figure \[figFols\].
![Unstable foliation of $f_2$ on the annulus, in the local chart $\varphi_2\circ\psi$.[]{data-label="figFols"}](figFols.pdf)
On the disjoint union $S_3=[S_1 \setminus \varphi_1(D_{1/2})]\cup [S_2\setminus \varphi_2(D_{1/2})]$ consider the equivalence relation generated by $$\varphi_1(x)\sim \varphi_2\circ\psi (x)$$ for all $x\in A$. Denote by $[x]$ the equivalence class of $x$. The surface $S=S_3/\sim$ is the genus two surface if equipped with the quotient topology. Consider the homeomorphism $f\colon S\to S$ defined by $$f([x])=\left\{
\begin{array}{ll}
\left[f_1(x)\right] &\hbox{ if } x\in S_1 \setminus \varphi_1(D_{1/2})\\
\left[f_2(x)\right] &\hbox{ if } x\in S_2 \setminus \varphi_2(D_2)\\
\end{array}
\right.$$
For $x\in S$ and ${\eta}>0$ define the set $$\Gamma_{\eta}(x)=W^s_{\eta}(x)\cap W^u_{\eta}(x).$$
In order to prove that a homeomorphism $f$ is cw-expansive it is equivalent to find ${\eta}>0$ such that $\Gamma_{\eta}(x)$ is totally disconnected for all $x\in S$.
\[mainteo\] There are cw-expansive homeomorphisms of the genus two surface having a fixed point whose local stable set is connected but it is not locally connected.
Define $A_S=[\varphi_1(A)]$ the annulus on $S$ corresponding to $A$. We will perturb the homeomorphism $f$ defined above on the annulus $A_S$. First note that the non-wandering set of $f$ is expansive and dynamically isolated, i.e. there is a neighborhood $U$ of the non-wandering set $\Omega$ such that if $f^n(x)\in U$ for all $n\in{\mathbb Z}$ then $x\in\Omega$. Also note that for every wandering point $x\in S$ there is $n\in{\mathbb Z}$ such that $f^n(x)\in A_S$. Therefore, it is sufficient to prove that there is a homeomorphism $g\colon S\to S$ such that $f|A_S=g|A_S$ and there is $\delta>0$ such that for each $x\in A_S$ the intersection $\Gamma_{\eta}(x)$ is totally disconnected. In Figure \[figFols\] we have the picture of the unstable foliation on $A_S$ (or in local charts). The problem is that the stable sets do not form a foliation; this is because there is an anomalous saddle. Then, it is convenient to consider the stable partition, i.e., the partition defined by the equivalence relation of being positively asymptotic. This partition is illustrated in Figure \[figStPart\].
![Stable partition on the annulus $A_S$.[]{data-label="figStPart"}](figStPart.pdf)
We know that the unstable leaves are circle arcs, as in Figure \[figFols\]. Therefore, it is sufficient to consider a $C^0$ perturbation $g$ of $f$ supported on $A_S$, such that the stable partition of $g$ in the annulus contains no circle arc, in local charts. See comments below. By the previous comments this implies that $g$ is cw-expansive. Since $g$ coincides with $f$ outside $A_S$, we have that $g$ has an anomalous saddle with non-locally connected stable set. This finishes the proof.
The example has further properties that we wish to remark. Given $N\geq 1$ we say that $f$ is $N$-*expansive* [@Mo12] if there is ${\eta}>0$ such that $|\Gamma_{\eta}(x)|\leq N$ for all $x\in S$, where $|A|$ stands for the cardinality of the set $A$. The example of the previous proof is not $N$-expansive for any $N\geq 1$ because there are points with $|\Gamma_{\eta}(x)|=\infty$ for arbitrarily small values of ${\eta}$.
We say that a probability measure $\mu$ on $S$ is an *expansive measure* [@MoSi] if there is ${\eta}>0$ such that $\mu(\Gamma_{\eta}(x))=0$ for all $x\in S$. Obviously, if $\mu$ is an expansive measure then $\mu(x)=0$ for all $x\in S$, i.e. $\mu$ is non-atomic. In [@AD] it is shown that every non-atomic probability measure is expansive if and only if there is ${\eta}>0$ such that $|\Gamma_{\eta}(x)|\leq|{\mathbb Z}|$ for all $x\in S$. This property is called *countable-expansivity* and our example satisfies this condition.
In the generalized pseudo-Anosov shown in [@PPV; @PaVi] there is a finite number of *spines* (or 1-*prongs*), i.e. points whose local stable sets do not separate arbitrarily small neighborhoods. This is a cw-expansive homeomorphism on the two-sphere. Our example has a countable set of spines: namely, the points of the set $E$ of Figure \[conjuntoE\] on the line $y=x$ give rise to spines in the example. As explained in [@PPV], the generalized pseudo-Anosov of the two-sphere has points whose local stable sets are not locally connected, but the components are arcs. Our example has connected components that are not locally connected. It seems to be the case that if we start with a set like the graph of $\sin(1/x)$, in place of the set $E$, we can obtain an anomalous saddle with no arc-connected stable set. Notice that the set $E$ is arc-connected.
Let us finally pose some questions. Can an example as in Theorem \[mainteo\] be smooth? Can it be transitive, i.e. have a dense orbit?
Departamento de Matemática y Estadística del Litoral, Salto-Uruguay\
Universidad de la República\
E-mail: [email protected]
---
abstract: 'Correlations in the orbits of several minor planets in the outer solar system suggest the presence of a remote, massive Planet Nine. With at least ten times the mass of the Earth and a perihelion well beyond 100 AU, Planet Nine poses a challenge to planet formation theory. Here we expand on a scenario in which the planet formed closer to the Sun and was gravitationally scattered by Jupiter or Saturn onto a very eccentric orbit in an extended gaseous disk. Dynamical friction with the gas then allowed the planet to settle in the outer solar system. We explore this possibility with a set of numerical simulations. Depending on how the gas disk evolves, scattered super-Earths or small gas giants settle on a range of orbits, with perihelion distances as large as 300 AU. Massive disks that clear from the inside out on million-year time scales yield orbits that allow a super-Earth or gas giant to shepherd the minor planets as observed. A massive planet can achieve a similar orbit in a persistent, low-mass disk over the lifetime of the solar system.'
author:
- 'Benjamin C. Bromley'
- 'Scott J. Kenyon'
bibliography:
- 'planets.bib'
title: 'Making Planet Nine: A Scattered Giant in the Outer Solar System'
---
Introduction
============
The orbital alignment of minor planets located well beyond Neptune, including Sedna and 2012 VP113, inspired @trujillo2014 to invoke a massive, unseen planet orbiting at roughly 200 AU from the Sun. Expanding on this analysis, @batygin2016 ([-@batygin2016]; see also @brown2016) propose a more distant planet which maintains the apsidal alignment for a set of six trans-Neptunian objects. With a mass more than ten times that of the Earth, this planet would have a semimajor axis between 300 AU and 1500 AU, an eccentricity within the range of roughly 0.2–0.8, an inclination below 40$^\circ$, and an apsis that is anti-aligned with the six minor planets. Subsequent work by @fienga2016, using precise Cassini radio ranging data of Saturn, places constraints on the perturber’s orbital phase.
The prospect of a Planet Nine lurking in the outer solar system provides a new opportunity to test our understanding of planet formation theory. Various mechanisms – coagulation [@kb2015a], gravitational instability [@helled2014 and references therein], and scattering [@bk2014] – can place a massive planet far from the host star. Aside from a direct detection of Planet Nine, testing these ideas requires numerical simulations which predict the properties of planets as a function of initial conditions in the protoplanetary disk.
Previous calculations of gas giant planet formation [@rasio1996; @weiden1996; @ford2005; @moorhead2005; @lev2007; @chatterjee2008; @bk2011a] demonstrate that growing gas giants clear their orbital domains by scattering super-Earths or more massive planets to large distances. If the surface density of the gaseous disk at large distances is small, scattered planets are eventually ejected. For disks with larger surface densities, however, dynamical friction damps a scattered planet to lower eccentricity [@dokuchaev1964; @rephaeli1980; @takeda1988; @ostriker1999; @kominami2002]. @bk2014 used simple models of disk-planet interactions to show that this mechanism plausibly circularizes the orbits of massive planets at 100–200 AU from the central star.
The @batygin2016 analysis poses a new challenge to scattering models. Although they propose a moderately eccentric orbit for their massive perturber, the planet has a semimajor axis beyond 300–400 AU. Our goal here is to show under what conditions a scattered planet can achieve this orbit through dynamical friction with a gas disk. We consider a wide range of planet and disk configurations, numerically simulate outcomes of these models, and assess how well they explain a Planet Nine in the outer solar system.
Method
======
To explore the possibility of a scattered origin for Planet Nine, we follow our earlier strategy [@bk2014]. We choose initial conditions for planetary orbits and gas disks and a mechanism for disk dissipation. We then track the orbital evolution with the $n$-body integration component of our code [@bk2006; @kb2008; @bk2011a]. In this section we provide details of the disk models and an overview of the numerical method, which includes an updated treatment of dynamical friction.
The disk models
---------------
To set the stage for relocating a scattered planet from 5–15 AU into the outer solar system, we model the Sun’s gas disk with the following prescription for surface density $\Sigma$, scale height $H$, and midplane mass density $\rhog$: $$\begin{aligned}
\label{eq:S}
&\ &
\Sigma(a,t) =
\begin{cases}
\Soh
\left(\frac{a}{\aoh}\right)^{\!-1}
e^{-t/\tau}, & \textrm{ if } \ain \leq a \leq \aout, \\
0 & \textrm{ otherwise,}
\end{cases}
\\
&\ &
H(a) = \hoh a \left(\frac{a}{\aoh}\right)^{\!\,2/7},
\\[7.5pt]
&\ &
\rhog(a,t) = \frac{\Sigma(a,t)}{H(a)} ~ .\end{aligned}$$ Here, $\Soh$ sets the surface density at distance $\aoh \equiv 1$ AU, $\ain$ and $\aout$ are the inner and outer edges of the disk, and $\hoh = 0.05$ establishes the scale height of the flared disk [@kh1987; @chiang1997; @and2007; @and2009]. The global surface density decay parameter $\tau = 1$–10 Myr enables a homologous reduction in the surface density [@haisch2001]. To allow the inner edge of the disk to expand, as in a transition disk, we adopt an expansion rate $\openrate$: $$\ain(t) = \ain(0) + \openrate t ,$$ where the initial size of the inner cavity is $\ain(0)\equiv 20$ AU. Observations of transition disks [@calvet2005; @currie2008; @and2011; @najita2015] suggest opening rates of O(10) AU/Myr.
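For readers who want to experiment with this prescription, the following Python sketch evaluates Equation (\[eq:S\]) together with the scale height, midplane density, and the opening inner edge. It is not the authors' code; the default values are drawn from the baseline ranges quoted in the text and can be changed freely.

```python
import numpy as np

# A small sketch of the disk prescription above: Sigma ~ Sigma_0 (a/a_0)^-1 e^(-t/tau)
# inside [a_in(t), a_out], a flared scale height H = h_0 a (a/a_0)^(2/7),
# midplane density rho = Sigma / H, and an inner edge that opens linearly in time.
AU_CM = 1.496e13                      # 1 AU in cm

def disk(a_AU, t_yr, Sigma0=1000.0, a_in0=20.0, a_out=800.0,
         open_rate=40.0e-6, tau=4.0e6, h0=0.05, a0=1.0):
    """Return (Sigma [g/cm^2], H [AU], rho [g/cm^3]) at radius a and time t.
    open_rate is in AU/yr (40e-6 AU/yr = 40 AU/Myr); tau in yr."""
    a_in = a_in0 + open_rate * t_yr
    inside = (a_AU >= a_in) & (a_AU <= a_out)
    Sigma = np.where(inside, Sigma0 * (a_AU / a0) ** -1 * np.exp(-t_yr / tau), 0.0)
    H = h0 * a_AU * (a_AU / a0) ** (2.0 / 7.0)
    rho = Sigma / (H * AU_CM)         # convert H to cm for a density in g/cm^3
    return Sigma, H, rho

print(disk(np.array([50.0, 200.0, 900.0]), t_yr=1.0e6))
```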
Toward estimating dynamical friction and gas drag, we assume that the sound speed in the gas is $\cs \approx H\vkep/a$, where $\vkep$ is the circular Keplerian speed at orbital distance $a$ from the Sun. We also assume that both $H$ and $\cs$ are independent of time. Armed with these variables we can determine the Mach number of a planet moving relative to the gas, and hence derive drag forces on the planet. These estimates include the effect of pressure support within the gas disk, which makes the bulk flow in the disk sub-Keplerian, with an orbital speed that is reduced from $\vkep$ by a factor of $(1-H^2/a^2)$ [e.g., @ada76; @weiden1977a; @yk2013].
Table \[tab:parmsdisk\] lists disk model parameters. We distinguish two types of disks: static and evolving. Static disks have fixed surface density profiles and small total mass, $0.002 \Msolar < \Mdisk
< 0.06 \Msolar$ (2–60 $\Mjupiter$, where $\Mjupiter$ is the mass of Jupiter), which extend to 1600 AU. These models enable us to consider the possibility of long-term ($\gtrsim$100 Myr) planet-disk interactions. Evolving disks extend to 800 AU with larger initial surface density and mass. In the most extreme case ($\Soh = 1000$ g/cm$^2$), the disk mass is half that of the Sun. Although improbable [cf. @and2013], this extreme disk mass allows us to explore the possibility of rapid, strong orbital damping. Because dynamical friction depends on the gas density, $\rhog\sim
\Sigma/H$, we can scale results to less massive disks with smaller scale heights.
Scattered planets
-----------------
For each disk configuration in Table \[tab:parmsdisk\], we carry out simulations with planets scattered to large ($> 1000$ AU) distances. As summarized in Table \[tab:parmsplanet\], each planet is assigned a mass $\mp$ and mean density $\rhop$, from which we infer a physical radius, $\rp$. In our orbital dynamics code, the planet is launched from a perihelion distance of $\peri = 10$ AU with a speed that would take it to a specified aphelion distance $\apo$ if it were on a Keplerian orbit about the Sun. We track the subsequent dynamical evolution with orbital elements calculated geometrically, since the disk potential can complicate the interpretation of osculating orbital elements. While the planet’s orbit typically comes close to the nominal starting aphelion, it never makes it to $\apo$ exactly due to dynamical friction with the gas and the disk’s overall gravitational potential.
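As a concrete illustration of these launch conditions, the sketch below (again not the authors' code) converts a chosen perihelion and nominal aphelion into Keplerian elements and the corresponding perihelion speed via the vis-viva relation; the 2000 AU aphelion is one of the tabulated values.

```python
import numpy as np

# Launch of a scattered planet from perihelion q with a nominal aphelion Q:
# a purely Keplerian bookkeeping step; the actual geometric elements later
# differ because of dynamical friction and the disk potential.
GM_SUN = 4.0 * np.pi ** 2             # in AU^3 / yr^2, so speeds are in AU/yr

def launch_from_apsides(q_AU=10.0, Q_AU=2000.0):
    a = 0.5 * (q_AU + Q_AU)                              # semimajor axis
    e = (Q_AU - q_AU) / (Q_AU + q_AU)                    # eccentricity
    v_peri = np.sqrt(GM_SUN * (2.0 / q_AU - 1.0 / a))    # vis-viva at perihelion
    return a, e, v_peri

a, e, v = launch_from_apsides()
print(f"a = {a:.0f} AU, e = {e:.4f}, v_peri = {v:.2f} AU/yr")
```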
Numerical approach
------------------
To evolve planetary orbits in a gas disk, we follow @bk2014. We use the orbit integrator in our hybrid $n$-body–coagulation code to calculate the trajectory of individual planets around the Sun in the midplane of the disk. We calculate disk gravity using 2000 radial bins spanning the planet’s orbit, assigning a mass to each bin according to Equation (\[eq:S\]). We initially solve the Poisson equation by numerical integration, storing the results. The saved potential is updated as the disk evolves.
To estimate acceleration from dynamical friction, we adopt a parameterization similar to @lee2014 in the absence of gas accretion [see also @dokuchaev1964; @ruderman1971; @ostriker1999]: $$\label{eq:drag}
\frac{d\vec{v}}{dt} =
-\frac{G^2 \rhog \mp \ma}{\cs^2}
\frac{(1+4\pi^2\Cdyn^2\ma^2)^{1/2}}{(1+\ma^2)^{2}} \frac{\vecdv}{|\vecdv|},$$ where $\vecdv$ is the planet’s velocity relative to the gas, $\ma \equiv |\vecdv|/\cs$ is the Mach number, and $\Cdyn$ is a constant that depends on the geometry of the disk in the plane perpendicular to the planet’s motion.
In evaluating the coefficient $\Cdyn$, we previously only considered contributions from gas more distant than $H/2$ from a planet [@bk2014]. Here we are less restrictive and include contributions from material closer to the planet. The drag acceleration thus has a piece from distant disk material in slab geometry [@bk2014], along with a Coulomb logarithm [e.g., @binn2008]: $$\Cdyn \approx 0.31 + \ln\left[\max(1,\frac{H}{2R})\right],$$ where the radius $R$ is $$R \equiv \max(\rp, \Rsonic)$$ and $$\Rsonic
= \frac{1}{\ma_x^2-1} \frac{2 G\mp}{\cs^2}
\ \ \ \ \ [\ma_x^2 = \max(\ma^2,1.0001)]$$ is an effective sonic radius [e.g., @thun2016].
With this formulation, our goal is to map how scattered planets with large eccentricities ($e\lesssim 1$) damp to modest values ($e \sim
0.5$) when the planet moves at supersonic speeds. Once the planet achieves low eccentricity, its subsequent evolution is complicated by differential torque exchange with the disk [@gold1980; @ward1997] and accretion of disk material [@hoyle1939; @lee2011]. We do not attempt to track this behavior. Our prescription underestimates dynamical friction in the subsonic and transonic regimes [cf. @ostriker1999]. Thus we follow a planet as its orbit damps, but stop the integration if it manages to fully circularize before the gas disappears.
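A minimal implementation of this drag prescription might look like the following Python sketch; it is my reading of Equation (\[eq:drag\]) together with the expressions for $\Cdyn$ and $\Rsonic$, not the code used for the simulations, and the cgs unit choices are an assumption.

```python
import numpy as np

G = 6.674e-8                                  # gravitational constant, cgs

def df_acceleration(dv, rho_g, c_s, m_p, R_p, H):
    """Dynamical-friction acceleration on the planet (cgs units).
    dv: 3-vector velocity of the planet relative to the gas [cm/s]."""
    speed = np.linalg.norm(dv)
    mach = speed / c_s
    mach2_x = max(mach ** 2, 1.0001)          # regularised Mach^2 used in R_sonic
    R_sonic = 2.0 * G * m_p / ((mach2_x - 1.0) * c_s ** 2)
    R = max(R_p, R_sonic)
    C_dyn = 0.31 + np.log(max(1.0, H / (2.0 * R)))
    mag = (G ** 2 * rho_g * m_p * mach / c_s ** 2) * \
          np.sqrt(1.0 + 4.0 * np.pi ** 2 * C_dyn ** 2 * mach ** 2) / (1.0 + mach ** 2) ** 2
    return -mag * dv / speed                  # decelerates the relative motion
```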
Results
=======
We ran over 10$^4$ simulations to map out the parameter space of disk and planet configurations. To describe our results, we first consider low-mass, static disks, which isolate the physics of dynamical damping without the complications of disk evolution. To evaluate damping outcomes over the 1–10 Myr lifetimes of typical protoplanetary disks, we then consider a set of evolving disks.
Relocation of a scattered planet in a long-lived disk
-----------------------------------------------------
In this set of simulations, we set up static disks with low surface density ($\Soh = 2$–50 g/cm$^2$), large radial extent ($\aout = 1600$ AU), and big inner cavities ($\ain = 50$–200 AU). We evolve scattered planets over a time $t = 100$ Myr. Figure \[fig:aetm\] shows several outcomes with planets of different masses. All planets follow the same track in $a$–$e$ space as they circularize; more massive planets evolve further along the track.
Figure \[fig:aetd\] illustrates how evolution depends on the disk configuration. The plot shows three separate evolutionary tracks in $a$–$e$ space, each corresponding to a different inner edge of the disk ($\ain$). With a smaller inner edge, the disk causes a planet to settle more quickly (because there is more disk material to interact with) and closer to the Sun. The markers in the plot designate how far each planet evolves. Planets in low surface density disks make less progress along their track than those in high surface density disks.
These calculations establish an approximate degeneracy between mass and the surface density parameter $\Soh$ in the formula for dynamical friction acceleration (Equation (\[eq:drag\])). As a result, the progress that a planet makes along its $a$–$e$ track in a fixed amount of time depends only on the product, $\mp\times\Soh$. Thus, data points showing the orbital evolution as a function of planet mass in Figure \[fig:aetm\] can also represent the progress of a planet of fixed mass in disks with different surface densities. Similarly points in Figure \[fig:aetd\] can represent outcomes with different planet masses at fixed surface density.
If long-lived disks are responsible for settling a planet on the type of orbit inferred by @batygin2016, then our suite of simulations suggests the following condition leads to successful Planet Nine-like outcomes: $$\left(\frac{\Soh}{10~\textrm{g/cm$^2$}}\right)
\left(\frac{\mp}{20~\Mearth}\right) \approx
\left(\frac{\ain}{100~\textrm{AU}}\right)^{1/2}
\left(\frac{t}{100~\textrm{Myr}}\right)^{-1} ~ .$$ This expression applies as long as $\aout \gg \ain$. While it is only approximate, this relation suggests that a persistent, low-mass disk ($\Soh \approx 0.3$ g/cm$^2$; about a quarter of a Jupiter mass in gas) can modestly damp a scattered Neptune-size body within the age of the solar system.
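Read as a scaling relation, the condition above can be rearranged to give the time required for a given planet and static disk. The helper below is a convenience sketch under that reading, not a fit to the simulations.

```python
# Time needed for a Planet Nine-like outcome in a long-lived, static disk,
# obtained by solving the scaling relation above for t.
def required_time_Myr(Sigma0_gcm2, mp_ME, ain_AU):
    return 100.0 * (ain_AU / 100.0) ** 0.5 / ((Sigma0_gcm2 / 10.0) * (mp_ME / 20.0))

# A Neptune-mass planet (~17 M_E) in a Sigma_0 = 0.3 g/cm^2 disk with a
# 100 AU cavity needs of order the age of the solar system:
print(required_time_Myr(0.3, 17.0, 100.0))   # ~3.9e3 Myr
```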
Settling in an evolving disk
----------------------------
Observations indicate that the youngest stars are surrounded with opaque disks of gas and dust [see @kh1995; @kgw2008; @will2011; @and2015]. Surface densities vary; the “Minimum Mass Solar Nebula” value of $\Soh \approx$ 2000 g/cm$^2$ [@weiden1977b; @hayashi1981] is at the upper end of the range observed in the youngest stars [e.g., @and2013; @najita2014]. These disks globally dissipate on a time scale, $\tau$, of millions of years [@haisch2001], and may also erode from the inside out at a rate of $\openrate \gtrsim O(10)$ AU/Myr, as in transition disks [e.g., @and2011; @najita2015]. The resulting behavior of a scattered planet as it settles depends sensitively on how mass is distributed in these disks as they evolve.
Figure \[fig:aem\] illustrates the dependence of planetary settling on disk parameters $\tau$, $\openrate$, the initial disk surface density, and the inner disk edge. Adopting a baseline model where ($\Soh$,$\ain$,$\openrate$,$\tau$) = (1000 g/cm$^2$, 60 AU, 40 AU/Myr, 4 Myr), we vary individual disk parameters for planets with masses of 15–30 $\Mearth$, scattered to starting distances of $\apo = 2000$–2800 AU. The general trends are clear. Dynamical settling to small orbital distance and low eccentricity is more effective in massive, slowly evolving disks with small inner cavities.
Figure \[fig:aep\] summarizes the outcomes of all of our evolving disk models (see Tables \[tab:parmsdisk\] and \[tab:parmsplanet\]). The trends that emerged in Figure \[fig:aem\] are apparent in this figure as well: long-lived, massive disks lead to significant dynamical evolution, while short-lived, low-mass disks do not. Figure \[fig:aep\] also shows an extended “sweet spot” in $a$–$e$ space, labeled with “Planet Nine,” roughly corresponding to orbital elements of the massive perturber hypothesized by @batygin2016 and @brown2016. Several hundred models yield planets that lie in the sweet spot, suggesting that the scattering mechanism can explain the inferred orbit of Planet Nine in the outer solar system.
Despite the trends revealed in Figures \[fig:aetm\]–\[fig:aem\], it is difficult to tell which set of model parameters leads to successful Planet Nine-like orbits. To distinguish models in a way that highlights successful ones, we define two variables, $$\begin{aligned}
\label{eq:P}
P & \equiv & \left(\frac{\mp}{10~\Mearth}\right)
\left( \frac{\Soh}{1000~\textrm{g/cm}^2}\right)
\left(\frac{\apo}{1000~\textrm{AU}}\right)^{-1}
\\
\label{eq:Q}
Q & \equiv &
\left(\frac{\tau}{1~\textrm{Myr}}\right)
\left[\left(\frac{\openrate}{60~\textrm{AU/Myr}}\right)
\left(\frac{\ain}{\textrm{20 AU}}\right)
\left(\frac{\apo}{\textrm{1000 AU}}\right)\right]^{-1}.\end{aligned}$$ Roughly, the first variable is a mass-dependent damping rate, determined by planet mass and the disk mass, along with a factor of $1/\apo$ that reduces this rate if the planet is launched further away from the Sun. The second one measures the disk lifetime, based on the global disk decay time and the time for the disk to clear from the inside out, along with geometric factors involving the disk’s radial extent and the planet’s initial orbit. For models where $\tau$ is formally infinite, we set $\tau = 10$ Myr, the simulated duration of the evolving disk models.
The variables $P$ and $Q$ help to isolate the parameters that are necessary for a scattered giant planet to settle on a Planet Nine-like orbit. Our choice for defining these quantities, along with the mass, length and time scales in Equations (\[eq:P\]) and (\[eq:Q\]), is based more on simplicity than anything else; other combinations of model parameters may serve the same purpose. Nonetheless, our choice yields a nicely compact region in $P$–$Q$ space for models that succeed in matching Batygin and Brown’s (2016) criteria for Planet Nine.
Figure \[fig:pixx\] shows a swath of points in the $P$–$Q$ plane that correspond to successful models. These points have just the right balance between the masses of the disk and the planet ($P$) on the one hand, and disk lifetime ($Q$) on the other. Models without this balance tend to produce planets that circularize at small semimajor axes (high $P$ and high $Q$; large masses and long disk lifetimes) or remain highly eccentric at large semimajor axes (low $P$ and low $Q$; small masses and short disk lifetimes).
A rough quantitative relationship between $P$ and $Q$ for successful models is $Q \sim P^{-3/2}$; in terms of model parameters, this condition translates to: $$\label{eq:PQ}
\left(\frac{\mp}{1~\Mearth}\right)
\left(\frac{\Soh}{1000~\textrm{g/cm}^2}\right)
\approx
\left(\frac{\openrate}{40~\textrm{AU/Myr}}\right)^{\!2/3}
\!
\left(\frac{\ain}{\textrm{40 AU}}\right)^{\!2/3}
\!
\left(\frac{\tau}{1~\textrm{Myr}}\right)^{\!-2/3}
\!
\left(\frac{\apo}{1000~\textrm{AU}}\right)^{\!-1/3},$$ from which the inverse relationship between disk lifetime and mass factors is apparent, as is the sensitivity to disk and orbit geometries.
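The definitions of $P$ and $Q$, and the approximate success relation $Q \sim P^{-3/2}$, are easy to tabulate for any model. The snippet below is a bookkeeping sketch with the baseline model as an example; the tolerance band around the relation is an assumption chosen purely for illustration.

```python
# Bookkeeping of the mass-damping parameter P and disk-lifetime parameter Q
# defined in Eqs. (P) and (Q), and a rough check of Q ~ P^(-3/2).
def P_var(mp_ME, Sigma0, apo_AU):
    return (mp_ME / 10.0) * (Sigma0 / 1000.0) / (apo_AU / 1000.0)

def Q_var(tau_Myr, open_rate, ain_AU, apo_AU):
    return (tau_Myr / 1.0) / ((open_rate / 60.0) * (ain_AU / 20.0) * (apo_AU / 1000.0))

mp, Sigma0, apo, tau, adot, ain = 20.0, 1000.0, 2000.0, 4.0, 40.0, 60.0
P, Q = P_var(mp, Sigma0, apo), Q_var(tau, adot, ain, apo)
print(P, Q, "success expected" if 0.5 < Q * P ** 1.5 < 2.0 else "off the relation")
```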
Summary of simulation outcomes
------------------------------
These simulations suggest a broad range of outcomes in $a$–$e$ space for 1–50 $\Mearth$ planets scattered from 10 AU into the outer part of a gaseous disk. For many combinations of input parameters, planets remain on $e \gtrsim$ 0.80 orbits at $a \gtrsim$ 400 AU. Another set of parameters yields massive planets on nearly circular orbits at 100–200 AU from the host star. Specific combinations of disk and planet properties result in “successful models,” with planets on orbits consistent with the massive perturber of @batygin2016:
1. A planet scattered at low inclination into a low-mass, long-lived disk damps at a rate proportional to the planet mass and the disk’s surface density. The final semimajor axis depends on $\ain$, the inner radius of the disk; successful models have $\ain
\gtrsim 50$ AU. A power-law disk ($\Sigma \sim 1/a$) with low surface density ($\Sigma \sim 3\times 10^{-3}$ g/cm$^2$ at 100 AU, extending to $\sim 1000$ AU) can produce a Neptune-size Planet Nine on a moderately eccentric orbit within the lifetime of the solar system. Higher planet masses or larger surface densities lead to success in less time.
2. Scattering in a massive, short-lived disk leads to Planet Nine-like orbits when there is a balance between the damping rate and the disk evolution time scale. In successful models, planet masses are typically 10 $\Mearth$ or more, although 5 $\Mearth$ planets can acquire a Planet Nine-like orbit in the most massive, long-lived disks. Most successful models experience either slow global decay ($\tau =
4$ Myr) or none at all. The disk then evolves primarily through inside-out erosion, as in a transition disk. This feature helps successful planets settle at large semimajor axes.
3. In all of our simulations, successful models tend to have semimajor axes that lie within $a = 600$ AU. Batygin and Brown’s preferred model has a higher orbital distance, with $a \approx
700$ AU [see also @molhatra2016; @brown2016]. While their analysis accommodates a wide range of possibilities, our current models do not. If a massive perturber were to have a semimajor axis firmly established beyond 700 AU, our mechanism would require a disk with more mass beyond a few hundred AU.
Discussion {#sect:discuss}
==========
Although a Planet Nine in the outer solar system has not yet been confirmed, several massive exoplanets have been identified at large distances from their host stars. The outermost planet in the HR 8799 system has a semimajor axis of $a \approx$ 70 AU [@marois2008; @maire2015]. The planets in 1RXS J160929.1$-$210524 [$a
\approx$ 330 AU; @lafren2010], and HD 106906 b [$a \approx$ 650 AU @bailey2014] have much larger semimajor axes. Gravitational instability is a popular mechanism to produce planets with such large $a$ [e.g., @helled2014; @rice2016]. Our calculations demonstrate that a planet scattered from $a \approx$ 10 AU can interact with a gaseous disk and settle on roughly circular orbits at much larger $a$. Thus, scattering is a viable alternative to disk instability for placing massive planets at large $a$.
In our approach, we do not consider whether a scattered planet might accrete gas as its orbit damps [cf. @hoyle1939]. In principle, planets at large $a$ might accumulate significant amounts of gas in 1–10 Myr. Whether accreting planets end up in configurations similar to those of the gas giants in HR 8799, 1RXS J160929.1$-$210524, and HD 106906 b requires an expanded set of more physically realistic simulations which are beyond the scope of the present work.
Here, we have focused on identifying initial conditions that yield a planet of fixed mass on an orbit with $e \sim$ 0.2–0.8 at $a \gtrsim$ 300 AU. With over $10^4$ models, we survey a variety of planet masses, scattered orbits, and configurations of the gas disk. A large central cavity in the disk, as observed in some transition disks [e.g., @and2011], is essential to settling at large orbital distances. Throughout, we assume that scattering and subsequent damping occur at low inclination; this condition is necessary for optimal interaction with the disk. A low scattering inclination is also expected. Damping by gas and planetesimals in the gas giant region likely kept larger bodies on orbits that were nearly coplanar with the gas disk [e.g., @liss1993b].
Our “successful” models — with outcomes that have the orbital characteristics of Batygin and Brown’s ([-@batygin2016]) inferred massive perturber — are those that balance planet and disk masses with disk longevity. In models where the disk is long-lived but low-mass, a planet like Neptune can settle within a few billion years. Successful models with more rapid gas dissipation require more massive disks. Disks that evolve on time scales of a few million years can lead to -like orbits, only if the initial disk mass is about 0.1 $\Msolar$ or more. A smaller disk scale height and/or reduced flaring of the disk [e.g. @keane2014] can reduce this restriction on the disk mass.
In addition to our proposal for scattering and damping as an origin for a massive perturber in the outer solar system, there are other compelling possibilities. These include *in situ* formation, late-time dynamical instabilities (the Nice model), passing stars, and Galactic tides. Each of these phenomena leads to different outcomes for Planet Nine.
*In situ* formation of Planet Nine is possible when disk evolution produces a massive ring of solids beyond 100 AU. Coagulation may then grow super-Earths in 1–5 Gyr out to distances of 750 AU [@kb2015a; @kb2016b]. In this mechanism, super-Earths reside on fairly circular orbits. For comparison, scattered planets can damp to circular orbits only inside of $\sim 200$ AU (see Fig. \[fig:aep\]). Thus, a Planet Nine found at a large orbital distance with low eccentricity and low inclination strongly favors *in situ* formation. While not the favored choice of @batygin2016, it is unclear whether current observations explicitly rule out circular orbits for the massive perturber.
Other purely dynamical events can also produce a Planet Nine. In the Nice model [@tsig2005], a dynamical instability after the gaseous disk has dispersed can scatter a fully-formed giant planet into the outer solar system. Most scattered planets are ejected [e.g., @nesvorny2011]. If damping within a residual, low surface density gaseous disk is possible, some scattered planets might be retained on high eccentricity ($e\gtrsim 0.9$), low inclination ($i \lesssim 10^\circ$) orbits [e.g., @marzari2010; @raymond2010]. For a massive Planet Nine, the high $e$ orbit might distinguish this mechanism from our model, where scattering occurs when the inner edge of the disk lies much closer to the Sun.
A passing star — perhaps a member of the Sun’s birth cluster [@adams2001] — can also relocate Planet Nine in the outer solar system. Outcomes vary widely, depending on the planet’s initial orbit [e.g. @koba2001; @kb2004d; @morby2004a; @brasser2006; @kaib2008]. If the planet starts on a circular orbit in the ecliptic plane, a stellar flyby will give it a strong kick in eccentricity but only a mild boost in perihelion distance and inclination. Thus, if the planet’s present-day semimajor axis is above 400 AU, it is likely to have an eccentricity of 0.9 or more, unless it formed well beyond Neptune.
A passing star can yield a broader range of outcomes if Planet Nine were on an eccentric orbit at the time of the flyby, perhaps as a result of a previous stellar encounter or the scattering mechanism considered here. Alternatively, if the Sun captured Planet Nine from the passing star, the possibilities are even greater [e.g., Figures 2 and 3 of @kb2004d; see also @morby2004a, @levison2010b, @jilkova2015]. However, the likelihood of this eventuality seems low [@li2016].
Finally, we consider the effect of tides from the Galactic environment. For Oort cloud comets, the gravitational potential of the Galaxy dominates the orbital evolution [@heisler1986; @duncan1987]. However, tidal effects become weak inside $10^4$ AU; evolutionary time scales are then long, 100 Myr or more. For objects with a semimajor axis within 1000 AU, the Galactic tide causes only small changes in the orbit over the age of the solar system [@higuchi2007; @brasser2008]. In our static disk models, the semimajor axes we consider are at 800 AU and smaller; a putative Planet Nine is then shielded from tidal interaction. In the evolving disk models, we use a maximum semimajor axis of 1800 AU. However, in successful models, the semimajor axis falls well below 700 AU within 10 Myr, well within the tidal evolution time scale.
In the absence of any gas, the Galactic tide can influence the orbit of a Planet Nine initially scattered beyond $\sim 1000$ AU within a billion years of the solar system’s formation. Torque from the Galactic potential then raises both the perihelion and the inclination of the orbit [e.g., @duncan1987; @higuchi2007; @brasser2008]. The hallmark of this process would be a semimajor axis exceeding 1000 AU, a perihelion distance of at least 100 AU, an eccentricity of 0.8–0.9, and an inclination that may be anywhere from $i = 0^\circ$ to $\sim$135$^\circ$ [e.g., Figures 9 and 11 of @higuchi2007].
Tides from the Sun’s birth cluster may have had an even more dramatic effect than the Galactic tide [e.g., @brasser2006]. However, if this cluster was typical of other embedded clusters, it would have disintegrated quickly, within 2–3 Myr [see @lada2003]. The density of stars in the cluster, the Sun’s orbit through it, and the timing of the cluster dispersal relative to the formation of the gas giants are all uncertain. If Planet Nine’s final orbit was determined by interactions during this phase of the Sun’s history, then its high perihelion distance would also likely be accompanied by a high inclination [e.g., Figures 6 and 8 of @brasser2006].
Observations of exoplanetary systems provide ways to test these scenarios. Over the next 10–20 yr, direct imaging will probably yield large samples of gas giants at large $a$. Comparison of the observed properties of these systems with the predictions of numerical simulations should enable constraints on the likelihood of any particular theoretical model. For stars with ages of 5–10 Myr, current data suggest many systems with $\lesssim$ 1 Jupiter mass of gas [e.g., @dent2013]. Expanding surveys to older stars and reducing upper limits on the mass in gas by an order of magnitude would challenge some of our scattering models.
In the solar system, identifying Planet Nine and new dwarf planets is essential for making progress. As outlined in @batygin2016, larger samples of dwarf planets provide additional constraints on any Planet Nine. A robust detection of a massive perturber [see @cowan2016; @linder2016; @ginzburg2016; @delafuenta2016] and direct measurement of orbital elements allow discrimination between the various possibilities for the origin and evolution of Planet Nine. If interactions with a gas disk turn out to be important, the next step is to obtain more realistic predictions of scattering outcomes with hydrodynamical simulations. Combined with observations of exoplanets, these advances might determine the fate of scattered planets.
We are grateful to M. Geller, J. Najita and D. Wilner for comments and helpful discussions. NASA provided essential support for this program through a generous allotment of computer time on the NCCS ’discover’ cluster and [*Outer Planets Program*]{} grant NNX11AM37G.
Table \[tab:parmsdisk\]: Disk model parameters.

| Name | Symbol | Value or Range | Units |
|------|--------|----------------|-------|
| *All disks* | | | |
| radial length scale | $a_0$ | 1 | AU |
| scale height factor | $\hoh$ | 0.05 | – |
| *Static disks* | | | |
| surface density | $\Soh$ | 2, 10, 20, 50 | g/cm$^2$ |
| initial inner edge | $\ain$ | 50, 100, 200 | AU |
| outer edge | $\aout$ | 1600 | AU |
| *Evolving disks* | | | |
| surface density | $\Soh$ | 50, 100, 200, 500, 1000 | g/cm$^2$ |
| initial inner edge | $\ain$ | 20, 60, 100, 140, 180 | AU |
| outer edge | $\aout$ | 800 | AU |
| opening rate ($\aindot$) | $\openrate$ | 20, 40, 60, 80 | AU/Myr |
| decay time | $\tau$ | 2, 4, $\infty$ | Myr |
Table \[tab:parmsplanet\]: Scattered planet parameters.

| Name | Symbol | Value or Range | Units |
|------|--------|----------------|-------|
| mass | $\mp$ | 1, 5, 10, 15, 20, 30, 50 | $\Mearth$ |
| mean density | $\rhop$ | 1.33 | g/cm$^3$ |
| initial perihelion | $\peri$ | 10 | AU |
| initial aphelion | $\apo$ | 1600, 2000, 2400, ..., 3600 | AU |
| inclination | $i$ | 0 | rad |
![\[fig:aetm\] Simulations of the orbital evolution of scattered planets in a static gas disk. Solid curves show the semimajor axis ($a$) and eccentricity ($e$) of four planets with various masses as they evolve over a period of 100 Myr. The legend indicates the set of disk parameters. All planets start at the same high $a = 800$ AU and $e = 0.9875$ and evolve along a single path towards smaller values of $a$ and $e$. Circular symbols show the final outcomes after 100 Myr, labeled with planet mass. More massive planets evolve faster and progress further along the path. The gray region approximates the allowed range of $a$ and $e$ from @batygin2016 for Planet Nine.](f1.eps){width="7.0in"}
![\[fig:aetd\] Orbital evolution of a planet in various configurations for a static disk. As in Fig. \[fig:aetm\], planets starting with large $a$ and $e$ follow specific tracks which depend on the inner edge of the disk ($\ain$). Larger inner cavities allow planets to settle at larger $a$. For a fixed planet mass of 10 $\Mearth$, the final location in the $a$–$e$ plane depends on the surface density parameter ($\Soh$). In disks with higher surface density, orbits evolve more quickly and reach smaller $a$ and $e$. ](f2.eps){width="7.0in"}
![\[fig:aem\] Semimajor axis and eccentricity at 10 Myr for scattered planets with masses between 15 $\Mearth$ and 30 $\Mearth$ in evolving disks with baseline parameters $(\Soh,\ain,\openrate,\tau)$ = (1000 g/cm$^2$, 60 AU, 40 AU/Myr, 4 Myr) and a range of initial aphelion distances (2000–2800 AU). Each panel illustrates how outcomes change when one of these parameters is varied in the range specified in the legend. Symbol shades indicate the value of the varied parameter from white (lower values) to black (upper values). In all panels, symbol size correlates with planet mass. ](f3.eps){width="7.0in"}
![\[fig:aep\] Outcomes for scattered planets at 10 Myr in evolving disks. Symbol size correlates with planet mass; shading indicates initial aphelion distance (lightest: 1600 AU; darkest: 3600 AU). More massive planets starting at the smallest aphelion distance settle at smaller orbital distances with lower eccentricities. Variations in the initial disk configuration and the mode/time scale for disk dissipation move the final $(a,e)$ along the sequence outlined in the figure. ](f4.eps){width="7.0in"}
![\[fig:pixx\] Disk lifetime and mass parameters describing outcomes for scattered planets in evolving disks. Each point in this space of mass and disk lifetime parameters ($P$ and $Q$; see Equations (\[eq:P\]) and (\[eq:Q\])) corresponds to an individual simulation with a unique set of planet and disk configurations. Dark circles with black outlines indicate successful models, which roughly match the orbital parameters of Planet Nine in @batygin2016 and are located in the shaded region in Figure \[fig:aep\]. The line running through these points is from the approximation in Equation (\[eq:PQ\]). The light gray points cover the unsuccessful models where orbits are either too remote and eccentric or too close to the Sun and circular. ](f5.eps){width="7.0in"}
---
abstract: 'This paper continues the investigation of the class of flag simple polytopes called 2-truncated cubes; it is an extended version of the short note [@V3]. A 2-truncated cube is a polytope obtained from a cube by a sequence of truncations of codimension-2 faces. We construct a uniquely defined function which maps any 2-truncated cube to a flag simplicial complex whose $f$-vector equals the $\gamma$-vector of the polytope. As a corollary we obtain that the $\gamma$-vectors of 2-truncated cubes satisfy the Frankl-Furedi-Kalai inequalities.'
author:
- 'V. D. Volodin[^1]'
title: 'Geometric realization of $\gamma$-vectors of 2-truncated cubes.'
---
Introduction
============
E. Nevo and T. K. Petersen (see [@NP]) studied the $\gamma$-vectors of generalized associahedra and proved that the $\gamma$-vectors of Stasheff polytopes and Bott-Taubes polytopes can be realized as $f$-vectors of simplicial complexes. This result gave rise to the following problem.
For a given flag simple polytope $P$, construct a simplicial complex $\Delta(P)$ such that $\gamma(P) = f(\Delta(P))$.
In [@Ai] N. Aisbett solved this problem for flag nestohedra. The construction introduced in [@Ai] relied on the specifics of building sets and was based on the fact that any flag nestohedron is a 2-truncated cube, i.e. can be obtained from a cube by a sequence of truncations of codimension-2 faces (see [@V1; @V2]). Results about 2-truncated cubes can be found in [@BV].
In the present paper we introduce a construction which for every 2-truncated cube yields the required simplicial complex, i.e. we solve the problem for the class of all 2-truncated cubes. Moreover, we show that the constructed complex is flag.
For every 2-truncated cube $P^n$ there exists a flag simplicial complex $\Delta(P)$ such that $\gamma(P)=f(\Delta(P))$.
In the proof we use a construction that, for a given sequence of truncations, defines a unique simplicial complex with the required $f$-vector. The construction is inductive and builds such complexes (on the same vertex set) for all faces of the 2-truncated cube. We thereby obtain a function $\Delta(Q)$ on the set of faces $Q$ of $P$. This function is monotonic, i.e. $\Delta(Q_1)\subset \Delta(Q_2)$ whenever $Q_1\subset Q_2$. As a corollary we prove that the $\gamma$-vectors of 2-truncated cubes satisfy the Frankl-Furedi-Kalai inequalities. For dimensions 2 and 3 the required complex is a set of $\gamma_1(P)$ points. For dimensions 4 and 5 the required complex is a triangle-free graph with $\gamma_1(P)$ vertices and $\gamma_2(P)$ edges.
While this paper was in preparation, [@Ai2] appeared on the arXiv. The central result of [@Ai2] coincides with the central result of the note [@V3], which is a short version of the present paper.
Face polynomials
================
A convex $n$-dimensional polytope $P$ is called *simple* if every one of its vertices belongs to exactly $n$ facets.\
Let $f_i$ be the number of $i$-dimensional faces of an $n$-dimensional polytope $P$. The vector $(f_0,\ldots,f_n)$ is called the $f$-vector of $P$. The $F$-polynomial of $P$ is defined by: $$F(P)(\alpha,t)=\alpha^n+f_{n-1}\alpha^{n-1}t+\dots +f_1\alpha t^{n-1}+f_0 t^n.$$ The $h$-vector and $H$-polynomial of $P$ are defined by: $$H(P)(\alpha,t)=h_0\alpha^n+h_1\alpha^{n-1}t+\dots+h_{n-1}\alpha t^{n-1}+h_n t^n=F(P)(\alpha-t,t).$$ The $g$-vector of a simple polytope $P$ is the vector $(g_0,g_1,\dots,g_{[\frac{n}{2}]})$, where $g_0=1,\quad g_i=h_i-h_{i-1}, i>0$.\
The Dehn-Sommerville equations (see [@Zi]) state that $H(P)$ is symmetric for any simple polytope. Therefore, it can be represented as a polynomial of $a=\alpha+t$ and $b=\alpha t$: $$H(P)=\sum\limits_{i=0}^{[\frac{n}{2}]}\gamma_i(\alpha t)^i(\alpha+t)^{n-2i}.$$ The $\gamma$-vector of $P$ is the vector $(\gamma_0,\gamma_1,\dots,\gamma_{[\frac{n}{2}]})$. The $\gamma$-polynomial of $P$ is defined by: $$\gamma(P)(\tau)=\gamma_0+\gamma_1\tau+\dots+\gamma_{[\frac{n}{2}]}\tau^{[\frac{n}{2}]}.$$
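The passage from the $f$-vector to the $h$- and $\gamma$-vectors described above is a finite linear computation. The following Python sketch (not part of the paper) carries it out, using the pentagon as a check ($h=(1,3,1)$, $\gamma=(1,1)$).

```python
from math import comb

def h_from_f(f):
    """h-vector from f = (f_0, ..., f_{n-1}) of a simple n-polytope,
    via H(alpha, t) = F(alpha - t, t) with leading coefficient 1."""
    n = len(f)
    full = [1] + list(reversed(f))        # coefficients of F(alpha, t) in powers of t
    return [sum((-1) ** (k - j) * comb(n - j, k - j) * full[j]
                for j in range(k + 1)) for k in range(n + 1)]

def gamma_from_h(h):
    """Peel off gamma_i (alpha t)^i (alpha + t)^{n-2i} from the h-polynomial."""
    n = len(h) - 1
    h = list(h)
    gamma = []
    for i in range(n // 2 + 1):
        g = h[i]
        gamma.append(g)
        for k in range(n - 2 * i + 1):    # subtract g * t^i (1 + t)^{n-2i}
            h[i + k] -= g * comb(n - 2 * i, k)
    return gamma

f_pentagon = (5, 5)                       # 5 vertices, 5 edges
h = h_from_f(f_pentagon)
print(h, gamma_from_h(h))                 # [1, 3, 1] [1, 1]
```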
Class of 2-truncated cubes
==========================
In this section we introduce the class of 2-truncated cubes. Proofs of the propositions and further results about this class can be found in [@BV].
We say that a simple polytope $\tilde P$ is obtained from a simple polytope $P$ by truncation of the face $G\subset P$ if the simplicial complex $\partial \tilde P^*$ is obtained from the simplicial complex $\partial P^*$ by a stellar subdivision along the simplex $\sigma_G$ corresponding to the face $G$. The polytope $\tilde P$ has a new facet corresponding to the new vertex $v_0\in \partial \tilde P^*$.\
\
Informally, the polytope $\tilde P$ is obtained from $P$ by shifting the supporting hyperplane of $G$ inside the polytope $P$. The new facet $\tilde F_s$ of $\tilde P$ corresponding to the new vertex $v_0\in \partial \tilde P^*$ is defined by the section; we will call it the *section facet* $\tilde F_s$.
Truncation of a face $G$ of codimension 2 will be called 2-truncation. A combinatorial polytope obtained from a cube by 2-truncations will be called a 2-truncated cube.
\[new-face\] In this case the section facet will have combinatorial type $G\times I$. After a 2-truncation a facet $F$ of $P$ either stays unchanged (if $G\subset F$ or $G\cap F=\emptyset$) or undergoes a 2-truncation of the face $F\cap G$. Then, for each face $\tilde Q$ of $\tilde P$ there exists a unique face $Q$ of $P$ such that either $\tilde Q$ is obtained from $Q$ by a 2-truncation of $G\cap Q$, or $\tilde Q$ is an unchanged (or perturbed) face $Q$ of $P$, or $\tilde Q = Q \times I\subset G\times I$.
\[shave\] Let $\tilde P$ be obtained from the simple polytope $P$ by a 2-truncation of the face $G$. Then $$\label{gamma-change}
\gamma(\tilde P)=\gamma(P)+\tau\gamma(G).$$
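Proposition \[shave\] gives a simple recursion on $\gamma$-polynomials. The sketch below iterates it for the elementary example of 2-truncations of the square, where every truncated codimension-2 face is a vertex; the coefficient-list representation is my own convenience, not notation from the paper.

```python
def truncate_gamma(gamma_P, gamma_G):
    """Return gamma(P~) = gamma(P) + tau * gamma(G), as coefficient lists in tau."""
    out = list(gamma_P) + [0] * max(0, len(gamma_G) + 1 - len(gamma_P))
    for i, g in enumerate(gamma_G):
        out[i + 1] += g
    return out

# Square I^2: gamma = (1).  Each truncated vertex is a point with gamma = (1),
# so m successive 2-truncations give gamma = (1, m).
gamma = [1]
for _ in range(5):
    gamma = truncate_gamma(gamma, [1])
print(gamma)   # [1, 5] -- the gamma-vector of the 9-gon
```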
\[flagshave\] Any 2-truncation keeps flagness.
\[truncated\_ring\] Every face of a 2-truncated cube is a 2-truncated cube.
Main results
============
A simplicial complex is called *flag* if every clique (a set of pairwise adjacent vertices) forms a simplex. For a simplicial complex $K$ of dimension $d$ the $f$-polynomial is defined by $f(K) := 1 + f_0 t + \dots + f_d t^{d+1}$, where $f_i$ is the number of $i$-dimensional faces. The central result of the paper is the following.
For every 2-truncated cube $P$ there exists a flag complex $\Delta(P)$ such that $\gamma(P)=f(\Delta(P))$.
Let $P$ be a 2-truncated cube with a fixed sequence of truncations defined by the section facets $F_1, \ldots, F_m$. For every face $Q\subset P$, including $Q=P$, let us construct a simplicial complex $\Delta(Q)$ on the vertex set $W(P) = \{w(F_1),\ldots, w(F_m)\}$.
\[gamma-complex\] For $P=I^n$ we have $W(P)=\emptyset$ and $\Delta(Q)=\emptyset$ for all the faces.
Assume that the required family of simplicial complexes has been constructed for a polytope $P$ obtained from the cube by a sequence of 2-truncations corresponding to the sequence $F_1,\ldots,F_{m-1}$ of section facets of $P$. Let the polytope $\tilde P$ be obtained from $P$ by a 2-truncation of a face $G_m\subset P$. Then $W(\tilde P)=W(P)\cup\{w(F_m)\}$, where $w(F_m)$ corresponds to the new facet $F_m$ of $\tilde P$.
Consider an arbitrary face $\tilde Q\subset \tilde P$ and let $Q$ be the corresponding face from Remark \[new-face\]. Then $$\label{gamma-complex-defn}
\Delta(\tilde Q):=
\begin{cases}
\Delta(Q) \cup (\Delta(G_m\cap Q)\star w(F_m)),&\text{if $\tilde Q$ is obtained from $Q$ by 2-truncation of $G_m\cap Q\subset Q$;}\\
\Delta(Q),&\text{otherwise.}\\
\end{cases}$$
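The inductive step of this construction involves only two operations on complexes, a union and a join with the new vertex $w(F_m)$. The following Python sketch (a toy illustration, not the author's code) implements them for complexes stored as sets of faces, and reproduces $\gamma=(1,3)$ for three vertex truncations of the square, i.e. the heptagon.

```python
def cone(complex_, apex):
    """Join of a complex with a single new vertex (the cone over it)."""
    return complex_ | {face | {apex} for face in complex_}

def f_polynomial(complex_):
    """Coefficients (1, f_0, f_1, ...) of the f-polynomial."""
    dim = max((len(face) for face in complex_), default=0)
    coeffs = [0] * (dim + 1)
    for face in complex_:
        coeffs[len(face)] += 1
    return coeffs

EMPTY = {frozenset()}          # the empty complex: only the empty face

# The square I^2 has Delta = EMPTY for every face; each truncated vertex G
# also has Delta(G) = EMPTY, so its cone contributes one new vertex w(F_m).
delta_P = set(EMPTY)
for m in range(3):             # three 2-truncations of vertices of I^2
    delta_G = EMPTY
    delta_P |= cone(delta_G, f"w{m}")

print(f_polynomial(delta_P))   # [1, 3]  ->  gamma = (1, 3) for the heptagon
```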
The number of connected components of $\Delta(P)$ is not greater than the number of cubes among the truncated faces $G_1,\ldots, G_m$.
\[delta-intersection\] For every $k$-face $Q^k$ of $P$ we have $$\Delta(Q^k) = \bigcap_{F^{n-1}\supset Q^{k}}\Delta(F^{n-1})$$
The function $\Delta(\cdot)$ is monotonic, i.e. $\Delta(Q_1)\subset\Delta(Q_2)$ whenever $Q_1\subset Q_2$.
The lemma holds for $P=I^n$. Assume it holds for $P$ and let us prove it for $\tilde P$ obtained from $P$ by a 2-truncation of a face $G$. Notice that it is enough to prove the lemma for faces of codimension 2. Let $\tilde Q=\tilde F_1\cap \tilde F_2\subset \tilde P$ be such a face. According to Remark \[new-face\], there are 5 possible cases:
1. Both $\tilde F_1$ and $\tilde F_2$ are facets $F_1$ and $F_2$ of $P$ not changed by truncation;
2. Facet $\tilde F_1$ is obtained from $F_1\subset P$ by 2-truncation, facet $\tilde F_2$ is unchanged facet $F_2$ of $P$;
3. Both faces $\tilde F_1$ and $\tilde F_2$ are obtained from faces $F_1$ and $F_2$ of $P$ by 2-truncations;
4. Facet $\tilde F_1$ is the section facet $\tilde F_s$ of $\tilde P$, facet $\tilde F_2$ is unchanged facet $F_2$ of $P$;
5. Facet $\tilde F_1$ is the section facet $\tilde F_s$ of $\tilde P$, facet $\tilde F_2$ is obtained from $F_2\subset P$ by 2-truncation.
Case 1 is obvious. In case 2 the face $\tilde Q$ is an unchanged face $Q$ of $P$. Then $$\begin{aligned}
\Delta(\tilde F_1)\cap \Delta(\tilde F_2) = (\Delta(F_1) \cup (\Delta(G\cap F_1)\star w(\tilde F_s)))\cap\Delta(F_2)=\\=\Delta(F_1)\cap\Delta(F_2)=\Delta(F_1\cap F_2)=\Delta(\tilde F_1\cap\tilde F_2) = \Delta(\tilde Q).\end{aligned}$$ In case 3 the face $\tilde Q$ is obtained from the face $Q=F_1\cap F_2$ by a truncation of its face $G \cap Q$. Then $$\begin{aligned}
\Delta(\tilde F_1)\cap \Delta(\tilde F_2)= (\Delta(F_1) \cup (\Delta(G\cap F_1)\star w(\tilde F_s)))\cap (\Delta(F_2) \cup (\Delta(G\cap F_2)\star w(\tilde F_s)))=\\=\Delta(F_1\cap F_2)\cup(\Delta(G\cap F_1 \cap F_2)\star w(\tilde F_s))=\Delta(\tilde F_1\cap \tilde F_2)=\Delta(\tilde Q).\end{aligned}$$ In case 4 we have $\Delta(\tilde F_s)=\Delta(G)\subset \Delta(F_2)$, since $G \subset F_2$. Then $$\begin{aligned}
\Delta(\tilde F_1)\cap \Delta(\tilde F_2) = \Delta(G)\cap \Delta(F_2) = \Delta(G) = \Delta(\tilde F_1\cap \tilde F_2)=\Delta(\tilde Q).\end{aligned}$$ In case 5 we have $\Delta(\tilde F_s\cap \tilde F_2)=\Delta(\tilde F_s\cap \tilde F_2\cap \tilde F_3)$, where $\tilde F_3$ is a facet from the previous case. Then the required relation follows from the previous cases and from the relation for the polytope $\tilde F_3$, which holds by the inductive assumption (on dimension).
For every face $Q$ of $P$ the complex $\Delta(Q)$ is flag.
At each step of Construction \[gamma-complex\] we merge two flag complexes $\Delta(P_{m-1})$ and $\Delta(G_m)\star w(F_m)$ with flag intersection $\Delta(G_m)$. It is therefore enough to prove that if $\Delta(P)$ contains an edge $\{v_1, v_2\}$ and for some face $Q$ the complex $\Delta(Q)$ contains the vertices $v_1$ and $v_2$, then $\Delta(Q)$ also contains the edge $\{v_1,v_2\}$.
Without loss of generality we assume that $v_1\in \Delta(G_m)$ and $v_2 = w(F_m)$. Let $\tilde P$ be obtained from $P$ by a 2-truncation of the face $G_m$, and let the face $\tilde Q$ be obtained from some face $Q$ by a 2-truncation of the face $G_m\cap Q$. We have $v_1\in \Delta(G_m)$ and $v_1\in \Delta(Q)$; then Lemma \[delta-intersection\] implies that $v_1\in \Delta(G_m\cap Q)$. Therefore, the edge $\{v_1, w(F_m)\}$ is contained in $\Delta(G_m\cap Q)\star w(F_m)\subset \Delta(\tilde Q)$.
For every face $Q$ of $P$ we have $\gamma(Q) = f(\Delta(Q))$.
The lemma holds for $P=I^n$. It follows from formula \[gamma-complex-defn\] that if the face $\tilde Q$ is obtained from $Q$ by a 2-truncation, then $f(\Delta(\tilde Q))$ and $f(\Delta(Q))$ are related by $$f(\Delta(\tilde Q)) = f(\Delta(Q)) + t f(\Delta (G \cap Q)).$$ A similar formula connects the $\gamma$-vectors of $\tilde Q$ and $Q$. The lemma follows.
Denote by ${\binom n k}_r$ the number of $k$-cliques in the Turan graph $T_{n,r}$. For natural numbers $m,k$ and $r\geq k$ there exists a unique canonical representation $$m = {\binom{n_k}k}_r + \dots + {\binom{n_{k-s}}{k-s}}_{r-s},$$ where $n_{k-i} - [\frac{n_{k-i}}{r-i}]>n_{k-i-1}$ for all $0\leq i < s$ and $n_{k-s}\geq k-s>0$. Denote $$m^{\langle k \rangle_{r}} = {\binom{n_k}{k+1}}_r + \dots + {\binom{n_{k-s}}{k-s+1}}_{r-s}.$$
An integer vector $(f_0,\dots,f_n)$ with nonnegative components is the $f$-vector of some $r$-colorable simplicial complex $K$ if and only if $f_k\leq f_{k-1}^{\langle k \rangle_{r}}$ for all $k\geq 1$.
Then, using the Frankl-Furedi-Kalai inequalities, we obtain the following result, which was proved for flag nestohedra in [@Ai].
Let $P^n$ be a 2-truncated cube. Then $0\leq \gamma_{k+1}\leq \gamma_k^{\langle k\rangle_{r}}$, where $k\geq 1$ and $r=[\frac{n}{2}]$.
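For concreteness, the quantities ${\binom n k}_r$ and $m^{\langle k\rangle_r}$ can be computed directly from the definitions. The sketch below uses my greedy reading of the canonical representation and then checks the corollary for a sample vector, which is a hypothetical example rather than the $\gamma$-vector of a specific polytope.

```python
from itertools import combinations
from math import prod

def turan_binom(n, k, r):
    """Number of k-cliques in the Turan graph T_{n,r}: the k-th elementary
    symmetric function of part sizes that are as equal as possible."""
    parts = [n // r + (1 if i < n % r else 0) for i in range(r)]
    return sum(prod(c) for c in combinations(parts, k))

def pseudo_power(m, k, r):
    """m^{<k>_r}, computed greedily from the canonical representation of m."""
    total = 0
    while m > 0 and k > 0:
        n = k
        while turan_binom(n + 1, k, r) <= m:
            n += 1
        m -= turan_binom(n, k, r)
        total += turan_binom(n, k + 1, r)
        k, r = k - 1, r - 1
    return total

def satisfies_ffk(gamma, n_dim):
    r = n_dim // 2
    return all(gamma[k + 1] <= pseudo_power(gamma[k], k, r)
               for k in range(1, len(gamma) - 1))

print(satisfies_ffk([1, 5, 6], n_dim=5))   # hypothetical gamma-vector, n = 5
```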
Let us apply the obtained result to polytopes of dimensions 4 and 5. Their $\gamma$-vectors have only three components: $(1,\gamma_1,\gamma_2)$. In this case we obtain a triangle-free graph with $\gamma_1$ vertices and $\gamma_2$ edges. Therefore, we have three inequalities:
1. $\gamma_1\geq 0$;
2. $\gamma_2\geq 0$;
3. $\gamma_2\leq \frac{\gamma_1(\gamma_1-1)}{2}$.
[amsplain]{} N. Aisbett, Frankl-Furedi-Kalai inequalities on the $\gamma$-vectors of flag nestohedra, arXiv:1203.4715v1. N. Aisbett, Gamma-vectors of edge subdivisions of the boundary of the cross polytope, arXiv:1209.1789. V. M. Buchstaber, V. D. Volodin, Combinatorial 2-truncated cubes and applications, Associahedra, Tamari Lattices, and Related Structures, Tamari Memorial Festschrift, Progress in Mathematics, Vol. 299, pp. 161-186, 2012. P. Frankl, Z. Furedi, and G. Kalai, Shadows of colored complexes, Math. Scand. 63 (1988), 169-178. A. Frohmader, Face vectors of flag complexes, arXiv:math/0605673v1. S. Gal, Real root conjecture fails for five- and higher-dimensional spheres, Discrete & Computational Geometry, vol. 34, no. 2, pp. 269-284, 2005; arXiv:math/0501046v1. E. Nevo, T. K. Petersen, On $\gamma$-vectors satisfying the Kruskal-Katona inequalities, Discrete Comput. Geom., Vol. 45, 2010, pp. 503-521. V. Volodin, Cubical realizations of flag nestohedra and Gal’s conjecture, arXiv:0912.5478v1. V. Volodin, Cubic realizations of flag nestohedra and proof of Gal’s conjecture for them, Uspekhi Mat. Nauk, 65:1(391) (2010), 183-184. V. Volodin, Geometric realization of the $\gamma$-vectors of 2-truncated cubes, Uspekhi Mat. Nauk, 67:3(405) (2012), 181-182. G. Ziegler, Lectures on Polytopes, Springer-Verlag, 1995 (Graduate Texts in Math., Vol. 152).
<span style="font-variant:small-caps;">Steklov Mathematical Institute,Moscow,Russia</span>\
<span style="font-variant:small-caps;">Delone Laboratory of Discrete and Computational Geometry,Yaroslavl State University,Yaroslavl,Russia</span>\
*E-mail adress:* `[email protected]`
[^1]: This work is supported by the Russian Government project 11.G34.31.0053.
---
abstract: 'We have studied the entanglement of identical fermions in two spatial regions in terms of the Berry phase acquired by their spins. The analysis is done from the viewpoint of the geometrical interpretation of entanglement, where a fermion is visualized as a scalar particle attached with a magnetic flux quantum. The quantification of spin entanglement in terms of their Berry phases is novel and generalises the relationship between the entanglement of distinguishable spins and that of delocalised fermions.'
author:
- 'B. Basu'
- 'P. Bandyopadhyay'
title: Spin Entanglement of Two delocalised Fermions and Berry Phase
---
Introduction
============
Quantum entanglement is a specific feature which distinguishes between the classical and quantum world. The role of entanglement is also important in different branches of quantum information science such as quantum communication [@sch], quantum computation [@sc], quantum cryptography [@ek] and quantum teleportation [@ben]. Entanglement for two distinguishable qubits has been well studied and a measure of the degree of entanglement can be quantified in terms of the von Neumann entropy and the concurrence [@a1; @b1; @c1; @d1]. However, the entanglement of two identical fermions has not yet been well understood. In systems of identical fermions, a proper measure of entanglement should take into account multiple occupancy of states [@3; @4; @5; @6; @7], the effect of exchange [@2] and mutual repulsion. Recently, Ramsak et al. [@8] have considered the problem and formulated several expressions for the concurrence of two indistinguishable delocalised spin-$1/2$ particles. In a recent paper [@1], it has been pointed out that the concurrence for the entanglement of two distinguishable spins can be formulated in terms of the Berry phase acquired by the spins when each spin is rotated about the quantization axis (z-axis). In fact, when a spinor is visualized as a scalar particle attached with a magnetic flux, quantum entanglement of spin systems is caused by the deviation of the internal magnetic flux line associated with one particle in the presence of the other. This helps us to consider the measure of entanglement, viz. the concurrence, in terms of the Berry phase acquired by the rotation of the spin around the z-axis induced by the internal magnetic field of the other particle. This picture is potentially useful to study the entanglement of identical fermions in two spatial regions in terms of the Berry phase acquired by their spins. Indeed, in this formalism, the spin entanglement through magnetic coupling is associated with the spatial entanglement between fermions at different spatial regions and entanglement can be viewed as a consequence of Fermi statistics [@2]. Therefore, just like in distinguishable spin systems, the concurrence associated with the entanglement of identical fermions in different spatial regions can also be expressed in terms of the geometrical phase. The phase is acquired by the spin of one particle in one spatial region, when it moves around the z-axis in the presence of the other particle in another spatial region. In the present note, we shall study the entanglement of two delocalised electrons in two spatial regions from this viewpoint.
Concurrence and Berry Phase
===========================
For an entangled state, the Berry phase acquired by a spin may be analysed by considering that, under the influence of the internal magnetic field associated with the other electron, the spin of an electron rotates adiabatically with an angular velocity $\omega_0$ around the $z$-axis under an angle $\theta$.
The instantaneous eigenstates of a spin operator in direction ${\bf n}(\theta,t)$ where ${\bf n}$ is the unit vector depicting the magnetic field ${\bf B}(t)=B {\bf n}(\theta,t)$ in the $\sigma_z$-basis are given by $$\label{art}
\begin{array}{ccc}
\displaystyle{|\uparrow_n;t>}&=&\displaystyle{\cos \frac{\theta}{2} |\uparrow_z> +~ \sin
\frac{\theta}{2} e^{i\omega_0 t}|\downarrow_z> }\\
&&\\
\displaystyle{|\downarrow_n;t>}&=&\displaystyle{\sin \frac{\theta}{2} |\uparrow_z> -~ \cos
\frac{\theta}{2} e^{i\omega_0 t}|\downarrow_z>}
\end{array}$$
After cyclic evolution for the interval $\tau=\displaystyle{\frac{2\pi}{\omega_0}}$ each eigenstate will pick up a geometric phase (Berry phase) apart from the dynamical phase [@10] $$\label{a16}
\displaystyle{\Phi_{B \mp}}~= \displaystyle{\pi(1 \mp \cos \theta)}$$ where $\Phi_{B-}$ ($\Phi_{B+}$) corresponds to the up (down) state. The angle $\theta$ represents the deviation of the spin from the quantization axis (z-axis) under the influence of the magnetic field.
The evaluation of the concurrence in terms of the Berry phase follows from the following consideration. For the Bell state $$\label{Bel}
|\psi>=a |\uparrow\downarrow>-b |\downarrow\uparrow>$$ where $a$ and $b$ are complex coefficients, the concurrence is given by $$C=2|a||b|$$
In this formalism, as entanglement is considered to be caused by the deviation of the magnetic flux line from the quantization axis in presence of the other particle, we may take $|a|$ and $|b|$ as functions of this angle of deviation $\theta$ and thus we write $$\frac{1}{\sqrt 2}\left(
\begin{array}{c}
|a|\\
|b|
\end{array}
\right) =\left(
\begin{array}{c}
f(\theta)\\
g(\theta)
\end{array}
\right)$$ The angle $\theta$ here just corresponds to the deviation of the up (down) spin under the influence of the other and thus represents the same angle $\theta$ associated with the Berry phase acquired by the spin as given by eqn. (2). For the maximally entangled state (MES), we have $\theta=\pi$, as it corresponds to the maximum deviation of a spin from the z-axis when the spin direction is reversed. For this state, we have $ \mid a\mid =~\mid b\mid~=\frac{1}{\sqrt 2} $ and $C=1$.\
Again, for the disentangled state $\theta=0$ and we have $C=0$.\
These constraints satisfy, $$f(\theta)\mid_{\theta=\pi}=~g(\theta)\mid_{\theta=\pi}=\frac{1}{2}$$ and, $$\rm{either}~f(\theta)\mid_{\theta=0}=0~~~~~~~~\rm{or}~g(\theta)\mid_{\theta=0}=0$$ From these constraint equations, for the positive definite norms $0\leq \mid a \mid \leq 1$ and $0\leq \mid b \mid \leq 1$, we can have a general solution $$\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
|a|\\
|b|
\end{array}
\right) =
\left(
\begin{array}{c}
f(\theta)\\
g(\theta)
\end{array}
\right)=
\left(
\begin{array}{c}
\cos^2\frac{n\theta}{4}\\
\sin^2\frac{n\theta}{4}
\end{array}
\right)$$ with $n$ being an odd integer. It is noted that, according to eqn. (8), the relation $|a|^2 +|b|^2=1$ is satisfied only in the case of $\theta=\pi$, implying the MES. So, to have the probability interpretation, the generalised state may be defined by incorporating the normalization factor $\frac{1}{\sqrt{|a|^2 +|b|^2}}$ in eqn. (3). The Berry phase corresponds to half of the solid angle, $\frac{1}{2}\Omega$, swept out by the magnetic flux line and is given by $\pi(1-\cos\theta)$. The system under consideration suggests that the range of $\theta$ lies in $0\leq \mid \theta \mid \leq \pi$, where $\theta =\pi$ corresponds to the maximum deviation of the spin when the spin direction is reversed. So in expression (8) we should take $n=1$ for our present system. We find that the particular solution with $n=1$ relates the concurrence to the Berry phase and is given by $$C=2|a|~|b|=\sin^2\frac{\theta}{2}=\frac{1}{2}(1-\cos\theta) = \frac{|\Phi_B|}{2\pi}$$
We may remark here that the concurrence (being a measure of entanglement) is a function of an instantaneous state, whereas the Berry phase is related to the periodic rotation of the system. The relationship between these two entities in the present framework follows from physical considerations. Here, entanglement is caused by the deviation of the magnetic flux line associated with one fermion in the presence of the other, and the Berry phase of an entangled spin system is related to this deviation. This is the novelty of studying spin entanglement from the Berry phase approach.
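The identity $C=2|a||b|=\sin^2\frac{\theta}{2}=|\Phi_B|/2\pi$ is easy to verify numerically for the parametrisation (8) with $n=1$. The short check below is illustrative only and follows the unnormalised coefficients used above.

```python
import numpy as np

# Numerical check: with |a| = sqrt(2) cos^2(theta/4) and |b| = sqrt(2) sin^2(theta/4),
# the concurrence 2|a||b| coincides with sin^2(theta/2) and with |Phi_B| / (2 pi).
theta = np.linspace(0.0, np.pi, 7)
a = np.sqrt(2) * np.cos(theta / 4) ** 2
b = np.sqrt(2) * np.sin(theta / 4) ** 2
C_state = 2 * a * b
C_berry = np.pi * (1 - np.cos(theta)) / (2 * np.pi)

print(np.max(np.abs(C_state - np.sin(theta / 2) ** 2)))   # ~0
print(np.max(np.abs(C_berry - np.sin(theta / 2) ** 2)))   # ~0
```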
Spin Entanglement of Two delocalised Fermions
=============================================
In our framework, we consider two electrons in two different spatial regions A and B. Entanglement is produced when two initially unentangled (separated) electrons in wave packets approach each other, interact, and then again become well separated into distinct regions A and B. The spin properties of such a fermionic system can be realized in spin correlation functions for the two domains. In fact, the spin-measuring apparatus could measure spin correlation functions for the two domains A and B rather than two distinguishable spins. We may consider the spin entanglement of two-electron states on a lattice of the form $$|\psi>=\sum_{i,j=1}^N \frac{1}{2}\left[ \psi_{i j}^{\uparrow \downarrow}
c_{i \uparrow}^\dagger c_{j \downarrow}^\dagger~+~
\psi_{i j}^{\downarrow \uparrow}
c_{i \downarrow}^\dagger c_{j \uparrow}^\dagger~\right] |0>$$ where $c_{i s}^\dagger$ creates an electron with spin $s$ on site $i$ and $N$ is the total number of sites. Here $\psi_{i j}^{\uparrow \downarrow} (\psi_{i j}^{\downarrow \uparrow})$ is the probability amplitude to find the two-electron state with one electron having spin $\uparrow$ in region A and the other with spin $\downarrow$ in region B. The whole set of probabilities gives the wave function for the two-electron system in the continuum limit.
The system is relevant for representing a tight-binding lattice containing two valence electrons occupying two non-degenerate atomic orbitals, or two electrons in the conduction band of a semiconductor, for which the sites represent finite grid points.
To study the concurrence associated with the entanglement of such a system in terms of the $geometric$ $phase$ acquired by the spin of one electron in presence of the other electron, we consider a rotation of the spin around the $z$-axis under an angle $\theta$ at each site $$\psi_{i j}^{\uparrow \downarrow} \rightarrow \psi_{i j}^{\uparrow \downarrow}e^{2i\theta}$$ when the angle $\theta$ varies from 0 to $\pi$. The Berry phase acquired by the spin may be realised through the expression $$\Phi_B=-i~\int_0^\pi ~<\psi|\partial_\theta \psi>d\theta$$ which on the lattice takes the form $$\label{a4}
\Phi_B=2\pi ~2 \sum_{i,j}\psi_{i j}^{\uparrow \downarrow ^{*}}~\psi_{j i}^{\uparrow \downarrow}$$ This follows from the differentiation of the expression (11) with respect to $\theta$ and replacing the integration in the continuum case by the summation on the lattice. The relationship between concurrence and the Berry phase can be generalised for the system of two indistinguishable particles and from eqns. (9) and (\[a4\]) we can write $$\label{u1}
C=\frac{|\Phi_B|}{2\pi}=2 \sum_{i,j}\psi_{i j}^{\uparrow \downarrow ^{*}}~\psi_{j i}^{\uparrow \downarrow}$$ This may be identified with the formula obtained by Ramsak et al. [@8] for the entanglement of the two-electron states on a lattice (given by eqn. (10)). The concurrence of the system can be expressed in terms of the operators $$S^+_{A(B)}=(S^-_{A(B)})^\dagger=\sum_{i\in A(B)}c^\dagger_{i \uparrow}c_{i \downarrow}$$ and for the state with $S^Z_{tot}=0$, we have $$\label{r1}
C=2|<S^+_A~ S^-_B>|~=~2\sum_{i,j}\psi_{i j}^{\uparrow \downarrow ^{*}}~\psi_{j i}^{\uparrow \downarrow}$$ Indeed, this can be formulated in a more familiar form by considering the state in analogy to the Bell state $$\Phi^{\pm}_{i j}=\frac{1}{\sqrt 2}(\psi_{i j}^{\uparrow \downarrow} \pm
\psi_{j i}^{\uparrow \downarrow} )$$ over all pairs $[i j]$ such that $i\in A$ and $j \in B$. The expression for concurrence of the system is given by [@8] $$C= \sum_{[i j]}\mid [(\Phi_{i j}^+)^2~-~(\Phi_{i j}^-)^2 ]\mid$$ which is equivalent to the expression (\[r1\]). From our analysis we note that this result is identical with the expression (\[u1\]) obtained from the relationship of Berry phase with concurrence.
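In the simplest situation of one electron per region, eqn. (\[r1\]) reduces to an expectation value in the two-spin space. The following sketch (with the standard spin-1/2 ladder operators, not code from the paper) checks that the state of eqn. (3) indeed gives $C=2|a||b|$.

```python
import numpy as np

# Two-spin check of C = 2 |<S^+_A S^-_B>|.  Basis ordering: |uu>, |ud>, |du>, |dd>.
sp = np.array([[0, 1], [0, 0]], dtype=complex)    # S^+ for a single spin-1/2
sm = sp.conj().T                                  # S^-
SpA_SmB = np.kron(sp, sm)                         # S^+ on region A, S^- on region B

def concurrence(a, b):
    """State a |ud> - b |du> of eqn. (3), assumed normalised."""
    psi = np.array([0, a, -b, 0], dtype=complex)
    return 2 * abs(psi.conj() @ SpA_SmB @ psi)

print(concurrence(1 / np.sqrt(2), 1 / np.sqrt(2)))   # singlet: C = 1
print(concurrence(1.0, 0.0))                         # product state: C = 0
```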
Entanglement of two delocalised electrons in Hubbard model
===========================================================
As the study of the generation of entanglement in the solid-state environment is an active field of research, as an application of our formalism we consider the well-studied Hubbard model [@11].
To compute the concurrence for the entanglement of two electrons in two different spatial regions in Hubbard model, let us consider two interacting electrons in a one dimensional lattice with $N \rightarrow \infty $. The corresponding Hamiltonian is $$\label{a6}
H=-t~\sum _{i j, s}\left(c^\dagger _{i~ s}c_{j~ s}+h.c.\right)+
\sum _{i, j, s, s^\prime}U_{i j}n_{i~ s}n_{j~ s^\prime}$$ where $t$ is the hopping parameter, $U$ represents the onsite repulsion and $n_{i~ s}$ is the number of electrons at the site $i$ with spin $s$. Let the situation be such that one electron with spin ${\uparrow}$ is initially confined in the region A and the other electron with opposite spin ${\downarrow}$ in region B. The initial state is defined by two wave packets, the left with momentum $k$ and the right with momentum $-q$. After the collision, the electrons move apart with non-spin-flip amplitude $t_{kq}$ and spin-flip amplitude $r_{kq}$. For sharp momentum resolutions we take $k=-q=k_0$. We would like to study the entanglement of these two electrons in terms of the Berry phase acquired by the spins in this system. We know that for strong coupling and at half filling, the system with Hamiltonian (\[a6\]) reduces to the Heisenberg antiferromagnetic chain and the Hamiltonian is given by $$H=J\sum \left[ S^x_i~ S^x_j~+~S^y_i~ S^y_j~+~S^z_i~ S^z_j\right]$$ with $J=4t^2/U$. In the $S=0$ sector ($S$ = total spin), the rotational symmetry of the Hamiltonian implies $$<S^x_i~ S^x_j>~=~<S^y_i~ S^y_j>~=~<S^z_i~ S^z_j >$$ In the antiferromagnetic chain for a spin-1/2 system, $$<S^z_i~ S^z_j>~\leq \frac{1}{4}$$ If $\theta$ is the deviation of the spin at the site $i$ from the quantization axis, i.e. the $z$-axis, under the influence of the spin at the site $j$, then we can write $$<S^z_i~ S^z_j>=\frac{1}{4}\cos \theta$$ We consider the collision of the two electrons initially in the regions A and B. After the collision the electrons move to the final states in these two regions either with spin-flip or non-spin-flip configurations. The Berry phase acquired by the up (down) configuration is given by $$\Phi_{B-}(\Phi_{B+})~= \pi(1 - \cos \theta)(\pi(1 + \cos \theta))$$ However, after the collision the initial spin positions get changed, so that for the spin-flip and non-spin-flip cases we have the two phases $$\Phi_B= \pi(1 - \cos \theta)|_{\theta=\pi}~~~~\rm{and}~~~~ \Phi_B=\pi(1 + \cos \theta)|_{\theta=0}$$ respectively.
The generalised expression for the Berry phase is $$\Phi_B= \pi(1 + |\cos \theta|)$$ When the spin-flip and non-spin-flip amplitudes coincide, the concurrence is given by $$C=\frac{|\Phi_B|}{2\pi}=\frac{1}{2}(1 + |\cos \theta|)|_{\theta=0,\pi}=1$$ Our result is identical to another definition of the concurrence [@8], $$C=2|t_{kq}r_{kq}|=1$$ when the spin-flip and non-spin-flip amplitudes coincide, i.e. $t_{kq}=r_{kq}$. This corresponds to $k_0 \sim 0,\pi$. However, when the spin-flip and non-spin-flip amplitudes do not coincide, i.e. $t_{kq}\neq r_{kq}$, we can measure the concurrence from an estimate of the angle $\theta$ in terms of the momentum $k_0$ ($k=-q=k_0$). This can be achieved from an analysis of the energy relations in the Hubbard model and the Heisenberg antiferromagnetic chain in the ground state, with sites $i \in A$, $j\in B$. In the Hubbard model, when no particles meet at a lattice point, the many-particle energy is given by $$E=-2t \sum_i \cos k_i$$ In the Heisenberg antiferromagnetic chain with the correlation given by eqn. (23), the energy per site is given by $$E=J \frac{3}{4} \cos \theta$$ Since in the Hubbard model the occupation number of each species of spin is $<n_{i \alpha}>=\frac{1}{2}$, we find that with $J=\frac{4t^2}{U}$ the energy of one particle can be related to the energy per site in the antiferromagnetic chain by the relation $$t \cos k_0=\frac{4t^2}{U} \frac{3}{4}\cos\theta$$ For $t=U$, we find $$\cos\theta =\frac{1}{3}\cos k_0$$ So the concurrence for different values of $k_0$ at $t=U$ can be obtained in terms of the Berry phase acquired by the spin through the relation $$C=\frac{1}{2}(1+|\cos\theta|)_{\theta \neq 0,\pi}=\frac{1}{2}(1+\frac{1}{3}|\cos k_0|)_{k_0 \neq 0,\pi}$$
From this, we can obtain a numerical estimate of the concurrence for different values of $k_0$. In fact, for $k_0 = \pi/4, \pi/2, 3\pi/4$ we find $C= 0.62, 0.5, 0.62$, respectively. Again, from eqn. (26) we note that for $k_0=0,\pi$ we get $C=1$. These results are in good agreement with the values of the concurrence obtained by Ramsak et al. [@8] from an analysis of the spin-flip and non-spin-flip amplitudes of the two-electron interaction for wave packets with well defined momentum.
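As a quick check of these numbers, the relation $C=\frac{1}{2}(1+\frac{1}{3}|\cos k_0|)$ can be evaluated directly; the short Python snippet below is our own illustration (not part of [@8]) and simply reproduces the quoted values.

```python
import numpy as np

# Concurrence from the Berry-phase relation C = (1 + |cos(theta)|)/2 with
# cos(theta) = cos(k0)/3 at t = U (illustrative check of the quoted values).
for k0 in (np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    C = 0.5 * (1.0 + np.abs(np.cos(k0)) / 3.0)
    print(f"k0 = {k0:.3f}  C = {C:.2f}")   # prints 0.62, 0.50, 0.62
```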
Summary and Conclusion
======================
To summarize, the present analysis shows that the spin entanglement of two identical fermions in two different spatial regions can be described by the Berry phase acquired by the spins in the two domains. We have considered two identical fermions, localised in two different spatial regions, whose spins interact through magnetic coupling. As the study of entanglement in the solid-state environment is important, to substantiate our derivation we have considered two electrons in two different spatial regions in the Hubbard model. We have derived the concurrence for their spin entanglement in terms of the Berry phase acquired by their spins. We have found that the results obtained with our method (the value of the concurrence in the Hubbard model) are in good agreement with the existing results in the literature [@8].
We may conclude by mentioning that it is difficult [@h1; @h2; @h3; @h4; @h5] to find a directly measurable observable that corresponds to the entanglement of a given arbitrary quantum state. In this novel approach, the value of the concurrence, which is a measure quantifying the spin entanglement of two fermions, can be estimated from the observed Berry phase acquired by their spins. Furthermore, as we have already shown that the concurrence for the entanglement of distinguishable spins in a spin system can be related to the Berry phase acquired by their spins [@1], the present approach generalises the relationship between the entanglement of distinguishable and of indistinguishable fermions.
[*Acknowledgement:*]{} We are thankful to the referees for their constructive comments and good suggestions.
B. Schumacher, Phys. Rev. A [**54**]{}, 2614 (1996)
S. Lloyd, Science [**261**]{}, 1589 (1993); D. P. Vincenzo, Science [**270**]{}, 255 (1995)
A. Ekert, Phys. Rev. Lett. [**67**]{}, 661 (1991)
C. H. Bennet et al., Phys. Rev. Lett. [**70**]{}, 1095 (1993)
C. Bennet, H. Bernstein, S. Popescu and B. Schumacher, Phys. Rev. A [**53**]{}, 2046 (1996)
S. Hill and W. K. Wootters, Phys. Rev. Lett. [**78**]{}, 5022 (1997)
V. Vedral, M. B. Plenio, M. A. Rippin and P. L. Knight, Phys. Rev. Lett. [**78**]{}, 2225 (1997)
W. K. Wootters, Phys. Rev. Lett. [**80**]{}, 2245 (1998)
G. C. Ghirardi and L. Marinatto, Phys. Rev. A [**70**]{}, 012109 (2004)
K. Eckert, J. Schliemann, G. Bruss and M. Lewenstein, Ann. Phys. [**299**]{}, 88 (2002)
J. R. Gittings and A. J. Fisher, Phys. Rev. A [**66**]{}, 032305 (2002)
Z. Huang and S. Kais, Chem. Phys. Lett. [**413**]{}, 1 (2005)
F. Buscemi, P. Bordone and A. Bertoni, Phys. Rev. A [**73**]{}, 052312 (2006)
V. Vedral, Central Eur. J. Phys. [**2**]{}, 289 (2003)
A. Ramsak, I. Sega and J. A. Jefferson, Phys. Rev. A [**74**]{}, 010304(R) (2006)
B. Basu, Europhys. Lett. [**73**]{}, 833 (2006); B. Basu and P. Bandyopadhyay, Int. J. Geo. Meth. Mod. Phys. [**4**]{}, No. 5, 707 (2007)
R. A. Bertlmann, K. Durstberger, Y. Hasegawa, B. C. Hersmayer, Phys. Rev. A [**69**]{}, 032112 (2004)
J. Hubbard, Proc. Roy. Soc. (London) A [**276**]{}, 238 (1963)
A. Peres, Phys. Rev. Lett. [**77**]{}, 1413 (1996)
G. Vidal and R. F. Werner, Phys. Rev. A [**65**]{}, 032314 (2002)
A. G. White, D. F. V. James, P. H. Eberhe et al., Phys. Rev. Lett. [**83**]{}, 3103 (1999)
H. Haffner et al., Nature (London) [**438**]{}, 443 (2005)
K. J. Ruch, P. Walther and A. Zerlinger, Phys. Rev. Lett. [**94**]{}, 070402 (2005)
---
bibliography:
- 'BigBib.bib'
---
[**High order operator splitting methods based on an integral deferred correction framework** ]{}
Andrew J. Christlieb [^1] Yuan Liu[^2] Zhengfu Xu[^3]
**Abstract**
Integral deferred correction (IDC) methods have been shown to be an efficient way to achieve arbitrary high order accuracy and possess good stability properties. In this paper, we construct high order operator splitting schemes using the IDC procedure to solve initial value problems (IVPs). We present analysis to show that the IDC methods can correct for both the splitting and numerical errors, lifting the order of accuracy by $r$ with each correction, where $r$ is the order of accuracy of the method used to solve the correction equation. We further apply this framework to solve partial differential equations (PDEs). Numerical examples in two dimensions of linear and nonlinear initial-boundary value problems are presented to demonstrate the performance of the proposed IDC approach.
[**Key Words:**]{} Integral deferred correction, initial-boundary value problem, high-order accuracy, operator splitting.
Introduction {#sec1}
============
In this paper we present high order operator splitting methods based on the integral deferred correction (IDC) mechanism. The methods are designed to leverage recent progress on parallel time stepping and offer a great deal of flexibility for computing the ordinary differential equations (ODEs). We focus on extending IDC theory to the case of splitting schemes on the IVP $$\label{eqn:split-ode}
u_{t} = f(t,u) = \sum^{\Lambda}_{\nu = 1} f_{\nu}(t,u), \qquad u(0) = u_0, \qquad t\in [0,T],$$ and discuss the application in parabolic PDEs. Here, $u \in \mathbb{R}^n$ and $f(t, u) : \mathbb{R}^{+}\times \mathbb{R}^n \rightarrow \mathbb{R}^n$.
When (\[eqn:split-ode\]) arises from a method of lines discretization of time dependent PDEs describing multi-physics problems, the resulting computation is high dimensional. For these problems, splitting methods can be applied to decouple the problem into simpler sub-problems. The main advantages of operator splitting methods are therefore problem simplification, dimension reduction, and lower computational cost. Many splitting methods fall into two broad categories: differential operator splitting [@Bagrinovskii; @Marchuk; @Strang1; @Strang2] and algebraic splitting, whose prominent example is the alternating direction implicit (ADI) method, first introduced in [@Doug2; @Doug1; @Peachman] for solving two dimensional heat equations. The main barrier in designing high order numerical methods based on the idea of splitting is the operator splitting error. Obtaining high order accuracy from a low order splitting method generally adds complexity to the design and stability analysis of a scheme [@Hansen; @McLachlan; @Jia; @McLachlan1; @THALHAMMER; @Geiser]. A recent work [@Bourlioux] applies the spectral deferred correction (SDC) procedure to the advection-diffusion-reaction system in one dimension in order to enhance the overall order of accuracy. However, their work does not contain a proof that the corrections raise the order of the method. In [@Dutt], an SDC method is first proposed as a new variation on the classical deferred correction methods [@Bohmer]. The key idea is to recast the error equation such that the residual appears in the error equation in integral form instead of differential form, which greatly stabilizes the method. It is proposed as a framework to generate arbitrarily high order methods. This family of methods uses Gaussian quadrature nodes in the correction to the defect or error, hence the method can achieve a maximal order of $2(M-1)$ on $M$ grid points with $2(M-2)$ corrections. This main feature of the SDC method made it popular, and extensive investigations can be found in [@Dutt; @Minion; @Layton3; @Layton4; @Layton1; @Huang1; @Huang2; @liu2008strong]. Following this line of approach, the IDC methods are introduced in [@Andrew; @Andrew1; @Andrew2]. High order explicit and implicit Runge-Kutta (RK) integrators in both the prediction and correction steps (IDC-RK) are developed by utilizing uniform quadrature nodes for computing the residual. In [@Andrew; @Andrew1], it is established that using explicit RK methods of order $r$ in the correction step results in $r$ higher degrees of accuracy with each successive correction step, but only if uniform nodes are used instead of the Gaussian quadrature nodes of SDC. It is shown in [@Andrew1] that the new methods produced by the IDC procedure are yet again RK methods. It is also demonstrated that, for the same order, IDC-RK methods possess better stability properties than the equivalent SDC methods. Furthermore, for explicit methods, each correction of IDC or SDC increases the region of absolute stability. Similar results are generalized to arbitrary order implicit and additive RK methods in [@Andrew2]. Generally, for *implicit* methods based on IDC and SDC, the stability region becomes smaller when more correction steps are employed. It is believed that this is due to the numerical approximation of the residual integral. The primary purpose of this work is to apply the IDC methodology to low order operator splitting methods in order to obtain higher order accuracy.
The paper is organized as follows. In Section 2, we briefly review several classical operator splitting methods and show how these methods can be cast as additive RK (ARK) methods. In Section 3 we formulate the IDC methodology for application to operator splitting schemes. In Section 4, we prove that IDC methods can correct for both the splitting and numerical errors of ODEs, giving $r$ higher degrees of accuracy with each correction, where $r$ is the order of the method used in the correction steps. In Section 5, as an interesting example, we will show how to use integral deferred correction for operator splitting (IDC-OS) schemes as a temporal discretization when solving PDEs. In Section 6 we carry out numerical simulations based on IDC methods for both linear and non-linear parabolic equations, and demonstrate that the new framework can achieve high order accuracy in time. In Section 7 we conclude the paper and discuss future work. We note that both the parallel time stepping version of IDC and the work presented in this paper are likely to benefit from the work in [@Rokhlin], and will be the subject of further investigation.
Operator splitting schemes for ODEs {#section2}
===================================
In this section, we review several splitting methods which will serve as the base solver in the IDC framework. For differential operator splitting, such as Lie-Trotter splitting and Strang splitting, which happens at the continuous level, we will apply appropriate numerical methods to the sub-problems and refer to the whole approach as the discrete form of differential splitting. For both the differential splitting and algebraic splitting, we will show that each of the numerical schemes can be written as an ARK method. This insight is the first step required to apply the IDC methodology [@Andrew2] to operator splitting schemes, which is the primary purpose of the present work.
Review of ARK methods
---------------------
For the IVP (\[eqn:split-ode\]), when different $p$-stage RK integrators are applied to each operator $f_{\nu}$, the entire numerical method is called an ARK method. If we define the numerical solution after $n$ time steps as $\upsilon^n$, which is an approximation to the exact solution $u(t_n)$, then one step of a $p$-stage ARK method is given by $$\begin{aligned}
\label{eqn:ark}
& \displaystyle{ \upsilon^{n+1} = \upsilon^n + \Delta t \sum^{\Lambda}_{\nu =1}\sum^p_{i=1}b_i^{[\nu]} f_{\nu}(t_n+c^{[\nu]}_i \Delta t, \tilde{\upsilon}_i)}, \\
\text{with}
& \displaystyle{ \tilde{\upsilon}_i = \upsilon^n + \Delta t\sum^{\Lambda}_{\nu =1}\sum^p_{j = 1} a^{[\nu]}_{ij} f_{\nu}(t_n+c^{[\nu]}_j \Delta t, \tilde{\upsilon}_j)}.\end{aligned}$$ and $\Delta t = t_{n+1}-t_n$. An ARK method is succinctly identified by its Butcher tableau, as is demonstrated in Table \[table1\].
------------- --------------- ------------------- ---------------- ---------------- ---------- ---------------- -- --------------------- --------------------- ---------------------- ---------------------- ---------- ----------------------
$c^{[1]}_1$ $\cdots$ $c^{[\Lambda]}_1$ $a^{[1]}_{11}$ $a^{[1]}_{12}$ $\cdots$ $a^{[1]}_{1p}$ $\cdots$ $a^{[\Lambda]}_{11}$ $a^{[\Lambda]}_{12}$ $\cdots$ $a^{[\Lambda]}_{1p}$
$c^{[1]}_2$ $\cdots$ $c^{[\Lambda]}_2$ $a^{[1]}_{21}$ $a^{[1]}_{22}$ $\cdots$ $a^{[1]}_{2p}$ $\cdots$ $a^{[\Lambda]}_{21}$ $a^{[\Lambda]}_{22}$ $\cdots$ $a^{[\Lambda]}_{2p}$
$\vdots$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\ddots$ $\vdots $ $\cdots$ $\vdots$ $\vdots$ $\ddots$ $\vdots$
$c^{[1]}_p$ $\cdots$ $c^{[\Lambda]}_p$ $a^{[1]}_{p1}$ $a^{[1]}_{p2}$ $\cdots$ $a^{[1]}_{pp}$ $\cdots$ $a^{[\Lambda]}_{p1}$ $a^{[\Lambda]}_{p2}$ $\cdots$ $a^{[\Lambda]}_{pp}$
$b^{[1]}_{1}$ $b^{[1]}_{2}$ $\cdots$ $b^{[1]}_{p}$ $\cdots$ $b^{[\Lambda]}_{1}$ $b^{[\Lambda]}_{2}$ $\cdots$ $b^{[\Lambda]}_{p}$
------------- --------------- ------------------- ---------------- ---------------- ---------- ---------------- -- --------------------- --------------------- ---------------------- ---------------------- ---------- ----------------------
: Butcher tableau for a $p$-stage ARK method. []{data-label="table1"}
In the following sections, we will explicitly write out the Butcher tableau for each operator splitting scheme and conclude that each of the operator splitting schemes considered in this work is indeed a form of ARK method.
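To make this connection concrete, the sketch below shows how a single step of a $p$-stage ARK method defined by such tableaus could be evaluated for a scalar problem; the coupled stage equations are solved with a generic nonlinear solver. This is our own illustration: the function name, the test functions, and the compact 2-stage tableau used in the example (an encoding equivalent to the Lie-Trotter/backward-Euler scheme discussed next, not a copy of Table 2) are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import fsolve

def ark_step(fs, A, b, c, t_n, u_n, dt):
    """One step of a p-stage ARK method for the scalar IVP u' = sum_nu f_nu(t, u).
    fs: list of Lambda functions f_nu(t, u); A, b, c: per-operator tableau data."""
    p = len(b[0])
    def stage_residual(U):
        R = np.empty(p)
        for i in range(p):
            acc = u_n
            for nu, f in enumerate(fs):
                acc += dt * sum(A[nu][i, j] * f(t_n + c[nu][j] * dt, U[j]) for j in range(p))
            R[i] = U[i] - acc
        return R
    U = fsolve(stage_residual, np.full(p, float(u_n)))   # solve the coupled stage equations
    u_next = u_n
    for nu, f in enumerate(fs):
        u_next += dt * sum(b[nu][i] * f(t_n + c[nu][i] * dt, U[i]) for i in range(p))
    return u_next

# A compact 2-stage encoding equivalent to the Lie-Trotter/backward-Euler scheme (3.3):
# stage 1 solves the f_1 sub-problem implicitly, stage 2 the f_2 sub-problem.
A = [np.array([[1., 0.], [1., 0.]]), np.array([[0., 0.], [0., 1.]])]
b = [np.array([1., 0.]),             np.array([0., 1.])]
c = [np.array([1., 1.]),             np.array([1., 1.])]
f1 = lambda t, u: -2.0 * u   # illustrative right-hand sides
f2 = lambda t, u: -0.5 * u
print(ark_step([f1, f2], A, b, c, 0.0, 1.0, 0.1))
```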
Lie-Trotter splitting
---------------------
We describe Lie-Trotter splitting for the case of $\Lambda = 2$ right-hand-side functions. We consider a single interval $[t_n, t_{n+1}]$. With first order Lie-Trotter splitting, (\[eqn:split-ode\]) can be solved via two sub-problems: $$\label{3.2}
\left\{
\begin{array}{ll}
\vspace{0.05in}
u_t = f_1(t, u), & \hbox{on $ [t_n, t_{n+1}]$,} \\
u_t = f_2(t, u), & \hbox{on $[t_n, t_{n+1}]$.}
\end{array}
\right.$$ The solution calculated from the first equation is used as the initial value of the second equation. Note that this splitting occurs on the continuous level. In order to define a discrete solver for (\[3.2\]), we need to choose a numerical scheme for solving each sub-problem. For example, if we use the backward Euler scheme to solve both equations, we obtain a scheme of the form $$\label{3.3}
\left\{
\begin{array}{ll}
\vspace{0.05in}
\displaystyle{ \frac{\widetilde{\upsilon}-\upsilon^n}{\Delta t}} = f_1(t_{n+1},\widetilde{\upsilon}), & \hbox{} \\
\displaystyle{ \frac{\upsilon^{n+1}-\widetilde{\upsilon}}{\Delta t}} = f_2(t_{n+1}, {\upsilon}^{n+1}) ~,~ & \hbox{}
\end{array}
\right.$$ where $\upsilon^n$ denotes the numerical approximation for $u$ at time level $t_n$. However, this approach only produces a first order approximation. In order to make use of IDC methodology [@Andrew2] to lift the order of accuracy of , we write a Butcher tableau for in Table \[table2\]. Comparing the Butcher tableau for the Lie-Trotter splitting with the general form of the Butcher tableau of an ARK method, we can view the discrete form of Lie-Trotter splitting (\[3.3\]) as a 2-stage ARK method. This can be extended to the case of $\Lambda$ operators, where the resulting Butcher tableau would be a $\Lambda$-stage ARK method.
--- --- --- --- --- --- ---
0 0 0 0 0 0 0
1 0 1 0 0 0 0
1 0 1 0 0 0 1
0 1 0 0 0 1
--- --- --- --- --- --- ---
: Butcher tableau for Lie-Trotter splitting. []{data-label="table2"}
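For the autonomous linear case $f_1(u)=A_1u$, $f_2(u)=A_2u$, each backward Euler sub-step in (\[3.3\]) reduces to a linear solve. The sketch below is our own illustration of one such step (the matrices and the function name are arbitrary choices, not taken from the paper).

```python
import numpy as np

def lie_trotter_be_step(A1, A2, u, dt):
    """One step of the Lie-Trotter / backward-Euler scheme (3.3) for
    u' = A1 u + A2 u: two sequential implicit solves."""
    I = np.eye(len(u))
    u_tilde = np.linalg.solve(I - dt * A1, u)        # (I - dt A1) u~ = u^n
    return np.linalg.solve(I - dt * A2, u_tilde)     # (I - dt A2) u^{n+1} = u~

# illustrative non-commuting operators
A1 = np.array([[-2.0, 1.0], [0.0, -1.0]])
A2 = np.array([[-1.0, 0.0], [0.5, -3.0]])
print(lie_trotter_be_step(A1, A2, np.array([1.0, 2.0]), 0.1))
```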
Strang splitting
----------------
In this section, we consider the second order Strang splitting for the case of three operators to demonstrate how to construct Butcher tableaus for general differential operator splitting schemes. The case of $\Lambda = 3$ operators can arise when splitting a stiff ODE into three sub-problems while maintaining second order accuracy in time.
We also focus on a single time step, $[t_n, t_{n+1}]$. Second order Strang splitting for (\[eqn:split-ode\]) reads as $$\begin{aligned}
\begin{cases}\label{3.6}
\vspace{0.1in}
\displaystyle{u_t}= f_1(t, u), \qquad t \in [t_n,t_{n+\frac{1}{2}}], \\
\vspace{0.1in}
\displaystyle{u_t}= f_2(t, u), \qquad t \in [t_{n+\frac{1}{2}},t_{n+1}], \\
\vspace{0.1in}
\displaystyle{u_t}= f_3(t, u), \qquad t \in [t_n,t_{n+1}], \\
\vspace{0.1in}
\displaystyle{u_t}= f_2(t, u), \qquad t \in [t_n,t_{n+\frac{1}{2}}], \\
\vspace{0.1in}
\displaystyle{u_t}= f_1(t, u), \qquad t \in [t_{n+\frac{1}{2}}, t_{n+1}].
\end{cases}\end{aligned}$$ Note that this splitting occurs on the continuous level, i.e. the temporal derivative for each sub-problem in (\[3.6\]) has yet to be discretized. If we discretize the equations in (\[3.6\]) with the trapezoidal rule, we obtain an update of the form $$\begin{aligned}
\begin{cases}\label{3.2.1}
\vspace{0.1in}
\displaystyle{\frac{\tilde{\upsilon}_1 -\upsilon^n}{\frac{1}{2}\Delta t}= \frac{1}{2}( f_1(t_n,\upsilon^n)+ f_1(t_{n+\frac{1}{2}},\tilde{\upsilon}_1)) }, \\
\vspace{0.1in}
\displaystyle{\frac{\tilde{\upsilon}_2-\tilde{\upsilon}_1}{\frac{1}{2}\Delta t}= \frac{1}{2}(f_2(t_{n+\frac{1}{2}},\tilde{\upsilon}_1)+f_2(t_{n+1},\tilde{\upsilon}_2))}, \\
\vspace{0.1in}
\displaystyle{\frac{\tilde{\upsilon}_3-\tilde{\upsilon}_2}{\Delta t}= \frac{1}{2}( f_3(t_n,\tilde{\upsilon}_2)+f_3(t_{n+1}, \tilde{\upsilon}_3))}, \\
\vspace{0.1in}
\displaystyle{\frac{\tilde{\upsilon}_4-\tilde{\upsilon}_3}{\frac{1}{2}\Delta t}= \frac{1}{2}( f_2(t_n, \tilde{\upsilon}_3)+f_2(t_{n+\frac{1}{2}},\tilde{\upsilon}_4))} , \\
\vspace{0.1in}
\displaystyle{\frac{\upsilon^{n+1}-\tilde{\upsilon}_4}{\frac{1}{2}\Delta t}= \frac{1}{2}( f_1(t_{n+\frac{1}{2}},\tilde{\upsilon}_4)+f_1(t_{n+1},\upsilon^{n+1}) )},
\end{cases}\end{aligned}$$ where $t_{n+\frac{1}{2}} = t_n +\frac{1}{2}\Delta t$. In Table \[table3\], we write this scheme in the Butcher tableau. Comparing this with the Butcher tableau of the ARK methods, again we see that we can view the Strang splitting as a 5-stage ARK scheme.
----------------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --- --------------- --------------- --------------- --------------- --- ---
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
\[4pt\] $\frac{1}{2}$ $\frac{1}{2}$ 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
\[4pt\] 0 1 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 0 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 0 0 0 0 0 0
\[4pt\] 0 0 1 $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 0 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 0 0 $\frac{1}{2}$ $\frac{1}{2}$ 0 0
\[4pt\] $\frac{1}{2}$ $\frac{1}{2}$ 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 0 0 $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 $\frac{1}{2}$ $\frac{1}{2}$ 0 0
\[4pt\] 1 0 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 $\frac{1}{2}$ $\frac{1}{2}$ 0 0
\[4pt\] $\frac{1}{4}$ $\frac{1}{4}$ 0 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 $\frac{1}{2}$ $\frac{1}{2}$ 0 0
\[4pt\]
----------------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --- --------------- --------------- --------------- --------------- --- ---
: Butcher tableau for Strang splitting with $\Lambda = 3$. []{data-label="table3"}
----------------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- ---
0 0 0 0 0 0 0 0 0 0 0 0 0 0
\[4pt\] $\frac{1}{2}$ $\frac{1}{2}$ $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 0 0 0 0 0 0 0
\[4pt\] 0 1 $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 0 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0
\[4pt\] $\frac{1}{2}$ $\frac{1}{2}$ $\frac{1}{4}$ $\frac{1}{4}$ 0 0 0 0 0 $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ 0
\[4pt\] 1 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ 0
\[4pt\] $\frac{1}{4}$ $\frac{1}{4}$ 0 $\frac{1}{4}$ $\frac{1}{4}$ 0 $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ $\frac{1}{4}$ 0 0
\[4pt\]
----------------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- ---
: Butcher tableau for Strang splitting when $f_3 = 0$. []{data-label="table4"}
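As a complement to the tableau representation, the update (\[3.2.1\]) can be implemented directly for the autonomous linear case $f_\nu(u)=A_\nu u$, in which every trapezoidal sub-step is a linear solve. The sketch below is our own illustration (the helper and matrix names are assumptions, not part of the paper).

```python
import numpy as np

def _cn_substep(A, v, tau):
    """Trapezoidal (Crank-Nicolson) sub-step of size tau for v' = A v."""
    I = np.eye(len(v))
    return np.linalg.solve(I - 0.5 * tau * A, (I + 0.5 * tau * A) @ v)

def strang_step(A1, A2, A3, u, dt):
    """One Strang step (3.2.1) for u' = (A1 + A2 + A3) u:
    A1 and A2 on half steps, A3 on the full step, then A2 and A1 again."""
    v = _cn_substep(A1, u, 0.5 * dt)
    v = _cn_substep(A2, v, 0.5 * dt)
    v = _cn_substep(A3, v, dt)
    v = _cn_substep(A2, v, 0.5 * dt)
    return _cn_substep(A1, v, 0.5 * dt)
```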
ADI splitting
-------------
The ADI method is a predictor-corrector scheme and a typical example of algebraic splitting, which is applied after the equations have been discretized. Here we consider the discretized ODE version of the Peaceman-Rachford scheme [@Peachman] for (\[eqn:split-ode\]). When $\Lambda = 2$, the ADI scheme takes the form
$$\label{3.3ADI}
\left\{
\begin{array}{ll}
\vspace{0.05in}
\displaystyle{ \frac{\widetilde{\upsilon}-\upsilon^n}{\frac{1}{2}\Delta t}} = f_1(t_{n+\frac{1}{2}} ,\widetilde{\upsilon}) + f_2(t_n, {\upsilon}^n), & \hbox{} \\
\displaystyle{ \frac{\upsilon^{n+1}-\widetilde{\upsilon}}{\frac{1}{2}\Delta t}} = f_1(t_{n+\frac{1}{2}} ,\widetilde{\upsilon}) + f_2(t_{n+1}, {\upsilon}^{n+1}) . & \hbox{}
\end{array}
\right.$$
The Butcher tableau for the scheme (\[3.3ADI\]) is shown in Table \[table6\], and clearly, we see that we can view the ADI splitting scheme as a 2-stage ARK method.
----------------------- --- --------------- --- --------------- --- ---------------
0 0 0 0 0 0 0
\[4pt\] $\frac{1}{2}$ 0 $\frac{1}{2}$ 0 $\frac{1}{2}$ 0 0
\[4pt\] 1 0 1 0 $\frac{1}{2}$ 0 $\frac{1}{2}$
\[4pt\] 0 1 0 $\frac{1}{2}$ 0 $\frac{1}{2}$
----------------------- --- --------------- --- --------------- --- ---------------
: Butcher tableau for ADI scheme.[]{data-label="table6"}
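For the autonomous linear case $f_\nu(u)=A_\nu u$, one Peaceman-Rachford step (\[3.3ADI\]) amounts to two linear solves. The sketch below is our own illustration (names are arbitrary).

```python
import numpy as np

def adi_pr_step(A1, A2, u, dt):
    """One Peaceman-Rachford ADI step (3.3ADI) for u' = A1 u + A2 u:
    implicit in A1 / explicit in A2 on the first half step, and vice versa on the second."""
    I = np.eye(len(u))
    u_half = np.linalg.solve(I - 0.5 * dt * A1, (I + 0.5 * dt * A2) @ u)
    return np.linalg.solve(I - 0.5 * dt * A2, (I + 0.5 * dt * A1) @ u_half)
```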
Formulation of IDC-OS schemes
=============================
In this section, we formulate IDC-OS schemes following the IDC framework presented in [@Andrew2]. The authors of [@Andrew2] considered IDC methods for implicit-explicit (IMEX) schemes, where the non-stiff part of the problem was treated explicitly and the stiff part of the problem was treated implicitly. At present, our focus is on entirely implicit schemes.
We begin with some preliminary definitions. The starting point is to partition the time interval $[0, T]$ into intervals $[t_n, t_{n+1}]$, $ n=0, 1, ..., N-1$, that satisfy $$\begin{aligned}
\label{eqn_idc_os_2}
0 = t_0 < t_1 < t_2 < \cdots < t_n < \cdots < t_N = T.\end{aligned}$$ The “macro” time steps are defined by $H_n = t_{n+1} - t_n$, and we permit them to vary with $n$. Next, each interval $[t_n, t_{n+1}]$ is further partitioned into $M$ sub-intervals $[t_{n,m}, t_{n,m+1}]$, $m = 0,1,..., M-1$, $$\begin{aligned}
\label{eqn_idc_os_3}
t_{n} = t_{n, 0} < t_{n, 1} < t_{n, 2} < \cdots < t_{n, m} < \cdots < t_{n, M} = t_{n+1}\end{aligned}$$ with time step size $h_{n, m} = t_{n, m} - t_{n, m-1}$. If Gaussian quadrature nodes are selected, as was originally done with the SDC method [@Dutt], $h_{n, m}$ varies with $m$. Here, we only consider the case of uniform quadrature nodes, i.e. with $h_{n, m } = \frac{H_n}{M}$ for $m = 1, 2,
\dots, M$. Thus, without any ambiguity, we will drop the subscript $m$ on $h_{n, m}$. Note that in what follows we will use superscript $[i]$ to denote the $i^{th}$ correction at a discrete set of time points and superscript $(i)$ to denote the continuous approximation given by passing a $M^{th}$ order polynomial through the discrete approximation. For simplicity, we drop the $n$ subscript for the description of the IDC procedure on “macro”-time interval $[t_n, t_{n+1}]$. The whole iterative prediction-correction procedure is completed before moving on to the next time interval $[t_{n+1}, t_{n+2}]$. The numerical solution at $t_{n+1}$ serves as the initial condition for the following interval $[t_{n+1}, t_{n+2}]$.
**Prediction step :** Use an $r_0$-th order numerical method to obtain a preliminary solution to IVP (\[eqn:split-ode\]) $$\begin{aligned}
\label{eqn_idc_os_4}
\upsilon^{[0]} = (\upsilon^{[0]}_0, \upsilon^{[0]}_1, \dots, \upsilon^{[0]}_m, \dots, \upsilon^{[0]}_M),
\end{aligned}$$ which is an $r_0$-th order approximation to the exact solution $$\begin{aligned}
\label{eqn_idc_os_5}
u = (u_0, u_1, ..., u_m, ..., u_M),
\end{aligned}$$ where $u_m = u(t_m)$ is the exact solution at $t_m$ for $m = 0, 1, 2, ... , M$.
**Correction step :** Use the error function to improve the accuracy of the scheme at each iteration. For $k = 1$ to $c_{s}$, ($c_s$ is the number of correction steps):
\(1) Denote the error function from the previous step as $$\begin{aligned}
\label{eqn_idc_os_7}
e^{(k-1)} (t) = u(t) - \upsilon^{(k-1)}(t),
\end{aligned}$$ where $u(t)$ is the exact solution and $\upsilon^{(k-1)}(t)$ is an $M$-th degree polynomial interpolating $\upsilon^{[k-1]}$. Note that the error function, $e^{(k-1)}(t)$ is not a polynomial in general.
\(2) Denote the residual function as $$\begin{aligned}
\label{eqn_idc_os_8}
\epsilon^{(k-1)}(t) \equiv (\upsilon^{(k-1)})'(t)- f(t, \upsilon^{(k-1)}) , \end{aligned}$$ and compute the integral of the residual. For example, $$\begin{aligned}
\label{eqn_idc_os_9}
\int^{t_{m+1}}_{t_0}\epsilon^{(k-1)}(\tau) d \tau \approx \upsilon^{[k-1]}_{m+1} - u_0 -(t_{m+1}-t_0) \sum^M_{j=0}{\gamma}_{m, j}
f(t_j, \upsilon^{[k-1]}_j),
\end{aligned}$$ where ${\gamma}_{m, j}$ are the coefficients that result from approximating the integral by a quadrature formula on the nodes $\{t_j\}$ (a sketch of how these weights can be computed is shown after step (4) below), and $\upsilon^{[k-1]}_j = \upsilon^{(k-1)}(t_j)$.
\(3) Use an $r_k$-th order numerical method to obtain an approximation to error vector $$\begin{aligned}
\label{eqn_idc_os_11}
e^{[k-1]} = (e^{[k-1]}_0, ... , e^{[k-1]}_m, ... , e^{[k-1]}_M),
\end{aligned}$$ where $e^{[k-1]}_m = e^{(k-1)}(t_m)$ is the value of the exact error function (\[eqn\_idc\_os\_7\]) at time $t_m$; we denote the numerical approximation to this error vector by $$\begin{aligned}
\label{eqn_idc_os_10}
\delta^{[k]} = (\delta^{[k]}_0, ... , \delta^{[k]}_m, ... , \delta^{[k]}_M).
\end{aligned}$$ To compute $\delta^{[k]}$ by an operator splitting method consistent with the base method, we first express the error equation in a form consistent with original problem we are solving. We start by differentiating the error (\[eqn\_idc\_os\_7\]), together with (\[eqn:split-ode\]) $$\begin{aligned}
\label{eqn_idc_os_11.5}
(e^{(k-1)})'(t) & = & u'(t) - (\upsilon^{(k-1)})'(t) \\ \nonumber
& = & f(t, u(t)) - f(t, \upsilon^{(k-1)}(t)) - \epsilon^{(k-1)}(t) \\ \nonumber
& = & f(t, \upsilon^{(k-1)}(t)+e^{(k-1)}(t)) - f(t, \upsilon^{(k-1)}(t)) - \epsilon^{(k-1)}(t) . \nonumber
\end{aligned}$$
Bringing the residual to the left-hand side, we have $$\label{eqn_idc_os_13}
(e^{(k-1)}(t) + \int^t_{t_0}\epsilon^{(k-1)} (\tau) d\tau )' = f(t, \upsilon^{(k-1)}(t)+e^{(k-1)}(t)) - f(t,\upsilon^{(k-1)}(t)) .$$ We now make the following change of variable, $$\begin{aligned}
\label{eqn_idc_os_14}
& Q^{(k-1)} (t) = e^{(k-1)}(t) + \displaystyle \int^t_{t_0} \epsilon^{(k-1)} (\tau) d\tau, \\ \nonumber
& G^{(k-1)}(t, Q^{(k-1)}(t)) = f(t, \upsilon^{(k-1)}(t)+Q^{(k-1)}(t)-\displaystyle \int^t_{t_0} \epsilon^{(k-1)} (\tau) d\tau) - f(t,\upsilon^{(k-1)}(t)).
\end{aligned}$$ With this change of variable, we see that the error equation can be expressed as an IVP of the form, $$\begin{aligned}
\begin{cases}\label{eqn_idc_os_15}
\vspace{0.1in}
\displaystyle{(Q^{(k-1)})'(t)}= G^{(k-1)}(t, Q^{(k-1)}(t)) , \qquad t\in [t_0,t_{M}],\\
Q^{(k-1)}(t_0) = 0.
\end{cases}\end{aligned}$$ This is now in the form of and we can apply the same operator splitting scheme to (\[eqn\_idc\_os\_15\]) that we applied to (\[eqn:split-ode\]) and obtain the numerical approximation to ${\vartheta}^{[k-1]}_m = Q^{(k-1)}(t_m)$. Recovering $\delta$ given ${\vartheta}$ is a simple procedure.\
\(4) Update the numerical solution as $\upsilon^{[k]} = \upsilon^{[k-1]} + \delta^{[k]}$.
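As referenced in step (2), the quadrature weights ${\gamma}_{m,j}$ on uniform nodes can be precomputed once by integrating the Lagrange basis polynomials. The helper below is a minimal sketch consistent with the normalization in (\[eqn\_idc\_os\_9\]); the function name is our own, and the weights are computed on the reference interval $[0,1]$ (they are unchanged under the affine map to $[t_0,t_M]$).

```python
import numpy as np

def integration_weights(M):
    """Weights gamma[m, j] so that
    int_{t_0}^{t_{m+1}} p(tau) dtau ~= (t_{m+1} - t_0) * sum_j gamma[m, j] * p(t_j)
    for the degree-M interpolant p through the uniform nodes t_j = j/M on [0, 1]."""
    t = np.linspace(0.0, 1.0, M + 1)
    gamma = np.zeros((M, M + 1))
    for j in range(M + 1):
        c = np.poly(np.delete(t, j))       # monic polynomial with roots at the other nodes
        c = c / np.polyval(c, t[j])        # normalize so the basis l_j satisfies l_j(t_j) = 1
        L = np.polyint(c)                  # antiderivative of l_j
        for m in range(M):
            gamma[m, j] = (np.polyval(L, t[m + 1]) - np.polyval(L, t[0])) / (t[m + 1] - t[0])
    return gamma
```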
[**Remark 1 (The prediction step):**]{} For example, if we apply the discrete form of first order Lie-Trotter splitting (\[3.3\]) to (\[eqn:split-ode\]) with $\Lambda = 2$, we have for $m = 0, 1, 2, ... , M-1$,
$$\label{eqn_idc_os_6}
\left\{
\begin{array}{ll}
\vspace{0.05in}
\displaystyle{ \frac{\widetilde{\upsilon}-\upsilon^{[0]}_m}{h_n}} = f_1(t_{m+1},\widetilde{\upsilon}), & \hbox{} \\
\displaystyle{ \frac{\upsilon^{[0]}_{m+1}-\widetilde{\upsilon}}{h_n}} = f_2(t_{m+1}, \upsilon^{[0]}_{m+1}) . & \hbox{}
\end{array}
\right.$$
[**Remark 2 (The correction step):**]{} As an example, if we use ADI splitting in the correction step, we solve (\[eqn\_idc\_os\_15\]) with $\Lambda = 2$; for $m = 0, 1, 2, ..., M-1$, we have $$\label{eqn_idc_os_16}
\left\{
\begin{array}{ll}
\vspace{0.05in}
\displaystyle{\frac{\widetilde{\vartheta} - \vartheta^{[k]}_m}{\frac{h_n}{2}}} = G_1^{(k-1)}(t_m+\frac{h_n}{2}, \widetilde{\vartheta}) + G_2^{(k-1)}(t_m, \vartheta^{[k]}_m) , & \hbox{} \\
\displaystyle{\frac{\vartheta^{[k]}_{m+1} - \widetilde{\vartheta}}{\frac{h_n}{2}} = G_2^{(k-1)}(t_{m+1}, \vartheta^{[k]}_{m+1} ) + G_1^{(k-1)}(t_m+\frac{h_n}{2}, \widetilde{\vartheta} )}, & \hbox{}
\end{array}
\right.$$ where $$\label{eqn_idc_os_17}
G_{\nu}^{(k-1)}(t, Q^{(k-1)}(t)) = f_{\nu}(t, \upsilon^{(k-1)}(t)+Q^{(k-1)}(t)-\displaystyle \int^t_{t_0} \epsilon^{(k-1)} (\tau) d\tau) - f_{\nu}(t,\upsilon^{(k-1)}(t))$$ for $\nu = 1, 2$. Moreover, we note that in the implementation we split the residual term equally between the two operators.
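To make the prediction-correction loop concrete, the sketch below implements the structure above for a plain (unsplit) backward-Euler base solver on a scalar problem; in the IDC-OS schemes analysed next, the two implicit solves would simply be replaced by the chosen splitting scheme, as in Remarks 1 and 2. All function names and the test problem are our own illustrative choices, and the correction sweep is written in the standard error-equation form rather than the $Q$ change of variables.

```python
import numpy as np
from scipy.optimize import fsolve

def lagrange_int_weights(t):
    """S[m, j] = integral over [t[m], t[m+1]] of the Lagrange basis l_j built on all nodes t."""
    M = len(t) - 1
    S = np.zeros((M, M + 1))
    for j in range(M + 1):
        c = np.poly(np.delete(t, j))
        c = c / np.polyval(c, t[j])
        L = np.polyint(c)
        for m in range(M):
            S[m, j] = np.polyval(L, t[m + 1]) - np.polyval(L, t[m])
    return S

def idc_backward_euler(f, u0, t0, H, M, n_corr):
    """One IDC 'macro' step on [t0, t0+H] with M+1 uniform nodes:
    backward-Euler prediction followed by n_corr backward-Euler correction sweeps."""
    t = t0 + np.linspace(0.0, H, M + 1)
    h = H / M
    S = lagrange_int_weights(t)
    v = np.zeros(M + 1)
    v[0] = u0
    # prediction step: backward Euler on each sub-interval
    for m in range(M):
        v[m + 1] = fsolve(lambda x: x - v[m] - h * f(t[m + 1], x), v[m])[0]
    # correction sweeps: backward Euler on the error equation
    for _ in range(n_corr):
        F = np.array([f(t[j], v[j]) for j in range(M + 1)])
        delta = np.zeros(M + 1)
        for m in range(M):
            quad = S[m] @ F                          # approx. of int_{t_m}^{t_{m+1}} f(tau, v(tau)) dtau
            rhs = delta[m] + quad - (v[m + 1] - v[m])
            delta[m + 1] = fsolve(
                lambda d: d - h * (f(t[m + 1], v[m + 1] + d) - f(t[m + 1], v[m + 1])) - rhs,
                delta[m])[0]
        v = v + delta
    return t, v

# usage: u' = -u + sin(t), u(0) = 1; two corrections raise the first order prediction to third order
f = lambda t, u: -u + np.sin(t)
print(idc_backward_euler(f, 1.0, 0.0, 0.5, 4, 2)[1])
```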
Analysis of IDC-OS methods {#sec:analysis}
==========================
In this section, we will discuss the error estimate for IDC-OS methods. Our analysis is similar to previous work of IDC-RK and IDC-ARK [@Andrew; @Andrew1; @Andrew2]. In section \[subsec:continuous-splitting-theory\], we will establish that the IDC procedure can successfully reduce the splitting error for differential operator splitting methods where each sub-problem is solved exactly. In section \[subsec:discrete-splitting-theory\], we continue by leveraging the ideas from the work in [@Andrew2], and prove that the overall accuracy for the fully discrete methods is increased, as expected, with each successive correction. The second set of arguments apply to the discrete form of the differential operator splitting methods as well as the algebraic operator splitting methods. We present results for the stability regions of IDC-OS schemes in section \[subsec:stability\]. We remark that throughout this section, superscripts with a curly bracket $\{k\}$ denote the analytical functions related to solutions through differential splitting methods.
Splitting error: exact solutions to sub-problems {#subsec:continuous-splitting-theory}
------------------------------------------------
Differential operator splitting introduces a splitting error. If each sub-problem is solved exactly, the overall method contains only the splitting error. Our starting point is to prove that the IDC framework can reduce this splitting error. The primary result from this subsection is given by the following theorem.
\[thm0\] Assume $u(t)$ is the exact solution to IVP (\[eqn:split-ode\]). Consider one time interval of an IDC method with $t \in [0, h]$. Suppose Lie-Trotter splitting (\[3.2\]) is used in the prediction step and the successive $c_s$ correction steps, and the sub-problems in each step are solved exactly. If $u(t)$ and $f_{\nu}$ are at least $(c_s +3)$-times differentiable, then the splitting error is of order ${\mathcal{O}}(h^{{c_s}+2})$ after $c_s$ correction steps.
The proof of Theorem \[thm0\] follows by induction from the following two lemmas: Lemma \[thm0\_lemma0\] for the prediction step and Lemma \[thm0\_lemma1\] for the correction steps respectively.
\[thm0\_lemma0\] (Prediction step) Consider IVP (\[eqn:split-ode\]) on the interval $t\in[0, h]$. If $u(t)$ and $f_{\nu}$ satisfy the smoothness requirements in Theorem \[thm0\], and $u^{\{0\}}(t)$ is the solution obtained by applying Lie-Trotter splitting (\[3.2\]) to (\[eqn:split-ode\]), and the resulting sub-problems are solved exactly, then the splitting error scales as $$\begin{aligned}
\| e^{(0)} \| = \| u(h)- u^{\{0\}}(h) \| \sim {\mathcal{O}}(h^{2}), \qquad t \in [0, h].\end{aligned}$$
The conclusion of Lemma \[thm0\_lemma0\] is simply a restatement of the standard local error estimate for splitting methods: the splitting error is ${\mathcal{O}}(h^2)$ for first order Lie-Trotter splitting [@McLachlan1].
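This behaviour is easy to check numerically: for non-commuting linear operators whose sub-flows are solved exactly via matrix exponentials, the local Lie-Trotter error scales as ${\mathcal{O}}(h^2)$. The sketch below is our own illustration, with arbitrarily chosen matrices.

```python
import numpy as np
from scipy.linalg import expm

# u' = (A1 + A2) u with exact sub-flows expm(h*A); local Lie-Trotter splitting error.
A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
A2 = np.array([[-1.0, 0.5], [0.0, -2.0]])
u0 = np.array([1.0, 1.0])
for h in (0.1, 0.05, 0.025):
    exact = expm(h * (A1 + A2)) @ u0
    split = expm(h * A2) @ (expm(h * A1) @ u0)     # Lie-Trotter: flow of f1, then f2
    print(h, np.linalg.norm(exact - split))        # error drops roughly by 4 when h is halved
```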
\[thm0\_lemma1\] (Correction step) Assume $u(t)$ is the solution to IVP (\[eqn:split-ode\]) on the interval $t\in[0, h]$. Let $u(t)$ and $f_{\nu}$ satisfy the smoothness requirements in Theorem \[thm0\]. For $k\leq c_s$, let $u^{\{k\}} (t)$ be the solution after the prediction step and the $k$-th correction step via the Lie-Trotter splitting method in Theorem \[thm0\]. If $\| e^{(k-1)}\| \sim {{\mathcal{O}}}(h^{k+1})$, then $\| e^{(k)}\| \sim {\mathcal{O}}(h^{k+2})$ after $k$ correction steps.
*Proof:* We give the proof for the simple case $\Lambda = 2$. We have the error equation (\[eqn\_idc\_os\_15\]) after the prediction and $(k-1)$ correction steps. Using the Lie-Trotter splitting method (\[3.2\]) to solve (\[eqn\_idc\_os\_15\]), we have $$\begin{aligned}
\begin{cases}\label{lemma1_eqn6}
\vspace{0.1in}
\displaystyle{(Q_1^{\{k-1\}}(t))'}= G_1^{(k-1)}(t, Q_1^{\{k-1\}}(t)), \qquad t\in [0,h],\\
Q_1^{\{k-1\}}(0) = 0,
\end{cases}
\end{aligned}$$ and $$\begin{aligned}
\begin{cases}\label{lemma1_eqn7}
\vspace{0.1in}
\displaystyle{(Q_2^{\{k-1\}}(t))'}= G_2^{(k-1)}(t, Q_2^{\{k-1\}}(t)), \qquad t\in [0,h],\\
Q_2^{\{k-1\}}(0) = Q_1^{\{k-1\}}(h).
\end{cases}
\end{aligned}$$ with $G_{\nu}^{(k-1)}(t, Q^{(k-1)}(t))$ defined in (\[eqn\_idc\_os\_17\]). Hence $Q_2^{\{k-1\}}(h)$ is the approximation of $Q^{(k-1)}(h)$ obtained with the Lie-Trotter splitting method. It is easy to see that $$\label{lemma1_eqn7.5}
e^{(k)}(h) = e^{(k-1)}(h) - e^{\{k-1\}}(h) = Q^{(k-1)}(h)- Q_2^{\{k-1\}}(h),$$ for $t \in [0, h]$. To prove $ Q^{(k-1)}(h)- Q_2^{\{k-1\}}(h) \sim {{\mathcal{O}}}(h^{k+2}) $, we examine the scaled variant $$\label{lemma1_eqn8}
\bar{Q}^{(k-1)}(t) = \frac{1}{h^k} Q^{(k-1)}(t).$$ With this new notation, IVP (\[eqn\_idc\_os\_15\]) can be equivalently written as $$\begin{aligned}
\begin{cases}\label{lemma1_eqn9}
\vspace{0.1in}
\displaystyle{(\bar{Q}^{(k-1)}(t))'}= \bar{G}^{(k-1)}(t, \bar{Q}^{(k-1)}(t)), \qquad t\in [0,h],\\
\bar{Q}^{(k-1)}(0) = 0.
\end{cases}
\end{aligned}$$ with $$\label{lemma1_eqn10}
\bar{G}^{(k-1)}(t, \bar{Q}^{(k-1)}(t)) = \frac{1}{h^k} G^{(k-1)}(t, h^k\bar{Q}^{(k-1)}(t))~.~$$ Using the Lie-Trotter splitting method to solve IVP (\[lemma1\_eqn9\]) will give us $$\begin{aligned}
\begin{cases}\label{lemma1_eqn11}
\vspace{0.1in}
\displaystyle{(\bar{Q}_1^{\{k-1\}}(t))'}= \bar{G}_1^{(k-1)}(t, \bar{Q}_1^{\{k-1\}}(t)), \qquad t\in [0,h],\\
\bar{Q}_1^{\{k-1\}}(0) = 0,
\end{cases}
\end{aligned}$$ and $$\begin{aligned}
\begin{cases}\label{lemma1_eqn12}
\vspace{0.1in}
\displaystyle{(\bar{Q}_2^{\{k-1\}}(t))'}= \bar{G}_2^{(k-1)}(t, \bar{Q}_2^{\{k-1\}}(t)), \qquad t\in [0,h],\\
\bar{Q}_2^{\{k-1\}}(0) = \bar{Q}_1^{\{k-1\}}(h)~,~
\end{cases}
\end{aligned}$$ with $$\label{lemma1_eqn12.1}
\bar{G}_{\nu}^{(k-1)}(t, \bar{Q}^{(k-1)}(t)) = \frac{1}{h^k} G_{\nu}^{(k-1)}(t, h^k\bar{Q}^{(k-1)}(t)), \qquad \nu = 1, 2.$$ $\bar{Q}_2^{\{k-1\}}(h)$ is the approximation to $\bar{Q}^{(k-1)}(h)$ through Lie-Trotter splitting. If $ e^{(k-1)} \sim {\mathcal{O}}(h^{k+1})$, it is easy to verify that $ Q^{(k-1)}(t) \sim {\mathcal{O}}(h^{k+1})$ and $G^{(k-1)}(t, Q^{(k-1)}(t)) \sim {\mathcal{O}}(h^{k+1})$. Similarly to the IDC-RK analysis in [@Andrew], one can further check that $\frac{d}{d t}\bar{Q}^{\{k-1\}}(t) \sim {\mathcal{O}}(1)$ and $\bar{G}^{(k-1)}(t, \bar{Q}^{\{k-1\}}(t)) \sim {\mathcal{O}}(1)$. Therefore, $$\label{lemma1_eqn13}
\parallel \bar{Q}^{(k-1)}(h)- \bar{Q}_2^{\{k-1\}}(h) \parallel \sim {\mathcal{O}}(h^2).$$ Notice that IVPs (\[lemma1\_eqn6\]) and (\[lemma1\_eqn11\]) are both first order ODEs, and $h^k\bar{G}_1^{(k-1)}(t, \bar{Q}_1^{\{k-1\}}(t)) = G_1^{(k-1)}(t, h^k\bar{Q}_1^{\{k-1\}}(t))$. Since $\bar{Q}_1^{\{k-1\}}(t)$ is the solution to (\[lemma1\_eqn11\]), $h^k\bar{Q}_1^{\{k-1\}}(t)$ is a solution to (\[lemma1\_eqn6\]). By the uniqueness of the solution of the IVP, one can conclude that $$\label{lemma1_eqn13.5}
\bar{Q}_1^{\{k-1\}}(h) = \frac{1}{h^k} Q_1^{\{k-1\}}(h) .
$$ Similarly, from IVP (\[lemma1\_eqn7\]) and (\[lemma1\_eqn12\]), one can further conclude $$\label{lemma1_eqn13.5_1}
\bar{Q}_2^{\{k-1\}}(h) = \frac{1}{h^k} Q_2^{\{k-1\}}(h).$$ Thus (\[lemma1\_eqn13\]) is equivalent to $$\begin{aligned}
\label{lemma1_eqn14}
\parallel \frac{1}{h^k} Q^{(k-1)}(h)- \frac{1}{h^k} Q_2^{(k-1)}(h) \parallel \sim {\mathcal{O}}(h^2),\end{aligned}$$ i.e. $$\begin{aligned}
\label{lemma1_eqn15}
\parallel e^{(k)} \parallel = \parallel Q^{(k-1)}(h)- Q_2^{\{k-1\}}(h) \parallel \sim {\mathcal{O}}(h^{k+2}).\end{aligned}$$ This completes the proof of Lemma \[thm0\_lemma1\] for the case of Lie-Trotter splitting. $\square$ The conclusion of Theorem \[thm0\] also holds for the Strang splitting method (\[3.6\]), and the proof is essentially the same as for Lie-Trotter splitting. We have now demonstrated that IDC can lift the order of accuracy when each sub-problem is solved exactly; however, in practice we usually do not have access to analytical solutions for these sub-problems. We consider the fully discrete scheme in the next section.
Local truncation error: discrete solutions to sub-problems {#subsec:discrete-splitting-theory}
----------------------------------------------------------
A fully discrete solution introduces additional error beyond the splitting error. In this section, we turn to analyzing fully discrete IDC-OS schemes and begin with some preliminary definitions [@Andrew].
(Discrete differentiation) Consider the discrete data set, $(\vec{t},\vec{\psi}) = \{(t_0, \psi_0),..., (t_M, \psi_M) \}$, with $\{t_m\}^M_{m=0}$ defined as uniform quadrature nodes in (\[eqn\_idc\_os\_3\]). We denote $L^M$ as the $M$-th degree Lagrangian interpolant of $(t, \psi)$: $$\begin{aligned}
\label{eqn4.1}
L^M(t, \psi) = \sum^M_{m=0} c_m(t)\psi_m, \qquad c_m(t) = \prod_{n \neq m} \frac{t-t_n}{t_m-t_n}.\end{aligned}$$ An $s$-th degree discrete differentiation is a linear mapping that maps $\vec{\psi}$ to $\overrightarrow{\hat{d}_s \psi}$, where $$\begin{aligned}
\label{eqn4.2}
(\hat{d}_s \psi)_m = \frac{\partial^s}{\partial t^s} L^M(t, \psi)\mid_{t=t_m}.$$ This linear mapping can be represented by a matrix multiplication $\overrightarrow{\hat{d}_s \psi} = \hat{D}_s \cdot \vec{\psi}$, where $\hat{D}_s \in \mathbb{R}^{(M+1)\times(M+1)}$ and $(\hat{D}_s)_{mn} = \frac{\partial^s}{\partial t^s} c_n(t)\mid_{t=t_m}$, $m,n = 0,...,M.$
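One possible way to assemble the matrices $\hat{D}_s$ on a given set of nodes is to differentiate the Lagrange basis polynomials directly; the helper below is our own sketch (the function name is an assumption).

```python
import numpy as np

def diff_matrix(t, s=1):
    """D_s with (D_s)[m, n] = d^s/dt^s of the Lagrange basis c_n evaluated at t_m."""
    M = len(t) - 1
    D = np.zeros((M + 1, M + 1))
    for n in range(M + 1):
        c = np.poly(np.delete(t, n))   # monic polynomial vanishing at the other nodes
        c = c / np.polyval(c, t[n])    # normalize: c_n(t_n) = 1
        d = np.polyder(c, s)           # s-th derivative of the basis polynomial
        D[:, n] = np.polyval(d, t)
    return D
```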
The $(\hat{S}, \infty)$ Sobolev norm of the discrete data set $(\vec{t}, \vec{\psi})$ is defined as $$\begin{aligned}
\label{eqn4.3}
\|\vec{\psi}\|_{\hat{S}, \infty} \doteq \sum^{\hat{S}}_{s=0} \parallel\overrightarrow{\hat{d}_s\psi}\parallel_{\infty} = \sum^{\hat{S}}_{s=0} \parallel \hat{D}_s \cdot \vec{\psi}\parallel_{\infty},\end{aligned}$$ where, for $s=0$, $\hat{D}_0 = Id$ is the identity matrix, so that the $s=0$ term is simply $\parallel \vec{\psi}\parallel_{\infty}$.
(smoothness of a discrete data set) A discrete data set $(\vec{t},\vec{\psi}) = \{(t_0, \psi_0),..., (t_M, \psi_M) \}$ possesses $\hat{S} (\hat{S} \leq M)$ degrees of smoothness if $\parallel \vec{\psi}\parallel_{\hat{S},\infty}$ is bounded as $h\rightarrow 0$, with $h$ defined as the step size in the sub-interval $(t_{m},t_{m+1})$ where $m = 0,1,\cdots,M-1$.
As discussed in Section 2, all the listed operator splitting schemes are a form of ARK method. Therefore, we can use the framework of the IDC-ARK schemes in [@Andrew2] to enhance the order of the discretized scheme. Hence, we shall describe only what is needed for clarity when extending the results of [@Andrew2] to the fully implicit case under consideration here. For further details, we refer the reader to [@Andrew1; @Andrew2]. The theorems below apply to lifting the order of algebraic splitting as well as the discrete form of differential splitting. The splitting error discussed in Theorem \[thm0\] is directly related to the local truncation error. We note that the results in the following theorem can be generalized to all IDC-OS schemes that can be written in the form of an ARK method, and the proof is quite similar.
\[thm1\] Let $u(t)$ be the solution to IVP (\[eqn:split-ode\]). Assume $u(t)$, $f(t,u)$ and $f_{\nu}(t,u)$ are at least $\sigma$-times differentiable with respect to each argument, where $\sigma \geq M+2$. Consider one time interval of an IDC method with $t\in [0,H]$ and $M+1$ uniformly distributed quadrature points. Suppose an $r_0$-th order ARK method is used in the prediction step and $(r_1, r_2, ... , r_{c_s})$-th order ARK methods are used in the successive $c_s$ correction steps. Let $s_k = \sum^k_{j=0}r_j$. If $s_{c_s}\leq M+1$, then the local truncation error is of order ${{\mathcal{O}}}(h^{s_{c_s}+1})$ after $c_s$ correction steps.
The proof of Theorem \[thm1\] follows by induction from the following lemmas for the prediction and correction steps. For clarity, similarly to [@Andrew2], we will sketch the proof for Lie-Trotter splitting.
\[lem1\] (prediction step) Consider an $r_0$-th order ARK method for (\[eqn:split-ode\]) on $[0,H]$, with (M+1) uniformly distributed quadrature points. $u(t)$ and $f_{\nu}$ satisfy the smoothness requirement in Theorem \[thm1\] and let $\upsilon^{[0]} = (\upsilon^{[0]}_0, \upsilon^{[0]}_1, ... \upsilon^{[0]}_m, ..., \upsilon^{[0]}_M )$ be the numerical solution. Then,
\(1) The error vector $e^{[0]} = u - \upsilon^{[0]}$ satisfies $\|e^{[0]} \|_{\infty} \sim {\mathcal{O}}(h^{r_0+1}) $.
\(2) The rescaled error vector $\displaystyle{\bar{e}^{[0]} = \frac{1}{h^{r_0}} e^{[0]}}$ has $\min(\sigma-r_0, M)$ degrees of smoothness in the discrete sense.
*Proof:* (1) is obvious. We will prove (2) next. We drop the superscript $[0]$ as there is no ambiguity. Applying the discrete form of the Lie-Trotter splitting (\[3.3\]) to IVP (\[eqn:split-ode\]) with $\Lambda = 2$, we have $$\begin{aligned}
\begin{cases}\label{lem3_1}
\vspace{0.1in}
\displaystyle \frac{\widetilde{\upsilon} - \upsilon_m}{h} = f_1(t_{m+1}, \widetilde{\upsilon}),\\
\displaystyle \frac{\upsilon_{m+1} - \widetilde{\upsilon}}{h} = f_2(t_{m+1}, \upsilon_{m+1}),
\end{cases}
\end{aligned}$$ i.e. $$\label{lem3_2}
\upsilon_{m+1} = \upsilon_m + hf_1(t_{m+1}, \widetilde{\upsilon}) + hf_2(t_{m+1}, \upsilon_{m+1}).$$ Performing Taylor expansion of $f_1(t_{m+1},\widetilde{\upsilon})$ at $t = t_m$, we get $$\label{lem3_3}
\upsilon_{m+1} = \upsilon_m + hf_1(t_{m}, \upsilon_m)+ hf_2(t_{m+1}, \upsilon_{m+1}) + \sum^{\sigma-2}_{i=1} \frac{h^{i+1}}{i !} \frac{d^i f_1}{d t^i} (t_m, \upsilon_m) + {\mathcal{O}}(h^{\sigma}),$$ on the other hand, the exact solution satisfies $$\begin{aligned}
\label{lem3_4}
u_{m+1} & = & u_m + \int^{t_{m+1}}_{t_m} f_1(\tau, u(\tau)) d\tau + \int^{t_{m+1}}_{t_m} f_2(\tau, u(\tau)) d\tau \\ \nonumber
& = & u_m + h f_1(t_m, u_m) + \sum^{\sigma-2}_{i=1} \frac{h^{i+1}}{(i+1)!} \frac{d^i f_1}{d t^i}(t_m, u_m) \\ \nonumber
& + & h f_2(t_{m+1}, u_{m+1}) + \sum^{\sigma -2}_{i=1} \frac{(-1)^{i+1} h^{i+1}}{(i+1)!} \frac{d^i f_2}{d t^i} (t_{m+1}, u_{m+1}) + {\mathcal{O}}(h^{\sigma}).\end{aligned}$$ Subtracting [(\[lem3\_3\]) ]{}from (\[lem3\_4\]) gives $$\begin{aligned}
\label{lem3_5}
e_{m+1} & = & e_m + h(f_1(t_m, u_m) - f_1(t_m, \upsilon_m)) + h (f_2(t_{m+1}, u_{m+1}) - f_2(t_{m+1}, \upsilon_{m+1})) \\ \nonumber
& + & \sum^{\sigma-2}_{i=1} \frac{h^{i+1}}{(i+1)!} \frac{d^i f_1}{d t^i} (t_m, u_m) + \sum^{\sigma -2}_{i=1} \frac{(-1)^{i+1} h^{i+1}}{(i+1)!} \frac{d^i f_2}{d t^i} (t_{m+1}, u_{m+1}) - \sum^{\sigma-2}_{i=1} \frac{h^{i+1}}{i !} \frac{d^i f_1}{d t^i} (t_m, \upsilon_m) \\ &+& {{\mathcal{O}}}(h^{\sigma}),\end{aligned}$$ where $e_{m+1} = u_{m+1}-\upsilon_{m+1}$ is the error at $t_{m+1}$. Denote $$\begin{aligned}
\label{lem3_6}
l_m = (f_1(t_m, u_m) - f_1(t_m, \upsilon_m)) + (f_2(t_{m+1}, u_{m+1}) - f_2(t_{m+1}, \upsilon_{m+1}))\end{aligned}$$ and $$\label{lem3_7}
r_m = \sum^{\sigma-2}_{i=1} \frac{h^{i+1}}{(i+1)!} \frac{d^i f_1}{d t^i} (t_m, u_m) + \sum^{\sigma -2}_{i=1} \frac{(-1)^{i+1} h^{i+1}}{(i+1)!} \frac{d^i f_2}{d t^i} (t_{m+1}, u_{m+1}) - \sum^{\sigma-2}_{i=1} \frac{h^{i+1}}{i !} \frac{d^i f_1}{d t^i} (t_m, \upsilon_m).$$ We will use an inductive approach with respect to the degree of the smoothness $s$ to investigate the smoothness of the rescaled error vector $\bar{e} = \frac{e}{h}$, and $$\label{lem3_8}
(d_1\bar{e})_m = \frac{\bar{e}_{m+1}-\bar{e}_m}{h} = \frac{l_m}{h} + \frac{r_m}{h^2} + {{\mathcal{O}}}(h^{\sigma-2}).$$ First of all, $\bar{e}$ has at least zero degrees of smoothness in the discrete sense since $\|\bar{e}\| \sim {\mathcal{O}}(h)$. Assume $\bar{e}$ has $s\leq M-1$ degrees of smoothness, we will show $d_1\bar{e}$ has $s$ degrees of smoothness, from which we can conclude $\bar{e}$ has $(s+1)$ degrees of smoothness. $$\begin{aligned}
\label{lem3_9}
l_m &=& (f_1(t_m, u_m)-f_1(t_m, \upsilon_m)) + (f_2(t_{m+1}, u_{m+1}) - f_2(t_{m+1}, \upsilon_{m+1})), \\ \nonumber
&=& \sum^{\sigma-2}_{i=1} \frac{1}{i !} (e_m)^i \frac{\partial^i f_1}{\partial u^i} (t_m, u_m) + \sum^{\sigma-2}_{i=1} \frac{1}{i !} (e_{m+1})^i \frac{\partial^i f_2}{\partial u^i}(t_{m+1},u_{m+1}) \\ \nonumber & +& {{\mathcal{O}}}((e_m)^{\sigma-1}) + {{\mathcal{O}}}((e_{m+1})^{\sigma-1}) \\ \nonumber
&=& \sum^{\sigma-2}_{i=1} \frac{h^i}{i !} (\bar{e}_m)^i \frac{\partial^i f_1}{\partial u^i} (t_m, u_m) + \sum^{\sigma-2}_{i=1} \frac{h^i}{i !} (\bar{e}_{m+1})^i \frac{\partial^i f_2}{\partial u^i}(t_{m+1},u_{m+1})\\ \nonumber & + & {{\mathcal{O}}}((h \bar{e}_m)^{\sigma-1}) + {{\mathcal{O}}}((h \bar{e}_{m+1})^{\sigma-1}).\end{aligned}$$ By assuming that $f_1$ and $f_2$ have at least $\sigma$ degrees of smoothness, we can conclude $\frac{\partial^i f_1}{\partial u^i}$ and $\frac{\partial^i f_2}{\partial u^i}$ have at least $\sigma - i -1$ degrees of smoothness, which implies $h^{i-1}\frac{\partial^i f_1}{\partial u^i}$ and $h^{i-1}\frac{\partial^i f_2}{\partial u^i}$ have at least $\sigma-2$ degrees of smoothness. Therefore $\frac{l_m}{h}$ will have $\min(\sigma-2, s)$ degrees of smoothness. Also, $\frac{r_m}{h^2}$ will have at least $s$ degrees of smoothness. Therefore, $d_1 \bar{e}$ has $s$ degrees of smoothness. Therefore, $\bar{e}$ has $(s+1)$ degrees of smoothness. Notice that $\sigma \ge M+2$, we complete the inductive approach and conclude $\bar{e}$ has $M$ degrees of smoothness. $\square$
Before investigating the correction step for IDC-OS schemes, we first describe some details of the error equations. Notice that the error equation after $(k-1)$ correction steps has the form of (\[eqn\_idc\_os\_13\]); with the notation $Q^{(k-1)}(t)$, we actually solve the problem (\[eqn\_idc\_os\_15\]) on the time interval $[t_m, t_{m+1}]$ via discrete Lie-Trotter splitting as follows $$\begin{aligned}
\begin{cases}\label{lem4_1}
\vspace{0.1in}
\displaystyle \frac{\widetilde{\vartheta} - \vartheta^{[k]}_m}{h} = G_1^{(k-1)}(t_{m+1}, \widetilde{\vartheta}),\\
\displaystyle \frac{\vartheta^{[k]}_{m+1} - \widetilde{\vartheta}}{h} = G_2^{(k-1)}(t_{m+1}, \vartheta^{[k]}_{m+1}),
\end{cases}
\end{aligned}$$ through which $\vartheta^{[k]}_{m+1}$ is updated. Furthermore, we can update $\delta^{[k]}_{m+1}$ by (\[eqn\_idc\_os\_14\]) and (\[eqn\_idc\_os\_9\]). Similarly, if we apply Lie-Trotter splitting to the scaled error equation (\[lemma1\_eqn9\]) over the time interval $[t_m, t_{m+1}]$, we have $$\begin{aligned}
\begin{cases}\label{lem4_3}
\vspace{0.1in}
\displaystyle \frac{\widetilde{\bar{\vartheta}} - \bar{\vartheta}^{[k]}_m}{h} = \bar{G}_1^{(k-1)}(t_{m+1}, \widetilde{\bar{\vartheta}}),\\
\displaystyle \frac{\bar{\vartheta}^{[k]}_{m+1} - \widetilde{\bar{\vartheta}}}{h} = \bar{G}_2^{(k-1)}(t_{m+1}, \bar{\vartheta}^{[k]}_{m+1}),
\end{cases}
\end{aligned}$$ from which we obtain $\bar{\vartheta}^{[k]}_{m+1}$ and further $\bar{\delta}^{[k]}_{m+1}$.
\[lem2\] (correction step) Let $u(t)$ and $f_{\nu}$ satisfy the smoothness requirements in Theorem \[thm1\]. Suppose $e^{[k-1]} \sim {{\mathcal{O}}}(h^{s_{k-1}+1})$ and $\displaystyle {\bar{e}^{[k-1]} = \frac{1}{h^{s_{k-1}}} e^{[k-1]}}$ has $(M+1-s_{k-1})$ degrees of smoothness in the discrete sense after the $(k-1)$-th correction step. Then, after the $k$-th correction step using an $r_k$-th order ARK method, with $k \leq c_s$,
\(1) [$ \| e^{[k]} \|_{\infty} \sim {{\mathcal{O}}}(h^{s_k+1}) $]{}.
\(2) The rescaled error vector $\displaystyle{ \bar{e}^{[k]} = \frac{1}{h^{s_k}} e^{[k]} }$ has $M+1-s_k$ degrees of smoothness in the discrete sense.
*Proof:* The proof of Lemma \[lem2\] is similar to that of Lemma \[lem1\], but more tedious. As in [@Andrew2], we outline the proof here and present the difference between the proofs for IDC-OS and for IDC-RK and IDC-ARK in Proposition \[prop1\]; we refer the reader to [@Andrew1] for details.
1. Subtract the numerical error vector from the integrated error equation $$\label{lem4_eqn1}
e^{[k]}_{m+1} = e^{[k-1]}_{m+1} - \delta^{[k]}_{m+1}$$ and make necessary substitution and expansion via the rescaled equations.
2. Bound the error $e^{[k]}$ by an inductive approach.
The following proposition concerns the equivalence of the rescaled and unscaled error vectors for Lie-Trotter splitting. We remark that its proof shows the difference between the proofs for IDC-OS and for IDC-RK in [@Andrew1] and IDC-ARK in [@Andrew2].
\[prop1\] Consider a single step of an IDC scheme constructed with the Lie-Trotter splitting scheme for the error equation, and assume the exact solution $u(t)$ and $f_{\nu}$ satisfy the smoothness requirement in Theorem \[thm1\]. Then, for a sufficiently smooth error function $e^{(k-1)}(t)$, the difference between the Taylor series for the exact error $e^{(k-1)}(t_{m+1})$ and the numerical error $\delta^{[k]}_{m+1}$ is ${{\mathcal{O}}}(h^{k+2})$ after $k$ correction steps.
*Proof:* Notice that the left- and right-hand-side terms of the rescaled error equation (\[lemma1\_eqn9\]) are ${{\mathcal{O}}}(1)$; applying the discrete form of the Lie-Trotter splitting scheme (\[3.3\]) to (\[lemma1\_eqn9\]) results in $$\label{prop1_eqn1}
\bar{Q}^{[k-1]}_{m+1} - \bar{\vartheta} ^{[k]}_{m+1} \sim {{\mathcal{O}}}(h^2).$$ Since $$\label{prop1_eqn2}
\bar{Q}^{[k-1]}_{m+1} = \frac{1}{h^k} Q^{[k-1]}_{m+1} = \frac{1}{h^k} (e^{[k-1]}_{m+1} - \int^{t_{m+1}}_{t_m} \varepsilon^{(k-1)}(\tau) d \tau).$$ The proof is complete if the following argument holds. $$\label{prop1_eqn3}
h^k \bar{\delta}^{[k]}_{m} = \delta^{[k]}_{m} + {{\mathcal{O}}}(h^{\sigma}), \qquad m = 0, 1, 2, ..., M,$$ which is also equivalent to $$\label{prop1_eqn4}
h^k \bar{\vartheta}^{[k]}_{m} = \vartheta^{[k]}_{m} +{{\mathcal{O}}}(h^{\sigma}), \qquad m = 0, 1, 2, ..., M.$$ We will prove (\[prop1\_eqn4\]) by induction. Equation (\[prop1\_eqn4\]) holds for $m=0$ since the initial condition for the error equation is set to $0$. Assume (\[prop1\_eqn4\]) holds for $m$; then $$\begin{aligned}
\label{prop1_eqn5}
\widetilde{\bar{\vartheta}} & = & \bar{\vartheta}^{[k]}_m + h \bar{G}^{(k-1)}_1 (t_{m+1}, \widetilde{\bar{\vartheta}} ), \\ \nonumber
& = & \bar{\vartheta}^{[k]}_m + h \sum^{\sigma -1}_{i = 0} \frac{h^i}{i !} \frac{d^i }{d t^i} \bar{G}_1^{(k-1)}(t_m, \bar{\vartheta}^{[k]}_m) + {{\mathcal{O}}}(h^{\sigma}) \\ \nonumber
& = & \bar{\vartheta}^{[k]}_m + h \sum^{\sigma -1}_{i = 0} \frac{h^i}{i !} \frac{d^i }{d t^i} \left(\frac{1}{h^k} G_1^{(k-1)}(t_m, h^k \bar{\vartheta}^{[k]}_m) \right ) + {{\mathcal{O}}}(h^{\sigma}) \\ \nonumber
& = & \frac{1}{h^k} \left( \vartheta^{[k]}_m + h \sum^{\sigma -1}_{i = 0} \frac{h^i}{i !} \frac{d^i }{d t^i} G_1^{(k-1)}(t_m, h^k \bar{\vartheta}^{[k]}_m) \right ) + {{\mathcal{O}}}(h^{\sigma}).\end{aligned}$$ On the other hand, Taylor expanding $\widetilde{\vartheta}$ at $t_m$ will give us $$\begin{aligned}
\label{prop1_eqn6}
\widetilde{\vartheta} & = & \vartheta^{[k]}_m + h G_1^{(k-1)}(t_{m+1}, \widetilde{\vartheta} ) \\ \nonumber
& = & \vartheta^{[k]}_m + h \sum^{\sigma -1}_{i = 0} \frac{h^i}{i !} \frac{d^i }{d t^i} G_1^{(k-1)}(t_m, h^k \bar{\vartheta}^{[k]}_m) + {{\mathcal{O}}}(h^{\sigma}).\end{aligned}$$ Comparing (\[prop1\_eqn5\]) and (\[prop1\_eqn6\]), we conclude that $$\label{prop1_eqn7}
\widetilde{\vartheta} = h^k \widetilde{\bar{\vartheta}} +{{\mathcal{O}}}(h^{\sigma}).$$ A similar approach applied to the second equations in (\[lem4\_1\]) and (\[lem4\_3\]) results in $$\label{prop1_eqn8}
\vartheta^{[k]}_{m+1} =h^k\bar{\vartheta}^{[k]}_{m+1} + {\mathcal{O}}(h^{\sigma}),$$ which completes the inductive proof for (\[prop1\_eqn4\]). $\square$
Stability {#subsec:stability}
---------
In this subsection, we study the linear stability of the proposed IDC-OS numerical schemes. As is common practice [@Hairer], we consider the test problem $$\begin{aligned}
\label{eqn6.1}
u_t = \lambda u,\end{aligned}$$ and observe how the numerical scheme behaves for different complex values of $\lambda$. Without loss of generality, we will assume that $u(0) = 1$, and we’ll consider a single time step of length $\Delta t = 1$. The stability region of a numerical method is then defined as
$$\begin{aligned}
\mathbb{D} := \{ \lambda \in \mathbb{C} : \left| u\left( 1 \right) \right| \leq 1 \}.\end{aligned}$$
An additional complication comes from the fact that an operator splitting scheme requires splitting the right-hand side of (\[eqn6.1\]) into $\Lambda$ parts. For simplicity, we consider the special case $\Lambda = 2$ with $\lambda = \lambda_1+\lambda_2$, and we further assume that $\lambda_1 = \lambda_2$. In Figure \[fig6\_1\] we present stability regions for IDC-OS methods based on three separate base solvers: Lie-Trotter splitting, Strang splitting and ADI splitting. The stability region of Lie-Trotter splitting with the IDC procedure is everything outside the curves, whereas the stability regions for Strang splitting and ADI are the finite regions inside the curves. The number in the legend denotes the order of the method. For example, “IDC4" represents fourth-order methods achieved by the IDC-OS schemes; for Lie-Trotter splitting, we require three correctors to attain fourth-order accuracy, whereas Strang and ADI splitting only require a single correction. Our first observation is that, of the three base solvers, Lie-Trotter splitting is the only one that retains an infinite region of absolute stability, whereas Strang splitting and ADI reduce to finite regions of absolute stability.
Consistent with other *implicit* IDC methods, the stability regions of our implicit IDC-OS methods shrink as the number of correction steps increases. We have observed that larger stability regions can be obtained if we include more quadrature nodes for evaluating the integral of the residual.[^4] This leads us to conjecture that a more accurate numerical approximation of the residual integral is important for obtaining larger stability regions.
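For concreteness, the following minimal sketch shows how a plot of this type can be generated for the prediction step of an IDC-OS scheme built on Lie-Trotter splitting, applied to the split test problem with $\lambda_1=\lambda_2=\lambda/2$ and $\Delta t = 1$. It assumes backward-Euler substeps; the correction sweeps of the full IDC-OS scheme would modify the amplification factor, so this is an illustrative assumption rather than the exact solver used to produce Figure \[fig6\_1\].

```python
import numpy as np

def lie_trotter_amp(lam, dt=1.0):
    """Amplification factor of one Lie-Trotter prediction step with
    backward-Euler substeps applied to u' = (lam/2) u + (lam/2) u."""
    l1 = l2 = lam / 2.0
    return 1.0 / ((1.0 - dt * l1) * (1.0 - dt * l2))

# Scan a window of the complex plane and mark |R(lambda)| <= 1 (stable).
re, im = np.meshgrid(np.linspace(-10, 10, 401), np.linspace(-10, 10, 401))
lam = re + 1j * im
with np.errstate(divide="ignore", invalid="ignore"):
    stable = np.abs(lie_trotter_amp(lam)) <= 1.0
# The boundary of the stability region is the level set |R(lambda)| = 1,
# which can be drawn, e.g., with matplotlib's contour at level 1.
```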
![Stability region for IDC-OS schemes with different number of corrections. (a) Lie-Trotter splitting; (b) Strang splitting; (c) ADI splitting. []{data-label="fig6_1"}](fig_Lie_test_new-eps-converted-to){width="2.0in"}
(a)
![Stability region for IDC-OS schemes with different number of corrections. (a) Lie-Trotter splitting; (b) Strang splitting; (c) ADI splitting. []{data-label="fig6_1"}](fig_strang-eps-converted-to){width="2.0in"}
(b)
![Stability region for IDC-OS schemes with different number of corrections. (a) Lie-Trotter splitting; (b) Strang splitting; (c) ADI splitting. []{data-label="fig6_1"}](fig_adi-eps-converted-to){width="2.0in"}
(c)
Application of IDC-OS schemes to parabolic PDEs {#sec:5}
===============================================
In this section, we will discuss how to apply the IDC-OS framework to the parabolic problem of the form $$\begin{aligned}
\begin{cases}\label{eqn:parabolic}
\displaystyle{u_t}= \nabla \cdot (a(x, y)\nabla u) +{s(t,u)}, \qquad (x,y) \in \Omega \\
u(0,x,y)=u_0(x,y), \qquad \\
u = g, \qquad (x,y) \in \partial \Omega.
\end{cases}\end{aligned}$$ The methods can be generalized to higher-dimensional settings, but in this work we restrict our attention to two dimensions. For differential splitting methods, it is quite straightforward to apply IDC-OS schemes if we solve (\[eqn:parabolic\]) via the method of lines: after spatial discretization, one obtains semi-discrete ODE systems of the same form as (\[eqn:split-ode\]). It is natural to let one operator, say $L_1$, collect the terms in the $x$-direction, while $L_2$ collects the terms in the $y$-direction. As for algebraic splitting, one major difficulty in applying the IDC-OS framework to PDEs is how to handle the boundary and initial conditions for the error equation. In what follows, we introduce an ADI formulation that effectively deals with these issues. For simplicity, we only discuss the case in which there is no nonlinear source in (\[eqn:parabolic\]), i.e. $s(t, u) = 0$.
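To make the dimensional splitting concrete, the sketch below assembles a second-order method-of-lines discretization of $\nabla\cdot(a\nabla u)$ into the two one-directional operators $L_1$ and $L_2$. It ignores boundary contributions and uses a crude finite-difference approximation of $a_x$ and $a_y$, so it should be read as a schematic of the splitting rather than the sixth-order discretization used in our experiments.

```python
import numpy as np
import scipy.sparse as sp

def split_diffusion_operators(a, h):
    """Second-order semi-discretization of (a u_x)_x + (a u_y)_y on an
    nx-by-ny interior grid (boundary data omitted).  Returns sparse
    L1 (x-direction terms) and L2 (y-direction terms) so that the
    semi-discrete system reads U_t = L1 U + L2 U."""
    nx, ny = a.shape
    ex, ey = np.ones(nx), np.ones(ny)
    # 1D central-difference matrices: D2* ~ second derivative, D1* ~ first.
    D2x = sp.diags([ex[:-1], -2 * ex, ex[:-1]], [-1, 0, 1]) / h**2
    D1x = sp.diags([-ex[:-1], ex[:-1]], [-1, 1]) / (2 * h)
    D2y = sp.diags([ey[:-1], -2 * ey, ey[:-1]], [-1, 0, 1]) / h**2
    D1y = sp.diags([-ey[:-1], ey[:-1]], [-1, 1]) / (2 * h)
    Ix, Iy = sp.identity(nx), sp.identity(ny)
    A = sp.diags(a.ravel())            # a(x_i, y_j) on the grid (row-major)
    ax, ay = np.gradient(a, h)         # crude a_x, a_y, good enough for a sketch
    Ax, Ay = sp.diags(ax.ravel()), sp.diags(ay.ravel())
    # (a u_x)_x = a u_xx + a_x u_x, and similarly in y.
    L1 = A @ sp.kron(D2x, Iy) + Ax @ sp.kron(D1x, Iy)
    L2 = A @ sp.kron(Ix, D2y) + Ay @ sp.kron(Ix, D1y)
    return L1.tocsr(), L2.tocsr()
```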
Classical ADI starts by applying the second-order Crank-Nicolson time discretization to the continuous PDE (\[eqn:parabolic\]); this produces the semi-discrete scheme $$\label{eqn3.4.2}
\frac{u^{n+1}-u^n}{\Delta t} = \frac{ a }{2}(u^{n+1}_{xx}+u^n_{xx})+\frac{
a_x}{2}(u^{n+1}_x+u^n_x)+\frac{a}{2}(u^{n+1}_{yy}+u^n_{yy})+\frac{a_y}{2}(u^{n+1}_y+u^n_y),$$ where $\Delta t = t^{n+1}-t^n$ is the time step, $a = a(x,y)$, $a_x = a_x(x,y)$ and $a_y= a_y(x,y)$. On a two-dimensional structured mesh, we use central difference approximations (of order $2$, $4$ or $6$) for the spatial operators $\frac{\partial^2}{\partial x^2}$, $\frac{\partial}{\partial x}$, $\frac{\partial^2}{\partial y^2}$ and $\frac{\partial}{\partial y}$, and we denote them by $A_x, B_x, A_y, B_y$, respectively. If the spatial discretization is performed on an $N_x\times N_y$ grid, there are $N_x\times N_y$ equations of the form (\[eqn3.4.2\]). Denoting by $\Upsilon$ the vector of unknowns, these $N_x\times N_y$ equations can be written in matrix form, with the boundary conditions incorporated, $$\begin{aligned}
\label{eqn3.4.3}
\frac{{\Upsilon}^{n+1}-{\Upsilon}^n}{\Delta t}& =& \nonumber
\frac{a}{2}(A_x{\Upsilon}^{n+1}+A_x{\Upsilon}^n)+\frac{a_x}{2}(B_x{\Upsilon}^{n+1}+B_x{\Upsilon}^n) \\ \nonumber
& + &\frac{a}{2}(A_y{\Upsilon}^{n+1}+A_y{\Upsilon}^n)+\frac{a_y}{2}(B_y{\Upsilon}^{n+1}+B_y{\Upsilon}^n)\\
& + & \frac{a}{2}(g^{n+1}_{A_x}+g^n_{A_x}) +\frac{a_x}{2}(g^{n+1}_{B_x}+g^n_{B_x}) \\ \nonumber
& + & \frac{a}{2}(g^{n+1}_{A_y}+g^n_{A_y})+ \frac{a_y}{2}(g^{n+1}_{B_y}+g^n_{B_y})~,~\end{aligned}$$ where $g_{A_x}$, $g_{B_x}$, $g_{A_y}$, and $g_{B_y}$ are the boundary terms. Notice that, unlike [@Doug3], we enforce the boundary conditions strictly in the scheme. It is easy to verify that the method given in (\[eqn3.4.3\]) is second-order accurate in time. Specifically, if we use sixth-order central differences for the spatial derivatives, as in our numerical simulations, the local truncation error of (\[eqn3.4.3\]) is ${{\mathcal{O}}}(\Delta t \Delta x^6+\Delta t^3)$. Denoting $$\begin{aligned}
& &\nonumber {J_1} = \frac{\Delta t}{2}(aA_x+a_xB_x), \\
& & {J_2} = \frac{\Delta t}{2}(aA_y+a_yB_y), \\
& & \nonumber S = \frac{a}{2}(g^{n+1}_{A_x}+g^n_{A_x}) +\frac{a_x}{2}(g^{n+1}_{B_x}+g^n_{B_x}) + \frac{a}{2}(g^{n+1}_{A_y}+g^n_{A_y})+ \frac{a_y}{2}(g^{n+1}_{B_y}+g^n_{B_y})~,~\end{aligned}$$ [ (\[eqn3.4.3\])]{} is equivalent to $$\label{eqn3.4.4}
(I-{J_1}-{J_2}){\Upsilon}^{n+1} = (I+{J_1}+{J_2}){\Upsilon}^n+\Delta t S.$$ To set up an ADI scheme, we follow [@Doug3] by adding one term ${J_1J_2}{\Upsilon}^{n+1}$ to both sides of [ (\[eqn3.4.4\])]{}, which results in $$\label{eqn3.4.5}
(I-{J_1}-{J_2}+{J_1J_2}){\Upsilon}^{n+1} = (I+{J_1+J_2+J_1J_2}){\Upsilon}^n+{J_1J_2}({\Upsilon}^{n+1}-{\Upsilon}^n)+\Delta t S.$$ Then it is straightforward to factor [(\[eqn3.4.5\])]{} as $$\label{eqn3.4.7}
(I-{J_1})(I-{J_2}){\Upsilon}^{n+1} = (I+{J_1})(I+{J_2}){\Upsilon}^n+{J_1J_2}({\Upsilon}^{n+1}-{\Upsilon}^n) +\Delta t S.$$ Let us consider the second term on the right hand side of [ (\[eqn3.4.7\])]{}. Observe that $$\label{eqn3.4.8}
\Upsilon^{n+1} = \Upsilon^n +{{\mathcal{O}}}(\Delta t),$$ and that $J_1$ and $J_2$ each carry a factor of $\Delta t$, so that ${J_1J_2}({\Upsilon}^{n+1}-{\Upsilon}^n)\sim {{\mathcal{O}}}(\Delta t^3)$. Hence, the second term on the right-hand side of (\[eqn3.4.7\]) is of the same order as the truncation error and can be dropped. Therefore, the scheme reduces to $$\label{eqn3.4.9}
(I-{J_1})(I-{J_2}){\Upsilon}^{n+1} = (I+{J_1})(I+{J_2}){\Upsilon}^n+\Delta t S.$$ To solve (\[eqn3.4.9\]), a two-step method was proposed in [@Doug2; @Peachman], $$\begin{aligned}
\begin{cases}\label{eqn3.4.10}
\vspace{0.1in}
(I-{J_1})\tilde{{\Upsilon}}^{n+\frac{1}{2}} = (I+{J_2}){\Upsilon}^n + \frac{\Delta t}{2}S, \qquad \text{x-sweep},\\
(I-{J_2}){\Upsilon}^{n+1} = (I+{J_1})\tilde{{\Upsilon}}^{n+\frac{1}{2}} + \frac{\Delta t}{2}S, \qquad \text{y-sweep}.
\end{cases}\end{aligned}$$ However, to be symbolically consistent, symmetric, and well suited for the IDC method, we choose to split the boundary values $S$ in the following way, $$\begin{aligned}
\begin{cases}\label{eqn3.4.11}
\vspace{0.1in}
(I-{J_1})\tilde{{\Upsilon}}^{n+\frac{1}{2}} = (I+{J_2}){\Upsilon}^n + S_1, \qquad \text{x-sweep},\\
(I-{J_2}){\Upsilon}^{n+1} = (I+{J_1})\tilde{{\Upsilon}}^{n+\frac{1}{2}} + S_2, \qquad
\text{y-sweep},
\end{cases}
\end{aligned}$$ with boundary terms defined as $$\begin{aligned}
\begin{cases}\label{eqn3.4.12}
\vspace{0.1in}
S_1 = \frac{\Delta t}{2}(ag^{n+1}_{A_x}+a_xg^{n+1}_{B_x}+ag^n_{A_y}+a_yg^n_{B_y}),\\
S_2 = \frac{\Delta t}{2}(ag^n_{A_x}+a_xg^n_{B_x}+ag^{n+1}_{A_y}+a_yg^{n+1}_{B_y}).
\end{cases}
\end{aligned}$$ It should also be pointed out that the boundary values $S$ are associated with the boundary function $g$ at times $t = t_n$ and $t = t_{n+1}$, rather than at $t_{n+\frac{1}{2}}$; therefore, no error is introduced from the intermediate values $\tilde{\Upsilon}$ on the boundary. This is important for setting the boundary conditions of the error equation when we combine the ADI scheme with the IDC methodology. Because the Dirichlet boundary conditions of (\[eqn:parabolic\]) are exact and fully accounted for in the prediction step, the boundary terms do not appear in the correction steps.
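A minimal sketch of one step of the ADI scheme (\[eqn3.4.11\]) is given below. It assumes the sparse matrices $J_1$, $J_2$ and the boundary vectors $S_1$, $S_2$ have already been assembled as described above, and it uses generic sparse direct solves in place of the banded solvers one would employ in practice.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def adi_step(U, J1, J2, S1, S2):
    """One ADI step of (I - J1)(I - J2) U^{n+1} = (I + J1)(I + J2) U^n + dt*S,
    realized as the x-sweep / y-sweep pair with split boundary data
    S1 + S2 = dt*S (cf. (eqn3.4.11)-(eqn3.4.12))."""
    I = sp.identity(U.shape[0], format="csr")
    # x-sweep: (I - J1) U_half = (I + J2) U^n + S1
    U_half = spla.spsolve(I - J1, (I + J2) @ U + S1)
    # y-sweep: (I - J2) U^{n+1} = (I + J1) U_half + S2
    return spla.spsolve(I - J2, (I + J1) @ U_half + S2)
```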
Numerical examples {#numer}
==================
In this section, we present numerical results for the proposed implicit IDC-OS schemes on a variety of examples of the parabolic initial-boundary value problem (\[eqn:parabolic\]), with the aim of demonstrating the efficiency of the proposed time-stepping methods. We begin with two linear examples of (\[eqn:parabolic\]), and then present an example of the heat equation with a nonlinear forcing term. Our final two examples come from mathematical biology: the Fitzhugh-Nagumo reaction-diffusion model and the Schnakenberg model. The present work is in two dimensions, and every computation is performed on a square domain with a Cartesian grid. We use sixth-order central differences for the spatial discretization so that the temporal error is dominant in the measured numerical error.
[**Example 1.**]{} **Linear example: Dirichlet boundary conditions.** We solve the initial-boundary value problem (\[eqn:parabolic\]) with constant coefficient $a(x,y) = 1$ on the domain $[-1,1]\times[-1,1]$. The initial condition is $u_0(x,y) = (1-y)e^x$ and the time-dependent boundary condition is ${g(x,y,t)} = (1-y)e^{t+x}$, so that (\[eqn:parabolic\]) has the exact solution $u(x,y,t)=(1-y)e^{t+x}$. Here $N_{x,y} = N_x = N_y$ denotes the number of spatial grid points in the $x$- and $y$-directions, $N_t$ is the number of time steps used on the time interval $[0, T]$, where $T$ is the final time, $c_s$ is the number of correction steps, $u$ is the exact solution, and ${\upsilon}$ is the numerical solution. We solve Example 1 with the first-order Lie-Trotter splitting (\[3.3\]), the second-order Strang splitting (\[3.2.1\]) and the ADI splitting (\[eqn3.4.11\]), with all splittings performed dimensionally; the numerical errors are shown in Tables \[table5.1\], \[table5.2\] and \[table5.4\], respectively. We clearly see that the schemes achieve the designed order with the IDC methodology: with one more correction step, the order of the scheme increases by 1 for Lie-Trotter splitting and by 2 for Strang splitting and ADI splitting.
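The order columns reported in the tables below are the usual observed temporal orders computed from errors at consecutive values of $N_t$ on a fixed spatial grid; a small helper of the following kind reproduces them (the input shown is the $c_s=0$ row of Table \[table5.1\]).

```python
import numpy as np

def observed_orders(errors, Nt):
    """Observed order between consecutive runs:
    log(e_prev / e_curr) / log(Nt_curr / Nt_prev)."""
    e, n = np.asarray(errors), np.asarray(Nt, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(n[1:] / n[:-1])

# Lie-Trotter prediction (c_s = 0) row of Table 5.1:
print(observed_orders([1.53e-5, 1.15e-5, 9.24e-6, 7.70e-6], [60, 80, 100, 120]))
# -> approximately [0.99, 0.98, 1.00]
```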
$N_{x, y} = 45$
----------------- ------------- ------- ------------ ------- ------------- ------- ------------- -------
Correction $N_t = 60 $ order $N_t = 80$ order $N_t = 100$ order $N_t = 120$ order
$c_s$ = 0 1.53e-5 – 1.15e-5 0.99 9.24e-6 0.98 7.70e-6 1.00
$c_s$ = 1 1.90e-7 – 1.09e-7 1.93 7.08e-8 1.93 4.94e-8 1.97
$c_s$ = 2 3.10e-9 – 1.47e-9 2.59 8.06e-10 2.69 4.87e-10 2.76
: Linear example with Dirichlet boundary conditions. Errors $\parallel u-{\upsilon}_{N_t}\parallel_{\infty}$ for Lie-Trotter splitting method, $T = 0.025$.[]{data-label="table5.1"}
$N_{x, y} = 45$
----------------- ------------- ------- ------------ ------- ------------- ------- ------------- -------
Correction $N_t = 60 $ order $N_t = 80$ order $N_t = 100$ order $N_t = 120$ order
$c_s$ = 0 3.02e-5 – 1.69e-5 2.02 1.08e-5 2.01 7.55e-6 1.96
$c_s$ = 1 7.15e-7 – 2.45e-7 3.72 1.04e-7 3.84 5.20e-8 3.80
$c_s$ = 2 3.64e-10 – 8.06e-11 5.24 2.28e-11 5.66 7.16e-12 6.35
: Linear example with Dirichlet boundary conditions. Errors $\parallel u-{\upsilon}_{N_t}\parallel_{\infty}$ for Strang splitting method, $T = 0.025$.[]{data-label="table5.2"}
$N_{x, y} = 150$
------------------ ------------- ------- ------------ ------- ------------- ------- ------------- -------
Correction $N_t = 60 $ order $N_t = 80$ order $N_t = 100$ order $N_t = 120$ order
$c_s$ = 0 3.68e-5 – 2.07e-5 2.00 1.32e-5 2.00 9.20e-6 2.00
$c_s$ = 1 4.49e-7 – 2.08e-7 2.67 1.03e-7 3.13 5.07e-8 3.91
$c_s$= 2 1.18e-7 – 1.69e-8 6.74 4.74e-9 5.71 1.59e-9 5.98
: Linear example with Dirichlet boundary conditions. Errors $\parallel u-{\upsilon}_{N_t}\parallel_{\infty}$ for IDC-OS based on ADI splitting, $T = 0.025$.[]{data-label="table5.4"}
[**Example 2.**]{} **Linear example: periodic boundary conditions.** In this example, we solve (\[eqn:parabolic\]) with $a(x,y) = 2 + 0.5\sin(\pi(4x+y))$ and initial condition $u_0(x,y) = \sin(2\pi(x+y))$. We compute errors using the difference between two successive refinements: $$\label{eqnerror}
\text{error} = \parallel {\upsilon}_{N_t} - {\upsilon}_{\frac{N_t}{2}}\parallel_{\infty},$$ where $N_t$ denotes the number of time steps.
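In code, this self-convergence test amounts to comparing the final-time solutions of two runs that differ only in the number of time steps; the sketch below assumes a generic time integrator `solve(u0, Nt)` returning the final-time solution on a fixed spatial grid.

```python
import numpy as np

def successive_refinement_errors(solve, u0, Nt_list):
    """||v_{Nt} - v_{Nt/2}||_inf for each consecutive pair in Nt_list
    (e.g. Nt_list = [20, 40, 80, 160, 320]), where solve(u0, Nt) returns
    the solution at the final time on a fixed grid."""
    sols = {Nt: solve(u0, Nt) for Nt in Nt_list}
    return [np.max(np.abs(sols[b] - sols[a]))
            for a, b in zip(Nt_list[:-1], Nt_list[1:])]
```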
Again, we present results using three splitting options: Lie-Trotter, Strang and ADI splitting. Convergence studies are presented in Tables \[table5.5\], \[table5.6\] and \[table5.7\], and we again observe that the schemes achieve the designed order. Note that the numerical error with two correctors at $N_t = 320$ is not reliable in the cases of Strang and ADI splitting because of machine-precision limitations.
$N_{x, y} = 45$
----------------- ------------- ------- ------------ ------- ------------- ------- ------------- -------
Correction $N_t = 40 $ order $N_t = 80$ order $N_t = 160$ order $N_t = 320$ order
$c_s$ = 0 4.65e-3 – 2.35e-3 0.98 1.18e-3 0.99 5.94e-4 0.99
$c_s$ = 1 1.85e-4 – 5.68e-5 1.70 1.63e-5 1.80 4.44e-6 1.88
$c_s$ = 2 3.47e-6 – 6.55e-7 2.41 1.19e-7 2.46 1.88e-8 2.66
: Linear example with periodic boundary conditions. Errors $\parallel {\upsilon}_{N_t} - {\upsilon}_{\frac{N_t}{2}}\parallel_{\infty}$ for Lie-Trotter splitting method, $T = 0.025$.[]{data-label="table5.5"}
$N_{x, y} = 45$
----------------- ------------- ------- ------------ ------- ------------- ------- ------------- -------
Correction $N_t = 40 $ order $N_t = 80$ order $N_t = 160$ order $N_t = 320$ order
$c_s$ = 0 5.24e-5 – 1.31e-5 2.00 3.29e-6 1.99 8.22e-7 2.00
$c_s$ = 1 3.30e-9 – 2.06e-10 4.00 1.29e-11 4.00 8.04e-13 4.00
$c_s$ = 2 5.80e-12 – 4.90e-14 6.89 7.77e-16 5.98 1.11e-16 2.81
: Linear example with periodic boundary conditions. Errors $\parallel {\upsilon}_{N_t} - {\upsilon}_{\frac{N_t}{2}}\parallel_{\infty}$ for Strang splitting method, $T = 0.025$.[]{data-label="table5.6"}
$N_{x, y} = 200$
------------------ ------------- ------- ------------ ------- ------------- ------- ------------- -------
Correction $N_t = 40 $ order $N_t = 80$ order $N_t = 160$ order $N_t = 320$ order
$c_s$ = 0 7.77e-5 – 1.94e-5 2.00 4.85e-6 2.00 1.21e-6 2.00
$c_s$ = 1 1.93e-8 – 1.20e-9 4.00 7.52e-11 4.00 4.70e-12 4.00
$c_s$ = 2 1.43e-11 – 2.23e-13 6.01 3.56e-15 5.96 1.04e-16 5.10
: Linear example with periodic boundary conditions. Errors $\parallel {\upsilon}_{N_t} - {\upsilon}_{\frac{N_t}{2}}\parallel_{\infty}$ for ADI splitting method, $T = 0.05$.[]{data-label="table5.7"}
[**Example 3.**]{} **Nonlinear equation with Dirichlet boundary conditions.** We now test the proposed IDC-OS methods on a nonlinear example of with a known exact solution.
$$\begin{aligned}
\begin{cases}\label{eqn5.9}
\displaystyle{u_t}= u_{xx}+u_{yy} -u^2 + e^{-2t}\cos^2(\pi x)\cos^2(\pi y) +(2\pi^2-1)e^{-t}\cos(\pi x)\cos(\pi y), \\
u(0,x,y)=\cos(\pi x)\cos(\pi y),
\end{cases}\end{aligned}$$
on the domain $(x,y) \in [-1,1]\times[-1,1]$. The exact solution to this problem is $u(x,y,t) = e^{-t} \cos(\pi x)\cos(\pi y)$. Since an exact solution is available, all our numerical tests use exact boundary conditions taken from it. An IDC-OS solver for (\[eqn5.9\]) requires a definition of how the splitting is performed. Here, we choose to split the problem into three pieces: $L_1$ and $L_2$ are the same as in the linear case, while $L_3$ contains the remaining nonlinear terms, $$\label{eqn5.13}
L_3(t, u) = -u^2 + e^{-2t}\cos^2(\pi x)\cos^2(\pi y) +(2\pi^2-1)e^{-t}\cos(\pi x)\cos(\pi y).$$ We use Newton iteration to solve the discretized version of $u_t = L_3(t,u)$. In Tables \[table5.3.1\] and \[table5.3.2\] we present results from applying the IDC-OS method with Lie-Trotter and Strang splitting as the base solvers. In each case, we see the expected increase of order with each correction: one for Lie-Trotter splitting, and two for Strang splitting.
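Since $L_3$ is local in space, each implicit substep for $u_t = L_3(t,u)$ reduces to independent scalar nonlinear equations at the grid points. The sketch below shows the pointwise Newton iteration for a single backward-Euler substep; the tolerance and iteration cap are illustrative choices rather than the values used in our runs, and the full IDC-OS solver wraps such substeps inside the prediction and correction sweeps.

```python
import numpy as np

def backward_euler_L3(u_old, t_new, dt, X, Y, tol=1e-12, maxit=50):
    """Solve u = u_old + dt*L3(t_new, u) pointwise by Newton's method, with
    L3(t,u) = -u^2 + exp(-2t) cos^2(pi x) cos^2(pi y)
              + (2 pi^2 - 1) exp(-t) cos(pi x) cos(pi y).
    X, Y are the grid coordinate arrays."""
    c = np.cos(np.pi * X) * np.cos(np.pi * Y)
    forcing = np.exp(-2 * t_new) * c**2 + (2 * np.pi**2 - 1) * np.exp(-t_new) * c
    u = u_old.copy()
    for _ in range(maxit):
        residual = u - u_old - dt * (-u**2 + forcing)   # F(u) = 0
        jacobian = 1.0 + 2.0 * dt * u                    # dF/du (diagonal)
        du = residual / jacobian
        u -= du
        if np.max(np.abs(du)) < tol:
            break
    return u
```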
In this work, we do not present results for IDC-OS methods based on ADI splitting for non-linear problems due to their computational complexity. The high-order differential operator splitting methods discussed here are much simpler than what would arise from using even low-order ADI splitting.
$N_{x, y} = 45$
----------------- ------------- ------- ------------ ------- ------------- ------- ------------- -------
Correction $N_t = 60 $ order $N_t = 80$ order $N_t = 100$ order $N_t = 120$ order
$c_s$ = 0 6.88e-3 – 5.16e-3 1.00 4.13e-3 1.00 3.44e-3 1.00
$c_s$ = 1 7.31e-4 – 4.33e-4 1.82 2.87e-4 1.84 2.03e-4 1.90
$c_s$ = 2 1.60e-5 – 6.95e-6 2.90 3.59e-6 2.96 2.06e-6 3.05
: Nonlinear example with Dirichlet boundary conditions. Errors $\parallel u-\upsilon_{N_t}\parallel_{\infty}$ for IDC-OS based on Lie-Trotter splitting, $T = 0.025$.[]{data-label="table5.3.1"}
$N_{x, y} = 100$
------------------ ------------- ------- ------------ ------- ------------- ------- ------------- -------
Correction $N_t = 60 $ order $N_t = 80$ order $N_t = 100$ order $N_t = 120$ order
$c_s$ = 0 9.21e-5 – 5.20e-5 1.99 3.34e-5 1.98 2.33e-5 1.99
$c_s$ = 1 3.04e-6 – 8.96e-7 4.25 3.38e-7 4.37 1.56e-7 4.24
$c_s$ = 2 1.87e-8 – 3.22e-9 6.11 8.33e-10 6.06 2.84e-10 5.90
: Nonlinear example with Dirichlet boundary conditions. Errors $\parallel u-\upsilon_{N_t}\parallel_{\infty}$ for IDC-OS based on Strang splitting, $T = 0.01$. []{data-label="table5.3.2"}
[**Example 4.**]{} **Fitzhugh-Nagumo reaction-diffusion model.** A simple mathematical model of an excitable medium is given by the Fitzhugh-Nagumo (FHN) equations [@Fife]. The FHN equations with diffusion can be written as $$\begin{aligned}
\begin{cases}\label{FNeqn1}
\vspace{0.1in}
\displaystyle{\frac{\partial u}{\partial t}}= D_u \nabla^2 u +\frac{1}{\delta}h(u,v), \\
\displaystyle{\frac{\partial v}{\partial t}}= D_v \nabla^2 v+g(u,v),
\end{cases}
\end{aligned}$$ where $D_u$, $D_v$ are the diffusion coefficients for activator $u$ and inhibitor $v$ respectively, and $\delta$ is a real parameter. We consider the classical cubic FHN local dynamics [@Keener; @Olmos] $$\begin{aligned}
\begin{cases}\label{FNeqn2}
\vspace{0.1in}
\displaystyle h(u,v) = Cu(1-u)(u-a)-v, \\
\displaystyle g(u,v) = u - d v,
\end{cases}
\end{aligned}$$ where $C$, $a$ and $d$ are dimensionless parameters. We perform the numerical experiment for (\[FNeqn1\]) and (\[FNeqn2\]) on the domain $[-20, 20]\times [-20, 20]$ with periodic boundary conditions. The parameters are chosen as follows: $D_u = 1$, $D_v = 0$, $a = 0.1$, $C = 1$, $d = 0.5$, and $\delta = 0.005$. The initial condition is
$$\label{FNeqn3}
\displaystyle u(x,y,0) = \left\{
\begin{array}{ll}
\vspace{0.05in}
0, & \hbox{if $ \{x < 0 \} \bigcup \{ y > 5 \}$;} \\
\displaystyle \frac{1}{(1+e^{4(|x|-5)})^2}- \frac{1}{(1+e^{4(|x|-1)})^2}, & \hbox{otherwise.}
\end{array}
\right.$$
$$\label{FNeqn4}
\displaystyle v(x,y,0) = \left\{
\begin{array}{ll}
\vspace{0.05in}
0.15, & \hbox{if $ \{x < 1 \} \bigcap \{ y > -10 \}$;} \\
0 , & \hbox{otherwise.}
\end{array}
\right.$$
The domain is partitioned with a $200 \times 200$ grid. Figure \[fig\_FN\] shows the numerical solution for the concentration of the activator $u$ obtained with the Lie-Trotter splitting scheme, using three operators as in Example 3. We observe spiral waves at $T = 2, 5, 10$, in good agreement with the reference solutions. The computational step size is $\Delta t = 0.005$ in all cases. We remark that, because lower-order schemes suffer more from numerical diffusion error than higher-order ones, the patterns at $T = 10$ become more consistent if we take a smaller time step or a finer mesh. Similar patterns can also be obtained with the IDC-OS scheme based on Strang splitting.
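For orientation, the structure of one Lie-Trotter prediction step for (\[FNeqn1\])-(\[FNeqn2\]) is sketched below. The diffusion substeps are assumed to be provided by the dimensional solvers discussed earlier, and the reaction substep is written here as a single explicit update purely to display the splitting; the actual computations use the implicit IDC-OS machinery described above.

```python
def fhn_reaction(u, v, dt, C=1.0, a=0.1, d=0.5, delta=0.005):
    """Local FHN kinetics h(u,v)/delta and g(u,v) of (FNeqn2); a single
    forward-Euler substep is shown only to illustrate the splitting."""
    h = C * u * (1.0 - u) * (u - a) - v
    g = u - d * v
    return u + dt * h / delta, v + dt * g

def lie_trotter_fhn_step(u, v, dt, diffuse_x, diffuse_y):
    """One Lie-Trotter step: x-diffusion, then y-diffusion, then reaction.
    diffuse_x / diffuse_y are assumed implicit 1D diffusion substeps acting
    on u (D_v = 0 here, so v is not diffused in this example)."""
    u = diffuse_x(u, dt)
    u = diffuse_y(u, dt)
    return fhn_reaction(u, v, dt)
```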
![Numerical simulations of the concentration of activator $u$ for Fitzhugh-Nagumo reaction-diffusion model at different times. (a1-a3) t = 2; (b1-b3) t = 5; (c1-c3) t = 10. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with two correctors; (a3-c3) Lie-Trotter with three correctors. []{data-label="fig_FN"}](FN_2_0-eps-converted-to){width="2.0in"}
(a1)
![Numerical simulations of the concentration of activator $u$ for Fitzhugh-Nagumo reaction-diffusion model at different times. (a1-a3) t = 2; (b1-b3) t = 5; (c1-c3) t = 10. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with two correctors; (a3-c3) Lie-Trotter with three correctors. []{data-label="fig_FN"}](FN_2_2-eps-converted-to){width="2.0in"}
(a2)
![Numerical simulations of the concentration of activator $u$ for Fitzhugh-Nagumo reaction-diffusion model at different times. (a1-a3) t = 2; (b1-b3) t = 5; (c1-c3) t = 10. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with two correctors; (a3-c3) Lie-Trotter with three correctors. []{data-label="fig_FN"}](FN_2_3-eps-converted-to){width="2.0in"}
(a3)
\
![Numerical simulations of the concentration of activator $u$ for Fitzhugh-Nagumo reaction-diffusion model at different times. (a1-a3) t = 2; (b1-b3) t = 5; (c1-c3) t = 10. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with two correctors; (a3-c3) Lie-Trotter with three correctors. []{data-label="fig_FN"}](FN_5_0-eps-converted-to){width="2.0in"}
(b1)
![Numerical simulations of the concentration of activator $u$ for Fitzhugh-Nagumo reaction-diffusion model at different times. (a1-a3) t = 2; (b1-b3) t = 5; (c1-c3) t = 10. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with two correctors; (a3-c3) Lie-Trotter with three correctors. []{data-label="fig_FN"}](FN_5_2-eps-converted-to){width="2.0in"}
(b2)
![Numerical simulations of the concentration of activator $u$ for Fitzhugh-Nagumo reaction-diffusion model at different times. (a1-a3) t = 2; (b1-b3) t = 5; (c1-c3) t = 10. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with two correctors; (a3-c3) Lie-Trotter with three correctors. []{data-label="fig_FN"}](FN_5_3-eps-converted-to){width="2.0in"}
(b3)
\
![Numerical simulations of the concentration of activator $u$ for Fitzhugh-Nagumo reaction-diffusion model at different times. (a1-a3) t = 2; (b1-b3) t = 5; (c1-c3) t = 10. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with two correctors; (a3-c3) Lie-Trotter with three correctors. []{data-label="fig_FN"}](FN_10_0-eps-converted-to){width="2.0in"}
(c1)
![Numerical simulations of the concentration of activator $u$ for Fitzhugh-Nagumo reaction-diffusion model at different times. (a1-a3) t = 2; (b1-b3) t = 5; (c1-c3) t = 10. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with two correctors; (a3-c3) Lie-Trotter with three correctors. []{data-label="fig_FN"}](FN_10_2-eps-converted-to){width="2.0in"}
(c2)
![Numerical simulations of the concentration of activator $u$ for Fitzhugh-Nagumo reaction-diffusion model at different times. (a1-a3) t = 2; (b1-b3) t = 5; (c1-c3) t = 10. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with two correctors; (a3-c3) Lie-Trotter with three correctors. []{data-label="fig_FN"}](FN_10_3-eps-converted-to){width="2.0in"}
(c3)
[**Remark.**]{} Figure \[fig\_FN\_order\] shows the order of accuracy for the Fitzhugh-Nagumo reaction-diffusion model (\[FNeqn1\]) at $t = 0.025$, in which we clearly observe the order increase of the IDC-OS scheme with successive correction steps. However, for a stiffer parameter such as $\delta = 10^{-10}$, order reduction is observed in the convergence study. A similar observation is made in Example 5 on the Schnakenberg model. How to approximate the residual integral and design a robust solver for stiff ODEs remains an open question. In recent work [@Rokhlin], the authors proposed a highly accurate solver based on approximating the integral of the residual as a linear combination of exponentials on uniform quadrature nodes. Their method is shown to attain high order of accuracy with correction steps while preserving the stability region of the original implicit time integrator used as the base scheme.
![Accuracy study for Fitzhugh-Nagumo reaction-diffusion model. $t = 0.025$ []{data-label="fig_FN_order"}](Lie_FN-eps-converted-to){width="2.0in"}
[**Example 5.**]{} **Schnakenberg model.** The Schnakenberg system [@Schnakenberg] has been used to model the spatial distribution of a morphogen. It has the following form $$\begin{aligned}
\begin{cases}\label{SMeqn1}
\vspace{0.1in}
\displaystyle{\frac{\partial C_a}{\partial t}}= D_1 \nabla^2 C_a+\kappa(a-C_a+C_a^2C_i), \\
\displaystyle{\frac{\partial C_i}{\partial t}}= D_2 \nabla^2 C_i+\kappa(b-C_a^2C_i),
\end{cases}
\end{aligned}$$ where $C_a$ and $C_i$ represent the concentrations of the activator and inhibitor, with $D_1$ and $D_2$ the corresponding diffusion coefficients, and $\kappa$, $a$ and $b$ are rate constants of the biochemical reactions. Following the setup in [@Hundsdorfer], we take the initial conditions as $$\begin{aligned}
\label{SMeqn2}
\displaystyle C_a(x,y,0)& = a + b + 10^{-3}e^{-100((x-\frac{1}{3})^2+(y-\frac{1}{2})^2)}, \\
\displaystyle C_i (x,y,0)& = \frac{b}{(a+b)^2},\end{aligned}$$ and the boundary conditions are periodic. The parameters are $\kappa = 100$, $a = 0.1305$, $b = 0.7695$, $D_1 = 0.05$ and $D_2 = 1$. The computational domain is $[0, 1] \times [0, 1]$. The numerical simulations with Lie-Trotter splitting are performed on a $200 \times 200$ spatial grid, and the evolution of the concentration of the activator $C_a$ at different times is shown in Figure \[fig\_pig\]. We observe that the initial perturbation is amplified and spreads, leading to the formation of a spot pattern. The computational time step size is chosen as $\Delta t = 0.001$. We also note that similar patterns can be obtained with the IDC-OS scheme based on Strang splitting.
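For reference, a sketch of the Schnakenberg kinetics and the initial data (\[SMeqn2\]) with the parameter values quoted above is given below; the time stepping itself reuses the same split reaction-diffusion structure as in the previous example.

```python
import numpy as np

kappa, a, b = 100.0, 0.1305, 0.7695
D1, D2 = 0.05, 1.0

def schnakenberg_reaction(Ca, Ci):
    """Local kinetics of (SMeqn1): returns (dCa/dt, dCi/dt) without diffusion."""
    dCa = kappa * (a - Ca + Ca**2 * Ci)
    dCi = kappa * (b - Ca**2 * Ci)
    return dCa, dCi

def initial_data(X, Y):
    """Initial conditions (SMeqn2) on a grid of the unit square."""
    Ca0 = a + b + 1e-3 * np.exp(-100.0 * ((X - 1/3)**2 + (Y - 1/2)**2))
    Ci0 = np.full_like(X, b / (a + b)**2)
    return Ca0, Ci0
```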
![Numerical simulations of the concentration of activator $C_a$ for Schnakenberg reaction-diffusion model at different times. (a1-a3) t = 0.5; (b1-b3) t = 1; (c1-c3) t = 1.5. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with one corrector; (a3-c3) Lie-Trotter with two correctors. []{data-label="fig_pig"}](pig_05_0_new-eps-converted-to){width="2.0in"}
(a1)
![Numerical simulations of the concentration of activator $C_a$ for Schnakenberg reaction-diffusion model at different times. (a1-a3) t = 0.5; (b1-b3) t = 1; (c1-c3) t = 1.5. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with one corrector; (a3-c3) Lie-Trotter with two correctors. []{data-label="fig_pig"}](pig_05_1_new-eps-converted-to){width="2.0in"}
(a2)
![Numerical simulations of the concentration of activator $C_a$ for Schnakenberg reaction-diffusion model at different times. (a1-a3) t = 0.5; (b1-b3) t = 1; (c1-c3) t = 1.5. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with one corrector; (a3-c3) Lie-Trotter with two correctors. []{data-label="fig_pig"}](pig_05_2_new-eps-converted-to){width="2.0in"}
(a3)
\
![Numerical simulations of the concentration of activator $C_a$ for Schnakenberg reaction-diffusion model at different times. (a1-a3) t = 0.5; (b1-b3) t = 1; (c1-c3) t = 1.5. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with one corrector; (a3-c3) Lie-Trotter with two correctors. []{data-label="fig_pig"}](pig_1_0_new-eps-converted-to){width="2.0in"}
(b1)
![Numerical simulations of the concentration of activator $C_a$ for Schnakenberg reaction-diffusion model at different times. (a1-a3) t = 0.5; (b1-b3) t = 1; (c1-c3) t = 1.5. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with one corrector; (a3-c3) Lie-Trotter with two correctors. []{data-label="fig_pig"}](pig_1_1_new-eps-converted-to){width="2.0in"}
(b2)
![Numerical simulations of the concentration of activator $C_a$ for Schnakenberg reaction-diffusion model at different times. (a1-a3) t = 0.5; (b1-b3) t = 1; (c1-c3) t = 1.5. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with one corrector; (a3-c3) Lie-Trotter with two correctors. []{data-label="fig_pig"}](pig_1_2_new-eps-converted-to){width="2.0in"}
(b3)
\
![Numerical simulations of the concentration of activator $C_a$ for Schnakenberg reaction-diffusion model at different times. (a1-a3) t = 0.5; (b1-b3) t = 1; (c1-c3) t = 1.5. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with one corrector; (a3-c3) Lie-Trotter with two correctors. []{data-label="fig_pig"}](pig_15_0_new-eps-converted-to){width="2.0in"}
(c1)
![Numerical simulations of the concentration of activator $C_a$ for Schnakenberg reaction-diffusion model at different times. (a1-a3) t = 0.5; (b1-b3) t = 1; (c1-c3) t = 1.5. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with one corrector; (a3-c3) Lie-Trotter with two correctors. []{data-label="fig_pig"}](pig_15_1_new-eps-converted-to){width="2.0in"}
(c2)
![Numerical simulations of the concentration of activator $C_a$ for Schnakenberg reaction-diffusion model at different times. (a1-a3) t = 0.5; (b1-b3) t = 1; (c1-c3) t = 1.5. (a1-c1) Lie-Trotter without corrector; (a2-c2) Lie-Trotter with one corrector; (a3-c3) Lie-Trotter with two correctors. []{data-label="fig_pig"}](pig_15_2_new-eps-converted-to){width="2.0in"}
(c3)
Conclusion
==========
In this paper, we have provided a general temporal framework for the construction of high-order operator splitting methods based on the integral deferred correction procedure. The method can achieve arbitrarily high order by solving correction equations, while reducing the computational cost by taking advantage of operator splitting. Error analysis and numerical examples for IDC-OS methods show that the proposed IDC framework successfully enhances the order of accuracy in time. A study of order reduction for very stiff problems will be part of our future work.
Acknowledgments {#acknowledgments .unnumbered}
===============
AJC supported in part by AFOSR grants FA9550-11-1-0281, FA9550-12-1-0343 and FA9550-12-1-0455, NSF grant DMS-1115709, and MSU Foundation grant SPG-RG100059. ZX is supported by NSF grant DMS-1316662. Additionally, the authors would like to thank Prof. William Hitchon and Dr. David Seal for helpful comments on this work.
[^1]: Department of Mathematics and Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA. E-mail: [email protected]
[^2]: Department of Mathematics, Michigan State University, East Lansing, MI 48824, USA. E-mail: [email protected]
[^3]: Department of Mathematical Science, Michigan Technological University, Houghton, MI 49931, USA. E-mail: [email protected].
[^4]: For all simulations used in this work, we use 13 uniformly distributed interior nodes for evaluating the residual integral in the error equation.
---
abstract: 'Given a finite number $N$ of copies of a qubit state we compute the maximum fidelity that can be attained using joint-measurement protocols for estimating its purity. We prove that in the asymptotic $N\to\infty$ limit, separable-measurement protocols can be as efficient as the optimal joint-measurement one if classical communication is used. This in turn shows that the optimal estimation of the entanglement of a two-qubit state can also be achieved asymptotically with fully separable measurements. Thus, quantum memories provide no advantage in this situation. The relationship between our global Bayesian approach and the quantum Cramér-Rao bound is also discussed.'
author:
- 'E. Bagan'
- 'M. A. Ballester'
- 'R. Mu[ñ]{}oz-Tapia'
- 'O. Romero-Isart'
title: 'Measuring the purity of a qubit state: entanglement estimation with fully separable measurements'
---
The ultimate goal of quantum state estimation is to determine the value of the parameters that fully characterize a given unknown quantum state. However, in practical applications, a partial characterization is often all one needs. Thus, e.g., knowing the purity of a qubit state or the degree of entanglement of a bipartite state may be sufficient to determine whether it can perform some particular task [@white] —See Ref. [@gisin] for recent experimental progress on estimating the degree of polarization (the purity) of light beams. This paper concerns this type of situation.
To be more specific, assume we are given $N$ identical copies of an unknown qubit mixed state $\rho(\vec r)$, so that the state of the total system is $\rho^N(\vec r)\equiv[\rho(\vec r)]^{\otimes
N}$. The set of all such density matrices $\{\rho(\vec r)\}$ can be mapped into the Bloch sphere ${\cal B}=\{\vec r :\ r\equiv|\vec
r|\le1\}$ through the relation $\rho(\vec r)=(\openone+\vec
r\cdot\vec\sigma)/2$, where $\vec\sigma=(\sigma_x,\sigma_y,\sigma_z)$ is a vector made out of the three standard Pauli matrices. Our aim is to estimate the purity, $r$, as accurately as possible by performing suitable measurements on the $N$ copies, i.e., on $\rho^N(\vec r)$. This problem can also be viewed as the parameter estimation of a depolarizing channel [@depolarizing] when it is fed with $N$ identical states.
The estimation protocols are broadly divided into two classes depending on the type of measurements they use: joint and separable. The former treats the system of $N$ qubits as a whole, allowing for the most general measurements, and leads to the most accurate estimates or, equivalently, to the largest fidelity (properly defined below). The latter treats each copy separately, but classical communication can be used in the measurement process. This class is particularly important because it is feasible with present-day technology and it offers an economy of resources. In this paper we show that for a sufficiently large $N$, separable measurement protocols for purity estimation can attain the optimal joint-measurement fidelity bound. The power of separable measurement protocols in achieving optimal performance has also been demonstrated in other contexts [@us-local; @others; @discrim].
It has been shown [@vidal] that given $N$ copies of a bipartite qubit pure state, $|\Psi\rangle_{AB}$, the optimal protocol for measuring its entanglement consists in estimating the purity of $\rho(\vec r)\equiv{\rm
tr}_B(|\Psi\rangle_{AB}\langle\Psi|)$, where ${\rm tr}_B$ is the partial trace over the Hilbert space of party $B$ (see [@susana; @horodecki] for related work on bipartite mixed states). We thus show that for [*large $N$*]{} this entanglement can be optimally estimated by performing just [*separable*]{} measurements on [*one*]{} party (party $A$ in this discussion) of [*each*]{} of the $N$ copies of $|\Psi\rangle_{AB}$.
Though many of our results here concern finite $N$, special attention is paid to the asymptotic regime, when $N$ is large. There are several reasons for this. First, in this limit, formulas greatly simplify and usually reveal important features of the estimation protocol. Second, the asymptotic theory of quantum statistical inference, which has become in recent years a very active field in mathematical statistics [@masahito-book], deals with problems such as the one at hand. Our results give support to some quantum statistical methods for which only heuristic proofs exist; e.g., the applicability of the integrated quantum Cramér-Rao bound in the Bayesian approach (which is formulated below) [@us-prep].
In the first part of this paper we obtain the optimal joint estimation protocols and the corresponding fidelity bounds. In addition to the general case of states in $\cal B$, which was partially addressed in [@vidal], we also discuss the situation when the unknown state is constrained to lie on the equatorial plane $\cal E$ of the Bloch sphere $\cal B$. In the second part, we discuss separable measurement protocols, we prove that they saturate the joint-measurement bound asymptotically and we state our conclusions.
Mathematically, the problem of estimating the purity of $\rho(\vec
r)$ can be formulated within the Bayesian framework as follows (see [@keyl] for an alternative approach). Let ${\cal R}_{\cal O}=\{R_\chi\}$ be the set of estimates of $r$, each of them based on a particular outcome $\chi$ of some generalized measurement, $\cal O$, over $\rho^N(\vec r)$. In full generality, we assume that such measurement is characterized by a Positive Operator Valued Measure (POVM), namely, by a set of positive operators ${\cal O}=\{O_\chi\}$ that satisfy $\sum_\chi
O_\chi=\openone$ ($\chi$ can be a continuous variable, in which case the sum becomes an integral over $\chi$). A separable measurement is a particularly interesting instance of a POVM for which each $O_\chi$ is a tensor product of $N$ individual operators (usually projectors) each one of them acting on $\rho(\vec r)$. Next, a figure of merit, $f(r,R_\chi)$, is introduced as a quantitative way of expressing the quality of the purity estimation. Throughout this paper we use $$\begin{aligned}
f(r,R_\chi)&\equiv&2\max_{\vec m} \left[{\rm tr}\sqrt{\rho^{1/2}(\vec r)\rho(R_\chi \vec m)
\rho^{1/2}(\vec r)}\right]^2-1\nonumber\\
&=&rR_\chi+\sqrt{1-r^2}\sqrt{1-R_\chi^2}={\bf r}\cdot{\bf R}_\chi,
\label{fidelity}\end{aligned}$$ where $|\vec m|=1$, i.e., $[1+f(r,R_\chi)]/2$ is the standard fidelity [@fuchs] (see also [@fid]) between $\rho(\vec r)$ and $\rho(R_\chi \vec n)$, where we have defined $\vec n=\vec
r/r$. Throughout this paper we refer to $f(r,R_\chi)$ also as fidelity for short. Its values are in the range $[0,1]$, where unity corresponds to perfect determination. It is interesting to note that in Uhlmann’s geometric representation of the set of density matrices as the hemisphere $(1/2){\mathbb S}^3\subset{\mathbb R}^4$, the function $D(r,R_\chi)=(1/2)\arccos f(r,R_\chi)$ is the geodesic (Bures) distance [@som] between two sets (two parallel 2-dimensional spheres) characterized by the purities $r$ and $R_\chi$ respectively.
In the same spirit as in [@us-prep; @alberto], we have written $f(r,R_\chi)$ as a scalar product of the two unit vectors ${\bf a}=(\sqrt{1-a^2},a)$; $a=r, \,R_\chi$. The optimal protocol is obtained by maximizing $$F({\cal O},{\cal R}_{\cal O})=\sum_\chi\int d\rho f(r,R_\chi) {\rm
tr}[\rho^N(\vec r) O_\chi], \label{averaged fidelity}$$ where $d\rho$ is the prior probability distribution of $\rho(\vec
r)$, and we identify the trace as the probability of obtaining the outcome $\chi$ given that the state we measure upon is $\rho^N(\vec r)$. Thus, $F$ is the average fidelity. The maximization is over the estimator (guessed purity) ${\cal
R}_{\cal O}$ and the POVM ${\cal O}$. Using Schwarz inequality the optimal estimator is easily seen to be $$R_\chi^{\rm opt}={V_\chi\over\sqrt{{\bf V}_\chi\cdot{\bf
V}_\chi}}; \quad {\bf V}_\chi=\int d\rho\; {\bf r} \, {\rm
tr}[\rho^N(\vec r) O_\chi], \label{optimal guess}$$ and $$F({\cal O})\equiv \max_{\{{\cal R}_{\cal O}\}}F({\cal O},{\cal
R}_{\cal O})=\sum_\chi \sqrt{{\bf V}_\chi\cdot{\bf V}_\chi} \ .
\label{optimal fidelity}$$ We are still left with the task of computing $F^{\rm
max}=\max_{{\cal O}} F({\cal O})$.
In this formulation, we need to provide a prior probability distribution (prior for short) $d\rho$, which encodes our initial knowledge about $\rho(\vec r)$. Here we assume to be completely ignorant of both $\vec n$ and $r$. Our lack of knowledge about the former is properly represented with the choice $d\rho \propto
d\Omega$ (solid angle element), which states that [*à priori*]{} $\vec n$ is isotropically distributed on ${\cal B}$. Therefore, we write $$d\rho={d\Omega\over4\pi} w(r)dr;\quad \int_0^1dr\, w(r)=1.
\label{measure}$$ While there is wide agreement in this respect, the $r$-dependence of the prior is controversial, and for now we do not stick to any particular choice. Nevertheless, it is worth keeping in mind that the hard sphere prior $w(r)=3 r^2$ shows up in the context of entanglement estimation [@zycz], whereas the Bures prior $w(r)=(4/\pi) r^2 (1-r^2)^{-1/2}$ is most natural in connection with distinguishability of density matrices [@fuchs; @fid; @prior].
We are now in a position to compute $F^{\max}$. We first assume no constraint on $\cal O$, thus allowing for the most general measurement setup. The density matrix $\rho^N(\vec r)$ can be written in a block-diagonal form, where each block, $\rho_{Nj\alpha}(\vec r)$, transforms with a corresponding spin $\bf j$ irreducible representation of $SU(2)$ and $\alpha$ ($\alpha=1,2,\dots, n_j$) labels the different $n_j$ occurrences of the same block [@cirac; @us-prep]. This implies that each element, $O_\chi$, of the optimal POVM can be likewise chosen to have the same block-diagonal structure. Given a POVM $\tilde{\cal O}$ of this type, we consider the two-stage measurement protocol ${\cal O}$ consisting of ([*i*]{}) a ‘preliminary’ measurement of the projection of the state $\rho^N(\vec r)$ onto the $SU(2)$ irreducible subspaces, followed by ([*ii*]{}) the measurement defined by $\tilde{\cal O}$. The outcomes of $\cal O$ are thus labeled by three indexes $\chi=(j,\alpha,\xi)$, and the corresponding operators are defined by $O_{j\alpha\xi}=\openone_{j\alpha}{\tilde O}_\xi\openone_{j\alpha}$. Since the projector on each irreducible subspace, $\openone_{j\alpha}\equiv\sum_m |jm;\alpha\rangle
\langle jm;\alpha|$, commutes with $\rho^N(\vec r)$, the probabilities ${\rm
tr}[\rho^N(\vec r)\, \tilde O_\xi]$ are the marginals of ${\rm
tr}[\rho^N(\vec r)\, O_{j\alpha\xi }]$ and the fidelity cannot decrease by using $\cal O$ instead of the original $\tilde{\cal O}$. In our quest for optimality, we thus stick to these two-stage measurements.
We next recall that $\rho(\vec r)=U\rho(r \vec z)U^\dagger$ for a suitable $SU(2)$ transformation $U$, where $\vec z$ is the unit vector along the $z$ axis, and that $d\Omega$ can be replaced by the Haar measure of $SU(2)$. Using Schur’s lemma the integral in (\[optimal guess\]) gives $${\bf V}_{j\alpha\xi}={{\rm tr}(
O_{j\alpha\xi})\over2j+1}\int dr\,w(r)\,{\bf r}\,{\rm tr}
[\rho_{Nj\alpha}(r\vec z)] . \label{Vjchi}$$ Hence, the estimate $R^{\rm opt}_{\chi}=R^{\rm opt}_{j\alpha\xi}$ turns out to be independent of the outcomes $\xi$ (of $\tilde{\cal O}$), and we can write $R^{\rm
opt}_{j\alpha}$ instead. This, in turn, renders the maximization in (\[optimal fidelity\]) trivial, since, using the relation $\sum_\xi
O_{j\alpha\xi}=\openone_{j\alpha}$, we see that the right hand side of (\[optimal fidelity\]) becomes also independent of $\tilde {\cal O}$, and we can drop the subscript $\xi$ from now on.
The bottom line is that, assuming an isotropic prior, the optimal purity estimation is entirely based on the outcomes of $\cal I$ (no additional information about the purity can be extracted from the state) and we might as well choose not to perform any further measurement ($\{\tilde O_\xi\}\to\openone$). With this choice, the prefactor in (\[Vjchi\]) becomes unity. Since the $n_j$ spin $\bf j$ blocks $\rho_{Nj\alpha}$ all give an identical contribution $${\rm tr}[\rho_{Nj\alpha}(r\vec z)]=\sum_{m=-j}^j p_r^{{N\over2}-m}
q_r^{{N\over2}+m}, \label{trace}$$ where $p_r=(1-r)/2$ and $q_r=1-p_r$, the left-hand side of (\[Vjchi\]) can simply be called ${\bf V}_{j}$.
The maximal fidelity is thus given by $$F^{\rm
max}=\pmatrix{N\cr{N\over2}-j}{2j+1\over{N\over2}+j+1}\sum_j\sqrt{{\bf
V}_j\cdot{\bf V}_j} \ , \label{Fmax}$$ where the coefficient in front of the sum is $n_j$ [@cirac; @us-prep]. This, along with (\[trace\]) and (\[Vjchi\]), provides an explicit expression of $F^{\rm
max}$. For large $N$, this can be computed to be [@details] $$F^{\rm max}= 1-{1\over2N}+ o(N^{-1}) . \label{Fasymp}$$ One can also check that at leading order in $1/N$ the optimal guess is $R^{\rm opt}_j=2j/N$, as one would intuitively expect. These asymptotic results hold for any prior $w(r)$.
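The finite-$N$ bound can also be evaluated numerically by transcribing Eqs. (\[Vjchi\]), (\[trace\]) and (\[Fmax\]) directly. The sketch below does this for even $N$ (so that $j$ is an integer) and for the hard sphere and Bures priors; it is meant only as an illustrative check of the $1-1/(2N)$ asymptotics for moderate $N$, not as an optimized implementation.

```python
import numpy as np
from math import comb
from scipy.integrate import quad

def f_max(N, w):
    """Eq. (Fmax): F^max = sum_j n_j |V_j|, with
    V_j = int_0^1 dr w(r) (sqrt(1-r^2), r) * sum_{m=-j..j} p^(N/2-m) q^(N/2+m),
    p = (1-r)/2, q = 1-p, and n_j = C(N, N/2-j)(2j+1)/(N/2+j+1).  N even."""
    total = 0.0
    for j in range(N // 2 + 1):
        n_j = comb(N, N // 2 - j) * (2 * j + 1) / (N / 2 + j + 1)
        def block_trace(r):
            p, q = (1 - r) / 2, (1 + r) / 2
            m = np.arange(-j, j + 1)
            return np.sum(p ** (N / 2 - m) * q ** (N / 2 + m))
        V1, _ = quad(lambda r: w(r) * np.sqrt(1 - r**2) * block_trace(r), 0, 1)
        V2, _ = quad(lambda r: w(r) * r * block_trace(r), 0, 1)
        total += n_j * np.hypot(V1, V2)
    return total

w_hard = lambda r: 3 * r**2                                   # hard sphere prior
w_bures = lambda r: (4 / np.pi) * r**2 / np.sqrt(1 - r**2)    # Bures prior
# e.g. N*(1 - f_max(N, w_bures)) should approach 1/2 for large even N.
```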
![A log-linear plot of $N(1-F^{\rm max})$ in terms of the number $N$ of copies for the optimal joint measurement and for the Bures (solid line) and hard sphere (dashed line) priors.[]{data-label="fig"}](figura-1 "fig:"){width="8cm"}
In Fig. \[fig\], we plot $N(1-F^{\rm max})$ as a function of $N$ in the range $10$–$5000$ for states in $\cal B$ and for the Bures (solid line) and the hard sphere (dashed line) priors. The two lines are seen to approach the asymptotic value $1/2$ \[which can be read off from Eq. (\[Fasymp\]) \] for large $N$ at a similar rate.
It is also interesting to analyze the case where $\vec r$ is known to lie on the equatorial plane $\cal E$. With this information, the prior probability distribution becomes $d\rho=(d\phi/2\pi)w(r)dr$, where $\phi$ is the polar angle of the spherical coordinates. Though it is still possible to use the block-diagonal decomposition discussed above, the individual blocks are now reducible under the unitary symmetry transformations on $\cal E$, i.e., under a $U(1)$ subgroup of $SU(2)$. In full analogy to the general case, the optimal POVM is given by the set of one-dimensional projectors over the $U(1)$-invariant subspaces, $\{\openone_{j\alpha
m}\equiv|jm;\alpha\rangle\langle jm;\alpha|\}$, and, as above, the equivalent representations, labelled by $\alpha$, contribute a multiplicative factor $n_j$. The analogous of (\[trace\]) is now $$[\rho_{Nj\alpha}(r\vec x)]_{mm}=\!\!\sum_{m'=-j}^j \left[{\rm
d}_{mm'}^{(j)}(\mbox{${\pi\over2}$})\right]^2 p_r^{{N\over2}-m'}
q_r^{{N\over2}+m'}, \label{trace2D}$$ where ${\rm
d}_{mm'}^{(j)}(\mbox{${\beta}$})$ are the standard Wigner d-matrices [@edmonds]. From (\[trace2D\]) we can compute ${\bf V}_{jm}$ and $F^{\rm max}$, as in (\[Fmax\]), where in this case the sum extends over $j$ and $m$. The resulting expression can be evaluated for small $N$, but it is not very enlightening. The corresponding plots for the analogues of the Bures and hard sphere priors are indistinguishable from those in Fig. \[fig\]. Far more interesting is the large $N$ regime. It turns out that $F^{\rm max}$ is also given by (\[Fasymp\]) and the optimal guess becomes $m$ independent, $R^{\rm
opt}_{jm}=2j/N+\dots$. Therefore, we see that the information about $\vec n$ becomes irrelevant in the asymptotic limit.
A word regarding quantum statistical inference is in order here. It is often argued that the quantum Cramér-Rao bound [@holevo] can be integrated to provide an attainable asymptotic lower bound for some averaged figures of merit, such as the fidelity (\[fidelity\]). Ours is a so-called one parameter problem for which the quantum Cramér-Rao bound takes the simple form ${\rm Var}\, R\ge H^{-1}(\vec r)/N$, where ${\rm Var}\,
R\equiv\langle (R_\chi-\langle R_\chi\rangle)^2\rangle$ is the variance of the estimator $R_\chi$, the average is over the outcomes $\chi$ of a measurement, $H(\vec r)$ is the quantum information matrix [@holevo], and $R_\chi$ is assumed to be unbiased: $ \langle R_\chi\rangle=r $. In our case $H(\vec
r)=(1-r^2)^{-1}$, and the bound is attainable. This provides in turn an attainable asymptotic upper bound for the fidelity (\[fidelity\]), since $\langle
f(r,R_\chi)\rangle\approx
1-\raisebox{.12em}{\mbox{\tiny$1\over2$}}H(\vec r)\,{\rm Var}\,
R+\dots$. Assuming one can integrate these relations over the whole of $\cal B$ (including the region $r\approx1$, where $H(\vec
r)$ is singular), with a weight function given by the prior (\[measure\]), we obtain Eq. (\[Fasymp\]). Unfortunately, there are only heuristic arguments supporting this assumption, but so far no rigorous proof exists in the literature [@van-trees].
We now abandon the joint protocols to dwell on separable measurement strategies for the rest of the paper. Here we focus on the asymptotic regime, but some brief comments concerning small $N$ can be found in the conclusions.
In previous work [@alberto], some of the authors showed that the maximum fidelity one can achieve in estimating both $r$ and $\vec n$ (full estimation of a qubit mixed state) assuming the Bures prior and using tomography behaves as $$F^{\rm max}_{\rm full}=1-{\xi\over N^{3/4}}+o(N^{-3/4}) ,
\label{Ffull}$$ where $\xi$ is a positive constant. The same behavior one should expect for our fidelity $F^{\rm max}$, since the effect of the purity estimation is dominant in (\[Ffull\]). This strange power law, somehow unexpected on statistical grounds, is caused by the behavior of $w(r)$ in a small region . Indeed, it is not difficult to convince oneself that if $w(r) \propto
(1-r^2)^{-\lambda}\approx 2(1-r)^{-\lambda}$ for $r\approx 1$, one should expect $1-F^{\rm max}\propto N^{\lambda/2-1}+\dots$, for $0<\lambda<1$ (for $\lambda=0$, hard sphere prior, one should expect logarithmic corrections). This differs drastically from (\[Fasymp\]) which, as stated above, holds for [any]{} such values of $\lambda$. Would classical communication be enough to restore the right power law $N^{-1}$ for $1-F^{\rm
max}$ and, moreover, saturate the bound of the optimal joint protocol?
On quantum statistical grounds, one should expect a positive answer to this question since the quantum Cramér-Rao bound is attained by a separable protocol consisting in performing the (von Neumann) measurements ${\cal M}=\{(\openone\pm\vec
n\cdot\sigma)/2\}$ on each copy. Note, however, that $\cal M$ depends on $\vec n=\vec r/r$, which is, of course, unknown [*à priori*]{}. This protocol can only make sense if we are ready to spend a fraction of the $N$ copies of $\rho(\vec r)$ to obtain an estimate of $\vec n$, use this classical information to design $\cal M$ and, finally, perform this adapted measurement on the remaining copies. This protocol was successfully applied to pure states by Gill and Massar in [@gill-massar]. We extend it to purity estimation below.
Let us consider a family of priors of the form $$w(r)={4\over\sqrt\pi}{\Gamma(5/2-\lambda)\over\Gamma(1-\lambda)}{r^2
(1-r^2)^{-\lambda}} , \label{gen prior}$$ which includes both the Bures ($\lambda=1/2$) and the hard sphere ($\lambda=0$) metrics. Despite this particular $r$-dependence, the final results apply to any prior whose behavior near $r=1$ is given by (\[gen prior\]).
We now proceed [*à la*]{} Gill-Massar [@gill-massar] and consider the following one-step adaptive protocol: we take a fraction $N^\alpha\equiv N_0$ ($0<\alpha<1$) of the $N$ copies of $\rho(\vec r)$ and we use them to estimate $\vec n$. Tomography along the three orthogonal axes $x$, $y$ and $z$, together with a very elementary estimation based on the relative frequencies of the outcomes [@us-local], enables us to estimate $\vec n$ with an accuracy given by $${\langle\Theta^2_r\rangle\over
2}\approx1-\langle\cos\Theta_r\rangle={3\over N_0}\left({1\over
r^2}-{1\over5}\right)+o(N_0^{-1}), \label{Theta}$$ where $\Theta_r$ is the angle between $\vec n$ and its estimate. Here and below $\langle\cdots\rangle$ is not only the average over the outcomes of these tomography measurements, but also contains an integration over the prior angular distribution $d\Omega/(4\pi)$ for fixed $r$. We see from (\[Theta\]) that the pure state limit remains finite, and one can compute the fidelity, as defined in [@us-local], to check that it agrees with the result therein. This concludes the first step of the protocol.
In a second step, we measure the projection of $\vec\sigma$ along the estimated $\vec n$ obtained in the previous step. We perform this von Neumann measurement on each of the remaining $N-N_0\equiv
N_1$ copies of the state $\rho(\vec r)$. We estimate the purity to be $R=2N_+/N_1-1$, where $N_\pm/N_1$ is the relative frequency of $\pm1$ outcomes, and we drop the $N_+$ dependence of $R$ to simplify the notation.
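As a concrete illustration of this second step, the sketch below simulates the $N_1$ projective measurements along an assumed estimated direction `n_est` (taken as given here, in place of the first-step tomography) and returns the estimate $R=2N_+/N_1-1$; its mean is $\vec r\cdot\vec n_{\rm est}=r\,c_r$, in agreement with the discussion that follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_purity(r_vec, n_est, N1):
    """Second step of the adaptive protocol: measure sigma.n_est on N1 copies
    of rho(r_vec) and return R = 2 N_+/N_1 - 1.  The probability of the +1
    outcome is (1 + r_vec.n_est)/2."""
    p_plus = 0.5 * (1.0 + np.dot(r_vec, n_est))
    N_plus = rng.binomial(N1, p_plus)
    return 2.0 * N_plus / N1 - 1.0

# Example: true state with r = 0.8 along z, direction estimated with a small tilt.
r_vec = np.array([0.0, 0.0, 0.8])
n_est = np.array([np.sin(0.05), 0.0, np.cos(0.05)])
print(estimate_purity(r_vec, n_est, N1=10_000))   # close to 0.8*cos(0.05)
```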
Obviously, as a random variable and for large $N_1$, $R$ is normally distributed as $R\sim{\rm N}(r c_r,\sqrt{1-r^2
c^2_r}/\sqrt{N_1})$, where $c_r=\cos\Theta_r$. Hence, for large $N_0$ and $N_1$ it makes sense to expand $f(r,R)$, Eq. (\[fidelity\]), around $R= r c_r$, and thereafter, because of (\[Theta\]), expand the resulting expression around $c_r=1$. We obtain $$F(r)= 1-{1\over 2
N_1}+{r^2\over1-r^2}\left({\langle\Theta^2_r\rangle\over4N_1}-{\langle\Theta^4_r\rangle\over8}\right)+\dots
, \label{<f>}$$ where $F(r)$ is the average fidelity for fixed $r$, i.e., $\int
dr\,w(r) F(r)=F$. In view of (\[Theta\]), $\langle\Theta_r^4\rangle\sim N_0^{-2}=N^{-2\alpha}$. Hence, the two terms in parentheses in (\[<f>\]) can only be dropped if $\alpha>1/2$. Provided $w(r)$ vanishes as in (\[gen prior\]) with $\lambda<0$, we can integrate $r$ in (\[<f>\]) over the unit interval to obtain $$F=1-{1\over2N(1-N^{\alpha-1})}+o(N^{-1}) , \label{F in I}$$ and we conclude that this protocol attains asymptotically the joint-measurement bound (\[Fasymp\]).
However, most of the physically interesting priors [@prior; @zycz], $w(r)$, not only do not vanish as $r\to1$, but often diverge like (\[gen prior\]) with $0<\lambda<1$. In this case (\[<f>\]) cannot be integrated, as the last term does not lead to a convergent integral. This signals that the series expansion around $c_r=1$ leading to (\[<f>\]) is not legitimate in the whole of $\cal
B$.
To fix the problem, we split $\cal B$ in two regions. A sphere of radius $1-\epsilon$, $\epsilon>0$, which we call ${\cal B}^{\rm
I}$, and a spherical sheet of thickness $\epsilon$: ${\cal
B}^{\rm II}=\{\vec r: 1-\epsilon<r\le 1\}$. The fidelity can thus be written as the sum of the corresponding two contributions: $F=F^{\rm I}+F^{\rm II}$. While $F^{\rm I}$ can be obtained by simply integrating (\[<f>\]) over ${\cal B}^{\rm I}$, where this expansion is valid, some care must be taken in the region ${\cal B}^{\rm II}$. There, we proceed as follows.
We compute the fidelity as if all the states in ${\cal B}^{\rm II}$ had the lowest possible purity ($r=1-\epsilon$) when the first-step tomography was performed. This leads to a lower bound for $F^{\rm II}$, because the lower the purity of a state the less accurately $\vec n$ can be determined \[see Eq. (\[Theta\])\], and hence, the worse its purity can be estimated in the second step. The trick, which amounts to replacing $c_r$ by $c_{1-\epsilon}$, enables us to perform the $r$-integration prior to $\langle\cdots\rangle$. We simply expand $f(r,R)$, Eq. (\[fidelity\]), around $R= r c_{1-\epsilon}$ to obtain $$\begin{aligned}
F(r)&\gtrsim& \Bigg\langle\sqrt{(1-r^2)(1-r^2c^2_{1-\epsilon})}\nonumber\\
&-&{1\over2N_1}\sqrt{{1-r^2\over 1-r^2 c_{1-\epsilon}^2}} +\dots\Bigg\rangle
,\end{aligned}$$ where the dots stand for additional terms that are irrelevant to the problem we are addressing here. Integrating this expression and expanding around $c_{1-\epsilon}=1$ we obtain $$\begin{aligned}
&&\kern-4em\int_{1-\epsilon}^1\kern-1.3em dr\,w(r) F(r)\gtrsim 1-{1\over2N_1}-
k_\lambda\left\langle(1-c_{1-\epsilon})^{2-\lambda}\right\rangle\nonumber\\
&&\kern3.4em-\left(1-{1\over2N_1}\right)\int_0^{1-\epsilon}\kern-1.3em
dr\,w(r)+\dots,\end{aligned}$$ where $
k_\lambda={2^{2-\lambda}\Gamma({5\over2}-\lambda)\Gamma({3\over2}-\lambda)\Gamma(\lambda-2)/[\pi\Gamma(1-\lambda)]}
$. Putting together the different pieces of the calculation we have $$F\gtrsim 1-{1\over2N_1}-2^{\lambda-2}k_\lambda
\langle\Theta_{1-\epsilon}^2\rangle^{2-\lambda} +\dots,
\label{fidelity ok}$$ $0<\lambda<1$, where now we can safely take the limit $\epsilon\to0$. We see that by choosing $${\rm
\max}\left\{{1\over2},{1\over2-\lambda}\right\}<\alpha< 1
\label{alpha}$$ we ensure that the joint-measurement bound (\[Fasymp\]) is attained. It is worth emphasizing that the last term in (\[fidelity ok\]), which is completely missing in (\[F in I\]), is actually the dominant contribution if $\alpha<1/(2-\lambda)$. For $\lambda=0$ we have $$F^{\rm hard}\gtrsim1-{1\over2N_1}-{3 \langle\Theta^2_1\rangle
\log\langle\Theta^2_1\rangle\over8N_1}+\dots ,$$ and we again conclude that the protocol presented here attains the joint-measurement bound.
Two comments about the choice of $\alpha$ are in order. First, numerical simulations show that the optimal value of $\alpha$ is very close to the lower bound in (\[alpha\]). Second, we see that the lower bound in (\[alpha\]) increases with increasing $\lambda$. This can be understood by recalling that for large $N$, the estimated purity $R$ is normally distributed with a variance of ${\rm Var}\,R=(1-r^2 c_r^2)/N_1$. For $\lambda\ll1$, the prior is a rather flat function of $r$ and, on average, ${\rm Var}\,R=a/N_1$, where $a$ is a constant. Increasing the accuracy with which $\vec n$ is determined does not significantly improve the estimation of $r$. Hence, using a small fraction of the number of copies at the first stage of the protocol should be enough. This suggests that $\alpha$ must be relatively small. In contrast, for $\lambda\approx1$ the prior peaks at $r=1$ and ${\rm Var}\,R=\Theta^2_r/N_1$. Hence, it pays to spend a large fraction of $N$ to estimate $\vec n$ with high accuracy (as this drastically reduces ${\rm Var}\,R$), for which we need $\alpha\approx 1$.
At this point one may wonder whether the conclusions above depend on our particular choice of figure of merit. To gain some insight, it is worth turning again to the standard pointwise approach to quantum statistics. There, one is interested in the mean square error ${\rm MSE}\,R=\langle(R-r)^2\rangle$ for fixed $r$, where now the average $\langle\cdots\rangle$ is over the outcomes of [*all*]{} measurements for a fixed $\vec r$. One can write ${\rm
MSE}\,R={\rm Var}\, R + (\langle R\rangle-r)^2 $, where the second term is the square of the *bias*. Using the same one-step adaptive protocol described above, we find that the mean square error after step two is $${\rm
MSE}\,R=\frac{H^{-1}(r)}{N_1}+{r^2\over4}\langle\Theta_r^4\rangle+
\dots .$$ As above, the last term can be dropped if $\alpha>1/2$, and $${\rm MSE}\,R=\frac{H(r)^{-1}}{N} +o[N^{-1}],$$ saturating the quantum Cramér-Rao bound. This protocol is therefore also asymptotically optimal in the present context. Though the argument above is somewhat heuristic, it can be made fully rigorous [@ballester].
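For readers who wish to reproduce this scaling numerically, the following minimal Python sketch (not part of the original derivation) simulates the one-step adaptive protocol for a fixed Bloch vector and estimates the mean square error by Monte Carlo; the value of $r$, the choice $\alpha=0.7$ and the sample sizes are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mse(r=0.6, N=4000, alpha=0.7, trials=2000):
    """Monte-Carlo estimate of the pointwise MSE of the purity estimator R
    obtained with the one-step adaptive (two-stage) protocol, for fixed r."""
    n_true = np.array([0.0, 0.0, 1.0])     # true Bloch direction (WLOG along z)
    N1 = int(N ** alpha)                   # copies used to estimate the direction
    N2 = N - N1
    sq_err = []
    for _ in range(trials):
        # Stage 1: measure sigma_x, sigma_y, sigma_z on ~N1/3 copies each
        # and estimate the Bloch direction from the three sample means.
        est = np.empty(3)
        m = N1 // 3
        for k in range(3):
            p_up = 0.5 * (1.0 + r * n_true[k])
            est[k] = 2.0 * rng.binomial(m, p_up) / m - 1.0
        n_hat = est / np.linalg.norm(est)
        # Stage 2: measure sigma along n_hat on the remaining N2 copies;
        # the sample mean of the +/-1 outcomes estimates r (up to cos Theta).
        p_up = 0.5 * (1.0 + r * n_hat @ n_true)
        R = 2.0 * rng.binomial(N2, p_up) / N2 - 1.0
        sq_err.append((R - r) ** 2)
    return np.mean(sq_err)

for N in (1000, 4000, 16000):
    print(N, N * simulate_mse(N=N))        # roughly constant, i.e. MSE ~ 1/N
```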
In summary, we have addressed the problem of optimally estimating the purity of a qubit state of which $N$ identical copies are available. The optimal estimation of the entanglement of a bipartite qubit state can be reduced to this problem. Though the absolute bounds for the average fidelity involve joint measurements, these bounds can be attained asymptotically with separable measurements. This requires classical communication among the sequential von Neumann measurements performed on each of the $N$ individual copies of the state. This result, which had previously been conjectured on quantum statistical grounds, is proved here for the first time by a direct calculation. It leads to a very surprising conclusion: in the asymptotic limit of many copies, bipartite entanglement, a genuinely non-local property, can be optimally estimated by performing fully separable measurements. This means that measurements can be performed not only on copies of [*one*]{} of the two entangled parties, but on [*each*]{} of these copies [*separately*]{}. This removes the need for quantum memories.
For finite (but otherwise arbitrary) $N$, finding the optimal separable measurement protocol is an open problem. Interestingly enough, a ‘greedy’ protocol designed to be optimal at each measurement step [@us-local; @others] leads to an unacceptably poor estimation. Notice that in the one-step adaptive protocol described above, some of the copies are spent (‘wasted’ from a ‘greedy’ point of view) in estimating $\vec n$. We have seen that this strategy pays off in the long run. The ‘greedy’ strategy, in contrast, optimizes each measurement in the short run, which translates into measuring $\vec\sigma$ along the same arbitrarily fixed axis on every copy of $\rho(\vec r)$. This yields a low value of the fidelity, which does not even converge to unity in the strict limit $N\to\infty$. This counterintuitive behavior of the ‘greedy’ protocol also appears in other contexts such as economics, biology or the social sciences (see [@parrondo] for a nice example).
We acknowledge useful conversations with Antonio Acín, Richard Gill and Juanma Parrondo. This work is supported by the Spanish Ministry of Science and Technology project BFM2002-02588, CIRIT project SGR-00185, the Netherlands Organization for Scientific Research NWO, and the European Community projects QUPRODIS, contract no. IST-2001-38877, and RESQ, contract no. IST-2001-37559.
[99]{} A. G. White *et al.*, [Phys. Rev. Lett. **83**, 3103 (1999)]{}. M. Legre, M. Wegmueller and N. Gisin, [Phys. Rev. Lett. **91**, 167902 (2003)]{}. M. Sasaki, M. Ban and S. M. Barnett, [Phys. Rev. A **66**, 022308 (2002)]{}; A. Fujiwara, [Phys. Rev. A **70**, 012317 (2004)]{}. E. Bagan, M. Baig and R. Munoz-Tapia, [Phys. Rev. Lett. **89**, 277904 (2002)]{}; E. Bagan, A. Monras and R. Munoz-Tapia, [Phys. Rev. A **71**, 062318 (2005)]{}. D. G. Fisher, S. H. Kienle and M. Freyberger, [Phys. Rev. A **61**, 032306 (2000)]{}; Th. Hannemann *et al.*, [Phys. Rev. A **65**, 050303 (2002)]{}. D. Brody and B. Meister, [Phys. Rev. Lett. **76**, 1 (1996)]{}; A. Acin *et al.*, [Phys. Rev. A **71**, 032338 (2005)]{}. A. Acin, R. Tarrach and G. Vidal, [Phys. Rev. A **61**, 062307 (2000)]{}. J. M. G. Sancho and S. F. Huelga, [Phys. Rev. A **61**, 042303 (2000)]{}. P. Horodecki, [Phys. Rev. Lett. **90**, 167901 (2003)]{}. *Asymptotic Theory Of Quantum Statistical Inference: Selected Papers*, Ed. by Masahito Hayashi (World Scientific, Singapore, 2005). E. Bagan *et al.*, in preparation. M. Keyl and R. F. Werner, [Phys. Rev. A **64**, 52311 (2001)]{}. C. A. Fuchs, PhD Dissertation, University of New Mexico, (1995) (quant-ph/9601020). M. H[ü]{}bner, [Phys. Lett. A **163**, 239 (1992)]{}; R. Jozsa, [J. Mod. Opt. **41**, 2315 (1994)]{}. H. J. Sommers and K. Zyczkowski, [J. Phys. A **36**, 10083 (2003)]{}. E. Bagan, M. Baig, R. Munoz-Tapia, and A. Rodriguez, [Phys. Rev. A **69**, 010304 (2004)]{}. K. Zyczkowski and H. J. Sommers, [J. Phys. A **34**, 7111 (2001)]{}; H. J. Sommers and K. Zyczkowski, *ibid.* **37**, 8457 (2004). D. Petz and C. Sudar, [J. Math. Phys. **37**, 2662 (1996)]{}. J. I. Cirac, A. K. Ekert and C. Macchiavello, [Phys. Rev. Lett. **82**, 4344 (1999)]{}. The techniques required to compute Eq. (\[Fmax\]) are explained in detail in [@us-prep]. A. R. Edmonds, [*Angular Momentum in Quantum Mechanics*]{} (Princeton University Press, Princeton 1960). A. Holevo, [*Probabilistic and Statistical Aspects of Quantum Theory*]{} (North-Holland Publishing, Amsterdam, 1982). One can use van Trees inequalities, R. D. Gill and B. Y. Levitt, Bernoulli **1**, 59 (1995), to prove that the integrated quantum Cramér-Rao bound gives an upper bound to the fidelity. See [@us-prep]. R. D. Gill and S. Massar, [Phys. Rev. A **61**, 042312 (2000)]{}; O. E. Barndorff-Nielsen and R. D. Gill, [J. Phys. A **33**, 4481 (2000)]{}. M. Ballester, in preparation. L. Dinis and J. M. R. Parrondo, Europhys. Lett. **63**, 319 (2003).
---
abstract: 'In this paper we present three-dimensional atmospheric simulations of the hot Jupiter HD 189733b under two different scenarios: local chemical equilibrium and including advection of the chemistry by the resolved wind. Our model consistently couples the treatment of dynamics, radiative transfer and chemistry, completing the feedback cycle between these three important processes. The effect of wind–driven advection on the chemical composition is qualitatively similar to our previous results for the warmer atmosphere of HD 209458b, found using the same model. However, we find more significant alterations to both the thermal and dynamical structure for the cooler atmosphere of HD 189733b, with changes in both the temperature and wind velocities reaching $\sim10\%$. We also present the contribution function, diagnosed from our simulations, and show that wind–driven chemistry has a significant impact on its three–dimensional structure, particularly for regions where methane is an important absorber. Finally, we present emission phase curves from our simulations and show the significant effect of wind–driven chemistry on the thermal emission, particularly within the 3.6 µm Spitzer/IRAC channel.'
author:
- Benjamin Drummond
- 'Nathan J. Mayne'
- James Manners
- Isabelle Baraffe
- Jayesh Goyal
- Pascal Tremblin
- 'David K. Sing'
- Krisztian Kohary
title: 'The 3D thermal, dynamical and chemical structure of the atmosphere of HD 189733b: implications of wind-driven chemistry for the emission phase curve'
---
Introduction
============
Interpreting observations of exoplanet atmospheres requires comparison with theoretical models in order to infer the properties of the atmosphere, such as the thermal structure and the chemical composition. Whether this is done by a retrieval method [e.g. @IrwTd08; @MadS09; @WalTR15] or by comparison with forward models [e.g. @Barman2005; @ForML05; @MadBC11; @Moses2011; @GoyMS18] these theoretical tools are most often in the form of a one–dimensional (1D) model. However, the results of three–dimensional (3D) radiative-hydrodynamic codes [@ShoG02; @DobL08; @Showman2009; @MenR09; @HenMP11; @MayBA14; @AmuMB16] clearly show that the atmospheres of highly–irradiated tidally–locked planets are far from being horizontally symmetric.
For tidally–locked hot Jupiters, intense stellar irradiation heats the dayside to thousands of Kelvin while the nightside is significantly cooler, driving fast zonal winds ($\sim10^3$ ${\rm m}~{\rm s}^{-1}$) that redistribute heat around the planet [e.g. @Showman2009; @DobA13; @AmuMB16]. High resolution transmission spectra revealing Doppler shifting of atomic absorption lines [@Snedd10; @Louden2015], as well as measurements of the emission as a function of orbital phase [e.g. @KnuLF12; @ZelLK14], have since confirmed these theoretical predictions. However, @DanCS18 recently reported a westward shift in the location of the hot spot, which contradicts the eastward shifts unanimously predicted by current 3D models.
Recently, @DDC17 demonstrated the importance of considering the 3D structure of the atmosphere when interpreting the emission phase curve. They showed that the contribution function, the peak of which indicates the pressure level of the photosphere, varies with longitude. In general, they found that the photosphere typically lies at lower pressures on the dayside compared with the nightside, with the conclusion that the observed emission originates from different pressure levels of the atmosphere throughout the orbit of the planet.
Most 3D models of hot exoplanet atmospheres to date share a common limitation: the assumption of a cloud–free atmosphere that is in local chemical equilibrium. However, many investigations with 1D models have shown the importance of vertical transport and photochemistry that drive the chemistry away from equilibrium [e.g. @LinLY2010; @Moses2011; @Venot2012; @Zahnle2014; @DruTB16; @TsaLG2017]. Large dayside-nightside temperature contrasts naturally lead to large dayside-nightside contrasts in the chemical equilibrium composition. In addition, the properties of clouds have recently been investigated using 3D models of various complexity [@LeeDH16; @ParFS16; @RomR17; @LinMB18].
@CooS06 used simplified chemistry and radiative transfer schemes with a 3D hydrodynamics code and found that vertical transport from high pressure regions, which are horizontally uniform, leads to homogenisation of the chemistry at lower pressures. @AguPV14 used a “pseudo–2D” code (in practice a 1D column with a time–varying temperature profile) and found that horizontal (zonal) transport has a more important effect than vertical transport and contaminates the nightside chemistry with that of the hot dayside.
Recently, we coupled the simplified chemical scheme of @CooS06 to our own 3D model, including a state-of-the-art radiative transfer scheme [@AmuMB16], completing the feedback cycle between the dynamics, radiation and gas–phase chemistry [@DruMM18]. We found that a combination of horizontal (zonal and meridional) and vertical transport ultimately determine the abundance of methane in the atmosphere of HD 209458b. The overall effect of 3D transport is to increase the mole fraction of methane leading to more prominent methane absorption features in both the simulated transmission and emission spectra, compared with the chemical equilibrium case.
In this paper we present results from the same 3D radiative-hydrodynamics code, coupled with the same gas–phase chemical relaxation scheme [@CooS06; @DruMM18], applied to the specific case of HD 189733b. We compare our results with our previous simulations of the warmer atmosphere of HD 209458b and present the response of the atmospheric temperature and circulation to wind–driven chemistry, in Section \[section:chemistry\] and Section \[section:temp\]. We then present contribution functions diagnosed from our 3D simulations and compare with the results of @DDC17, where we find significant qualitative differences, in Section \[section:cf\]. Finally we consider the effect of departures of the chemistry from chemical equilibrium on the emission phase curve in Section \[section:phase\].
Model description
=================
The Unified Model {#section:um}
-----------------
We use the Met Office Unified Model (UM) to simulate the atmosphere of HD 189733b. The UM has been used in previous works to simulate the exoplanet atmospheres of HD 209458b [@MayBA14; @MayDB17; @AmuMB16; @LinMB18; @DruMM18], GJ 1214b [@DruMB18] and Proxima Centauri b [@BouMD17; @LewLB18].
The dynamical core of the UM (ENDGame) solves the deep-atmosphere, non-hydrostatic Navier-Stokes equations [@WooSW14; @MayBA14b; @MayDB17]. The heating rates are computed using the open-source SOCRATES[^1] radiative transfer scheme [@Edw96; @EdwS96] which has been updated and tested for the high-temperature, hydrogen-dominated conditions of hot Jupiter atmospheres [@AmuBT14; @AmuMB16; @AmuTM17]. The chemical composition is derived using the same methods as described in @DruMM18: an analytical formula to derive the chemical equilibrium abundances and a chemical relaxation scheme based on @CooS06.
The chemical relaxation method describes the time–dependence of a chemical species by relaxing the mole fraction toward a prescribed equilibrium profile on some chemical timescale [@CooS06; @TsaKL17; @DruMM18]. The equilibrium profile is taken to be the mole fraction corresponding to chemical equilibrium while the timescale is estimated or parameterised based on the elementary reactions involved in the interconversion of the relevant chemical species. The accuracy of the chemical relaxation method is primarily determined by the accuracy of the chosen chemical timescale [@TsaKL17].
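As an illustration of how such a relaxation update can be implemented, the short Python sketch below integrates $d\chi/dt=-(\chi-\chi_{\rm eq})/\tau_{\rm chem}$ exactly over one time step; the function name, the exponential update and the numerical values are assumptions made for the example and do not reproduce the UM code.

```python
import numpy as np

def relax_step(chi, chi_eq, tau_chem, dt):
    """One chemical-relaxation update: integrate
    d(chi)/dt = -(chi - chi_eq) / tau_chem
    exactly over a model time step dt (inputs may be full 3D fields)."""
    return chi_eq + (chi - chi_eq) * np.exp(-dt / tau_chem)

# Minimal usage example with made-up numbers for two grid cells:
chi      = np.array([1e-4, 1e-6])   # current CH4 mole fractions
chi_eq   = np.array([1e-8, 1e-6])   # local chemical-equilibrium values
tau_chem = np.array([1e7,  1e3])    # chemical timescales [s]
dt       = 30.0                     # dynamical time step [s]
print(relax_step(chi, chi_eq, tau_chem, dt))
```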
In Appendix \[section:app3\] we compare the results of the chemical relaxation method against a full chemical kinetics method within the framework of a 1D model and find good agreement. We also demonstrate the sensitivity of our results to the chosen chemical timescale in Appendix \[section:test\] by artificially increasing or decreasing the value of the chemical timescale by a factor of 10. We find that varying the timescale does not significantly affect the final mole fraction of methane over a large pressure range, and therefore uncertainty in the precise value of the timescale is unlikely to affect our conclusions.
Tracer advection is handled using the extensively tested semi–implicit semi–Lagrangian scheme of the UM [@WooSW14]. In Appendix \[section:cons\] we test the conservation of the global mass of elemental carbon and oxygen for our simulations. The model conserves the global mass of these elements to within better than 99.9%.
Model parameters and setup {#sect:mod_param}
--------------------------
We use the planetary and stellar parameters of HD 189733b from @Southworth2010 which we summarise in \[table:params\]. The intrinsic temperature $T_{\rm int}$ is the blackbody temperature of the net intrinsic flux at the lower boundary, accounting for heat escaping from the interior [@AmuMB16]. A full description of the vertical damping terms ($R_w$ and $\eta_s$) can be found in @MayBA14.
For the stellar irradiation spectrum we use the Kurucz spectrum for HD 189733[^2]. We assume solar elemental abundances from @Asplund2009. For a complete description of the included opacities see @AmuMB16. For the main model integration the radiation spectrum is divided into 32 bands for the heating rate calculations [@AmuMB16]. However, for the simulated observation calculations we restart the model, starting from the state at 1000 days, and run for a small number of timesteps using 500 spectral bands, to obtain a higher spectral resolution [as done in @BouMD17; @DruMB18; @DruMM18; @LinMB18].
The model setup is broadly the same as in our previous coupled radiative transfer simulations of hot Jupiters [@AmuMB16; @DruMM18]. As in @DruMM18 we perform two simulations that are identical except that one assumes local chemical equilibrium while the other includes the effect of advection, due to the large-scale resolved wind, and chemical evolution via the chemical relaxation method [@CooS06]. We refer to these two simulations as the “equilibrium” and “relaxation” simulations, respectively.
The model is initialised with zero winds and with a horizontally uniform thermal profile. For the latter we use a radiative-convective equilibrium temperature profile from the 1D ATMO model using the same stellar and planetary parameters [@DruTB16]. The simulations are integrated for 1000 Earth days (hereafter, days refers to Earth days), by which point the maximum wind velocities have ceased to evolve. We note that the deep atmosphere ($P\gtrsim10^6$ Pa) has not reached a steady state [@AmuMB16; @MayDB17], although this does not affect lower pressures. Unless otherwise stated, all figures show the simulations at 1000 days. The total axial angular momentum is conserved to within 99% for the simulations presented here.
We note that throughout the results sections of this paper we regularly calculate the difference of several quantities (e.g. temperature) between the relaxation and equilibrium simulations, to aid interpretation. Unless otherwise stated, the difference we refer to is the absolute difference, $A^{\rm relaxation}-A^{\rm equilibrium}$. In some cases, where more useful, we consider the relative difference, $(A^{\rm relaxation}-A^{\rm equilibrium})/A^{\rm relaxation}$, instead.
Parameter Value
------------------------------------------ ---------------------------------------------
Mass, $M_{\rm P}$ 2.18$\times10^{27}$ kg
Radius, $R_{\rm P}$ $8.05\times10^{7}$ m
Semi major axis, $a$ 0.031 AU
Surface gravity, $g_{\rm surf}$ 22.49 ms$^{-2}$
Intrinsic temperature, $T_{\rm int}$ 100 K
Lower boundary pressure, $P_{\rm lower}$ $2\times10^7$ Pa
Rotation rate, $\Omega$ $3.28\times10^{-5}$ s$^{-1}$
Specific heat capacity, $c_P$ 13 ${\rm kJ}~{\rm kg}^{-1}~{\rm K}^{-1}$
Specific gas constant, $R$ 3556.8 ${\rm J}~{\rm kg}^{-1}~{\rm K}^{-1}$
Vertical damping coefficient, $R_w$ 0.15
Vertical damping extent, $\eta_s$ 0.75
Horizontal resolution 144$\times$90
Vertical resolution 66
Dynamical time step 30 s
Radiative time step 150 s
: Key model parameters. Stellar and planetary parameters for HD 189733b adapted from @Southworth2010.[]{data-label="table:params"}
Wind–driven chemistry in the atmosphere of HD 189733 {#section:chemistry}
====================================================
In this section we present the thermal, dynamical and chemical structure of the atmosphere. We show the thermal and dynamical structure of the equilibrium simulation, and compare the chemical structure of the equilibrium and relaxation simulations.
{width="45.00000%"} {width="45.00000%"}\
{width="45.00000%"} {width="45.00000%"}\
{width="45.00000%"} {width="45.00000%"}
\[figure:wind\_temp\] shows the zonal-mean zonal wind velocity, meridional wind velocity at $P=5\times10^4$ Pa and temperature structure for the chemical equilibrium simulation of HD 189733b. The wind velocities and temperatures are qualitatively similar to previous 3D simulations of HD 189733b [@Showman2009; @DobA13; @RauM13; @KatSL16]. The circulation is characterised by an equatorial jet with a maximum wind velocity of $\sim6$ ${\rm km}$ ${\rm s}$$^{-1}$ with slower retrograde circulation at higher latitudes. The dayside-nightside temperature contrast (typically hundreds of Kelvin) increases with decreasing pressure while the hot spot moves closer to the substellar point, due to the decreasing radiative timescale with decreasing pressure [e.g. @Iro2005].
{width="32.00000%"} {width="32.00000%"} {width="32.00000%"}\
{width="32.00000%"} {width="32.00000%"} {width="32.00000%"}\
{width="32.00000%"} {width="32.00000%"} {width="32.00000%"}\
{width="32.00000%"} {width="32.00000%"} {width="32.00000%"}\
{width="32.00000%"} {width="32.00000%"} {width="32.00000%"}\
{width="32.00000%"} {width="32.00000%"} {width="32.00000%"}
\[figure:eq\] shows the mole fractions of carbon monoxide, water and methane for the equilibrium simulation. The distribution of these species exactly traces the temperature structure (compare with \[figure:wind\_temp\]) since in chemical equilibrium the composition is entirely dependent on the local pressure and temperature, for a given mix of elements.
Carbon monoxide is more abundant than methane almost everywhere in the modeled domain, though the methane mole fraction varies significantly between $\sim10^{-4}$ on the nightside and $\sim10^{-10}$ on the warmer dayside. In the mid-latitude region of the nightside (where the atmosphere is the coolest) methane becomes more abundant than carbon monoxide; there is a corresponding increase in the mole fraction of water.
For the relaxation simulation, shown in \[figure:relax\], which includes advection due to the resolved wind, the chemistry is homogenised both vertically and horizontally over a large pressure range ($P\lesssim10^5$ Pa), similar to our results for HD 209458b [@DruMM18]. Overall this leads to a much larger dayside, equatorial methane abundance compared with the equilibrium simulation.
In @DruMM18 we identified a mechanism whereby meridional transport leads to an increase in the equatorial methane abundance, compared with chemical equilibrium, for the case of HD 209458b. Using a simple tracer experiment we demonstrated that mass is transported from higher latitudes to the equatorial region. Since the atmosphere is typically cooler at higher latitudes, compared with the equator, the equilibrium abundance of methane is larger. The net result is the transport of mass that has a relatively high methane fraction toward the equator. \[figure:wind\_temp\] (top right panel) clearly shows that significantly large regions of equatorward flow exist. This 3D effect cannot be captured by 1D [e.g. @Moses2011; @Venot2012; @DruTB16] or 2D [@AguPV14] models.
\[figure:profiles\] shows vertical profiles of the mole fractions of carbon monoxide, water and methane for a number of longitude points around the equator. Methane becomes vertically quenched at $P\sim10^5$ Pa with a mole fraction of $\sim4\times10^{-5}$. Importantly, we find the same meridional transport effect (the sharp increase in methane abundance between $1\times10^{5}$ and $2\times10^5$ Pa) as identified in our simulations of HD 209458b [@DruMM18], which is due to meridional transport from the mid-latitudes toward the equator.
![Vertical profiles of the carbon monoxide (red), water (blue) and methane (black/grey) mole fractions for a number of columns equally spaced in longitude around the equator (at 0$^{\circ}$ latitude) for the chemical equilibrium simulation (dashed) and relaxation simulation (solid).[]{data-label="figure:profiles"}]({fig23}.pdf){width="45.00000%"}
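The vertical quenching described above can be illustrated with a simplified 1D estimate: the quenched abundance is set near the pressure where the chemical timescale first exceeds a mixing timescale. The Python sketch below uses illustrative power-law profiles (placeholders, not output from our simulations) to locate such a quench point.

```python
import numpy as np

# Illustrative 1D profiles (placeholders, not simulation output):
p        = np.logspace(7, 2, 200)         # pressure [Pa], bottom to top
tau_mix  = 1e4 * np.ones_like(p)          # mixing timescale ~ H/w [s]
tau_chem = 1.0 * (p / 1e7) ** -2          # chemical timescale, grows upward [s]

# Quench level: moving upward, the first level where chemistry becomes
# slower than mixing; above this level the abundance is roughly frozen in.
i_quench = int(np.argmax(tau_chem > tau_mix))
print(f"quench pressure ~ {p[i_quench]:.1e} Pa")
```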
Overall, we find qualitatively the same trends as we previously found for HD 209458b [@DruMM18]. The main quantitative differences are the larger mole fractions of methane in the cooler atmosphere of HD 189733b.
Thermal and dynamical response of the atmosphere {#section:temp}
================================================
In this section we consider the thermal and dynamical response of the atmosphere to the changes in the local chemical composition due to wind–driven chemistry, by comparing the temperature structures of the equilibrium and relaxation simulations. The chemical composition and the thermal structure are linked via the radiative heating rates.
Temperature response {#section:temp_response}
--------------------
\[figure:diff\_temp\] shows the absolute temperature difference between the equilibrium and relaxation simulations in various different perspectives. A positive difference indicates a larger temperature in the relaxation simulation.
For pressures less than $\sim10^4$ Pa the atmosphere is generally cooler on the dayside and warmer on the nightside, for the relaxation simulation. In particular, there is a significant temperature increase of $>80$ K in the mid-latitude regions of the nightside. For larger pressures, particularly between $10^4$ and $10^5$ Pa, the temperature increases significantly in the equatorial region, within the equatorial jet, for all longitudes.
We compared estimates of both the radiative ($\tau_{\rm rad}$) and dynamical ($\tau_{\rm dyn}$) timescales for the equilibrium simulation. For cases where $\tau_{\rm rad}<\tau_{\rm dyn}$ the atmosphere is expected to be in radiative equilibrium (the atmosphere is radiatively driven) while for $\tau_{\rm rad}>\tau_{\rm dyn}$ advection of heat is expected to be important (the atmosphere is dynamically driven). We used Eq. 10 of @ShoG02 to estimate $\tau_{\rm rad}$ and for $\tau_{\rm dyn}$ assumed (for the upper atmosphere) $$\tau_{\rm dyn} \sim \frac{R_{\rm P}}{u}\sim\frac{H}{w} \sim 10^4~{\rm s},$$ where $R_{\rm P}\sim10^7$ m is the planet radius, $H\sim10^5$ m is the vertical scale height and $u\sim10^3$ ${\rm m}~{\rm s}^{-1}$ and $w\sim10^1$ m s$^{-1}$ are the horizontal and vertical wind velocities.
Comparing the timescales we find that the atmosphere is expected to be radiatively driven ($\tau_{\rm rad}<\tau_{\rm dyn}$) for $P<10^4$ Pa and dynamically driven ($\tau_{\rm rad}>\tau_{\rm dyn}$) for $P>10^4$ Pa. This transition gives a hint as to why we find different trends in thermal responses above and below $P\sim10^4$ Pa.
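A rough numerical version of this timescale comparison is sketched below; it uses the common optically-thick estimate $\tau_{\rm rad}\approx P c_P/(4 g \sigma T^3)$ (of the form given by @ShoG02) with a crude isothermal temperature placeholder, so the resulting transition pressure should be read as an order-of-magnitude value only.

```python
import numpy as np

sigma = 5.670e-8        # Stefan-Boltzmann constant [W m^-2 K^-4]
g     = 22.49           # surface gravity [m s^-2]
c_p   = 1.3e4           # specific heat capacity [J kg^-1 K^-1]

p = np.logspace(3, 6, 100)              # pressure [Pa]
T = 1100.0 * np.ones_like(p)            # crude isothermal placeholder [K]

tau_rad = p * c_p / (4.0 * g * sigma * T ** 3)   # optically-thick estimate [s]
tau_dyn = 1e4                                    # ~ R_P/u ~ H/w [s]

p_trans = p[int(np.argmax(tau_rad > tau_dyn))]
print(f"tau_rad exceeds tau_dyn for P greater than ~{p_trans:.0e} Pa")
```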
![The temperature difference between the relaxation and equilibrium simulations at a longitude of 0$^{\circ}$ (top), on the $P=5\times10^4$ Pa isobaric surface (middle) and an area–weighted meridional–mean between $\pm20^{\circ}$ latitude (bottom). A positive difference indicates a larger temperature in the relaxation simulation.[]{data-label="figure:diff_temp"}]({fig24}.pdf "fig:"){width="47.00000%"}\
![The temperature difference between the relaxation and equilibrium simulations at a longitude of 0$^{\circ}$ (top), on the $P=5\times10^4$ Pa isobaric surface (middle) and an area–weighted meridional–mean between $\pm20^{\circ}$ latitude (bottom). A positive difference indicates a larger temperature in the relaxation simulation.[]{data-label="figure:diff_temp"}]({fig25}.pdf "fig:"){width="47.00000%"}\
![The temperature difference between the relaxation and equilibrium simulations at a longitude of 0$^{\circ}$ (top), on the $P=5\times10^4$ Pa isobaric surface (middle) and an area–weighted meridional–mean between $\pm20^{\circ}$ latitude (bottom). A positive difference indicates a larger temperature in the relaxation simulation.[]{data-label="figure:diff_temp"}]({fig26}.pdf "fig:"){width="47.00000%"}
To unpick the combined effects of the three molecules (methane, carbon monoxide and water) we performed an additional test simulation (not shown) that is identical to the relaxation simulation except the mole fractions of water and carbon monoxide used in the radiative transfer calculations (i.e. to calculate the heating rate) correspond to chemical equilibrium, isolating the effect of methane. The resulting temperature and wind structure is almost identical to the nominal relaxation simulation, indicating that methane is the most important driver of these temperature changes, with water and carbon monoxide making a less important contribution. This is not surprising given the relatively small changes in water and carbon monoxide abundances between the equilibrium and relaxation simulations, despite both species being more abundant than methane.
Top-of-atmosphere (TOA) radiative flux
--------------------------------------
{width="45.00000%"} {width="45.00000%"}
The energy balance of a close–in tidally–locked atmosphere is dominated by stellar (shortwave) heating of the dayside atmosphere due to irradiation by the host star and thermal (longwave) cooling that occurs throughout the atmosphere. Throughout the rest of this paper we will refer to the stellar and thermal components as shortwave and longwave, respectively. The dominant source of heating for the nightside atmosphere is advection of heat from the irradiated dayside. To obtain a global view of the energy balance we consider the outgoing longwave radiative flux at the top-of-atmosphere (TOA), spectrally integrated over all wavelengths, shown in \[figure:toa\].
The TOA longwave flux qualitatively traces the temperature structure at $P\sim5\times10^4$ Pa (see \[figure:wind\_temp\]) with the largest emission occurring eastwards of the substellar point, where the atmosphere is the warmest. Comparing the equilibrium and relaxation simulations, it is apparent that the dayside TOA flux is decreased in the relaxation simulation, while on the nightside it is increased. The greatest change occurs in the mid–latitude regions of the nightside where the TOA flux is increased by around $25\%$.
As the primary source of energy for the nightside atmosphere is advection of heat from the irradiated dayside, this indicates an overall increased efficiency of dayside-to-nightside heat transport. This trend agrees with the changes in the temperature explored in the previous section, where the atmosphere was generally found to be warmer on the nightside but cooler on the dayside, for the relaxation simulation, for pressures less than $10^4$ Pa. In the following sections we will show that this is due to changes in the radiative heating rates and an increase in the speed of the equatorial jet, which themselves are not independent of each other.
Heating rates
-------------
{width="45.00000%"} {width="45.00000%"}\
{width="45.00000%"} {width="45.00000%"}\
{width="45.00000%"} {width="45.00000%"}\
To further understand the temperature response of the atmosphere we consider the radiative heating rates, which are shown in \[figure:hr\] for the equatorial region. We show separately the shortwave heating (positive heating rate) and the longwave cooling (negative heating rate), as well as the net heating rate which is the sum of the shortwave and longwave components.
We note that in \[figure:hr\] we show the heating rate $\mathcal{H}$ in units of ${\rm W}~{\rm m}^{-3}$, the rate of change of energy per unit volume [@AmuBT14 Eq. 17]. This is related to $\mathcal{H}$ in units of ${\rm K}~{\rm s}^{-1}$, the rate of change of temperature, by $$\mathcal{H}[{\rm W}~{\rm m}^{-3}] = \rho c_{P}\mathcal{H}[{\rm K}~{\rm s}^{-1}],$$ where $\rho$ is the mass density and $c_P$ is the specific heat capacity. In these simulations $c_P$ is a global constant, shown in \[table:params\].
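For completeness, the unit conversion above corresponds to the following one-line helper (illustrative values only):

```python
def heating_rate_wm3(h_in_K_per_s, rho, c_p=1.3e4):
    """Convert a heating rate from K s^-1 to W m^-3 via H = rho * c_P * H_T,
    with rho the local mass density [kg m^-3] and c_P in J kg^-1 K^-1."""
    return rho * c_p * h_in_K_per_s

# Example with made-up values: 1e-4 K/s at rho = 0.1 kg/m^3 -> 0.13 W/m^3
print(heating_rate_wm3(1e-4, 0.1))
```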
\[figure:hr\] (left column) shows the heating rates for the equilibrium simulation, which are qualitatively very similar to those from the relaxation simulation (not shown). Naturally, the shortwave heating (top left panel) is restricted to the dayside atmosphere and peaks at $P\sim4\times10^4$ Pa at the substellar point. Longwave cooling (middle left panel), which occurs across both the dayside and nightside, peaks at similar pressures although shifted in longitude eastward of the substellar point. The net heating rate (bottom left panel) shows an overall positive heating rate (i.e. net heating) for the dayside and an overall negative heating rate (i.e. net cooling) for the nightside.
\[figure:hr\] also shows the absolute difference in the heating rates between the relaxation and equilibrium simulations. On the dayside, there is a clear shift to lower pressures for the shortwave heating (top right panel), as stellar flux is absorbed higher in the atmosphere due to the enhanced methane abundance. This results in a relative heating for $P<2\times10^4$ Pa and a relative cooling for larger pressures, in the shortwave.
In the longwave (middle right panel), the peak of the cooling is also shifted to lower pressures, particularly for the region eastward of the substellar point, where the increase in the methane abundance is most significant. In \[figure:hr\] this is indicated by a negative difference in the longwave heating rate (i.e. more cooling) for $P<10^4$ Pa and a positive difference (i.e. less cooling) for $P>10^4$ Pa. The change in the net heating rate (bottom right panel) shows that the shortwave effects are dominant on the dayside and the longwave effects are dominant on the nightside.
The overall effect of the changes to the shortwave and longwave heating rates is to increase the shortwave heating on the dayside and increase the longwave cooling on the nightside, for the $10^4<P<10^5$ Pa region. This increased differential heating acts to drive a faster equatorial jet. In \[figure:diff\_wind\] we show the absolute difference in the zonal–mean zonal wind between the relaxation and equilibrium simulations. There is clearly an overall increase in the zonal wind velocity within the equatorial jet of 250–500 ${\rm m}~{\rm s}^{-1}$. Comparing this with \[figure:wind\_temp\] this corresponds to a 5–10% increase compared with the equilibrium simulation.
The estimated dynamical and radiative timescales (Section \[section:temp\_response\]) indicate that the atmosphere is radiatively driven for $P<10^4$ Pa and dynamically driven for higher pressures. Advection of heat by the equatorial jet, aided by the increased wind velocities, leads to a heating between $10^4<P<10^5$ Pa all around the planet. For lower pressures, where the atmosphere is radiatively driven, the local temperature change is more spatially dependent.
We note that the longwave cooling is itself dependent on the atmospheric temperature, and the temperature changes between the equilibrium and relaxation simulations will result in different steady–state cooling rates. We further note that the net radiative heating rates shown in \[figure:hr\] are approximately balanced by advection of heat in the latter stages of the simulation, where the atmosphere (for pressures less than $P\sim10^6$ Pa) is in an approximate steady-state.
![The absolute difference in the zonal–mean temporal–mean (800–1000 days) zonal wind velocity between the relaxation and equilibrium simulations. A positive difference indicates an increased wind velocity in the relaxation simulation. []{data-label="figure:diff_wind"}](fig35.pdf){width="50.00000%"}
Summary
-------
We find that for pressures less than $10^4$ Pa the atmosphere is generally cooler on the dayside and warmer on the nightside due to wind–driven chemistry, compared with chemical equilibrium. For the pressure region $10^4<P<10^5$ Pa we find a significant warming all around the planet, in the equatorial region. An increased differential heating between the dayside and nightside drives a faster equatorial jet. These temperature changes lead to a decreased TOA radiative flux for the dayside and an increased TOA radiative flux for the nightside. Our testing shows that these changes are primarily due to changes in the abundance of methane.
Contribution functions {#section:cf}
======================
To further understand the effect of wind–driven chemistry on the radiative properties of the atmosphere we consider the contribution function, which quantifies the contribution of a layer to the upwards intensity at the top of the atmosphere. A peak in the contribution function effectively indicates the pressure level of the photosphere. The calculation of the contribution function is described and validated in Appendix \[section:app1\] and Appendix \[section:app2\], respectively. We first discuss the structure of the contribution functions from the equilibrium simulation and compare with previous studies in Section \[section:con\_fun\_eq\]. We then show the effect of wind–driven chemistry in Section \[section:con\_fun\_neq\].
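Although the expression used in the appendix is not reproduced here, a common discretised form of the band contribution function, which captures the behaviour discussed in the following subsections, is sketched in Python below; the toy optical-depth and Planck profiles are assumptions for illustration only.

```python
import numpy as np

def contribution_function(B, dtau):
    """Layer-by-layer contribution to the upward top-of-atmosphere intensity
    for a plane-parallel column ordered from top to bottom:
    CF_i = B_i * [exp(-tau_{i-1}) - exp(-tau_i)],
    with B_i the Planck function at the layer temperature and tau the
    cumulative band optical depth measured from the top."""
    tau_bot = np.cumsum(dtau)
    tau_top = np.concatenate(([0.0], tau_bot[:-1]))
    return B * (np.exp(-tau_top) - np.exp(-tau_bot))

# Toy column: optically thin aloft, thick at depth; the peak sits near tau ~ 1.
dtau = np.logspace(-3, 1, 40)
B    = np.linspace(1.0, 2.0, 40)   # placeholder Planck values
cf   = contribution_function(B, dtau)
print("peak layer index:", int(np.argmax(cf)))
```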
Chemical equilibrium {#section:con_fun_eq}
--------------------
\[figure:cf\] shows an area–weighted meridional–mean ($\pm20^{\circ}$ latitude) of the contribution function as a function of longitude and pressure for the four Spitzer/IRAC channels centered on 3.6 µm, 4.5 µm, 5.8 µm and 8.0 µm. Methane is a prominent absorber in the 3.6 µm and 8.0 µm channels while water and carbon monoxide are the primary absorbers in the 4.5 µm and 5.8 µm channels, respectively.
{width="40.00000%"} {width="40.00000%"}\
{width="40.00000%"} {width="40.00000%"}\
{width="40.00000%"} {width="40.00000%"}\
{width="40.00000%"} {width="40.00000%"}\
In the case of chemical equilibrium (left panels, \[figure:cf\]), it is immediately apparent that the contribution function varies significantly with longitude for the 3.6 and 8.0 µm channels. The peak of the contribution function in these channels generally occurs at higher pressures on the dayside compared with the nightside. On the other hand, the contribution function is relatively flat (isobaric) in the 4.5 and 5.8 µm channels.

The shapes of the 3.6 and 8.0 µm contribution functions are explained by the distribution of methane in chemical equilibrium. The dayside equilibrium abundance of methane is many orders of magnitude less than the nightside equilibrium abundance (Section \[section:chemistry\]) and therefore the opacity, within these spectral channels, is also smaller for the dayside. The smaller dayside opacity results in a smaller optical depth and hence a “deeper” photosphere. On the nightside, where opacity due to methane is significantly larger, the photosphere is shifted to lower pressures.

The contribution function in the 3.6 µm channel actually contains a “double peak” in the nightside region, with a secondary, slightly smaller peak at $P\sim10^5$ Pa. This means that these two pressure regions both make a significant contribution to the intensity emerging from the top of the atmosphere. We find that this double peak is due to the spectral dependence of the opacity within the 3.6 µm Spitzer/IRAC channel, which spans the range 3.08–4.01 µm and corresponds to $\sim75$ spectral bands in the high spectral resolution setup of our model (see Section \[sect:mod\_param\]). For bands where methane has significant absorption the contribution function peaks at lower pressures, but for bands where methane absorption is not so prominent the contribution peaks at higher pressures. When combined in the 3.6 µm channel, this leads to two distinct peaks.

The 4.5 and 5.8 µm contribution functions are also consistent with the equilibrium composition. Both water and carbon monoxide show small variations with longitude, compared with methane, which varies by orders of magnitude. Therefore, the contribution functions in the spectral regions where these species are dominant absorbers are approximately isobaric around the equatorial region.
We now compare our results from the equilibrium simulation (\[figure:cf\]) with those of @DDC17 [see their Fig. 4], where we find several significant differences. We note that in our figures (\[figure:cf\]) the substellar point is located at 180$^{\circ}$ longitude, while in Fig. 4 of @DDC17 the substellar point is located at 0$^{\circ}$ longitude.
Firstly, for the 3.6 µm and 8.0 µm channels, we find that the contribution function peaks at lower pressures on the nightside, compared with the dayside. Intuitively, we expect this to be the case given the larger abundance of methane on the nightside. However, the opposite appears to be true for the results of @DDC17 with the contribution function peaking at lower pressures on the dayside, compared with the nightside. Secondly, for the 4.5 µm and 5.8 µm channels @DDC17 find that the contribution function (generally) shifts to higher pressures on the nightside, in contrast to the approximately isobaric contribution functions that we find in the same channels. The temperature structure appears to be very similar between the two models (compare our \[figure:wind\_temp\] with their Fig. 4) and therefore the equilibrium chemical composition should also be similar. The cause of the discrepancy is therefore puzzling.

Whatever the cause of the difference, the opposing results have important consequences for the interpretation. Taking the 3.6 µm channel as an example, we find that the peak of the contribution function crosses many isotherms between the dayside and nightside of the planet, since the shape of the contribution function is approximately opposite to that of the isotherms. In contrast, the shape of the contribution functions presented by @DDC17 causes them to approximately follow the isotherms, meaning that the temperature contrast at the photosphere around the planet should be much smaller.
The effect of wind–driven chemistry {#section:con_fun_neq}
-----------------------------------
The contribution functions for the relaxation simulations (right panels, \[figure:cf\]) show significant differences compared with the equilibrium simulations, particularly for the 3.6 and 8.0 µm channels. Most importantly, we find that the peaks of the contribution functions occur at lower pressures on the dayside than on the nightside: opposite to the chemical equilibrium case.
Since the chemical composition is approximately homogeneous in the relaxation simulation, the non-isobaric shape of the contribution functions is likely related to the temperature structure. To check, we performed a series of simple tests. Firstly, we fixed the temperature at which the Planck function $B(T)$ (see \[equation:final\_cfi\]) is evaluated, and found that the shape of the contribution functions becomes flatter, though not isobaric. Secondly, we also fixed the temperature at which the absorption coefficient $\kappa$ is calculated. Fixing the temperature for both $B(T)$ and $\kappa$ yields an isobaric contribution function, indicating that it is the temperature dependence of these two terms that drives the shape of the contribution functions for the relaxation simulations.
Since the temperature at the peak of the contribution function (i.e. the photosphere) effectively determines the TOA flux, changes in the shape of the contribution function will affect the observed emission from the atmosphere. As previously discussed, the shape of the contribution functions in the equilibrium case means the photosphere cuts across many isotherms between the dayside and nightside. In contrast, for the relaxation case the contribution functions approximately follow the shape of the isotherms, leading to a much smaller temperature difference at the pressure level of the photosphere between the dayside and nightside.
This argument leads us to predict a larger phase amplitude in the simulated emission phase curve, in the 3.6 µm channel, for the equilibrium simulation compared with the relaxation simulation.
Summary
-------
We have presented contribution functions calculated for both the equilibrium and relaxation simulations of HD 189733b. In the chemical equilibrium case, we find relatively flat (isobaric) contribution functions for the 4.5 µm and 5.8 µm Spitzer/IRAC channels but non-isobaric contribution functions in the 3.6 µm and 8.0 µm channels, where methane is an important absorber. In the latter channels, the contribution function peaks at lower pressures on the nightside compared with the dayside, opposite to the trends found by @DDC17. However, when including wind–driven chemistry, this trend reverses with the contribution functions peaking at lower pressures on the dayside in the 3.6 µm channel. This has a significant effect on the temperature at the pressure level of the photosphere.
Emission phase curves {#section:phase}
=====================
In this section we present the emission phase curves calculated from both the equilibrium and relaxation simulations of HD 189733b. We compare the results from the equilibrium simulation with results from previous studies and discuss the effect of wind–driven chemistry.
Chemical equilibrium {#chemical-equilibrium}
--------------------
![Calculated emission phase curves from our simulations with both chemical equilibrium and chemical relaxation as well as from @DobA13 and @Showman2009 (both assuming chemical equilibrium) for the 3.6 µm (top) and 4.5 µm (bottom) Spitzer/IRAC channels.[]{data-label="figure:phase_curve"}](fig44.pdf "fig:"){width="50.00000%"}\
![Calculated emission phase curves from our simulations with both chemical equilibrium and chemical relaxation as well as from @DobA13 and @Showman2009 (both assuming chemical equilibrium) for the 3.6 µm (top) and 4.5 µm (bottom) Spitzer/IRAC channels.[]{data-label="figure:phase_curve"}](fig45.pdf "fig:"){width="50.00000%"}
\[figure:phase\_curve\] shows the 3.6 µm and 4.5 µm emission phase curves from our simulations, as well as those from previous simulations of @Showman2009 and @DobA13. We first compare our chemical equilibrium phase curves with those of previous models and focus on the normalised phase amplitude and the phase offset. The normalised phase amplitude quantifies the difference between the maximum and minimum flux ratio normalised by the flux ratio at secondary eclipse [e.g. @DDC17] while the phase offset quantifies the angular shift between the maximum flux ratio in the phase curve and the secondary eclipse. The normalised phase amplitudes and phase offsets are shown separately in \[figure:offset\_amp\] which is based on Fig. 1 from @DDC17.
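These two metrics can be extracted from a sampled phase curve with a few lines of code; the sketch below assumes a phase convention in which secondary eclipse occurs at phase $0^{\circ}$ and uses a toy sinusoidal curve purely for illustration.

```python
import numpy as np

def phase_curve_metrics(phase_deg, fp_fs):
    """Phase offset and normalised phase amplitude of an emission phase curve.

    phase_deg : orbital phase in degrees, with secondary eclipse at 0
    fp_fs     : planet-to-star flux ratio sampled at those phases
    """
    i_max = int(np.argmax(fp_fs))
    offset = phase_deg[i_max]                            # shift of the flux maximum
    f_eclipse = fp_fs[int(np.argmin(np.abs(phase_deg)))] # flux ratio at eclipse
    amplitude = (fp_fs.max() - fp_fs.min()) / f_eclipse
    return offset, amplitude

# Toy sinusoidal phase curve whose maximum is offset 35 degrees from eclipse:
phase = np.linspace(-180.0, 180.0, 361)
fp = 1e-3 * (1.0 + 0.4 * np.cos(np.radians(phase - 35.0)))
print(phase_curve_metrics(phase, fp))
```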
Firstly, for the 3.6 µm channel, we find that all three models show similar normalised phase amplitudes (see \[figure:offset\_amp\]). However, \[figure:phase\_curve\] shows that the @DDC17 phase curve has a significantly larger actual (i.e. non-normalised) phase amplitude, compared with both our result and that of @Showman2009. There is a significant phase offset difference between all three models, with an offset of $\sim50^{\circ}$ for @Showman2009, $\sim35^{\circ}$ for this work and $\sim15^{\circ}$ for @DDC17.

Secondly, for the 4.5 µm channel, we obtain a similar normalised phase amplitude ($\sim0.5$) to that of @Showman2009. The normalised phase amplitude of @DDC17 is significantly larger ($\sim0.8$) than both our result and that of @Showman2009, which is particularly apparent in \[figure:phase\_curve\]. The phase offsets for each model are similar to their values in the 3.6 µm channel.
The trends in the emission phase curves from our simulations, shown in \[figure:phase\_curve\] and \[figure:offset\_amp\], are consistent with the trends in the contribution functions previously discussed in Section \[section:cf\]. The equilibrium 3.6 µm contribution function crosses many isotherms as it peaks at lower pressures on the nightside compared with the dayside, leading to a large normalised phase amplitude. In contrast, the 4.5 µm contribution function is approximately isobaric and crosses fewer isotherms, resulting in a smaller emission phase amplitude. However, the contribution functions in both channels peak at similar pressures on the dayside, near the location of the hot spot, giving a similar phase offset.
\[figure:offset\_amp\] also shows the phase offset and normalised phase amplitude derived from observations of HD 189733b by @KnuLF12. We note that none of the models reproduce the significant phase offset difference between the 3.6 µm and 4.5 µm channels.
![The phase offset versus the normalised phase amplitude for the 3.6 µm (blue) and 4.5 µm (green) Spitzer/IRAC channels from our UM simulations, as well as those of @DobA13 and @Showman2009. Also shown are the phase offset and normalised phase amplitude derived from the observed phase curves of HD 189733b [@KnuLF12].[]{data-label="figure:offset_amp"}](fig46.pdf){width="50.00000%"}
The effect of wind–driven chemistry {#the-effect-of-winddriven-chemistry}
-----------------------------------
We now consider the effect of wind–driven chemistry on the emission phase curve, also shown in \[figure:phase\_curve\]. As expected from previous discussion of the contribution function, we find only a small difference in the 4.5 µm phase curve between the equilibrium and relaxation simulations. The phase offset is slightly increased and the normalised phase amplitude is slightly decreased compared with equilibrium. On the other hand, we find a significant difference in the 3.6 µm channel with a strongly decreased normalised phase amplitude compared with the equilibrium simulation.

This effect on the 3.6 µm channel phase curve is consistent with the previous discussion of the contribution functions (Section \[section:cf\]). Homogenisation of the methane chemistry effectively inverts the shape of the contribution function (\[figure:cf\]) with the ultimate effect of significantly reducing the number of isotherms that the photosphere crosses, leading to a smaller emission phase amplitude. We note that the 3.6 µm normalised phase amplitude for the relaxation simulation provides a significantly poorer match to the observed normalised phase amplitude, compared with the equilibrium simulation.
Summary
-------
The emission phase curves calculated from our equilibrium simulation indicate that we find a phase offset somewhere in between the previous results of @Showman2009 and @DobA13. The normalised phase amplitude in the 3.6 µm channel is somewhat similar between all three models; however, @DobA13 obtains a significantly larger normalised phase amplitude in the 4.5 µm channel compared with both our results and those of @Showman2009. The effect of wind–driven chemistry is to significantly decrease the normalised phase amplitude in the 3.6 µm channel, as well as to slightly increase the phase offset in both the 3.6 µm and 4.5 µm channels.
Discussion and Conclusions
==========================
Summary of results
------------------
We have presented results from two simulations of the atmosphere of the hot Jupiter HD 189733b: one with the assumption of local chemical equilibrium and the other with the inclusion of wind–driven advection and chemical relaxation.
The trends in the chemical composition between the equilibrium and relaxation simulations are qualitatively similar to previous results for the warmer atmosphere of HD 209458b [@DruMM18], and further demonstrate the importance of 3D modeling of exoplanet atmospheres. The main difference with HD 209458b is a larger equilibrium abundance of methane due to the lower temperature of the atmosphere, leading to larger methane mole fractions in the relaxation simulation. The net result of wind–driven advection is to increase the methane mole fraction by several orders of magnitude, compared with chemical equilibrium, throughout most of the modeled domain, with the largest effect on the dayside.
In @DruMM18 we found relatively unimportant changes ($\sim$1%) to the thermal and dynamical structure due to wind–driven chemistry for the atmosphere of HD 209458b. However, for the simulations of HD 189733b we find more significant differences of up to $\sim10\%$ in both temperature and wind velocities. For pressures less than $10^4$ Pa we estimate the radiative timescale to be faster than the dynamical timescale, and in this region we observe a local temperature decrease (increase) where the local methane abundance is increased (decreased), compared with chemical equilibrium. This is dominated by changes in the thermal (longwave) cooling.
For pressures greater than $10^4$ Pa, where advection of heat is expected to be more important, this trend no longer holds. We find a significant temperature increase between $10^4$ and $10^5$ Pa in the equatorial region, due to an increase in the differential heating between the dayside and nightside. This increased differential heating drives a faster equatorial jet.
Our tests show that these changes to the temperature and circulation are due to the large increase in the methane abundance, compared with chemical equilibrium, with a minor contribution from the relatively small changes in the abundances of carbon monoxide and water.
The temperature changes found in this paper (up to 10%) using a 3D model are similar in magnitude to the temperature changes that we previously found using a 1D model [@DruTB16], including only vertical transport. One of the main effects of vertical transport in the 1D model is to increase the abundance of methane above chemical equilibrium.
When considering the TOA thermal radiative flux it is clear that there is a total reduction in the amount of energy emitted from the dayside and an increase from the nightside when wind–driven chemistry is taken into account. Overall, this indicates an increased efficiency of heat transport from the dayside to nightside atmosphere.
We find significant quantitative and qualitative differences in the 3D contribution functions between our chemical equilibrium simulation and the results of @DDC17. The contribution functions of @DDC17 generally peak at lower pressures on the dayside compared with the nightside for all spectral channels considered. In contrast, our simulations show that the contribution functions in the 4.5 µm and 5.8 µm channels are approximately isobaric, while we find an inverse trend to @DDC17 in the 3.6 µm and 8.0 µm channels with contribution functions that peak at lower pressures on the nightside. Since the temperature structure between our simulation and theirs appears to be similar, the cause of the discrepancy is undetermined. When including the effect of wind–driven chemistry we find that the longitude dependence of the contribution functions in the 3.6 µm and 8.0 µm channels is essentially inverted, peaking at lower pressures on the dayside than the nightside.

Differences in the shapes of the contribution functions explain the differences that we find in the simulated emission phase curves. The main effect of wind–driven chemistry is to significantly reduce the amplitude of the 3.6 µm phase curve, since the photosphere crosses fewer isotherms in the relaxation simulation compared with the equilibrium simulation. The 4.5 µm phase curve does not change significantly between the equilibrium and relaxation simulations.

Comparing the phase curves calculated from our equilibrium simulation with previous simulations of HD 189733b we find fairly similar phase amplitudes with @Showman2009 in both the 3.6 µm and 4.5 µm spectral channels, but larger differences in the phase amplitude with @DDC17. The phase offset in both channels varies significantly between all three models. We find that none of the three models [this work; @Showman2009; @DDC17] match the observed phase curve characteristics well.
The spectrum of HD 189733b and the potential effect of clouds and haze {#section:cloud}
----------------------------------------------------------------------
The observed transmission spectrum of HD 189733b shows potential signatures of cloud or haze [@PonKF08; @PonSG13; @SinFN16; @BarAI17]. The presence of cloud or haze particles can affect the observed spectrum both directly and indirectly. The direct effect is to change the optical properties of the atmosphere and thereby change the observed transit or eclipse depth. The indirect effect occurs if the presence of cloud or haze particles alters the radiative heating rates, and thereby also the thermal structure and atmospheric circulation.
The structure, composition and effect of clouds on the background atmosphere have been investigated using a number of different 3D models with varying degrees of complexity and levels of approximation [@LeeDH16; @ParFS16; @RomR17; @LinMB18]. Recently, @LinMM18 reported a flat transmission spectrum from their 3D simulations of HD 189733b that include a state–of–the–art treatment of cloud formation [@HelWT08; @LeeDH16], contrary to the molecular absorption features that have been measured [@SinFN16]. This model–observation discrepancy suggests that current theoretical models are missing important physical mechanisms at play in the atmospheres of hot Jupiters [@LinMM18]. @LeeWD17 investigated the effect of inhomogeneous cloud structures on the reflected light and thermal emission spectra of HD 189733b. Haze particles may also be produced in an HD 189733b–like atmosphere via photochemical processes, with their presence impacting the observed transmission spectrum and, potentially, the thermal profile of the atmosphere [@LavK17].
The simulations presented in this paper assume that clouds and haze are not present in the atmosphere. The inclusion of clouds would likely alter the thermal and dynamical structure of the model atmosphere and changes to the optical properties would also directly impact the emission phase curves that we present.
In this paper we choose to focus on the gas–phase chemistry and, in particular, how this interacts with the circulation and radiative heating. This is important as absorption from gas–phase chemical species (e.g. water) gives rise to the dominant features in many observed spectra of transiting exoplanets [e.g. @SinFN16]. In addition, models that include a sophisticated treatment of cloud formation currently assume a simple treatment of the gas–phase chemistry [i.e. local chemical equilibrium, @LeeDH16; @LinMB18]. The simulations presented in this paper therefore represent an important step in the hierarchy of model complexity.
Future prospects
----------------
In this study we found a larger effect of wind–driven chemistry on the thermal and dynamical structure compared with our previous results for the warmer case of HD 209458b [@DruMM18], with differences in the temperature and wind velocities reaching $\sim10\%$. This is due to the cooler atmosphere of HD 189733b (compared with HD 209458b) having a larger equilibrium methane abundance, and hence larger quenched methane abundance. A natural question to ask is whether this trend will continue with cooler temperature planets.
It is possible that a “sweet spot” exists where the dayside atmosphere is warm enough that carbon monoxide is dominant over methane, in chemical equilibrium, but the nightside is cool enough that methane is dominant over carbon monoxide. In this case, horizontal and/or vertical transport may lead to significant changes in the relative abundances of methane and carbon monoxide, with consequences for the opacity and heating rates. For cooler atmospheres still, it is likely that methane will be the dominant carbon species everywhere, with carbon monoxide present as a trace species. If this is true we might expect a similar process as presented in this paper but with the roles of carbon monoxide and methane reversed.
Our simulations used a chemical relaxation scheme [@CooS06] to solve for the chemical evolution. The accuracy of this type of scheme is reliant on an accurate estimation of the chemical timescale [@TsaKL17]. Whilst we have validated the @CooS06 chemical relaxation scheme against a full chemical kinetics calculation in a 1D model (Appendix \[section:app3\]), a full kinetics calculation within a 3D framework has not yet been achieved in the literature. As we have previously discussed in @DruMM18 there remain significant differences in the results between the 3D chemical relaxation approach (complex dynamics with simplified chemistry) and the pseudo–2D chemical kinetics approach (complex chemistry with simplified dynamics) of @AguPV14. It is also unclear whether it is possible to include a treatment of photochemistry within the chemical relaxation method. For these reasons, a model consistently coupling chemical kinetics calculations within a 3D framework is required.
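To make the relaxation idea concrete, the minimal sketch below integrates the relaxation equation $dq/dt = -(q - q_{\rm eq})/\tau_{\rm chem}$ for a toy column; the equilibrium abundances, timescales and time step are invented for illustration and are not those used in our simulations.

```python
import numpy as np

# Minimal sketch of a chemical relaxation step: the tracer mole fraction f
# is driven towards its local chemical-equilibrium value f_eq on a
# prescribed chemical timescale tau_chem (all values here are illustrative).
def relax_step(f, f_eq, tau_chem, dt):
    """Advance df/dt = -(f - f_eq)/tau_chem by one explicit Euler step."""
    return f + dt * (f_eq - f) / tau_chem

# Illustrative column (hypothetical values), deepest level first
f_eq = np.array([4e-4, 3e-4, 2e-4, 1e-4])   # equilibrium mole fraction per level
tau_chem = np.array([1e2, 1e4, 1e6, 1e8])   # s, chemical timescale grows with altitude
f = np.full(4, 4e-4)                        # initialise with the deep equilibrium value

dt = 30.0                                   # dynamical time step in seconds
for _ in range(1000):
    f = relax_step(f, f_eq, tau_chem, dt)

print(f)   # levels with tau_chem much longer than the elapsed time stay "quenched"
```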
Using a 2D steady-state circulation model @TreCM17 found that the deep atmosphere ($P\gtrsim1\times10^5$ Pa) of hot Jupiters may be significantly hotter than predicted by 1D radiative-convective equilibrium models, possibly due to the advection of potential temperature. GCMs also show a trend of converging towards a hotter profile in the deep atmosphere [@AmuMB16; @MayDB17] though, due to computational limitations, the models cannot be integrated for long enough to reach a steady-state at these pressures.
A hotter deep atmosphere may directly impact atmospheric emission within the 4.5 $\mu$m Spitzer/IRAC channel, as the contribution function in this spectral channel peaks near to this region in the simulations presented here. In addition, our results have shown that methane first departs from the equilibrium profile at pressures greater than $P=1\times10^5$ Pa. Therefore, a hotter deep atmosphere may change the equilibrium abundance at the quench point and the pressure level of the quench point itself, with consequences for the chemical abundances at lower pressures.
Our results add to the evidence that solar composition gas–phase models fail to match the observed properties of the emission phase curve (i.e. the phase offset and normalised phase amplitude) within the Spitzer/IRAC 3.6 $\mu$m and 4.5 $\mu$m channels, both assuming local chemical equilibrium and with the effect of wind–driven chemistry. An important missing ingredient in the simulations presented in this paper is cloud formation (Section \[section:cloud\]) which can have a significant impact on the thermal structure and observed spectra. Consistently coupling a cloud formation module with gas–phase wind–driven chemistry in a 3D model is a significant challenge, but is likely required to understand the atmospheric composition of hot exoplanets. On the other hand, @NikSF18 recently reported the clearest hot gas-giant atmosphere to date, which may prove to be a highly useful observational target for testing the theory of gas–phase chemistry.
The authors thank the anonymous referee for their report that helped to improve the quality of this paper. This work is partly supported by the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013 Grant Agreement No. 336792-CREATES and No. 320478-TOFU). N.J.M. and J.G. are partially funded by a Leverhulme Trust Research Project Grant. J.M. acknowledges the support of a Met Office Academic Partnership secondment. This work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. This work also used the University of Exeter Supercomputer ISCA.
Deriving the contribution function {#section:app1}
==================================
We start with the half-range (upward) intensity at the top of the atmosphere [see @Thomas1999 section 5.4.3] $$\begin{aligned}
I_{\nu}^+\left(0,\mu,\phi\right) = I_{\nu}^+\left(\tau^*,\mu,\phi\right){\rm e}^{-\tau^*/\mu} \nonumber \\
+ \int_0^{\tau^*} \frac{d\tau'}{\mu}{\rm e}^{-\tau'/\mu}B_{\nu}\left(\tau'\right),
\label{equation:i+}\end{aligned}$$ where $I_{\nu}^+$ is the hemispherical intensity at wavenumber $\nu$, $\mu=\cos\theta$ where $\theta$ is the zenith angle, $\phi$ is the azimuthal angle, $B_{\nu}$ is the Planck function and $\tau$ is the optical depth, $$\tau = \int_0^s ds' \kappa\left(s'\right),$$ where $\kappa$ is the absorption coefficient and $s$ is the path.
The top of the atmosphere is denoted $\tau=0$ while the bottom of the atmosphere is denoted $\tau=\tau^*$. The first term on the right in \[equation:i+\] is the upward intensity from the bottom of the atmosphere that has survived extinction while the second term on the right is the thermal contribution from the atmosphere. For a gas–giant atmosphere with no solid surface we can assume $\tau^*=\infty$ which reduces \[equation:i+\] to $$I_{\nu}^+\left(0,\mu,\phi\right) = \int_0^{\infty} \frac{d\tau'}{\mu}{\rm e}^{-\tau'/\mu}B_{\nu}\left(\tau'\right).$$
Changing the limits of the integral, we can write the intensity contribution of a particular layer (dropping the $\nu$ notation) as $$I^+\left(0,\mu,\phi\right) \equiv \mathcal{CF}_I\left(\mu,\phi\right) = \int_{\tau_1}^{\tau_2} \frac{d\tau'}{\mu}{\rm e}^{-\tau'/\mu}B\left(\tau'\right)$$ where we have defined the intensity contribution function $\mathcal{CF}_I$. Evaluating the integral we obtain $$\begin{aligned}
\mathcal{CF}_I\left(\mu,\phi\right) &= B\left(\tau_{1,2}\right)\left[{\rm e}^{-\tau_1/\mu}-{\rm e}^{-\tau_2/\mu}\right] \nonumber \\
\mathcal{CF}_I\left(\mu,\phi\right) &= B\left(\tau_{1,2}\right)d\left[{\rm e}^{-\tau/\mu}\right],
\label{equation:cfi}\end{aligned}$$ where $B\left(\tau_{1,2}\right)$ is the Planck function of the layer $\tau_1\rightarrow\tau_2$, which is assumed to be constant across the layer.
Discretising \[equation:cfi\] onto a model grid would give the contribution of a model layer to the emergent intensity. To generalise this, we can introduce a factor $1/d\log P$ [e.g. @ChaH87] to give the contribution per decade in pressure. Finally then, we have $$\mathcal{CF}_I\left(\mu,\phi\right) = B\left(\tau_{1,2}\right)\frac{d\left[{\rm e}^{-\tau/\mu}\right]}{d\left[\log P\right]}.
\label{equation:final_cfi}$$
It is clear that \[equation:final\_cfi\] is equivalent to @Knutson2009 [their Eq. 2] if $\mu=1$. For the calculations presented in this paper $\mu=1$. However, our equation is different to that presented in both @Griffith1998 and @DDC17, where the negative sign is missing in the exponential factor; we assume that this negative sign is taken care of elsewhere in their implementation.
Importantly we note that $\mathcal{CF}_I$ is a physical quantity with units ${\rm W}~{\rm m}^{-2}~{\rm ster}^{-1}$. However, it is more common to present the [*normalised*]{} contribution function ($\bar{\mathcal{CF}}_I$) such that the layer that contributes the most to the top-of-atmosphere intensity has a value $\bar{\mathcal{CF}}_I=1$. In the main body of this paper we present the quantity $\bar{\mathcal{CF}}_I$.
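As an illustration of how \[equation:final\_cfi\] can be evaluated on a discretised column, the following sketch uses a grey, power-law optical depth profile and a placeholder temperature profile (neither taken from the actual simulations) and then normalises the result as described above.

```python
import numpy as np

# Sketch of the normalised intensity contribution function,
# CF_I = B(T) * d[e^{-tau/mu}] / d[log10 P], for an illustrative grey band.
def planck(T, nu=2000.0):
    """Planck function B_nu(T) in SI units, with nu a wavenumber in cm^-1."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    nu_hz = nu * 100.0 * c                      # convert cm^-1 to Hz
    return 2 * h * nu_hz**3 / c**2 / np.expm1(h * nu_hz / (k * T))

# Illustrative layered atmosphere (top of atmosphere first)
p_edges = np.logspace(0, 7, 61)                 # Pa, layer edges
tau_edges = 1e-6 * p_edges                      # grey optical depth, tau proportional to p
T_layers = 1100 + 200 * np.log10(p_edges[1:] / 1e5)   # placeholder temperature profile

mu = 1.0                                        # nadir viewing, as in the paper
trans = np.exp(-tau_edges / mu)                 # transmission to space at layer edges
dlogP = np.diff(np.log10(p_edges))

cf = planck(T_layers) * (trans[:-1] - trans[1:]) / dlogP
cf_norm = cf / cf.max()                         # normalised contribution function

print(p_edges[1:][np.argmax(cf_norm)])          # pressure of peak contribution (~ tau of unity)
```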
Validating the contribution function calculation {#section:app2}
================================================
To validate the implementation of the contribution function calculation we first derive the [*flux*]{} contribution function ($\mathcal{CF}_F$). Integrating \[equation:cfi\] over $\mu$ and $\phi$ [see @Thomas1999 Eq. 5.48] $$\begin{aligned}
\mathcal{CF}_F &= \int_0^{2\pi} d\phi \int_0^1 d\mu \mu \mathcal{CF}_I\left(\mu,\phi\right) \nonumber \\
\mathcal{CF}_F &= 2\pi B\left(\tau_{1,2}\right) \int_0^1d\mu\left[\mu{\rm e}^{-\tau_1/\mu} - \mu{\rm e}^{-\tau_2/\mu}\right]. \label{eq:cff_int}\end{aligned}$$
To evaluate the integral in \[eq:cff\_int\] we use the diffusivity approximation [see @Thomas1999 section 11.2.5] $$\int_0^1d\mu \mu{\rm e}^{-\tau/\mu} = \frac{{\rm e}^{-D\tau}}{D} \nonumber$$ where $D$ is the diffusivity factor. Finally, we now have $$\begin{aligned}
\mathcal{CF}_F &= \frac{2\pi B{\left(\tau_{1,2}\right)}}{D}\left[{\rm e}^{-D\tau_1}-{\rm e}^{-D\tau_2}\right] \nonumber \\
\mathcal{CF}_F &= \frac{2\pi B{\left(\tau_{1,2}\right)}}{D}d\left[{\rm e}^{-D\tau}\right]. \label{equation:cff}\end{aligned}$$
$\mathcal{CF}_F$ is the amount of flux that escapes to space from a particular layer of the atmosphere. For the special case of an isothermal atmosphere this will be equal to the divergence of the radiative flux ($\nabla F$) in the layer, as exchange with other layers of the atmosphere will be zero [see @Thomas1999 section 11.2.7]. We note that $\nabla F$ is the main output of the radiative scheme and is extensively tested. In the isothermal case $D\sim2$ [@Edw96].
We compare $\mathcal{CF}_F$ and $\nabla F$ for a 1000 K isothermal atmosphere (where we have used $D=2.0$) in \[figure:test\] and we find excellent agreement between the two quantities, validating our implementation of the contribution function calculation.
![The flux contribution function $\mathcal{CF}_F$ and the net flux divergence $\nabla F$ across the layer for a 1000 K isothermal atmosphere.[]{data-label="figure:test"}](fig47.pdf){width="50.00000%"}
Validating the chemical relaxation scheme {#section:app3}
=========================================
Chemical relaxation methods rely on an accurate estimation and parameterisation of the chemical timescale, based on results from a full chemical kinetics calculation [@TsaKL17]. To assess the accuracy of the chemical relaxation scheme that we use in this study [@CooS06], we compare it against a full chemical kinetics calculation using two different chemical networks: @Venot2012 and @TsaLG2017. We used the chemical equilibrium pressure–temperature profile and model parameters for HD 189733b from @DruTB16 and tested a range of eddy diffusion coefficients ($10^8<K_{zz}<10^{11}$ cm$^2$ s$^{-1}$) that are constant with pressure.
\[figure:1d\_test\] shows the comparison between the @CooS06 chemical relaxation with the @Venot2012 and @TsaLG2017 chemical kinetics schemes, as well as the @TsaKL17 relaxation scheme. For the smallest $K_{zz}$ value, the @CooS06 relaxation result matches very well with the @TsaLG2017 kinetics result, while for larger values the @CooS06 relaxation result is somewhere in between the @TsaLG2017 and @Venot2012 kinetics results.
We conclude that the @CooS06 chemical relaxation method has an acceptable level of accuracy, compared with a full chemical kinetics calculation, for the current application. Importantly, we note that the result obtained with the @CooS06 relaxation scheme typically lies somewhere in between the result obtained with the @TsaLG2017 and @Venot2012 kinetics schemes, depending on the value of $K_{zz}$. Therefore, potential inaccuracies of the present chemical relaxation method are within the expected uncertainties of currently available chemical networks, for HD 189733b conditions.
![The mole fraction of methane along a 1D profile with different values of the $K_{zz}$. We compare the steady-state abundances obtained from the @CooS06 chemical relaxation scheme (CS06 Relax) with the @TsaKL17 chemical relaxation scheme (T17 Relax) as well as chemical kinetics calculations using both the @Venot2012 and @TsaLG2017 chemical networks (V12 Kinetics and T17 Kinetics, respectively). The chemical equilibrium mole fraction is shown in dotted blue. The calculations are performed with the pressure-temperature profile and planetary parameters (e.g. surface gravity) of HD 189733b.[]{data-label="figure:1d_test"}](fig48.pdf){width="50.00000%"}
Sensitivity to the chemical timescale {#section:test}
=====================================
To test the sensitivity of the model to the value of the chemical timescale we performed additional simulations that are identical to the relaxation simulation but with an artificially increased/decreased chemical timescale by a factor 10. The vertical mole fraction profiles around the equator for these tests are shown in \[figure:vert\_profiles\_test\] and should be compared with the results using the nominal chemical timescale in \[figure:profiles\].
![Vertical profiles of the chemical equilibrium (dashed) and chemical relaxation (solid) mole fractions of methane, carbon monoxide and water for several longitude points around the equator, for simulations with the @CooS06 chemical timescale multiplied by a factor 10 (left panel) and the @CooS06 chemical timescale divided by a factor 10 (right panel). This figure should be compared with \[figure:profiles\] which shows the simulation with the nominal @CooS06 chemical timescale.[]{data-label="figure:vert_profiles_test"}](fig49.pdf "fig:"){width="48.00000%"} ![Vertical profiles of the chemical equilibrium (dashed) and chemical relaxation (solid) mole fractions of methane, carbon monoxide and water for several longitude points around the equator, for simulations with the @CooS06 chemical timescale multiplied by a factor 10 (left panel) and the @CooS06 chemical timescale divided by a factor 10 (right panel). This figure should be compared with \[figure:profiles\] which shows the simulation with the nominal @CooS06 chemical timescale.[]{data-label="figure:vert_profiles_test"}](fig50.pdf "fig:"){width="48.00000%"}\
We find that an increase (decrease) in the chemical timescale leads to an increase (decrease) in the pressure level of the “turnoff” point, where the methane abundance begins to increase with decreasing pressure. The location of the quench point depends on the ratio of the transport to chemical timescales. The chemical timescale typically increases with decreasing pressure due to the pressure and temperature dependence of the reaction rate, while the transport timescale depends on the local wind velocities and the relevant length scale [see @DruMM18 Fig. 3]. An increase in the chemical timescale will therefore shift the quench point to higher pressures, while decreasing the chemical timescale will shift the quench point to lower pressures.
While changing the chemical timescale clearly alters the pressure level at which meridional transport becomes important, the final vertically quenched methane abundance shows only a small variation between the three cases. The quenched mole fraction of methane using the nominal @CooS06 timescale is $\sim4.5\times10^{-5}$, while in the $\times10$ and $\div10$ cases this changes to $\sim6\times10^{-5}$ and $\sim3.5\times10^{-5}$, respectively. This test suggests that uncertainties or errors in the value of the chosen timescale do not lead to large differences in our results.
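The timescale-crossing picture described above can be illustrated schematically. In the sketch below both timescale profiles are hypothetical power laws (not taken from the simulations), chosen only to show how scaling $\tau_{\rm chem}$ moves the crossing point.

```python
import numpy as np

# Schematic estimate of the quench level as the pressure where the chemical
# timescale overtakes the vertical mixing timescale. Both profiles are
# hypothetical power laws used only to illustrate the crossing behaviour.
p = np.logspace(7, 1, 200)                 # Pa, from the deep atmosphere upward
tau_chem = 1e4 * (1e5 / p)**1.5            # s, grows rapidly with altitude
tau_mix = 1e5 * (1e5 / p)**0.3             # s, weak pressure dependence

quench_index = np.argmax(tau_chem > tau_mix)   # first level where chemistry is slower
print(f"quench pressure ~ {p[quench_index]:.1e} Pa")

# Multiplying tau_chem by 10 moves the crossing to higher pressure,
# dividing by 10 moves it to lower pressure, as in the sensitivity test.
for factor in (10.0, 0.1):
    idx = np.argmax(factor * tau_chem > tau_mix)
    print(f"factor {factor}: quench pressure ~ {p[idx]:.1e} Pa")
```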
Conservation of global mass of the elements {#section:cons}
===========================================
As a basic test of the advection scheme we check the conservation of the global mass of carbon and oxygen. We note that this test does not necessarily validate the accuracy of the advection scheme but does provide a basic check that the advection scheme is not artificially gaining or losing mass.
The total mass of the atmosphere $M$ can be written as, $$M = \sum_{k} \rho_{k} V_{\rm k},$$ where $\rho_k$ and $V_k$ are the mass density and volume of the cell $k$, respectively, and the sum is over the total number of cells in the model grid. Similarly, the total mass of an element $i$ can be written as, $$M_{i} = \sum_k \sum_j \alpha_{i,j}w_{j,k}\rho_kV_k =\frac{1}{\mu} \sum_k \sum_j \alpha_{i,j}\mu_jf_{j,k}\rho_kV_k$$ where $w_{j,k}$ and $f_{j,k}$ are the mass fraction and mole fraction of the species $j$, respectively, $\alpha_{i,j}$ is the fractional mass of element $i$ in species $j$, $\mu_{j}$ is the molar mass of the species $j$ and $\mu$ is the mean molar mass. Specifically for the simulations presented in this paper, which include methane, water and carbon monoxide, we can write the global mass of carbon as, $$\begin{aligned}
M_{\rm C} = & \frac{1}{\mu}\sum_k\rho_kV_k\left[\alpha_{\rm C, CO}\mu_{\rm CO}f_{{\rm CO},k} + \alpha_{\rm C, CH_4}\mu_{\rm CH_4}f_{{\rm CH_4},k} \right], \nonumber \\
M_{\rm C} = & \frac{1}{\mu}\sum_k\rho_kV_k\left[\alpha_{\rm C, CO}\mu_{\rm CO}f_{{\rm CO},k} + \alpha_{\rm C, CH_4}\mu_{\rm CH_4}\left(A_{\rm C} - f_{{\rm CO},k}\right) \right],\end{aligned}$$ where we have replaced $f_{{\rm CH_4},k}$ with $f_{{\rm CO},k}$ using the mass balance approach and $A_{\rm C}$ is a constant [@CooS06; @DruMM18]. Similarly for oxygen we have, $$\begin{aligned}
M_{\rm O} = & \frac{1}{\mu}\sum_k\rho_kV_k\left[\alpha_{\rm O, CO}\mu_{\rm CO}f_{{\rm CO},k} + \alpha_{\rm O, H_2O}\mu_{\rm H_2O}f_{{\rm H_2O},k} \right], \nonumber \\
M_{\rm O} = & \frac{1}{\mu}\sum_k\rho_kV_k\left[\alpha_{\rm O, CO}\mu_{\rm CO}f_{{\rm CO},k} + \alpha_{\rm O, H_2O}\mu_{\rm H_2O}\left(A_{\rm O} - f_{{\rm CO},k}\right) \right].\end{aligned}$$ As before, we have replaced $f_{{\rm H_2O},k}$ with $f_{{\rm CO},k}$ by assuming mass balance and $A_{\rm O}$ is a constant. We note that it is the mole fraction of carbon monoxide ($f_{\rm CO}$) that is advected as a tracer in these simulations.
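For illustration, the bookkeeping in the equations above can be written out directly. The sketch below uses made-up densities, volumes and tracer values (not the model fields), with $A_{\rm C}$ and $A_{\rm O}$ treated as given constants.

```python
import numpy as np

# Sketch of the global carbon/oxygen mass sums described above, on a toy grid.
mu_CO, mu_CH4, mu_H2O = 28.01, 16.04, 18.02     # molar masses, g mol^-1
mu_mean = 2.33                                   # H2/He-dominated mean molar mass

alpha_C_CO, alpha_C_CH4 = 12.01 / mu_CO, 12.01 / mu_CH4   # mass fraction of C in each species
alpha_O_CO, alpha_O_H2O = 16.00 / mu_CO, 16.00 / mu_H2O   # mass fraction of O in each species

rho = np.array([1e-2, 1e-3, 1e-4])               # kg m^-3 per cell (toy values)
vol = np.array([1e20, 1e20, 1e20])               # m^3 per cell (toy values)
f_CO = np.array([4.0e-4, 3.5e-4, 3.0e-4])        # advected tracer mole fraction (toy values)

A_C, A_O = 4.5e-4, 8.5e-4                        # elemental constants from the mass balance
f_CH4 = A_C - f_CO                               # mass-balance closure for methane
f_H2O = A_O - f_CO                               # mass-balance closure for water

M_C = np.sum(rho * vol / mu_mean *
             (alpha_C_CO * mu_CO * f_CO + alpha_C_CH4 * mu_CH4 * f_CH4))
M_O = np.sum(rho * vol / mu_mean *
             (alpha_O_CO * mu_CO * f_CO + alpha_O_H2O * mu_H2O * f_H2O))

print(M_C, M_O)   # evaluated at intervals, these sums should remain constant
```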
\[figure:conservation\] shows the conservation of the global mass of carbon and oxygen as a function of simulation time. The global mass of carbon and oxygen is conserved to significantly better than $99.9\%$ over 1000 days. As a point of comparison, \[figure:conservation\] also shows the change in the global mass of carbon monoxide, which increases by approximately 25%.
The fractional change in the mass of carbon monoxide is much larger than the errors in the conservation of the global mass of elemental carbon and oxygen. This suggests that the change in the mass is due to the physics schemes included in the model and not due to numerical gain or loss of mass from the advection scheme.
![Left panel: the percentage of the initial global mass of carbon and oxygen throughout the simulation. Right panel: the global mass of carbon monoxide throughout the simulation.[]{data-label="figure:conservation"}](fig53.pdf "fig:"){width="48.00000%"} ![Left panel: the percentage of the initial global mass of carbon and oxygen throughout the simulation. Right panel: the global mass of carbon monoxide throughout the simulation.[]{data-label="figure:conservation"}](fig54.pdf "fig:"){width="48.00000%"}
, M., [Parmentier]{}, V., [Venot]{}, O., [Hersant]{}, F., & [Selsis]{}, F. 2014, , 564, A73
, D. S., [Baraffe]{}, I., [Tremblin]{}, P., [et al.]{} 2014, , 564, A59
, D. S., [Tremblin]{}, P., [Manners]{}, J., [Baraffe]{}, I., & [Mayne]{}, N. J. 2017, , 598, A97
, D. S., [Mayne]{}, N. J., [Baraffe]{}, I., [et al.]{} 2016, , 595, A36
, M., [Grevesse]{}, N., [Sauval]{}, A. J., & [Scott]{}, P. 2009, , 47, 481
, T. S., [Hauschildt]{}, P. H., & [Allard]{}, F. 2005, , 632, 1132
, J. K., [Aigrain]{}, S., [Irwin]{}, P. G. J., & [Sing]{}, D. K. 2017, , 834, 50
, I. A., [Mayne]{}, N. J., [Drummond]{}, B., [et al.]{} 2017, , 601, A120
, J. W., & [Hunten]{}, D. M. 1987, [Theory of planetary atmospheres. An introduction to their physics andchemistry.]{}
, C. S., & [Showman]{}, A. P. 2006, , 649, 1048
, L., [Cowan]{}, N. B., [Schwartz]{}, J. C., [et al.]{} 2018, Nature Astronomy, 2, 220
, I., & [Agol]{}, E. 2013, , 435, 3159
, I., & [Cowan]{}, N. B. 2017, , 851, L26
, I., & [Lin]{}, D. N. C. 2008, , 673, 513
, B., [Mayne]{}, N. J., [Baraffe]{}, I., [et al.]{} 2018, , 612, A105
, B., [Tremblin]{}, P., [Baraffe]{}, I., [et al.]{} 2016, , 594, A69
, B., [Mayne]{}, N. J., [Manners]{}, J., [et al.]{} 2018, , 855, L31
Edwards, J. M. 1996, Journal of the Atmospheric Sciences, 53, 1921
Edwards, J. M., & Slingo, A. 1996, Quarterly Journal of the Royal Meteorological Society, 122, 689
, J. J., [Marley]{}, M. S., [Lodders]{}, K., [Saumon]{}, D., & [Freedman]{}, R. 2005, , 627, L69
, J. M., [Mayne]{}, N., [Sing]{}, D. K., [et al.]{} 2018, , 474, 5158
, C. A., [Yelle]{}, R. V., & [Marley]{}, M. S. 1998, Science, 282, 2063
, C., [Woitke]{}, P., & [Thi]{}, W.-F. 2008, , 485, 547
, K., [Menou]{}, K., & [Phillipps]{}, P. J. 2011, , 413, 2380
, N., [B[é]{}zard]{}, B., & [Guillot]{}, T. 2005, , 436, 719
, P. G. J., [Teanby]{}, N. A., [de Kok]{}, R., [et al.]{} 2008, , 109, 1136
, T., [Sing]{}, D. K., [Lewis]{}, N. K., [et al.]{} 2016, , 821, 9
, H. A., [Charbonneau]{}, D., [Cowan]{}, N. B., [et al.]{} 2009, , 690, 822
, H. A., [Lewis]{}, N., [Fortney]{}, J. J., [et al.]{} 2012, , 754, 22
, P., & [Koskinen]{}, T. 2017, , 847, 32
, G., [Dobbs-Dixon]{}, I., [Helling]{}, C., [Bognar]{}, K., & [Woitke]{}, P. 2016, , 594, A48
, G. K. H., [Wood]{}, K., [Dobbs-Dixon]{}, I., [Rice]{}, A., & [Helling]{}, C. 2017, , 601, A22
, N. T., [Lambert]{}, F. H., [Boutle]{}, I. A., [et al.]{} 2018, , 854, 171
, M. R., [Liang]{}, M. C., & [Yung]{}, Y. L. 2010, , 717, 496
, S., [Manners]{}, J., [Mayne]{}, N. J., [et al.]{} 2018, , 481, 194
, S., [Mayne]{}, N. J., [Boutle]{}, I. A., [et al.]{} 2018, , 615, A97
, T., & [Wheatley]{}, P. J. 2015, , 814, L24
, N., [Burrows]{}, A., & [Currie]{}, T. 2011, , 737, 34
, N., & [Seager]{}, S. 2009, , 707, 24
, N. J., [Baraffe]{}, I., [Acreman]{}, D. M., [et al.]{} 2014, Geoscientific Model Development, 7, 3059
, N. J., [Baraffe]{}, I., [Acreman]{}, D. M., [et al.]{} 2014, , 561, A1
, N. J., [Debras]{}, F., [Baraffe]{}, I., [et al.]{} 2017, , 604, A79
, K., & [Rauscher]{}, E. 2009, , 700, 887
, J. I., [Visscher]{}, C., [Fortney]{}, J. J., [et al.]{} 2011, , 737, 15
, N., [Sing]{}, D. K., [Fortney]{}, J. J., [et al.]{} 2018, , 557, 526
, V., [Fortney]{}, J. J., [Showman]{}, A. P., [Morley]{}, C., & [Marley]{}, M. S. 2016, , 828, 22
, F., [Knutson]{}, H., [Gilliland]{}, R. L., [Moutou]{}, C., & [Charbonneau]{}, D. 2008, , 385, 109
, F., [Sing]{}, D. K., [Gibson]{}, N. P., [et al.]{} 2013, , 432, 2917
, E., & [Menou]{}, K. 2013, , 764, 103
, M., & [Rauscher]{}, E. 2017, , 850, 17
, A. P., [Fortney]{}, J. J., [Lian]{}, Y., [et al.]{} 2009, , 699, 564
, A. P., & [Guillot]{}, T. 2002, , 385, 166
, D. K., [Fortney]{}, J. J., [Nikolov]{}, N., [et al.]{} 2016, , 529, 59
, I. A. G., [de Kok]{}, R. J., [de Mooij]{}, E. J. W., & [Albrecht]{}, S. 2010, , 465, 1049
, J. 2010, , 408, 1689
Thomas, G. E., & Stammes, K. 1999, [Radiative Transfer in the Atmosphere and Ocean]{} (Cambridge University Press)
, P., [Chabrier]{}, G., [Mayne]{}, N. J., [et al.]{} 2017, , 841, 30
, S.-M., [Kitzmann]{}, D., [Lyons]{}, J. R., [et al.]{} 2017, ArXiv e-prints, arXiv:1711.08492
, S.-M., [Lyons]{}, J. R., [Grosheintz]{}, L., [et al.]{} 2017, , 228, 20
, O., [H[é]{}brard]{}, E., [Ag[ú]{}ndez]{}, M., [et al.]{} 2012, , 546, A43
, I. P., [Tinetti]{}, G., [Rocchetto]{}, M., [et al.]{} 2015, , 802, 107
Wood, N., Staniforth, A., White, A., [et al.]{} 2014, Quarterly Journal of the Royal Meteorological Society, 140, 1505
, K. J., & [Marley]{}, M. S. 2014, , 797, 41
, R. T., [Lewis]{}, N. K., [Knutson]{}, H. A., [et al.]{} 2014, , 790, 53
[^1]: <https://code.metoffice.gov.uk/trac/socrates>
[^2]: <http://kurucz.harvard.edu/stars.html>
---
abstract: 'Conversational agents have become ubiquitous, ranging from goal-oriented systems for helping with reservations to chit-chat models found in modern virtual assistants. In this survey paper, we explore this fascinating field. We look at some of the pioneering work that defined the field and gradually move to the current state-of-the-art models. We look at statistical, neural, generative adversarial network based and reinforcement learning based approaches and how they evolved. Along the way we discuss various challenges that the field faces, lack of context in utterances, not having a good quantitative metric to compare models, lack of trust in agents because they do not have a consistent persona etc. We structure this paper in a way that answers these pertinent questions and discusses competing approaches to solve them.'
author:
- '**Vinayak Mathur**'
- '**Arpit Singh**'
bibliography:
- 'egpaper\_final.bib'
title: The Rapidly Changing Landscape of Conversational Agents
---
Introduction
============
One of the earliest goals of Artificial Intelligence (AI) has been to build machines that can converse with us. Whether in early AI literature or in current popular culture, conversational agents have captured our imagination like no other technology has. In fact, the ultimate test of whether true artificial intelligence has been achieved, the Turing test [@turing] proposed by Alan Turing, the father of artificial intelligence, in 1950, revolves around the concept of a good conversational agent. The test is deemed to have been passed if a conversational agent is able to fool human judges into believing that it is in fact a human being.\
\
From pattern matching programs like ELIZA, developed at MIT in 1964, to the current commercial conversational agents and personal assistants (Siri, Allo, Alexa, Cortana et al) that all of us carry in our pockets, conversational agents have come a long way. In this paper we look at this incredible journey. We start by looking at early rule-based methods which consisted of hand engineered features, most of which were domain specific. However, in our view, the advent of neural networks that were capable of capturing long term dependencies in text and the creation of the sequence to sequence learning model [@seq] that was capable of handling utterances of varying length is what truly revolutionized the field. Since the sequence to sequence model was first used to build a neural conversational agent [@vinyals2015neural] in 2015, the field has exploded. With a multitude of new approaches being proposed in the last two years which significantly impact the quality of these conversational agents, we skew our paper towards the post 2016 era. Indeed one of the key features of this paper is that it surveys the exciting new developments in the domain of conversational agents.\
\
Dialogue systems, also known as interactive conversational agents, virtual agents and sometimes chatterbots, are used in a wide set of applications ranging from technical support services to language learning tools and entertainment. Dialogue systems can be divided into goal-driven systems, such as technical support services, booking systems, and querying systems, and non-goal-driven systems, which are also referred to as chit-chat models. There is no explicit purpose for interacting with these agents other than entertainment. Compared to goal-oriented dialog systems, where the universe is limited to an application, building open-ended chit-chat models is more challenging. Non-goal oriented agents are a good indication of the state of the art of artificial intelligence according to the Turing test. With no grounding in common sense and no sense of context, these agents currently have to fall back on canned responses or resort to internet searches. But as we discuss in section \[kb\], new techniques are emerging to provide this much needed context to these agents.\
\
The recent successes in the domain of Reinforcement Learning (RL) have also opened new avenues of applications in the conversational agent setting. We explore some of these approaches in section \[rl\].\
\
Another feature that has traditionally been lacking in conversation agents is a personality. Vinyals et al [@vinyals2015neural] hypothesize that not having a consistent personality is one of the main reasons stopping us from passing the Turing test. Conversational agents also lack emotional consistency in their responses. These features are vital if we want humans to trust conversational agents. In section \[human\] we discuss state of the art approaches to overcome these problems.\
\
Despite such huge advancements in the field, the way these models are evaluated is something that needs to be dramatically altered. Currently there exists no perfect quantitative method to compare two conversational agents. The field has to rely on qualitative measures or measures like BLEU and perplexity borrowed from machine translation. In section \[eval\] we discuss this problem in detail.
Early Techniques
================
Initially, interactive dialogue systems were based on, and limited to, speaker-independent recognition of isolated words and phrases or limited continuous speech such as digit strings. In August 1993 came the ESPRIT SUNDIAL project (Peckham et al, 1993 [@sundial]), which was aimed at allowing spontaneous conversational inquiries over the telephone for train timetable and flight enquiries. The linguistic processing component in it was based on natural language parsing. The parser made use of alternative word hypotheses represented in a lattice or graph in constructing a parse tree, and allowance was made for gaps and partially parsable strings. It made use of both syntactic and semantic knowledge for the task domain. It was able to achieve a 96% success rate for the flight inquiry application in English. However, the issue was that the conversational agent was heavily limited in the types of applications it could perform, and its high success rate was due more to this narrow scope than to great natural language techniques (relative to recent times).\
\
In 1995, two researchers (Ball et al, 1995 [@persona]) at Microsoft developed a conversational assistant called Persona, which was one of the first true personal assistants, similar to what we have in recent times (like Siri, etc.). It allowed users the maximum flexibility to express their requests in whatever syntax they found most natural, and the interface was based on a broad-coverage NLP system, unlike the system discussed in the previous paragraph. In this system, a labelled semantic graph is generated from the speech input which encodes case frames or thematic roles. After this, a sequence of graph transformations is applied to it using knowledge of the interaction scenario and application domain. This results in a normalized application-specific structure called a task graph, which is then matched against the templates in the application that represent the normalized task graphs corresponding to all the possible user statements the assistant understands, and the corresponding action is then executed. The accuracy was not very good, and the authors did not quantify it. Also, due to the integrated nature of conversational interaction in Persona, the necessary knowledge must be provided to each component of the system. Although it had limitations, it provided a very usable linguistic foundation for conversational interaction.\
\
The researchers reasoned that if they could create assistant models specific to particular applications, they could achieve better accuracy for those applications, instead of creating a common unified personal assistant, which at that time performed quite poorly. There was a surge in application-specific assistants like the in-car intelligent personal assistant (Schillo et al, 1996 [@incar]), a spoken-language interface to execute military exercises (Stent et al, 1999 [@commandtalk]), etc. Since it was difficult to develop systems with high domain extensibility, the researchers came up with a distributed architecture for cooperative spoken dialogue agents (Lin et al, 1999 [@distarch]).\
\
Under this architecture, different spoken dialogue agents handling different domains can be developed independently and cooperate with one another to respond to the user’s requests. While a user interface agent can access the correct spoken dialogue agent through a domain switching protocol, and carry over the dialogue state and history so as to keep the knowledge processed persistently and consistently across different domains. Figure \[fig:mesh1\] shows the agent society for spoken dialogue for tour information service.\
\
![Agent society for spoken dialogue for tour information service [@distarch][]{data-label="fig:mesh1"}](capture1.JPG)
If we define the false alarm rate by counting the utterances in which unnecessary domain-switching occurred, and the detection rate by counting the utterances in which the desired domain-switching was accurately detected, then this model achieved a high detection rate at a very low false alarm rate. For instance, at a false alarm rate of around 0.2, the model was able to achieve a detection rate of around 0.9 for the case of tag sequence search with the language model search scheme.
Machine Learning Methods
========================
Next came the era of using machine learning methods in the area of conversation agents which totally revolutionized this field.\
\
Maxine Eskenazi and her team initially wanted to build a spoken dialog system for particular sections of the population, such as the elderly and non-native speakers of English. They came up with the Let’s Go project (Raux et al, 2003 [@alan1]), which was designed to provide Pittsburgh area bus information. Later, this was opened to the general public (Raux et al, 2005 [@alan2]). Their work is important in terms of the techniques they used.\
\
The speech recognition was done using an n-gram statistical model, whose output was then passed to a robust parser based on an extended Context Free Grammar, allowing the system to skip unknown words and perform partial parsing. They wrote the grammar based on a combination of their own intuition and a small scale Wizard-of-Oz experiment they ran. The grammar rules used to identify bus stops were generated automatically from the schedule database. After this, they trained a statistical language model on the artificial corpus. In order to make the parsing grammar robust enough to parse fairly ungrammatical, yet understandable sentences, it was kept as general as possible. On making it public, they initially achieved a task success rate of 43.3% for the whole corpus and 43.6% when excluding sessions that did not contain any system-directed speech.\
\
After this they tried to increase the performance of the system (Raux et al, 2006 [@alan3]). They retrained their acoustic models by performing Baum-Welch optimization on the transcribed data (starting from their original models). Unfortunately, this only brought marginal improvement because the models (semi-continuous HMMs) and algorithms they were using were too simplistic for this task. They improved the turn-taking management abilities of the system by closely analysing the feedback they received. They added more specific strategies, aiming at dealing with problems like noisy environments, too loud or too long utterances, etc. They found that they were able to get a success rate of 79% for the complete dialogues (which was great).\
\
The previous papers (like the ones we discussed in the above paragraphs) did not attempt to use data-driven techniques for dialog agents because such data was not available in large amounts at that time. Subsequently, however, there was a large increase in the collection of spoken dialog corpora, which made it possible to use data-driven techniques to build and use models of task-oriented dialogs and possibly get good results. In the paper by Srinivas et al, 2008 [@mlbangalore], the authors proposed using data-driven techniques to build task structures for individual dialogs and to use the dialog task structures for dialog act classification, task/subtask classification, task/subtask prediction and dialog act prediction.\
\
For each utterance, they calculated features like n-grams of the words and their POS tags, the dialog act and the task/subtask label. They then fed those features into a binary MaxEnt classifier. With this, their model was able to achieve an error rate of 25.1% for dialog act classification, which was better than the best performing models at that time. Although the results are not that great by modern standards, the approach they suggested (of using data to build machine learning models) forms the basis of the techniques that are currently used in this area.
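As a rough illustration of this style of feature-based classification (not the authors' exact feature set or toolkit), a MaxEnt classifier over word n-grams can be sketched with scikit-learn as follows; the utterances and dialog-act labels are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy utterances and dialog-act labels (invented; real systems are trained on
# annotated task-oriented dialog corpora such as those described above).
utterances = [
    "when does the next bus leave",
    "i want to book a flight to boston",
    "yes that works for me",
    "no cancel that request",
]
dialog_acts = ["question", "request", "confirm", "reject"]

# Word n-gram count features feeding a maximum-entropy (logistic regression) model.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),      # unigram and bigram counts
    LogisticRegression(max_iter=1000),
)
model.fit(utterances, dialog_acts)

print(model.predict(["can you book a bus to boston"]))
```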
Neural Models
=============
Sequence to Sequence approaches for dialogue modelling
------------------------------------------------------
The problem with rule-based models was that they were often domain dependent and could not be easily ported to a new domain. They also depended on hand-crafted rules, which are both expensive to create and require domain expertise: two factors which, when combined, spell doom for scalability. All of this changed in 2015 when Vinyals et al proposed an approach [@vinyals2015neural] inspired by the recent progress in machine translation [@seq]. Vinyals et al used the sequence to sequence learning architecture for conversation agents. Their model was the first which could be trained end-to-end, and could generate a new output utterance based on just the input sentence and no other hand-crafted features.\
\
They achieved this by casting the conversation modelling task as one of predicting the next sequence given the previous sequence using recurrent networks. This simple approach truly changed the conversation agent landscape, and most of the state of the art today is built on their success. In a nutshell, the input utterance is fed to an encoder network, which is a recurrent neural network (RNN) in this case, although as we will see Long Short Term Memory (LSTM) networks [@lstm] have since replaced plain RNNs as the standard for this task. The encoder summarizes the input utterance into a fixed-length vector representation which is input to the decoder, which is itself again an RNN. The paper views this fixed vector as the *thought vector*, which holds the most important information of the input utterance. The decoder network takes this as input and outputs an utterance word-by-word until it generates an end-of-speech $<eos>$ token. This approach allows for variable length inputs and outputs. The network is jointly trained on two-turn conversations. Figure \[fig:seq\] shows the sequence to sequence neural conversation model.\
\
Even though most of the modern work in the field is built on this approach, there is a significant drawback to this idea. This model can theoretically never *solve* the problem of modelling dialogues due to various simplifications, the most important of them being that the objective function being optimized does not capture the actual objective achieved through human communication, which is typically longer term and based on exchange of information rather than next step prediction. It is important to see that optimizing an agent to generate text based on what it sees in the two-turn conversation dataset that it is trained on does not mean that the agent would be able to generalize to human level conversation across contexts. Nevertheless, in the absence of a better way to capture human communication, this approach laid the foundation of most of the modern advances in the field. Another problem that plagues this paper and the field in general is evaluation. As there can be multiple correct output utterances for a given input utterance, there is no quantitative way to evaluate how well a model is performing. In this paper, to show the efficacy of their model, the authors publish snippets of conversations across different datasets. We discuss this general problem in evaluation later.\
![sequence to sequence framework for modelling conversation [@vinyals2015neural][]{data-label="fig:seq"}](seq.png){width="12cm" height="4cm"}
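A minimal sketch of this encoder-decoder idea is shown below (PyTorch-style, with illustrative sizes); it is not the authors' exact architecture or training setup, but it shows how the encoder's final state conditions the decoder and how the model is trained end-to-end on (input, response) pairs.

```python
import torch
import torch.nn as nn

# Minimal sketch of the encoder-decoder idea: the encoder summarises the input
# utterance into a fixed-length "thought vector", which conditions a decoder
# that emits the response token by token. Sizes are illustrative.
class Seq2Seq(nn.Module):
    def __init__(self, vocab_size=1000, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src_tokens, tgt_tokens):
        _, thought = self.encoder(self.embed(src_tokens))    # (h, c) summary of the input
        dec_hidden, _ = self.decoder(self.embed(tgt_tokens), thought)
        return self.out(dec_hidden)                          # token logits per target step

model = Seq2Seq()
src = torch.randint(0, 1000, (2, 7))     # batch of input utterances (token ids)
tgt = torch.randint(0, 1000, (2, 5))     # response tokens (teacher-forced; shift by one in practice)
logits = model(src, tgt)

loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), tgt.reshape(-1))
loss.backward()                          # trained end-to-end on (input, response) pairs
```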
Iulian et al. build on this sequence-to-sequence based approach in their paper presented in AAAI 2016 [@serban2016building]. Their work is inspired by the hierarchical recurrent encoder-decoder architecture (HRED) proposed by Sordoni et al. [@sordoni2015hierarchical]. Their premise is that a dialogue can be seen as a sequence of utterances which, in turn, are sequences of tokens. Taking advantage of this built in hierarchy they model their system in the following fashion.\
\
The encoder RNN maps each utterance to an utterance vector. The utterance vector is the hidden state obtained after the last token of the utterance has been processed. The higher-level context RNN keeps track of past utterances by processing iteratively each utterance vector. After processing utterance $U_m$, the hidden state of the context RNN represents a summary of the dialogue up to and including turn $m$, which is used to predict the next utterance $U_{m+1}$. The next utterance prediction is performed by means of a decoder RNN, which takes the hidden state of the context RNN and produces a probability distribution over the tokens in the next utterance. As seen in figure \[fig:hred\]
![Hierarchical approach to dialogue modelling. A context RNN summarizes the utterances until that point from the encoder. The decoder produces output utterances based on the hidden state of the context RNN instead of the encoder RNN [@serban2016building][]{data-label="fig:hred"}](hred.png){width="14cm" height="7cm"}
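The hierarchy can be sketched as follows; again this is an illustrative simplification (single-layer GRUs, dummy dimensions) rather than the configuration used by the authors.

```python
import torch
import torch.nn as nn

# Sketch of the hierarchical (HRED-style) idea: an utterance encoder maps each
# utterance to a vector, a context RNN tracks the dialogue across turns, and a
# decoder is conditioned on the context state. Dimensions are illustrative.
class HRED(nn.Module):
    def __init__(self, vocab=1000, emb=64, utt_h=128, ctx_h=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.utt_enc = nn.GRU(emb, utt_h, batch_first=True)
        self.ctx_rnn = nn.GRU(utt_h, ctx_h, batch_first=True)
        self.decoder = nn.GRU(emb, ctx_h, batch_first=True)
        self.out = nn.Linear(ctx_h, vocab)

    def forward(self, turns, target):
        # turns: (batch, n_turns, turn_len); target: (batch, tgt_len)
        b, n, t = turns.shape
        _, h = self.utt_enc(self.embed(turns.view(b * n, t)))   # one vector per utterance
        utt_vecs = h.squeeze(0).view(b, n, -1)
        _, ctx = self.ctx_rnn(utt_vecs)                          # summary of the dialogue so far
        dec_out, _ = self.decoder(self.embed(target), ctx)
        return self.out(dec_out)

model = HRED()
turns = torch.randint(0, 1000, (2, 3, 6))     # two dialogues, three turns each
target = torch.randint(0, 1000, (2, 5))       # next utterance (teacher-forced)
print(model(turns, target).shape)             # (2, 5, 1000) token logits
```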
The advantages of using a hierarchical representation are two-fold. First, the context RNN allows the model to represent a form of common ground between speakers, e.g. to represent topics and concepts shared between the speakers using a distributed vector representation. Second, the number of computational steps between utterances is reduced. This makes the objective function more stable w.r.t. the model parameters, and helps propagate the training signal for first-order optimization methods.\
\
Models like sequence-to-sequence and the hierarchical approaches have proven to be good baseline models. In the last couple of years there has been a major effort to build on top of these baselines to make conversational agents more robust [@r1] [@r2].\
Due to their large parameter space, the estimation of neural conversation models requires considerable amounts of dialogue data. Large online corpora are helpful for this. However, several dialogue corpora, most notably those extracted from subtitles, do not include any explicit turn segmentation or speaker identification. The neural conversation model may therefore inadvertently learn responses that remain within the same dialogue turn instead of starting a new turn. Lison et al [@lison2017not] overcome these limitations by introducing a weighting model into the neural architecture. The weighting model, which is itself estimated from dialogue data, associates each training example with a numerical weight that reflects its intrinsic quality for dialogue modelling. At training time, these sample weights are included in the empirical loss to be minimized. The purpose of this model is to associate each ⟨context, response⟩ example pair with a numerical weight that reflects the intrinsic “quality” of each example. The instance weights are then included in the empirical loss to minimize when learning the parameters of the neural conversation model. The weights are themselves computed via a neural model learned from dialogue data. Approaches like [@lison2017not] are helpful, but data to train these neural conversational agents remains scarce, especially in academia; we talk more about the scarcity of data in a later section.
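A minimal sketch of the instance-weighting idea is shown below, with dummy logits, targets and weights; the actual weighting model that produces the weights is omitted.

```python
import torch
import torch.nn as nn

# Sketch of instance weighting: each <context, response> pair carries a weight
# reflecting its estimated quality, and the per-example cross-entropy terms are
# scaled by that weight before averaging. All tensors here are dummies.
batch, steps, vocab = 4, 6, 1000
logits = torch.randn(batch, steps, vocab, requires_grad=True)   # decoder outputs
targets = torch.randint(0, vocab, (batch, steps))               # reference responses
weights = torch.tensor([1.0, 0.2, 0.8, 0.5])                    # per-example quality weights

token_loss = nn.CrossEntropyLoss(reduction="none")(
    logits.reshape(-1, vocab), targets.reshape(-1)
).view(batch, steps)

example_loss = token_loss.mean(dim=1)              # average over tokens per example
weighted_loss = (weights * example_loss).sum() / weights.sum()
weighted_loss.backward()
print(weighted_loss.item())
```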
Language Model based approaches for dialogue modelling
------------------------------------------------------
Though sequence-to-sequence based models have achieved a lot of success, another push in the field has been to instead train a language model over the entire dialogue as one single sequence [@luan2016lstm]. These works argue that a language model is better suited to dialogue modeling, as it learns how the conversation evolves as information progresses.\
\
Mei et al. [@mei2017coherent] improve the coherence of such neural dialogue language models by developing a generative dynamic attention mechanism that allows each generated word to choose which related words it wants to align to in the increasing conversation history (including the previous words in the response being generated). They introduce a dynamic attention mechanism to a RNN language model in which the scope of attention increases as the recurrence operation progresses from the start through the end of the conversation. The dynamic attention model promotes coherence of the generated dialogue responses (continuations) by favoring the generation of words that have syntactic or semantic associations with salient words in the conversation history.
Knowledge augmented models {#kb}
==========================
Although these neural models are really powerful, so much so that they power most of the commercially available smart assistants and conversational agents. However these agents lack a sense of context and a grounding in common sense that their human interlocutors possess. This is especially evident when interacting with a commercial conversation agent, when more often that not the agent has to fall back to canned responses or resort to displaying Internet search results in response to an input utterance. One of the main goals of the research community, over the last year or so, has been to overcome this fundamental problem with conversation agents. A lot of different approaches have been proposed ranging from using knowledge graphs [@kg] to augment the agent’s knowledge to using latest advancements in the field of online learning [@cont]. In this section we discuss some of these approaches.\
The first approach we discuss is the Dynamic Knowledge Graph Network (DynoNet) proposed by He et al [@kg], in which the dialogue state is modeled as a knowledge graph with an embedding for each node. To model both structured and open-ended context they model two agents, each with a private list of items with attributes, that must communicate to identify the unique shared item. They structure entities as a knowledge graph; as the dialogue proceeds, new nodes are added and new context is propagated on the graph. An attention-based mechanism over the node embeddings drives generation of new utterances. The model is best explained by the example used in the paper which is as follows: The knowledge graph represents entities and relations in the agent’s private KB, e.g., *item-1’s* company is *google*. As the conversation unfolds, utterances are embedded and incorporated into node embeddings of mentioned entities. For instance, in Figure \[fig:dyno\], *“anyone went to columbia”* updates the embedding of *columbia*. Next, each node recursively passes its embedding to neighboring nodes so that related entities (e.g., those in the same row or column) also receive information from the most recent utterance. In this example, *jessica* and *josh* both receive new context when *columbia* is mentioned. Finally, the utterance generator, an LSTM, produces the next utterance by attending to the node embeddings.\
![Example demonstrating how DynoNet augments the conversation [@kg][]{data-label="fig:dyno"}](dyno.png){width="14cm" height="6cm"}
However, Lee et al in [@cont] take a different approach to adding knowledge to conversational agents: they propose using a continual learning based approach. They introduce a task-independent conversation model and an adaptive online algorithm for continual learning which together allow them to sequentially train a conversation model over multiple tasks without forgetting earlier tasks.\
\
In a different approach, Ghazvininejad et al [@ghazvininejad2017knowledge] propose a knowledge grounded approach which infuses the output utterance with factual information relevant to the conversational context. Their architecture is shown in figure \[fig:kg\]. They use an external collection of world facts which is a large collection of raw text entries (e.g., Foursquare, Wikipedia, or Amazon reviews) indexed by named entities as keys. Then, given a conversational history or source sequence S, they identify the “focus” in S, which is the text span (one or more entities) based on which they form a query to link to the facts. The query is then used to retrieve all contextually relevant facts. Finally, both conversation history and relevant facts are fed into a neural architecture that features distinct encoders for conversation history and facts. Another interesting facet of such a model is that new facts can be added and old facts updated by just updating the world facts dictionary without retraining the model from scratch, thus making the model more adaptive and robust.
![The neural architecture of the knowledge grounded model which uses a set of external world facts to augment the output utterance generated bt the model [@ghazvininejad2017knowledge][]{data-label="fig:kg"}](kg.png){width="10cm" height="5cm"}
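The retrieval side of such a model can be sketched very simply; the facts, entity keys and matching rule below are invented stand-ins for the Foursquare-style collections and the focus-detection step described in the paper.

```python
# Sketch of the retrieval side of a knowledge-grounded agent: world facts are
# indexed by named-entity keys, the "focus" entities of the conversation history
# are matched against those keys, and the retrieved facts are handed to the
# response generator alongside the dialogue context. The facts are invented.
world_facts = {
    "blue ginger": [
        "blue ginger serves asian fusion cuisine",
        "blue ginger is known for its butterfish",
    ],
    "columbia": ["columbia is a university in new york"],
}

def retrieve_facts(history, facts):
    """Return all fact strings whose entity key appears in the dialogue history."""
    lowered = history.lower()
    return [fact for entity, entries in facts.items()
            if entity in lowered for fact in entries]

history = "Going to Blue Ginger tonight, any recommendations?"
relevant = retrieve_facts(history, world_facts)
print(relevant)   # these strings would be encoded and fed to the decoder

# In the neural model, `history` and each retrieved fact are encoded by
# separate encoders, and the decoder attends over the fact encodings
# while generating the reply.
```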
Instead of just having a set of facts to augment the conversation, a richer way could be to use knowledge graphs or commonsense knowledge bases which consist of \[entity-relation-entity\] triples. Young et al explore this idea in [@young2017augmenting]. For a given input utterance, they find the relevant assertions in the common sense knowledge base using simple n-gram matching. They then perform chunking on the relevant assertions and feed the individual token to a tri-LSTM encoder. The output of this encoder is weighted along with the input utterance and the output utterance is generated. They claim that such common sense conversation agents outperform a naive conversation agent.\
\
Another interesting way to add knowledge to the conversation agents is to capture external knowledge for a given dialog using a search engine. In the paper by Long et al, 2017 [@longsearch], the authors built a model to generate natural and informative responses for customer service oriented dialog incorporating external knowledge.\
\
They get the external knowledge using a search engine. Then a knowledge-enhanced sequence-to-sequence framework is designed to model multi-turn dialogs conditioned on external knowledge. For this purpose, their model extends the simple sequence-to-sequence model by augmenting the input with a knowledge vector, so that the decoder can take the knowledge into account during response generation. Both the encoder and the decoder are composed of LSTMs.\
\
Their model scores an average human rating of 3.3919 out of 5, in comparison to the baseline’s 3.3638 out of 5; hence, their model generates more informative responses. However, they found that the external knowledge plays a negative role in response generation when there is more noise in the retrieved information. Exploring how to obtain credible knowledge for a given dialog history is a possible direction for future versions of their model.
Reinforcement Learning based models {#rl}
===================================
Beyond the neural methods discussed above, researchers have, in the current decade, also begun exploring how to use reinforcement learning methods in dialogue and personal agents.
Initial reinforcement methods
-----------------------------
One of the first main papers to consider using reinforcement learning for this task came in 2005, by English et al [@rlintro]. They used an on-policy Monte Carlo method, and the objective function they used was a linear combination of the solution quality ($S$) and the dialog length ($L$), taking the form: $o(S,L) = w_1S - w_2L$.\
\
At the end of each dialog the interaction was given a score based on the evaluation function, and that score was used to update the dialog policy of both agents (that is, the conversants). The state-action history for each agent was iterated over separately, and the score from the recent dialog was averaged in with the expected return from the existing policy. They chose not to apply any discounting factor to the dialog score as they progressed back through the dialog history. The decision to weight each state-action pair in the dialog history equally was made because an action’s contribution to the dialog score is not dependent upon its proximity to the end of the task. To address the difficulty of converging to an effective policy, they divided the agent training process into multiple epochs.\
\
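A schematic version of this update, with an invented episode and weights, might look as follows; the real system of course operates over learned state representations and a full dialogue simulator.

```python
from collections import defaultdict

# Sketch of the on-policy Monte Carlo idea described above: a finished dialogue
# is scored once, and that score is averaged into the value estimate of every
# state-action pair visited, with no discounting. Weights and states are invented.
w1, w2 = 1.0, 0.1

def dialogue_score(solution_quality, dialogue_length):
    return w1 * solution_quality - w2 * dialogue_length

q_value = defaultdict(float)
visits = defaultdict(int)

def update_policy(state_action_history, solution_quality):
    score = dialogue_score(solution_quality, len(state_action_history))
    for state, action in state_action_history:
        visits[(state, action)] += 1
        n = visits[(state, action)]
        # running average of returns, each visited pair weighted equally
        q_value[(state, action)] += (score - q_value[(state, action)]) / n

# One toy episode: (state, action) pairs followed by the task outcome
episode = [("greet", "ask_goal"), ("slot_missing", "ask_slot"), ("done", "confirm")]
update_policy(episode, solution_quality=10.0)
print(dict(q_value))
```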
The average objective function score for the case of learned policies was 44.90. One of the main reasons for the low accuracy (which is also a limitation of this paper) was that there were a number of aspects of dialog that they had not modeled, such as non-understandings, misunderstandings, and even parsing sentences into the action specification and generating sentences from it. But the paper paved the way for reinforcement learning methods in the area of dialog and personal agents.
End-to-End Reinforcement Learning of Dialogue Agents for Information Access
---------------------------------------------------------------------------
Let’s have a look at KB-InfoBot (by Dhingra et al, 2017 [@rlendtoend]): a multi-turn dialogue agent which helps users search Knowledge Bases (KBs) without composing complicated queries. In this paper, they replace the symbolic queries (which break the differentiability of the system and prevent end-to-end training of neural dialogue agents) with an induced ‘soft’ posterior distribution over the KB that indicates which entities the user is interested in. Integrating the soft retrieval process with a reinforcement learner leads to higher task success rate and reward in both simulations and against real users.\
\
In this work, the authors used an RNN to allow the network to maintain an internal state of the dialogue history. Specifically, they used a Gated Recurrent Unit followed by a fully-connected layer and softmax non-linearity to model the policy π over the actions. During training, the agent samples its actions from this policy to encourage exploration. Parameters of the neural components were trained using the REINFORCE algorithm. For end-to-end training they updated both the dialogue policy and the belief trackers using the reinforcement signal. At test time, the dialogue is regarded as a success if the user target is in the top five results returned by the agent, and the reward is calculated accordingly to guide the agent’s next action.\
\
Their system returns a success rate of 0.66 for small knowledge bases and an impressive success rate of 0.83 for medium and large knowledge bases. As the user interacts with the agent, the collected data can be used to train the end-to-end agent, which we see has a strong learning capability. Gradually, as more experience is collected, the system can switch from Reinforcement Learning-Soft to the personalized end-to-end agent. Effective implementation of this requires such personalized end-to-end agents to learn quickly, which should be explored in the future.\
\
However, the system has a few limitations. The accuracy is not sufficient for practical applications. The agent suffers from the cold start issue. In the case of end-to-end learning, they found that for a moderately sized knowledge base, the agent almost always fails if starting from random initialization.
Actor-Critic Algorithm
----------------------
Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. However, they suffer from poor performance in the early stages of learning, as we saw in the paper discussed in the previous section. This is especially problematic for on-line learning with real users.\
\
In the paper by Su et al, 2017 [@rlactorcritic], the authors proposed sample-efficient actor-critic reinforcement learning with supervised data for dialogue management. As a brief reminder, actor-critic algorithms are algorithms that have an actor, which stores the policy according to which the agent takes its actions, and a critic, which critiques the actions chosen by the actor (that is, the rewards obtained after each action are sent to the critic, which uses them to estimate value functions).\
\
To speed up the learning process, they presented two sample-efficient neural networks algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER). Both models employ off-policy learning with experience replay to improve sample-efficiency. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence.\
\
To mitigate the cold start issue, a corpus of demonstration data was utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, they demonstrated a practical approach to learn deep RL-based dialogue policies and also demonstrated their effectiveness in a task-oriented information seeking domain.
![The success rate of TRACER for a random policy, policy trained with corpus data (NN:SL) and further improved via RL (NN:SL+RL) respectively in user simulation under various semantic error rates [@distarch][]{data-label="fig:mesh7"}](capture2.JPG)
We can see in figure \[fig:mesh7\] that the success rate reaches around 95% for the policy trained with corpus data and further improved with reinforcement learning, which is impressive. The models also train very quickly: after only around 500-1000 training dialogues, eNACER reaches a success rate of around 95% and TRACER around 92%. However, the authors noted that performance falls off rather rapidly under noise, as uncertainty estimates are not handled well by neural network architectures. This can also be a topic for future research.
Using Generative Adversarial Network
------------------------------------
Recently, generative adversarial networks (GANs) have been explored for use in dialogue agents. Although generative adversarial networks are a topic worth exploring in their own right, the paper discussed below uses reinforcement learning together with a generative adversarial network, so we cover it here under reinforcement learning methods. Such models can be used to generate dialogues that resemble those of humans.\
\
In the paper by Li et al, 2017 [@rlgan], the authors proposed using adversarial training for open-domain dialogue generation, such that the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. The task is cast as a reinforcement learning problem in which two systems are jointly trained: a generative model that produces response sequences, and a discriminator (analogous to the human evaluator in the Turing test) that distinguishes between human-generated dialogues and machine-generated ones. The generative model defines the policy that generates a response given the dialogue history, and the discriminative model is a binary classifier that takes a sequence of dialogue utterances as input and outputs whether the input was generated by a human or a machine. The outputs from the discriminator are then used as rewards for the generative model, pushing the system to generate dialogues that closely resemble human dialogues.\
\
The key idea of the system is to encourage the generator to produce utterances that are indistinguishable from human-generated dialogues. Policy gradient methods are used to achieve this goal: the score assigned by the discriminator to the current utterance (i.e. the probability of it being human-generated) is used as a reward for the generator, which is trained to maximize the expected reward of generated utterances using the REINFORCE algorithm.\
\
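The coupling between the two models can be illustrated with a toy adversarial loop. In the sketch below the generator is just a softmax distribution over tokens and the discriminator a logistic regression over token counts, standing in for the seq2seq generator and hierarchical discriminator of the paper; only the overall structure, in which the discriminator's "looks human" probability acts as the REINFORCE reward for the generator, reflects the method described above.

```python
import numpy as np

rng = np.random.default_rng(2)
VOCAB, SEQ_LEN = 12, 8
G_LR, D_LR = 0.05, 0.1

# "Human" responses: i.i.d. tokens from a fixed, peaked distribution
# (a toy stand-in for real dialogue data).
human_dist = np.ones(VOCAB)
human_dist[:3] = 10.0
human_dist /= human_dist.sum()

theta = np.zeros(VOCAB)        # generator: unconditional softmax over tokens
w, b = np.zeros(VOCAB), 0.0    # discriminator: logistic regression on token frequencies
baseline = 0.5                 # running-average reward, to reduce variance

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_seq(p):
    return rng.choice(VOCAB, size=SEQ_LEN, p=p)

def disc_prob_human(seq):
    phi = np.bincount(seq, minlength=VOCAB) / SEQ_LEN
    return 1.0 / (1.0 + np.exp(-(w @ phi + b))), phi

for step in range(2000):
    p_gen = softmax(theta)
    fake = sample_seq(p_gen)
    real = sample_seq(human_dist)

    # Generator update (REINFORCE): the discriminator's "looks human"
    # probability of the sampled response is the reward.
    reward, _ = disc_prob_human(fake)
    counts = np.bincount(fake, minlength=VOCAB)
    grad_logp = counts - SEQ_LEN * p_gen          # grad of log-prob of an i.i.d. token sequence
    theta += G_LR * (reward - baseline) * grad_logp
    baseline = 0.95 * baseline + 0.05 * reward

    # Discriminator update: one logistic-regression step on a (human, machine) pair.
    for seq, label in ((real, 1.0), (fake, 0.0)):
        prob, phi = disc_prob_human(seq)
        w -= D_LR * (prob - label) * phi
        b -= D_LR * (prob - label)
```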
Their model achieved a machine-vs-random accuracy score of 0.952 out of 1. However, on applying the same training paradigm to machine translation in preliminary experiments, the authors did not find a clear performance boost. They suggested that this may be because the adversarial training strategy is more beneficial for tasks in which there is a big discrepancy between the distributions of the generated sequences and the reference target sequences (that is, the adversarial approach may be more beneficial for tasks in which the entropy of the targets is high). This relationship can be explored further in future work.
Approaches to Human-ize agents {#human}
==============================
The lack of a coherent personality in the conversational agents that most of these models propose has been identified as one of the primary reasons that these agents have not been able to pass the Turing test [@turing] [@vinyals2015neural]. Aside from such academic motivations, making conversational agents more like their human interlocutors, who both possess a persona and are capable of parsing emotions, is of great practical and commercial use. Consequently, in the last couple of years different approaches have been tried to achieve this goal.\
\
Li et al [@li2016persona] address the challenge of consistency and how to endow data-driven systems with the coherent “persona” needed to model human-like behavior. They consider a persona to be a composite of elements of identity (background facts or user profile), language behavior, and interaction style. They also allow a persona to be adaptive, since an agent may need to present different facets to different human interlocutors depending on the interaction. Ultimately these personas are incorporated into the model as embeddings. Adding a persona not only improves the human interaction but also improves the BLEU score and perplexity over the baseline sequence to sequence models. The model represents each individual speaker as a vector or embedding, which encodes speaker-specific information (e.g. dialect, register, age, gender, personal information) that influences the content and style of her responses. Most importantly, these traits do not need to be explicitly annotated, which would be tedious and would limit the applications of the model. Instead, the model manages to cluster users along some of these traits (e.g. age, country of residence) based on the responses alone. The model first encodes the message $S$ into a vector representation $h_S$ using the source LSTM. Then, for each step on the target side, hidden units are obtained by combining the representation produced by the target LSTM at the previous time step, the word representation at the current time step, and the speaker embedding $v_i$. In this way, speaker information is encoded and injected into the hidden layer at each time step and thus helps predict personalized responses throughout the generation process. The process described here is visualized in figure \[fig:per\] below.\
![Visualization of how the persona is integrated in a sequence to sequence style conversational agent [@li2016persona][]{data-label="fig:per"}](person.png){width="14cm" height="6cm"}
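A minimal sketch of the mechanism shown in figure \[fig:per\] might look as follows: a learned speaker embedding is concatenated with the previous hidden state and the current word embedding at every decoder step. The simple tanh recurrent cell, the tensor shapes and the greedy decoding loop are illustrative simplifications, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
VOCAB, EMB, SPK_EMB, HID, NUM_SPEAKERS = 50, 16, 8, 32, 10

word_emb    = 0.1 * rng.standard_normal((VOCAB, EMB))
speaker_emb = 0.1 * rng.standard_normal((NUM_SPEAKERS, SPK_EMB))  # one persona vector per speaker
W_h   = 0.1 * rng.standard_normal((HID, HID + EMB + SPK_EMB))
b_h   = np.zeros(HID)
W_out = 0.1 * rng.standard_normal((VOCAB, HID))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decode_step(h_prev, token, speaker_id):
    """One decoder step: the persona embedding is injected at *every* step, so
    speaker information influences the whole generation process."""
    x = np.concatenate([h_prev, word_emb[token], speaker_emb[speaker_id]])
    h = np.tanh(W_h @ x + b_h)
    return h, softmax(W_out @ h)

# Greedy decoding of a short response for speaker 3, starting from the encoded message h_S.
h = rng.standard_normal(HID)     # stand-in for the encoder representation h_S
token = 0                        # stand-in for a <bos> token
for _ in range(5):
    h, probs = decode_step(h, token, speaker_id=3)
    token = int(np.argmax(probs))
```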
Building on works like this, the Emotional Chatting Machine proposed by Zhou et al [@zhou2017emotional] is a model which generates responses that are not only grammatically consistent but also emotionally consistent. To achieve this, their approach models the high-level abstraction of emotion expressions by embedding emotion categories. It also captures the change of implicit internal emotion states and uses explicit emotion expressions drawn from an external emotion vocabulary.\
\
Although they did not evaluate their model on a standard metric, they showed that it can generate responses that are appropriate not only in content but also in emotion. In the future, instead of requiring an emotion class to be specified, the model should decide the most appropriate emotion category for the response. However, this may be challenging, since such a task depends on the topic, the context or the mood of the user.\
\
The goal of capturing emotions and having consistent personalities for a conversational agent is an important one. The field is still nascent but advances in the domain will have far reaching consequences for conversational models in general. People tend to trust agents that are emotionally consistent, and in the long term trust is what will decide the fate of large scale adoption of conversational agents.
Evaluation methods {#eval}
==================
Evaluating conversational agents is an open research problem in the field. With the inclusion of an emotion component in modern conversational agents, evaluating such models has become even more complex. Current evaluation methods such as perplexity and BLEU score are not good enough and correlate very weakly with human judgments. In the paper by Liu et al, 2016 [@eval], the authors discuss how not to evaluate a dialogue system. They provide quantitative and qualitative results highlighting specific weaknesses in existing metrics and give recommendations for the future development of better automatic evaluation metrics for dialogue systems.\
\
According to them, metrics based on distributed sentence representations (like Kiros et al, 2015 [@kiros]) hold the most promise for the future. This is because word-overlap metrics like BLEU simply require too many ground-truth responses to find a significant match for a reasonable response, due to the high diversity of dialogue responses. Similarly, the current embedding-based metrics consist of basic averages of vectors obtained through distributional semantics, and so they are also insufficiently complex for modeling sentence-level compositionality in dialogue.\
\
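As a concrete example of the embedding-based family of metrics mentioned above, the snippet below computes a vector-average score: each sentence is represented by the mean of its word vectors and the score is the cosine similarity between the two sentence vectors. The tiny hand-written word-vector table is purely illustrative; in practice one would use pre-trained embeddings.

```python
import numpy as np

# Illustrative 4-dimensional "word vectors"; real metrics use pre-trained embeddings.
word_vectors = {
    "i":      np.array([0.1, 0.3, 0.0, 0.5]),
    "am":     np.array([0.2, 0.1, 0.4, 0.1]),
    "fine":   np.array([0.7, 0.2, 0.1, 0.0]),
    "good":   np.array([0.6, 0.3, 0.1, 0.1]),
    "thanks": np.array([0.3, 0.6, 0.2, 0.2]),
}

def sentence_vector(sentence):
    """Average of the word vectors (unknown words are skipped)."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(4)

def embedding_average_score(response, reference):
    a, b = sentence_vector(response), sentence_vector(reference)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

print(embedding_average_score("i am fine thanks", "i am good"))   # close to 1
print(embedding_average_score("i am fine thanks", "thanks"))      # lower
```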
Metrics that take the context into account can also be considered. Such metrics can take the form of an evaluation model that is learned from data. This model can be either a discriminative model that attempts to distinguish between model and human responses, or a model that uses data collected from human surveys in order to assign human-like scores to proposed responses.
Conclusion
==========
In this survey paper we explored the exciting and rapidly changing field of conversational agents. We discussed the early rule-based methods that depended on hand-engineered features. These methods laid the groundwork for the current models, but they were expensive to create, their features depended on the domain for which the conversational agent was built, and it was hard to adapt them to a new domain. As computational power increased and neural networks able to capture long-range dependencies (RNNs, GRUs, LSTMs) were developed, the field moved towards neural models for building these agents. The sequence to sequence model created in 2015 was capable of handling utterances of variable length, and its application to conversational agents truly revolutionized the domain. After this advancement the field has exploded, with numerous applications in the last couple of years. The results have been impressive enough to find their way into commercial products, and these agents have become truly ubiquitous. We have attempted to present a broad view of these advancements, with a focus on the main challenges encountered by conversational agents and on how the new approaches try to mitigate them.
---
abstract: |
We perform an ab-initio calculation for the binding energy of ${}^6Li$ using the CD-Bonn 2000 NN potential renormalized with the Lee-Suzuki method. The many-body approach to the problem is the Hybrid Multideterminant method. The results indicate a binding energy of about $31 MeV$, with an uncertainty of a few hundred keV. The center of mass diagnostics are also discussed.
[**[Pacs numbers]{}**]{}: 21.60.De, $\;\;\;\;$ 21.10.Dr, $\;\;\;\;$ 27.20.+n
author:
- |
G. Puddu\
Dipartimento di Fisica dell’Universita’ di Milano,\
Via Celoria 16, I-20133 Milano, Italy
title: ' Ab-initio calculation of the ${}^6Li$ binding energy with the Hybrid Multideterminant scheme.'
---
Introduction.
==============
A major problem in nuclear physics is the understanding of the structure of nuclei starting from nucleon-nucleon potentials that reproduce the nucleon-nucleon scattering data and the properties of the deuteron. There are nowadays many high-accuracy nucleon-nucleon potentials that reproduce these data, either phenomenological or based on meson-exchange theories, such as the Argonne V18 (ref.\[1\]) and the CD-Bonn 2000 (ref.\[2\]), or based on chiral perturbation theory, such as the N3LO (ref.\[3\]) NN potential. Accurate predictions at the level of NN potentials are rather important in order to elucidate the role of the NNN interaction, which is much more difficult to use in nuclear structure calculations.
Once the NN potential is selected, one is left with the many-body problem to evaluate nuclear properties. There are two main steps in order to achieve this goal.
The first step is to renormalize the NN interaction in order to be able to use small model spaces, and the second is the many-body problem itself. Although for very few nuclei (closed shells) the bare interaction is sometimes used, at the price of very large model spaces (ref. \[4\]), a popular prescription is the Lee-Suzuki method (ref. \[5\]), whereby an effective interaction is constructed in a small model space, typically using a harmonic oscillator basis or, as in the case of low momentum interactions, a momentum basis (ref. \[6\] and references therein). A limitation of this approach is that many-body interactions are introduced, and usually one keeps only the two-body part of the renormalized interaction (the 2-particle cluster approximation). As a consequence, the independence of the results from the model space must be checked. To further complicate matters, the NN effective interaction derived in this way is not unique, especially because of the hermitization prescription. Although the freedom to hermitize the effective interaction is large, two prescriptions are mostly used: the one of ref. \[5\] (known as the Okubo hermitization) and the one of ref.\[7\], mostly used with low momentum interactions. It is worthwhile to observe that, at least in principle, this freedom could be used to mimic three-body force effects, much in the same spirit as is done with the JISP interactions (ref.\[8\] and references therein). This could be very useful, especially for low momentum interactions.
The second step is the solution of the Schroedinger equation for the nuclei under study. Several methods are available: for example the no core shell model (NCSM) (ref. \[9\],\[10\]), which diagonalizes the Hamiltonian renormalized up to a given number of $\hbar\Om$ excitations, or the coupled cluster method (ref.\[11\] and references therein), whereby the wave function is written as an exponential of one-body+two-body+... operators acting on a reference Slater determinant. The first of these methods, although it is the most used in ab-initio studies of light nuclei, is limited by the large sizes of the Hilbert space. The second, namely the coupled cluster method, is usually applied at or around closed shells. A third class of methods is based on variational schemes, such as the VAMPIR method and its variants (ref.\[12\]), the Quantum Monte Carlo method (ref.\[13\]) and the Hybrid Multideterminant method (HMD) (ref.\[14\]). In this work we shall use the last one, which is based on the expansion of the nuclear wave function as a sum of a large number (as many as the accuracy demands) of symmetry-unrestricted Slater determinants (SD), with the appropriate angular momentum and parity quantum numbers restored with projectors, the Slater determinants being determined solely by variational requirements. This method does not suffer from limitations due to the size of the Hilbert space, it approaches the exact ground-state wave function more and more closely as the number of Slater determinants is increased, and furthermore it is equally applicable to both closed- and open-shell nuclei. So far it has been applied in a no-core fashion using the Argonne v8’ NN potential (ref.\[14\]) and to a phenomenological local potential in order to study shell effects using the bare interaction (ref.\[15\]). It has also been applied to nuclei in the $fp$ region using phenomenological effective interactions (ref.\[16\]); however, these systems are relatively easy since the bulk of the energy is of single-particle character.
In this work we shall apply the HMD method to ${}^6Li$ starting from the accurate CD-Bonn 2000 interaction. This nucleus has been extensively studied within the NCSM approach, using the CDBonn (ref. \[17\]), the CDBonn 2000 (ref.\[18\],\[19\]) and the N3LO interactions (ref. \[19\]). The motivation to perform a calculation for this nucleus using a different many-body method is the following. An ab-initio calculation requires the results to be independent of the size of the model space and also of the value of $\hbar\Om$ of the harmonic oscillator single-particle basis, at least within some range of values. So far the calculations reported in the literature using the Lee-Suzuki renormalization prescription show a residual dependence on the value of $\hbar\Om$. Such a dependence is not seen using soft potentials such as the low-momentum interaction or the JISP16 interaction (cf. ref. \[20\]). Eventually such a dependence should disappear using larger values of the maximum allowed number of $\hbar\Om$ excitations ($N_{max}$). The HMD method does not use $\hbar\Om$ excitations, but rather utilizes a Hamiltonian in a specified number of major harmonic oscillator shells, which contains much larger (although not all possible) $N_{max}$ excitations. We do obtain a weaker dependence on $\hbar\Om$, but the dependence does not disappear at large values of $\hbar\Om$. However we obtain a much lower value for the ground-state energy, closer to the experimental value.
The HMD method, in its ab-initio form, can be formulated in two different ways. One can construct the effective Hamiltonian directly in the lab frame for a specified number of harmonic oscillator major shells (up to $N_s$ total quantum number) using the standard Talmi-Moshinsky brackets (cf. for example ref.\[21\]) relating these matrix elements to the renormalized matrix elements in the center of mass frame (HMD-a version). In this case the renormalized matrix elements in the center of mass frame up to $N_{cm}=2N_s$ total harmonic oscillator quantum number in the center of mass frame are needed. Alternatively, one could first construct the matrix elements of the renormalized Hamiltonian using $N_{cm}+1$ harmonic oscillator shells and then transform the Hamiltonian to the lab frame using the same number $N_{cm}+1$ of harmonic oscillator shells (HMD-b version). The difference between the HMD-a and HMD-b versions is that the HMD-a version truncates the Hamiltonian used in the HMD-b version: a large fraction of the matrix elements of the renormalized Hamiltonian used by HMD-b are set to $0$, more precisely all matrix elements of the type $<ab|H_{eff}|cd>$ for which the states $a,b$ or $c,d$ satisfy the relation $ 2 n_a+l_a+2 n_b+l_b>N_{cm}$ ($n,l$ being the harmonic oscillator quantum numbers).
The HMD-b version for $A=2$ is exact, in the sense that it reproduces to very high accuracy the eigenvalues of the bare Hamiltonian, while the HMD-a version converges to the exact values only in the limit of a large number of harmonic oscillator shells. As a consequence the HMD-a version needs to be validated. For $A=2$ HMD-b is clearly superior; however, we find that for $A=3$ HMD-b overbinds and that the HMD-a version is superior even with a smaller number of major harmonic oscillator shells. This can be understood by recalling that both versions neglect 3-particle cluster contributions to the renormalized interaction; the implication is therefore that HMD-a has smaller 3-particle cluster effects. In other words, the truncation performed in the HMD-a version effectively takes into account at least some of the missing 3-body interaction induced by an exact renormalization, while in the HMD-b version this can be achieved only by increasing the number of major shells. This is of course a useful result, although an empirical one. For ${}^6Li$ we prefer to use the HMD-a version, since for this nucleus too HMD-b strongly overbinds, even compared to the experimental binding energy.
The outline of this paper is the following. In section 2 we discuss the validation of the two versions and of the computer programs and in section 3 we discuss the case of ${}^6Li$ and also the center of mass diagnostic recently proposed in ref. \[22\]. We also discuss a calculation for the $3^+$ excited state.
Validation of the method.
==========================
Both versions of the HMD method start, as in NCSM approach (refs.\[9\],\[10\]), from the Hamiltonian $$\HH=\sum_{i=1}^A {p_i^2\over 2m }+ \sum_{i<j} V_{ij}= \HH_{int}+ {P_{cm}^2\over 2 mA},
\eqno(1)$$ $m$ being the average nucleon mass for the nucleus under consideration, $V$ the nucleon-nucleon potential, $P_{cm}$ is the total momentum and $\HH_{int}$ is the intrinsic Hamiltonian. As in ref. \[9\], to this Hamiltonian an harmonic potential acting on the center of mass is added, that is $$\HH_{\Om}=\HH_{int}+\HH_{cm}=\HH+{1\over 2} mA\Om^2 R_{c.m.}^2=\sum_{i=1}^A h_i +\sum_{i<j} V_{ij}^{(A)},
\eqno(2)$$ with $$V_{ij}^{(A)}=V_{ij} -{m \Om^2\over 2 A}r_{ij}^2,
\eqno(3)$$ and $$h_i= {p_i^2\over 2 m} + {1\over 2} m \Om^2 r_i^2.
\eqno(4)$$ $\HH_{cm}$ in eq.(2) is the harmonic oscillator Hamiltonian of the center of mass $$\HH_{cm} = {P_{cm}^2\over 2 mA} + {1\over 2} mA\Om^2 R_{c.m.}^2.
\eqno(5)$$
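For completeness, note that the last equality in eq.(2) follows from the elementary identity (with $R_{c.m.}={1\over A}\sum_{i=1}^A r_i$) $$\sum_{i<j} r_{ij}^2 = A\sum_{i=1}^A r_i^2 - A^2 R_{c.m.}^2 ,$$ which gives $$\sum_{i=1}^A {1\over 2} m \Om^2 r_i^2 -{m \Om^2\over 2 A}\sum_{i<j} r_{ij}^2={1\over 2} mA\Om^2 R_{c.m.}^2 ,$$ so that $\sum_i h_i+\sum_{i<j} V_{ij}^{(A)}$ indeed equals $\HH+{1\over 2} mA\Om^2 R_{c.m.}^2=\HH_{\Om}$.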
The Hamiltonian of eq.(2), in which $A$ is considered as a parameter, is solved for the two-particle systems in a harmonic oscillator basis using a large number of major shells (typically $400\div 500$) in all possible channels $j s t t_z$ (angular momentum, spin, isospin and isospin z-projection) in the intrinsic frame of the two-particle system. The number of major shells is taken large enough that the Hamiltonian can be considered in the “infinite” space (the P+Q space). All integrals are evaluated using typically $2000$ integration points. After having done this, the Lee-Suzuki (with the Okubo hermitization) renormalization prescription is performed, in which the model space is restricted to the first $N_{cm}+1$ major harmonic oscillator shells (the P space) of the intrinsic frame (cf. also ref. \[23\] for a very compact derivation). $N_{cm}$ is taken to be even, as will become clear in the following ($N_{cm}=2N_s$). Once the renormalized $A$-dependent Hamiltonian for the two-particle system is obtained, the two-body matrix elements of the effective interaction are extracted and the matrix elements of the intrinsic Hamiltonian of the $A$-particle system (the original nucleus) are evaluated.
The HMD method can now be branched into two. The two-body matrix elements for the nucleus under consideration can be transformed into the lab frame up to $N_s+1$ major shells (HMD-a version), or can be transformed into the lab frame up to $N_{cm}+1$ major shells (HMD-b version). The situation is schematically illustrated in fig. 1. In the HMD-b version all matrix elements having one state in the upper right triangle are set to 0. One can optionally add to the lab frame Hamiltonian a term $\be (H_{cm}-3/2 \hbar \Om)$ as commonly done. The effect of this term due to finite space sizes has been recently analyzed in ref. \[22\] in order to study unphysical couplings between intrinsic modes and center of mass excitations (cf. next section also). In both HMD-a and HMD-b versions the resulting Hamiltonian is the input for a variational calculation as done in ref. \[14\]. The variational method in the most recent computer programs is the one discussed in refs. \[14\],\[24\]. The wave function is a linear combination of Slater determinants (without symmetry restrictions) with good quantum numbers restored by projectors.
Needless to say, HMD-a is computationally cheaper than HMD-b: a 5 major shell calculation with HMD-a translates into a 9 major shell calculation with HMD-b, for example. The details of the optimization techniques will be discussed in the next section, since they are the same as those utilized for the validation. The validation of the whole set of computer codes is performed first on deuterium. Actually, in this (and only this) case a numerical cancellation in the renormalization step prevents the exact reproduction of the “bare” eigenvalues. For all other nuclei, the renormalization step reproduces the “bare” eigenvalues belonging to the model space to very high accuracy. For $\hbar\Om = 16 MeV$ with $N_{cm}=8$ we obtained the renormalized binding energy of deuterium with an error of $0.26 eV$ using 15 Slater determinants (projected to $J_z^{\pi}=1^+$) with the HMD-b version. The situation is different for the HMD-a version, since not all matrix elements in the intrinsic frame are used. We therefore expect the variational calculation to reproduce the renormalized binding energy only in the limit of large $N_s$. We performed some tests for $\hbar\Om= 12 MeV$. For $N_s=4$ the difference between the binding energy obtained by the variational calculation and the exact value is $\de=0.041 MeV$, for $N_s=5$ we obtained $\de=0.026 MeV$, and for $N_s=7$ (excluding all states with $l=7$) we obtained $\de=0.012 MeV$. These tests validate both versions of the method.
We also performed some tests for ${}^3H$ and ${}^4He$. For ${}^3H$ the binding energy obtained with the Faddeev equation method (ref. \[25\]) using the CD-Bonn 2000 interaction is $-7.998 MeV$. In this case both versions can reach the exact value only in the limit of large $N_s$ (or $N_{cm}=2N_s$). For the HMD-a version and $\hbar\Om=16 MeV$, we obtained a ground-state energy (in MeV) of $ -8.29, -8.30, -8.14, -8.03$ for $ N_s=3, N_s=4, N_s=5 $ and $N_s=6$ respectively. For low $N_s$, about $35\div 50$ Slater determinants (with the $J_z^{\pi}$ projector) are needed to converge; for large $N_s$ the number of Slater determinants is larger. For $\hbar\Om=18 MeV$, the ground-state energy in MeV is $-8.183$, $-8.176$, $-8.125$ and $-7.961$ for $ N_s=3, N_s=4, N_s=5 $ and $N_s=6$ respectively. As before, the calculations for large model spaces are more involved and a large number of Slater determinants is necessary. We estimate a possible further decrease in the energy of a few tens of $keV$. For larger values of $\hbar\Om$ the calculation becomes increasingly more difficult for large model spaces. For $\hbar\Om=20 MeV$ we obtained for the ground-state energy (in MeV) $-8.023$, $-8.044$, $-7.914$ for $ N_s=3, N_s=4, N_s=5 $ respectively. The wave functions obtained with the HMD-a version can serve as a variational input for the HMD-b version with $N_{cm}=2N_s$. For this version we performed only a few calculations, since the model spaces are very large and the omission of large $l$ values of the single-particle orbits is necessary. As an example, for $\hbar\Om=16 MeV$ and $N_{cm}=6$, omitting all single-particle states having $l$ values larger than $4$ and using only $ 15$ Slater determinants, we obtained a ground-state energy of $-8.843 MeV$. The inclusion of larger $l$-values and the increase of the number of Slater determinants will necessarily lower the energy. This value should be compared with the value obtained with the HMD-a version, which is much closer to the exact Faddeev result.
The only source of discrepancy between the Faddeev result and the HMD-b result comes from the missing 3-particle cluster contributions. The conclusion that we can draw is that the missing 3-particle cluster contributions are strongly repulsive. The effect of such contributions is much smaller in the HMD-a version. One expects that in order to suppress such contributions in the HMD-b implementation one has to increase the number of major shells. For $\hbar\Om=18 MeV$ and $N_{cm}=8$ we obtained a ground-state energy of $-8.574 MeV$; in this case we excluded from the calculation all $l>6$ values. The inclusion of these states will necessarily decrease the energy. The conclusion we can draw from these calculations is that the HMD-b version, although in principle more rigorous, strongly overbinds since it misses 3-particle cluster contributions, which seem less relevant in the HMD-a version. We performed a calculation also for ${}^6Li$ using the HMD-b version, but even without full convergence to a large number of Slater determinants we obtained strong overbinding. As done in all past calculations with the HMD method, we therefore use only the HMD-a implementation. It is inaccurate only for the 2-particle system, but that is hardly relevant for many-body problems.
Using the HMD-a approach we performed a calculation for the binding energy of ${}^4He$. We considered a reasonable value of the harmonic oscillator frequency, $\hbar\Om=20 MeV$, rather than a full set of frequencies, and took $N_s=3,4,5,6,7$. The ground-state energies are (in MeV) $E=-29.259, -28.504, -27.603, -26.938$ and $ -26.354$ for $N_s=3,4,5,6,7$ respectively. The calculations become increasingly time consuming for large values of $N_s$. In the case of $N_s=6$ we built $150$ Slater determinants using the partial $J_z^{\pi}=0^+$ projector, later reprojecting the energies with the full angular momentum projector. For $N_s=7$ we took only $100$ Slater determinants. The uncertainties in these calculations are about $100 keV$ or less, and about $140 keV$ for $N_s=7$. The NCSM result from ref. \[27\] is $-26.16 MeV$, indicating that for $\hbar\Om=20 MeV$ a larger number of major shells is necessary for good accuracy.
${}^6Li$.
==========
The nucleus ${}^6Li$ with the CDBonn-2000 interaction has been studied in the past in the framework of the NCSM method (ref. \[18\],\[19\]). The ground-state energy obtained with this method is $-29.07 MeV$ (the experimental value from ref. \[26\] is $-31.994 MeV$). The ab-initio approach requires, at least in some $\hbar\Om$ interval, constancy of the energies as the model space size is increased and as $\hbar\Om$ is varied. We performed several calculations for this nucleus. The most relevant ones are those concerning the intrinsic energy. Most often a center of mass term of the type $\be (\hat H_{cm} -3\hbar\Om /2)$, where $\hat H_{cm}$ is the center of mass harmonic oscillator Hamiltonian, is added to the intrinsic Hamiltonian. The effects of the addition of such a term have recently been scrutinized in ref.\[22\], where the unphysical coupling between the intrinsic and center of mass Hamiltonians caused by the finite size of the model space has been assessed. It was found in ref.\[22\] that, for model spaces defined by a specified number of major shells, this unphysical coupling can decrease the binding energy in an appreciable way. Here the calculations with the HMD-a method are performed using the intrinsic Hamiltonian. The effect of the addition of the center of mass Hamiltonian will be analyzed at the end of the section. The HMD-a calculations proceed in two phases. In the first phase a large number of Slater determinants, typically $100\div 400$, is generated using only a partial angular momentum and parity projector onto good $J_z^{\pi}=1^+$. In the second phase this set is reprojected using the full angular momentum and parity projector $J^{\pi}=1^+$. At least for this nucleus and this interaction, we find this optimization technique computationally more efficient than performing the variational calculations with the full angular momentum and parity projector from the beginning.
The first phase is a combination of two steps. We first increase the number of Slater determinants (SD) $N_D$ and optimize the last added SD using the steepest descent method, much in the same way as it was done in ref. \[14\]. In the second step we vary anew all SD’s, one at a time, using the quasi-Newtonian rank-3 update of ref. \[24\]. This second step is repeated several times until the energy decrease is less than a specified amount. Afterwards, the addition step is repeated. We test the accuracy of the final wave function by plotting the energy vs $1/N_D$. As will be shown, for large $N_D$ the energy is in many cases linear in $1/N_D$.
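The linear-in-$1/N_D$ extrapolation mentioned above is elementary; as an illustration (not part of the HMD codes), it could be carried out with a least-squares fit of the last few points, e.g.

```python
import numpy as np

def extrapolate_energy(num_dets, energies, n_last=6):
    """Fit E(N_D) = a / N_D + E_inf to the last n_last points and return E_inf."""
    x = 1.0 / np.asarray(num_dets[-n_last:], dtype=float)
    y = np.asarray(energies[-n_last:], dtype=float)
    slope, e_inf = np.polyfit(x, y, 1)   # linear fit in 1/N_D; intercept is E(N_D -> infinity)
    return e_inf

# Placeholder values, for illustration only (not the data of figs. 2-3):
print(extrapolate_energy([150, 200, 250, 300, 350, 400],
                         [-91.2, -91.5, -91.7, -91.8, -91.85, -91.9]))
```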
The total number of SD necessary to obtain reasonable convergence varies depending on the model space (typically $N_D$ increases as $N_s$ is increased, and the variational problem becomes harder as $\hbar\Om$ is increased). It does not seem that $N_D$ depends in any obvious way on the size of the Hilbert space, which can become very large as $N_s$ is increased. Actually, one of the main reasons for using methods such as HMD is that the calculations can be performed even for very large sizes of the Hilbert space. However, feasibility does not necessarily imply accuracy, as the value of $N_D$ necessary to reach a given accuracy could depend on the size of the Hilbert space. We performed a test using a set of 400 SD, for the same interaction, obtained as part of another calculation for ${}^{12}C$ with $N_s=3$ (not discussed in this work), $\hbar\Om=15 MeV$ and $\beta=0.5$. A reprojection was performed as explained above. For ${}^6Li$ typical sizes of the Hilbert space range from about $10^5$ for $N_s=2$ to about $10^8$ for $N_s=4$, while for ${}^{12}C$ at $N_s=3$ the size of the Hilbert space is about $10^{12}$.
The calculated value for the ground-state energy of ${}^{12}C$ is $-91.91 MeV$ (to be compared with the experimental value of $-92.162 MeV$). In fig. 2 we show the behavior of $E(1/N_D)$ for large $N_D$. A linear extrapolation suggests a plausible final energy of $-92.3 MeV$. A similar behavior is also seen for ${}^6Li$. For comparison, in fig. 3 we show the behavior of $E(1/N_D)$ for ${}^6Li$ at $\hbar\Om=15 MeV$ and $N_s=4$ with $\beta=0$. Since there is an increase of several orders of magnitude in the size of the Hilbert space from ${}^6Li$ to ${}^{12}C$, it is reasonable to conclude that if there is a dependence of $N_D$ on the size of the Hilbert space, such a dependence is very mild. The behavior of the energy as a function of $1/N_D$ can change for different $N_s$ in the vicinity of the origin; sometimes the energy behaves as a higher power of $1/N_D$, especially for small $N_s$. We performed calculations for ${}^6Li$ for $\hbar\Om=10 MeV,\;\;\;12.5 MeV,\;\;\;15 MeV,\;\;\;17.5 MeV,\;\;\;20 MeV$.
The results are presented in the table. The results for $N_s=2$ and $N_s=3$ are well converged. For $N_s=2$ good convergence is reached using $150$ SD’s (however, for $\hbar\Om=20 MeV$ we had to use $180$ SD’s). For $N_s=3$ we used $400$ SD’s ($450$ for $\hbar\Om=20 MeV$), and the same for $N_s=4$. The results for $N_s=5$ should be considered as partial ones (we used a set of 300 or fewer Slater determinants). In fact, the computational cost of the variational calculation depends mostly on the size of the single-particle space; the dependence on the particle number is rather mild.
$ \hbar\Om(MeV)$ $ N_s=2 $ $ N_s=3 $ $ N_s=4 $ $ N_s=5 $
------------------ ---------------- ----------------- --------------- ----------------
$ 10.0 $ $ -28.712 $ $ -28.940 $ $ -30.14 $ $ -30.65** $
$ 12.5 $ $ -30.707 $ $ -30.558 $ $ -31.18 $ $ -30.99** $
$ 15.0 $ $ -31.525 $ $ -31.140 $ $ -31.22 $ $ - $
$ 17.5 $ $ -31.381 $ $ -30.843 $ $ -30.57 $ $ - $
$ 20.0 $ $ -30.455 $ $ -30.097 $ $ -29.55* $ $ - $
: Ground-state energies for ${}^6Li$ for different values of $ \hbar\Om(MeV) $ and different model spaces $N_s$. Energies are in MeV. $*$ Result not fully converged. $**$ Only 300 SD were used. For $N_s=4$, 400 SD were employed.
The calculations for ${}^6Li$ were performed without the center of mass Hamiltonian $\HH'=\beta(\HH_{cm}-3/2\hbar\Om)$, i.e. $\beta=0$. In ref. \[22\], the problem of the effect of the addition of $\HH'$ was studied. The main point in ref. \[22\] was that the addition of this term can significantly change the evaluation of the intrinsic energies. To be more precise, in a finite space the eigenstates $|\psi(\beta)>$ of $\HH_{int}+\HH'$ are not a product of intrinsic eigenstates and center of mass eigenstates. Thus the intrinsic energies, defined as $ E(\beta)=<\psi(\beta)| \HH_{int} |\psi(\beta)>$, acquire a $\beta$ dependence. These considerations do not apply to the calculations for ${}^6Li$ discussed in this work, for the following reason. Our wave-functions are obtained by minimizing the energy expectation value of $\HH_{int}$. Therefore, since the wave-functions contain $3 A$ space variables, they must factorize into a product of an intrinsic eigenstate and a function (not necessarily an eigenstate) of the center of mass coordinates. The only requirement is that good convergence must be reached.
One can verify, however, the amount of contamination caused by $\HH'$ in the intrinsic energies by first minimizing the expectation value of $\HH_{int}+\HH'$ in order to obtain the wave functions $|\psi(\beta)>$, then evaluating the expectation value of $\HH_{int}$ with $|\psi(\beta)>$, and finally comparing the energies obtained in this way with the true intrinsic energies. Actually, it is easy to do slightly better than this because of the structure of the HMD ansatz for the wave-functions, which are a linear combination of Slater determinants (intrinsic states). The coefficients of this linear combination can easily be determined anew in such a way as to minimize the intrinsic energy, without a re-variation of the intrinsic states. As an example we consider $N_s=2$, $\hbar\Om=15 MeV$ and $\beta=1$. The ground-state energy of $\HH_{int}+\HH'$ is $-30.354 MeV$ (obtained with 150 SD’s), while the intrinsic energy obtained using this eigenstate of $\HH_{int}+\HH'$ is $-31.066 MeV$ (the coefficients of each SD were redetermined). This value should be compared with the value of $-31.525 MeV$ given in the table. The discrepancy, almost $500 keV$, is appreciable. For this case, i.e. $N_s=2$, $\hbar\Om=15 MeV$, we show in fig. 4 the behavior of $E(\beta)$ as a function of $\beta$.
We also performed a calculation for the excitation energy of the first $3^+$ state, by re-evaluating the $J_z^{\pi}=1^+$ and $J_z^{\pi}=3^+$ states using exactly the same numerical steps (this is necessary since both states contain some error compared to the values for $N_D=\infty$, and these errors cancel out provided the same numerical steps are taken for both states). Only the $J_z^{\pi}$ projector has been used. In fig. 5 we show the excitation energy of the $3^+$ state as a function of the number of Slater determinants for $N_s=4,5,6$. The value obtained for $N_s=6$ is $2.9 MeV$, higher than the experimental value of $2.18 MeV$ but consistent with the NCSM value of $2.86 MeV$.
In conclusion, we have performed an ab-initio calculation of the binding energy of ${}^6Li$ with the Hybrid Multideterminant method in a form that has small 3-particle cluster contributions. The evaluated binding energy is about $31 MeV$, with an uncertainty of a few hundred keV. This estimate for the CD-Bonn 2000 interaction is closer to the experimental value than previously thought.
\[1\] R.B. Wiringa, V.G.J. Stoks and R. Schiavilla, Phys. Rev. [**C 51**]{}, 38 (1995).
\[2\] R. Machleidt, Phys. Rev. [**C 63**]{}, 024001 (2001).
\[3\] D.R. Entem and R. Machleidt, Phys. Rev. [**C 68**]{}, 041001 (2003).
\[4\] G. Hagen, T. Papenbrock, D.J. Dean and M. Hjorth-Jensen, Phys. Rev. Lett. [**101**]{}, 092502 (2008).
\[5\] K. Suzuki and S.Y. Lee, Prog. Theor. Phys. [**64**]{}, 2091 (1980); K. Suzuki, Prog. Theor. Phys. [**68**]{}, 1627 (1982); K. Suzuki, Prog. Theor. Phys. [**68**]{}, 1999 (1982); K. Suzuki and R. Okamoto, Prog. Theor. Phys. [**92**]{}, 1045 (1994).
\[6\] J.D. Holt, T.T.S. Kuo and G.E. Brown, Phys. Rev. [**C 69**]{}, 034329 (2004).
\[7\] F. Andreozzi, Phys. Rev. [**C 54**]{}, 684 (1996).
\[8\] A.M. Shirokov, A.I. Mazur, A. Zaytsev, J.P. Vary and T.A. Weber, Phys. Rev. [**C 70**]{}, 044005 (2004).
\[9\] P. Navratil, J.P. Vary and B.R. Barrett, Phys. Rev. Lett. [**84**]{}, 5728 (2000); P. Navratil, J.P. Vary and B.R. Barrett, Phys. Rev. [**C 62**]{}, 054311 (2000).
\[10\] P. Navratil, S. Quaglioni, I. Stetcu and B.R. Barrett, J. Phys. [**G 36**]{}, 083101 (2009).
\[11\] G. Hagen, D.J. Dean, M. Hjorth-Jensen, T. Papenbrock and A. Schwenk, Phys. Rev. [**C 76**]{}, 044305 (2007).
\[12\] K.W. Schmid, Prog. Part. Nucl. Phys. [**46**]{}, 45 (2001), and references therein.
\[13\] T. Otsuka, M. Honma, T. Mizusaki, N. Shimizu and Y. Utsuno, Prog. Part. Nucl. Phys. [**47**]{}, 319 (2001).
\[14\] G. Puddu, J. Phys. G: Nucl. Part. Phys. [**32**]{}, 321 (2006); G. Puddu, Eur. Phys. J. [**A 31**]{}, 163 (2007).
\[15\] G. Puddu, Acta Phys. Pol. [**B 38**]{}, 3237 (2007).
\[16\] G. Puddu, Eur. Phys. J. [**A 34**]{}, 413 (2007).
\[17\] P. Navratil, J.P. Vary, W.E. Ormand and B.R. Barrett, Phys. Rev. Lett. [**87**]{}, 172502 (2001).
\[18\] J.P. Vary et al., Eur. Phys. J. [**A 25**]{} s01, 475 (2005).
\[19\] P. Navratil and E. Caurier, Phys. Rev. [**C 69**]{}, 014311 (2004); C. Forssen, E. Caurier and P. Navratil, Phys. Rev. [**C 79**]{}, 021303 (2009).
\[20\] P. Maris, J.P. Vary and A.M. Shirokov, Phys. Rev. [**C 79**]{}, 014308 (2009).
\[21\] G.P. Kamuntavicius, R.K. Kalinauskas, B.R. Barrett, S. Mickevicius and D. Germanas, Nucl. Phys. [**A 695**]{}, 191 (2001).
\[22\] R. Roth, J.R. Gour and P. Piecuch, Phys. Lett. [**B 679**]{}, 334 (2009).
\[23\] A.F. Lisetskiy, B.R. Barrett, M.K.G. Kruse, P. Navratil and J.P. Vary, Phys. Rev. [**C 78**]{}, 044302 (2008).
\[24\] G. Puddu, Eur. Phys. J. [**A 42**]{}, 281 (2009).
\[25\] M. Viviani, L.E. Marcucci, S. Rosati, A. Kievsky and L. Girlanda, Few-Body Systems [**39**]{}, 159 (2006).
\[26\] G. Audi and A.H. Wapstra, Nucl. Phys. [**A 565**]{}, 1 (1993).
\[27\] E. Caurier and P. Navratil, Phys. Rev. [**C 73**]{}, 021302 (2006).
---
abstract: 'We study a self-dual $N=1$ super vertex operator algebra and prove that the full symmetry group is Conway’s largest sporadic simple group. We verify a uniqueness result which is analogous to that conjectured to characterize the Moonshine vertex operator algebra. The action of the automorphism group is sufficiently transparent that one can derive explicit expressions for all the McKay–Thompson series. A corollary of the construction is that the perfect double cover of the Conway group may be characterized as a point-stabilizer in a spin module for the Spin group associated to a $24$ dimensional Euclidean space.'
author:
- 'John F. Duncan'
date: 'September 14, 2006'
title: 'Super-moonshine for Conway’s largest sporadic group '
---
Introduction
============
The preeminent example of the structure of vertex operator algebra (VOA) is the Moonshine VOA, denoted ${V^{\natural}}$, which was first constructed in [@FLM] and whose full automorphism group is the Monster sporadic group.
Following [@HohnPhD] we say that a VOA is nice when it is $C_2$-cofinite and satisfies a certain natural grading condition (see §\[sec:SVOAstruc:SVOAs\]), and we make a similar definition for super vertex operator algebras (SVOAs). We say that a VOA is rational when all of its modules are completely reducible (see §\[sec:SVOAMods\]). Then conjecturally ${V^{\natural}}$ may be characterized among nice rational VOAs by the following properties:
- self-dual
- rank $24$
- no small elements
where a self-dual VOA is one that has no non-trivial irreducible modules other than itself, and we write “no small elements” to mean no non-trivial vectors in the degree one subspace, since in a nice VOA this is the $L(0)$-homogeneous subspace with smallest degree that can be trivial. In this note we study what may be viewed as a super analogue of ${V^{\natural}}$. More specifically, we study an object ${{A^{f\natural}}}$ characterized among nice rational $N=1$ SVOAs by the following properties.
- self-dual
- rank $12$
- no small elements
An $N=1$ SVOA is an SVOA which admits a representation of the Neveu–Schwarz superalgebra, and now “no small elements” means no non-trivial vectors with degree ${1}/{2}$. (We define an SVOA to be rational when its even sub-VOA is rational, and a self-dual SVOA is an SVOA with no irreducible modules other than itself.)
We find that the full automorphism group of ${{A^{f\natural}}}$ is Conway’s largest sporadic group ${\operatorname{\textsl{Co}}}_1$. Thus, by considering the graded traces of elements of ${\operatorname{Aut}}({{A^{f\natural}}})$ acting on ${{A^{f\natural}}}$, that is, the McKay–Thompson series, one can associate modular functions to the conjugacy classes of ${\operatorname{\textsl{Co}}}_1$, and we obtain moonshine for ${\operatorname{\textsl{Co}}}_1$. This is directly analogous to the moonshine which exists for the Monster simple group, which was first observed in [@ConNorMM] and is to some extent explained by the existence of ${V^{\natural}}$.
The main results of this paper are the following three theorems.
The space ${{A^{f\natural}}}$ admits a structure of self-dual rational $N=1$ SVOA.
The automorphism group of the $N=1$ SVOA structure on ${{A^{f\natural}}}$ is isomorphic to Conway’s largest sporadic group, ${\operatorname{\textsl{Co}}}_1$.
The $N=1$ SVOA ${{A^{f\natural}}}$ is characterized among nice rational $N=1$ SVOAs by the properties: self-dual, rank $12$, trivial degree $1/2$ subspace.
The earliest evidence in the mathematical literature that there might be an object such as ${{A^{f\natural}}}$ was given in [@FLMBerk] where it was suggested to study a ${{\mathbb Z}}/2$-orbifold of a certain SVOA $V_L^f=A_L\otimes V_L$ associated to the $E_8$ lattice. Here $L$ is a lattice of $E_8$ type, $V_L$ denotes the VOA associated to $L$, and $A_L$ denotes the Clifford module SVOA associated to the space ${{\mathbb C}}\otimes_{{{\mathbb Z}}}L$. Since the lattice $L=E_8$ is self-dual, $V^f_L$ is a self-dual SVOA, and one finds that the graded character of $V^f_L$ satisfies $$\begin{gathered}
{\sf tr}|_{V^f_L}q^{L(0)-c/24}
=\frac{\theta_{E_8}(\tau)\eta(\tau)^8}
{\eta(\tau/2)^{8}\eta(2\tau)^{8}}
=q^{-1/2}(1+8q^{1/2}+276q+2048q^{3/2}+\ldots)\end{gathered}$$ and is a Hauptmodul for a certain genus zero subgroup of the Modular group $\bar{\Gamma}={\operatorname{\textsl{PSL}}}(2,{{\mathbb Z}})$. One can check that, but for the constant term, these coefficients exhibit moonshine phenomena for ${\operatorname{\textsl{Co}}}_1$. For example, we have that $276$ is the dimension of an irreducible module for ${\operatorname{\textsl{Co}}}_1$, and $2048=1+276+1771$ is a possible decomposition of the degree ${3}/{2}$ subspace into irreducibles for ${\operatorname{\textsl{Co}}}_1$. The only problem being that there is no irreducible representation of ${\operatorname{\textsl{Co}}}_1$ of dimension $8$, and the space corresponding to the constant term in the character of $V^f_L$ would have to be a sum of trivial modules were it a ${\operatorname{\textsl{Co}}}_1$-module at all. As observed in [@FLMBerk], orbifolding $V^f_L$ by a suitable lift of $-1$ on $L$, one obtains a space ${{V^{f\natural}}}$ with ${\sf
tr}|_{{{V^{f\natural}}}} q^{L(0)}={\sf tr}|_{V^f_L} q^{L(0)}-8$; that is, a space with the correct character for ${\operatorname{\textsl{Co}}}_1$. $$\begin{gathered}
{\sf tr}|_{{{V^{f\natural}}}}q^{L(0)-c/24}
=\frac{\theta_{E_8}(\tau)\eta(\tau)^8}
{\eta(\tau/2)^{8}\eta(2\tau)^{8}}-8
=q^{-1/2}(1+276q+2048q^{3/2}+\ldots)\end{gathered}$$ Also one finds that ${{V^{f\natural}}}$ admits a reasonably transparent action by a group of the shape $2^{1+8}.(W_{E_8}'/\{\pm 1\})$ which is the same as that of a certain involution centralizer and maximal subgroup in ${\operatorname{\textsl{Co}}}_1$. (Here $W_{E_8}'$ denotes the derived subgroup of the Weyl group of type $E_8$.)
We note here that the existence of ${{V^{f\natural}}}$ has certainly been known to Richard E. Borcherds for some time. Also, an action of ${\operatorname{\textsl{Co}}}_1$ on the SVOA underlying what we will call ${{A^{f\natural}}}$ was considered in [@BorRybMMIII].
The space ${{V^{f\natural}}}$ may be described as $$\begin{gathered}
{{V^{f\natural}}}=(V^f_L)^0\oplus (V^f_L)^0_{{\theta}}\end{gathered}$$ where $(V^f_L)_{{\theta}}$ denotes a ${\theta}$-twisted $V^f_L$-module and ${\theta}={\theta}_f\otimes {\theta}_b$ is an involution on $V^f_L$ obtained by letting ${\theta}_f$ be the parity involution on $A_L$, and letting ${\theta}_b$ be a lift to ${\operatorname{Aut}}(V_L)$ of the $-1$ symmetry on $L$. The ${\theta}$-twisted module $(V^f_L)_{{\theta}}$ may be described as a tensor product of twisted modules $(V^f_L)_{{\theta}}=(A_L)_{{\theta}_f}\otimes
(V_L)_{{\theta}_b}$, and $(V^f_L)^0$ and $(V^f_L)_{{\theta}}^0$ denote ${\theta}$-fixed points. Now one is in a situation very similar to that which gives rise to the Moonshine VOA ${V^{\natural}}$ via the Leech lattice VOA as is carried out in [@FLM], and one can hope that a similar approach as that used in [@FLM] would yield the desired result: an SVOA structure on ${{V^{f\natural}}}$ with an action by ${\operatorname{\textsl{Co}}}_1$.
We find it convenient to pursue a different approach. We construct an SVOA ${{A^{f\natural}}}$ whose graded character coincides with that of ${{V^{f\natural}}}$, and we do so using a purely fermionic construction; that is, using Clifford module SVOAs. It turns out that this fermionic construction enables one to analyze the symmetries in quite a transparent way. We find that there is a specific vector in the degree ${3}/{2}$ subspace of ${{A^{f\natural}}}$ such that the corresponding point stabilizer in the full group of SVOA automorphisms of ${{A^{f\natural}}}$ is precisely the group ${\operatorname{\textsl{Co}}}_1$. Furthermore, this vector naturally gives rise to a representation of the Neveu–Schwarz superalgebra on ${{A^{f\natural}}}$, and thus ${\operatorname{\textsl{Co}}}_1$ is realized as the full group of automorphisms of a particular $N=1$ SVOA structure on ${{A^{f\natural}}}$. These results are presented in §\[sec:strucafn\].
In §\[SecUniq\] we consider an analogue of the uniqueness conjecture for ${V^{\natural}}$. It turns out that the $N=1$ SVOA structure on ${{A^{f\natural}}}$ is unique in the sense that the vectors in $({{A^{f\natural}}})_{3/2}$ that give rise to a representation of the Neveu–Schwarz superalgebra on ${{A^{f\natural}}}$ form a single orbit under the action of ${\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$ on ${{A^{f\natural}}}$. Making use of modular invariance for $n$-point functions on a self-dual SVOA, due to [@ZhuModInv] and [@HohnPhD], and utilizing also some ideas of [@DonMasEfctCC] and [@DonMasHlmVOA], one can show that the SVOA structure underlying ${{A^{f\natural}}}$ is characterized up to some technical conditions by the by now familiar properties:
- self-dual
- rank $12$
- no small elements
Combining these results, we find that ${{A^{f\natural}}}$ is characterized among $N=1$ SVOAs by the above three properties. This uniqueness result accentuates the analogy between ${{V^{f\natural}}}$ and certain other celebrated algebraic structures: the Golay code, the Leech lattice, and (conjecturally) the Moonshine VOA ${V^{\natural}}$.
The homogeneous subspace of ${{A^{f\natural}}}$ of degree $3/2$ may be identified with a half-spin module over ${\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$. The main idea behind the construction of ${{A^{f\natural}}}$ is to realize this spin module in such a way that the vector giving rise to the $N=1$ structure on ${{A^{f\natural}}}$ is as obvious as possible. Essentially, we achieve this by using the Golay code to construct a particular idempotent in the Clifford algebra of a $24$ dimensional vector space. The automorphism group of ${{A^{f\natural}}}$ turns out to be the quotient by center of the subgroup of ${\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$ fixing this idempotent. Thus a curious corollary of the construction is that the group ${\operatorname{\textsl{Co}}}_0$ (the perfect double cover of ${\operatorname{\textsl{Co}}}_1$) may be characterized as a point stabilizer in a spin module over ${\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$.
The Golay code is characterized among length $24$ doubly-even linear binary codes (see §\[Sec:Notation\]) by the conditions of self-duality and having no “small elements” (that is, no weight $4$ codewords). The Golay code is an important ingredient in the construction of ${{A^{f\natural}}}$, and these defining properties yield direct influence upon the structure of ${{A^{f\natural}}}$. For example, the condition “no small elements” allows one to conclude that ${\operatorname{Aut}}({{A^{f\natural}}})$ is finite, and the uniqueness of the Golay code ultimately entails the uniqueness of the $N=1$ structure on ${{A^{f\natural}}}$. The uniqueness of the $N=1$ structure in turn allows one to formulate the following characterization of ${\operatorname{\textsl{Co}}}_0$.
Let $M$ be a spin module for ${\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$ and let $t\in M$ such that ${{\langle}}xt,t{{\rangle}}=0$ whenever $x\in{\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$ is an involution with ${\rm tr}|_{24}x=16$. Then the subgroup of ${\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$ fixing $t$ is isomorphic to ${\operatorname{\textsl{Co}}}_0$.
It is perhaps interesting to note that the Leech lattice, which has automorphism group ${\operatorname{\textsl{Co}}}_0$ and which furnishes a popular definition of this group [@ConCnstCo0], does not figure directly in our construction of ${{A^{f\natural}}}$. Although the Golay code does play a prominent role, the uniqueness of ${{A^{f\natural}}}$, or alternatively the above Theorem \[Thm:Co0Chrztn\], provide definitions for the group ${\operatorname{\textsl{Co}}}_0$ relying neither on the Leech lattice nor the Golay code.
The SVOA ${{V^{f\natural}}}$ constructed from the $E_8$ lattice admits a natural structure of $N=1$ SVOA in analogy with the way in which a usual lattice VOA is naturally equipped with a Virasoro element. Thus a corollary of the uniqueness result for ${{A^{f\natural}}}$ is that ${{A^{f\natural}}}$ is isomorphic to the $N=1$ SVOA ${{V^{f\natural}}}$ discussed above. In the penultimate section §\[LatConst\] we consider the construction of ${{V^{f\natural}}}$ in more detail, and we indicate how to construct an explicit isomorphism with ${{A^{f\natural}}}$.
In the final section §\[sec:MTseries\] we consider the McKay–Thompson series associated to elements of ${\operatorname{\textsl{Co}}}_1$ acting on ${{A^{f\natural}}}$. One can derive explicit expressions for each of the McKay–Thompson series associated to elements of ${\operatorname{\textsl{Co}}}_1$ in terms of the frame shapes of the corresponding preimages in ${\operatorname{\textsl{Co}}}_0$. These expressions are recorded in Theorem \[ThmChars\].
Notation {#Sec:Notation}
--------
If $M$ is a vector space over $\mathbb{F}$ and $\mathbb{E}$ is a field containing ${{\mathbb F}}$, we write $_{\mathbb{E}}M =\mathbb{E}
\otimes_{\mathbb{F}}M$ for the vector space over $\mathbb{E}$ obtained by extension of scalars. For the remainder we shall use ${{\mathbb F}}$ to denote either ${{\mathbb R}}$ or ${{\mathbb C}}$. We choose a square root of $-1$ in ${{\mathbb C}}$ and denote it by ${{\bf i}}$. For $q$ a prime power ${{\mathbb F}}_q$ shall denote a field with $q$ elements. For $G$ a finite group we write ${{\mathbb F}}G$ for the group algebra of $G$ over ${{\mathbb F}}$.
For ${\Sigma}$ a finite set, we denote the power set of ${\Sigma}$ by ${\mathcal{P}}({\Sigma})$. The set operation of symmetric difference (which we denote by $+$) equips ${\mathcal{P}}({\Sigma})$ with a structure of ${{\mathbb F}}_2$-vector space, and with this structure in mind, we sometimes write ${{\mathbb F}}_2^{{\Sigma}}$ in place of ${\mathcal{P}}({\Sigma})$. Suppose that ${\Sigma}$ has $N$ elements. The space ${{\mathbb F}}_2^{{\Sigma}}$ comes equipped with a function $w:{{\mathbb F}}_2^{{\Sigma}}\to \{0,1,\ldots,N\}$ called [*weight*]{}, which assigns to an element $\gamma\in{{\mathbb F}}_2^{{\Sigma}}$ the cardinality of the corresponding element of ${\mathcal{P}}({\Sigma})$. An ${{\mathbb F}}_2$-subspace of ${{\mathbb F}}_2^{{\Sigma}}$ is called a [*linear binary code of length $N$*]{}. A linear binary code ${\mathcal{C}}$ is called [*even*]{} if $2|w(C)$ for all $C\in{\mathcal{C}}$, and is called [*doubly-even*]{} if $4|w(C)$ for all $C\in{\mathcal{C}}$. For ${\mathcal{C}}<{{\mathbb F}}_2^{{\Sigma}}$ a linear binary code, the [*co-code*]{} ${\mathcal{C}}^*$ is the space ${{\mathbb F}}_2^{{\Sigma}}/{\mathcal{C}}$. We write $X\mapsto
\bar{X}$ for the canonical map ${{\mathbb F}}_2^{{\Sigma}}\to{\mathcal{C}}^*$. The weight function on ${{\mathbb F}}_2^{{\Sigma}}$ induces a function $w^*$ on ${\mathcal{C}}^*$ called [*co-weight*]{}, which assigns to $\bar{X}\in{\mathcal{C}}^*$ the minimum weight amongst all preimages of $\bar{X}$ in ${{\mathbb F}}_2^{{\Sigma}}$. Once a choice of code ${\mathcal{C}}$ has been made, it will be convenient to regard $w^*$ as a function on ${{\mathbb F}}_2^{{\Sigma}}$ by setting $w^*(X):=w^*(\bar{X})$ for $X\in{{\mathbb F}}_2^{{\Sigma}}$.
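For instance, if ${\Sigma}=\{1,2,3,4\}$ and ${\mathcal{C}}=\{\emptyset,{\Sigma}\}$, then for $X=\{1,2,3\}$ one has $w(X)=3$ but $w^*(X)=1$, since $X+{\Sigma}=\{4\}$ is the minimum weight representative of $\bar{X}$ in ${{\mathbb F}}_2^{{\Sigma}}$.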
The Virasoro algebra is the universal central extension of the Lie algebra of polynomial vector fields on the circle (see §\[sec:PreN=1SVOAs\] for an algebraic formulation). In the case that a vector space $M$ admits an action by the Virasoro algebra, and the action of $L(0)$ is diagonalizable, we write $M=\coprod_nM_n$ where $M_n=\{v\in M\mid L(0)v=nv\}$. We call $M_n$ the homogeneous subspace of degree $n$, and we write ${\rm
deg}(u)=n$ for $u\in M_n$.
When $M$ is a super vector space, we write $M=M_{\bar{0}}\oplus
M_{\bar{1}}$ for the superspace decomposition. For $u\in M$ we set $|u|=k$ when $u$ is ${{\mathbb Z}}/2$-homogeneous and $u\in M_{\bar{k}}$ for $k\in\{0,1\}$. The dual space $M^*$ has a natural superspace structure such that $M^*_{\gamma}=(M_{\gamma})^*$ for $\gamma\in
\{\bar{0},\bar{1}\}$.
There are various vector spaces throughout the paper that admit an action by a linear automorphism of order two denoted ${\theta}$. Suppose that $M$ is such a space. Then we write $M^k$ for the ${\theta}$-eigenspace with eigenvalue $(-1)^k$. Note that $M$ may be a super vector space, and $M=M^0\oplus M^1$ may or may not coincide with the superspace grading on $M$.
We denote by $D_z$ the operator on formal Laurent series which is formal differentiation in the variable $z$, so that if $f(z)=\sum
f_rz^{-r-1}\in V\{z\}$ is a formal Laurent series with coefficients in some space $V$, we have $D_zf(z)=\sum(-r)f_{r-1}z^{-r-1}$. For $m$ a positive integer, we set $D_z^{(m)}=\tfrac{1}{m!}D_z^m$.
As is customary, we use $\eta(\tau)$ to denote the Dedekind eta function. $$\begin{gathered}
\label{Dedetafun}
\eta(\tau)=q^{1/24}\prod_{n= 1}^{\infty}(1-q^n)\end{gathered}$$ Here $q=e^{2\pi{{\bf i}}\tau}$ and $\tau$ is a variable in the upper half plane ${{\mathbf h}}=\{\sigma+{{\bf i}}t\mid t>0\}$.
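For concreteness we note the first terms of the Fourier expansion of $\eta(\tau)$, which follow from Euler's pentagonal number theorem. $$\begin{gathered}
\eta(\tau)=q^{1/24}\left(1-q-q^2+q^5+q^7-q^{12}-q^{15}+\cdots\right)\end{gathered}$$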
We write ${\operatorname{\textsl{Co}}}_1$ for an abstract group isomorphic to Conway’s largest sporadic group. We write ${\operatorname{\textsl{Co}}}_0$ for an abstract group isomorphic to the perfect double cover of ${\operatorname{\textsl{Co}}}_1$. It is well known that ${\operatorname{\textsl{Co}}}_0$ is isomorphic to the automorphism group of the Leech lattice [@ConCnstCo0], and in particular, admits an orthogonal representation of degree $24$ writable over ${{\mathbb Z}}$.
The most specialized notations arise in §\[sec:cliffalgs\]. We include here a list of them, with the relevant subsections indicated in brackets.
- ${\mathfrak{u}}$: A real or complex vector space of even dimension with non-degenerate bilinear form, assumed to be positive definite in the real case (§\[sec:cliffalgs:struc\]).

- $\{e_i\}_{i\in{\Sigma}}$: A basis for ${\mathfrak{u}}$, orthonormal in the sense that ${{\langle}}e_i,e_j{{\rangle}}=\delta_{ij}$ for $i,j\in{\Sigma}$ (§\[sec:cliffalgs:struc\]).

- $e_I$: We write $e_I$ for $e_{i_1}\cdots e_{i_k}\in{{\rm Cliff}}({\mathfrak{u}})$ when $I=\{i_1,\ldots,i_k\}$ is a subset of ${\Sigma}$ and $i_1<\cdots<i_k$ (§\[sec:cliffalgs:struc\]).

- $g(\cdot)$: We write $g\mapsto g(\cdot)$ for the natural homomorphism ${\operatorname{\textsl{Spin}}}({\mathfrak{u}})\to{\operatorname{\textsl{SO}}}({\mathfrak{u}})$. Regarding $g\in{\operatorname{\textsl{Spin}}}({\mathfrak{u}})$ as an element of ${{\rm Cliff}}({\mathfrak{u}})^{\times}$ we have $g(u)=gug^{-1}$ in ${{\rm Cliff}}({\mathfrak{u}})$ for $u\in{\mathfrak{u}}$. More generally, we write $g(x)$ for $gxg^{-1}$ when $x$ is any element of ${{\rm Cliff}}({\mathfrak{u}})$.

- $e_I(a)$: When $I$ is even, $e_I\in{{\rm Cliff}}({\mathfrak{u}})$ lies also in ${\operatorname{\textsl{Spin}}}({\mathfrak{u}})$, and $e_I(a)$ denotes $e_Ia{e_I}^{-1}$ when $a\in{{\rm Cliff}}({\mathfrak{u}})$.

- $\alpha$: The main anti-automorphism of a Clifford algebra (§\[sec:cliffalgs:struc\]).

- ${{\mathfrak{z}}}$: We denote $e_{{\Sigma}}\in{\operatorname{\textsl{Spin}}}({\mathfrak{u}})$ also by ${{\mathfrak{z}}}$ (§\[sec:cliffalgs:spin\]).

- ${\theta}$: The map which is $-{\operatorname{Id}}$ on ${\mathfrak{u}}$, or the parity involution on ${{\rm Cliff}}({\mathfrak{u}})$ (§\[sec:cliffalgs:struc\]), or the parity involution on $A({\mathfrak{u}})_{\Theta}$ (§\[sec:cliffalgs:SVOAs\]).

- $1_E$: A vector in ${{\rm CM}}({\mathfrak{u}})_E$ such that $x1_E=1_E$ for $x$ in $E$ (§\[sec:cliffalgs:mods\]).

- ${\bf 1}_{{\theta}}$: The vector corresponding to $1_E$ under the identification between ${{\rm CM}}({\mathfrak{u}})_{E}$ and $(A({\mathfrak{u}})_{{\theta},E})_{N/8}$ when ${\mathfrak{u}}$ has dimension $2N$ (§\[sec:cliffalgs:SVOAs\]).

- ${\Sigma}$: A finite ordered set indexing an orthonormal basis for ${\mathfrak{u}}$ (§\[sec:cliffalgs:struc\]).

- ${\mathcal{E}}$: A label for the basis $\{e_i\}_{i\in{\Sigma}}$ (§\[sec:cliffalgs:struc\]).

- $E$: A subgroup of ${{\rm Cliff}}({\mathfrak{u}})^{\times}$ homogeneous with respect to the ${{\mathbb F}}_2^{{\mathcal{E}}}$ grading on ${{\rm Cliff}}({\mathfrak{u}})$ (§\[sec:cliffalgs:mods\]).

- $A({\mathfrak{u}})$: The Clifford module SVOA associated to the vector space ${\mathfrak{u}}$ (§\[sec:cliffalgs:SVOAs\]).

- $A({\mathfrak{u}})_{{\theta}}$: The canonically ${\theta}$-twisted module over $A({\mathfrak{u}})$ (§\[sec:cliffalgs:SVOAs\]).

- $A({\mathfrak{u}})_{{\theta},E}$: The ${\theta}$-twisted module $A({\mathfrak{u}})_{{\theta}}$ realized in such a way that the subspace of minimal degree is identified with ${{\rm CM}}({\mathfrak{u}})_E$ (§\[sec:cliffalgs:SVOAs\]).

- $A({\mathfrak{u}})_{\Theta}$: The direct sum of $A({\mathfrak{u}})$-modules $A({\mathfrak{u}})\oplus A({\mathfrak{u}})_{{\theta}}$ (§\[sec:cliffalgs:SVOAs\]).

- ${\mathcal{C}}(E)$: The linear binary code on ${\Sigma}$ consisting of elements $I$ in ${{\mathbb F}}_2^{{\Sigma}}$ for which $E$ has non-trivial intersection with ${{\mathbb F}}e_I\subset{{\rm Cliff}}({\mathfrak{u}})$ (§\[sec:cliffalgs:mods\]).

- ${{\rm CM}}({\mathfrak{u}})_E$: The module over ${{\rm Cliff}}({\mathfrak{u}})$ induced from a trivial module over $E$ (§\[sec:cliffalgs:mods\]).

- ${{\rm Cliff}}({\mathfrak{u}})$: The Clifford algebra associated to the vector space ${\mathfrak{u}}$ (§\[sec:cliffalgs:struc\]).

- ${\operatorname{\textsl{Spin}}}({\mathfrak{u}})$: The spin group associated to the vector space ${\mathfrak{u}}$ (§\[sec:cliffalgs:spin\]).

- $\langle\cdot\,,\cdot\rangle$: A non-degenerate symmetric bilinear form on ${\mathfrak{u}}$ or on ${{\rm Cliff}}({\mathfrak{u}})$ (§\[sec:cliffalgs:struc\]), or on the ${{\rm Cliff}}({\mathfrak{u}})$-module ${{\rm CM}}({\mathfrak{u}})_E$ (§\[sec:cliffalgs:mods\]). In the case that ${\mathfrak{u}}$ is real all of these forms will be positive definite.

- $\langle\cdot\mid\cdot\rangle$: A non-degenerate symmetric bilinear form on $A({\mathfrak{u}})_{\Theta}$ (§\[sec:cliffalgs:SVOAs\]).
From §\[sec:strucafn\] onwards we restrict to the case that

- ${\mathfrak{l}}$ is a $24$ dimensional real vector space with positive definite symmetric bilinear form ${{\langle}}\cdot\,,\cdot{{\rangle}}$;

- $\Omega$ is an ordered set with $24$ elements indexing an orthonormal basis ${\mathcal{E}}=\{e_i\}_{i\in\Omega}$ for ${\mathfrak{l}}$;

- $G$ is a subgroup of ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ such that ${\mathcal{C}}(G)$ is a copy of the Golay code ${\mathcal{G}}$;

since this is the situation that will be relevant for the construction of ${{A^{f\natural}}}$.
SVOA structure {#sec:PreSVOAs}
==============
SVOAs {#sec:SVOAstruc:SVOAs}
-----
Suppose that $U=U_{\bar{0}}\oplus U_{\bar{1}}$ is a super vector space over ${{\mathbb F}}$. For an [*SVOA structure*]{} on $U$ we require the following data.
- [*Vertex operators:*]{} a map $U\otimes U\to
U((z))$ denoted $u\otimes v\mapsto Y(u,z)v$ such that, when we write $Y(u,z)v=\sum_{{{\mathbb Z}}}u_nvz^{-n-1}$, we have $u_nv\in
U_{\gamma+\delta}$ when $u\in U_{\gamma}$ and $v\in U_{\delta}$, and such that $Y(u,z)v=0$ for all $v\in U$ implies $u=0$.
- [*Vacuum:*]{} a distinguished vector ${{\bf 1}}\in U_{\bar{0}}$ such that $Y({{\bf 1}},z)u=u$ for $u\in U$, and $Y(u,z){{\bf 1}}|_{z=0}=u$.
- [*Conformal element:*]{} a distinguished vector ${{\bf \omega}}\in
U_{\bar{0}}$ such that the operators $L(n)={{\bf \omega}}_{n+1}$ furnish a representation of the Virasoro algebra on $U$ (c.f. §\[sec:PreN=1SVOAs\]).
This data furnishes an SVOA structure on $U$ just when the following axioms are satisfied.
1. [*Translation:*]{} for $u\in U$ we have $[L(-1),Y(u,z)] =D_zY(u,z)$.
2. [*Jacobi Identity:*]{} the following Jacobi identity is satisfied for ${{\mathbb Z}}/2$ homogeneous $u,v\in U$. $$\begin{gathered}
\begin{split}
&
z_0^{-1}\delta\left(\frac{z_1-z_2}{z_0}\right)
Y(u,z_1)Y(v,z_2)\\
&-(-1)^{|u||v|} z_0^{-1}\delta\left(\frac{z_2-z_1}{-z_0}\right)
Y(v,z_2)Y(u,z_1)
\\
&=
z_2^{-1}\delta\left(\frac{z_1-z_0}{z_2}\right)
Y(Y(u,z_0)v,z_2)
\end{split}\end{gathered}$$ when $u\in U_{\gamma}$ and $v\in U_{\delta}$.
3. [*$L(0)$-grading:*]{} the action of $L(0)$ on $U$ is diagonalizable with rational eigenvalues bounded below, by $-N$ say, and thus defines a ${{\mathbb Q}}_{>-N}$-grading $U=\coprod_n U_n$ on $U$. This grading is such that the $L(0)$-homogeneous subspaces $U_n=\{u\in U\mid L(0)u=nu\}$ are finite dimensional.
In the case that $U=U_{\bar{0}}$ we are speaking of ordinary VOAs.
An SVOA $U$ is said to be [*$C_2$-cofinite*]{} in the case that $U_{-2}U=\{u_{-2}v\mid u,v\in U\}$ has finite codimension in $U$. Following [@HohnPhD] we say that an SVOA $U$ is [*nice*]{} when it is $C_2$-cofinite, the eigenvalues of $L(0)$ are non-negative and contained in ${\tfrac{1}{2}{{\mathbb Z}}}$, and the degree zero subspace $U_0$ is one dimensional and spanned by the vacuum vector. All the SVOAs we consider in this paper will be nice SVOAs.
By definition the coefficients of $Y({{\bf \omega}},z)$ define a representation of the Virasoro algebra on $U$ (c.f. §\[sec:PreN=1SVOAs\]). Let $c\in{{\mathbb C}}$ be such that the central element of the Virasoro algebra acts as $c{\rm Id}$ on $U$. Then $c$ is called the [*rank*]{} of $U$, and we denote it by ${\rm
rank}(U)$.
SVOA Modules {#sec:SVOAMods}
------------
For an SVOA module over $U$ we require a ${{\mathbb Z}}/2$-graded vector space $M=M_{\bar{0}}\oplus M_{\bar{1}}$ and a map $U\otimes M\to M((z))$ denoted $u\otimes v\mapsto Y^M(u,z)v$ such that, when we write $Y^M(u,z)v = \sum_{{{\mathbb Z}}} u^M_nvz^{-n-1}$, we have $u^M_nv \in
M_{\gamma+\delta}$ when $u\in U_{\gamma}$ and $v\in M_{\delta}$. The pair $(M,Y^M)$ is called an [*admissible $U$-module*]{} just when the following axioms are satisfied.
1. [*Vacuum:*]{} the operator $Y^M({{\bf 1}},z)$ is the identity on $M$.
2. [*Jacobi Identity:*]{} for ${{\mathbb Z}}/2$-homogeneous $u,v\in U$ we have $$\begin{gathered}
\begin{split}
&
z_0^{-1}\delta\left(\frac{z_1-z_2}{z_0}\right)
Y^M(u,z_1)Y^M(v,z_2)\\
&-(-1)^{|u||v|} z_0^{-1}\delta\left(\frac{z_2-z_1}{-z_0}\right)
Y^M(v,z_2)Y^M(u,z_1)
\\
&=
z_2^{-1}\delta\left(\frac{z_1-z_0}{z_2}\right)
Y^M(Y(u,z_0)v,z_2)
\end{split}\end{gathered}$$ where $u\in U_{\gamma}$, $v\in U_{\delta}$.
3. [*Grading:*]{} The space $M$ carries a ${\tfrac{1}{2}{{\mathbb Z}}}$-grading $M=\coprod_rM(r)$ bounded from below such that $u_nM(r)\subset
M(m+r-n-1)$ for $u\in U_m$.
We say that an admissible $U$-module $(M,Y^M)$ is an [*ordinary $U$-module*]{} if there is some $h\in {{\mathbb C}}$ such that the ${\tfrac{1}{2}{{\mathbb Z}}}$-grading $M=\coprod_r M(r)$ satisfies $M(r)=\{m\in M\mid L(0)m=(r+h)m\}$, and the spaces $M(r)$ are finite dimensional for each $r\in{\tfrac{1}{2}{{\mathbb Z}}}$. In this case we set $M_{h+r}=M(r)$ and write $M=\coprod_{r} M_{h+r}$.
All the SVOA modules that we consider will be ordinary modules. We understand that unless otherwise qualified, the term module shall mean ordinary module.
The significance of the notion of admissible module is the result of [@DonLiMasTwRepsVOAs] that if every admissible module over a VOA is completely reducible, then there are finitely many irreducible admissible modules up to equivalence, and every finitely generated admissible module is an ordinary module.
A VOA is called [*rational*]{} if each of its admissible modules is completely reducible. We define an SVOA to be [*rational*]{} if its even sub-VOA is rational. An SVOA $U$ is called [*simple*]{} if it is irreducible as a module over itself. We say that a rational SVOA is [*self-dual*]{} if it is simple, and has no irreducible modules other than itself.
Intertwiners
------------
Let $V$ be a VOA, and let $(M^i,Y^i)$, $(M^j,Y^j)$ and $(M^k,Y^k)$ be three $V$-modules. Suppose given a map $M^i\otimes M^j\to
M^k\{z\}$, and employ the notation $u\otimes v\mapsto
Y^{ij}_k(u,z)v=\sum_{n\in{{\mathbb Q}}}u_nvz^{-n-1}$. We assume that for any $u\in M^i$ and $v\in M^j$ we have $u_nv=0$ for $n$ sufficiently large, and we assume that $Y^{ij}_k(u,z)=0$ only for $u=0$. Then the map $Y^{ij}_k$ is called an [*intertwining operator of type $\binom{k}{i\;j}$*]{} just when the following axioms are satisfied.
1. [*Translation:*]{} for $u\in M^i$ we have $Y^{ij}_k(L^i(-1)u,z) =D_zY^{ij}_k(u,z)$.
2. [*Jacobi Identity:*]{} for $u\in V$, $v\in M^i$ and $w\in
M^j$ we have $$\begin{gathered}
\begin{split}
&z_0^{-1}\delta\left(\frac{z_1-z_2}{z_0}\right)
Y^k(u,z_1)Y^{ij}_k(v,z_2)w\\
&-z_0^{-1}\delta\left(\frac{z_2-z_1}{-z_0}\right)
Y^{ij}_k(v,z_2)Y^j(u,z_1)w
\\
&=
z_2^{-1}\delta\left(\frac{z_1-z_0}{z_2}\right)
Y^{ij}_k(Y^i(u,z_0)v,z_2)w
\end{split}\end{gathered}$$
Twisted SVOA modules {#sec:SVOATwMods}
--------------------
Any SVOA, $U$ say, admits an order two involution which is the identity on $U_{\bar{0}}$ and acts as $-1$ on $U_{\bar{1}}$. We refer to this involution as the [*canonical involution*]{} on $U$, and denote it by $\sigma$. For a structure of [*canonically twisted*]{} or [*$\sigma$-twisted*]{} $U$-module on a vector space $M$ we require a map $Y^M:U\otimes M\to M((z^{1/2}))$ such that when we write $Y^M(u,z)=\sum_{n}u^M_nz^{-n-1}$ for ${{\mathbb Z}}/2$-homogeneous $u\in U$, then $u^M_n=0$ unless $u\in U_{\bar{k}}$ and $n\in
{{\mathbb Z}}+\tfrac{k}{2}$. The pair $(M,Y^M)$ is called an [*admissible canonically twisted $U$-module*]{} when it satisfies just the same axioms as for untwisted admissible $U$-modules except that we modify the Jacobi identity axiom and the grading condition as follows.
1. [*Twisted Jacobi identity:*]{} For ${{\mathbb Z}}/2$-homogeneous $u,v\in U$ we require that $$\begin{gathered}
\begin{split}
&z_0^{-1}\delta\left(\frac{z_1-z_2}{z_0}\right)
Y^M(u,z_1)Y^M(v,z_2)\\
&-(-1)^{|u||v|} z_0^{-1}\delta\left(\frac{z_2-z_1}{-z_0}\right)
Y^M(v,z_2)Y^M(u,z_1)
\\
&=z_2^{-1}\delta\left(\frac{z_1-z_0}{z_2}\right)
\left(\frac{z_1-z_0}{z_2}\right)^{-k/2}
Y^M(Y(u,z_0)v,z_2)
\end{split}\end{gathered}$$ when $u\in U_{\bar{k}}$.
2. [*Twisted grading:*]{} The space $M$ carries a ${{\mathbb Z}}$-grading $M=\coprod_rM(r)$ bounded from below such that $u_nM(r)\subset M(m+r-n-1)$ for $u\in U_m$.
We say that an admissible canonically twisted $U$-module $(M,Y^M)$ is an [*ordinary canonically twisted $U$-module*]{} if there is some $h\in {{\mathbb C}}$ such that the ${{\mathbb Z}}$-grading $M=\coprod_r M(r)$ satisfies $M(r)=\{m\in M\mid L(0)m=(r+h)m\}$, and the spaces $M(r)$ are finite dimensional for each $r\in{{\mathbb Z}}$. In this case we set $M_{h+r}=M(r)$ and write $M=\coprod_{r} M_{h+r}$.
As in §\[sec:SVOAMods\], we convene that unless otherwise qualified, the term “canonically twisted module” shall mean “ordinary canonically twisted module”.
A canonically twisted module $(M,Y^M)$ over an SVOA $V$ is called [*$\sigma$-stable*]{} if it admits an action by $\sigma$ compatible with that on $V$, so that we have $\sigma Y^M(u,z)v=Y^M(\sigma
u,z)\sigma v$ for $u\in V$, $v\in M$.
An important result we will make use of is the following.
\[thm:sdVOAhastwmod\] If $V$ is a self-dual rational $C_2$-cofinite SVOA then $V$ has a unique irreducible $\sigma$-stable $\sigma$-twisted module.
Recall from §\[sec:SVOAMods\] that we define an SVOA to be rational in case its even sub-VOA is rational. This is a stronger condition than the notion of rationality in [@DonZhaMdltyOrbSVOA], and an SVOA that is rational in our sense is both rational and $\sigma$-rational in the sense of [@DonZhaMdltyOrbSVOA].
For $V$ satisfying the hypotheses of Theorem \[thm:sdVOAhastwmod\] we will write $(V_{\sigma},Y_{\sigma})$ for the unique irreducible canonically twisted $V$-module this theorem guarantees.
$N=1$ SVOAs {#sec:PreN=1SVOAs}
-----------
The Neveu–Schwarz superalgebra is the Lie superalgebra spanned by the symbols $L_m$, $G_{m+1/2}$, and ${\bf{c}}$, for $m\in{{\mathbb Z}}$, and subject to the following relations [@KacWanSVOAs]. $$\begin{gathered}
\label{SVirRelf}
[L_m,L_n]=(m-n)L_{m+n}+\frac{m^3-m}{12}\delta_{m+n,0}\bf{c},\\
\left[ G_{m+1/2},L_n\right]=
\left(m+\frac{1}{2}-\frac{n}{2}\right)G_{m+n+1/2},\\
\left\{G_{m+1/2},G_{n-1/2}\right\}=2L_{m+n}+
\frac{m^2+m}{3}\delta_{m+n,0}\bf{c},\\ \left[L_m,\bf{c}\right]=\left[G_{m+1/2},\bf{c}\right]=0.\end{gathered}$$ Note that this algebra is generated by the $G_{m+1/2}$ for $m\in{{\mathbb Z}}$. The subalgebra generated by the $L_n$ is the Virasoro algebra.
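As a check on these conventions, setting $r=m+\tfrac{1}{2}$ and $s=n-\tfrac{1}{2}$ recovers the perhaps more familiar form of the relations, $$\begin{gathered}
\left\{G_{r},G_{s}\right\}=2L_{r+s}+\frac{1}{3}\left(r^2-\frac{1}{4}\right)\delta_{r+s,0}{\bf{c}},\qquad
\left[L_n,G_{r}\right]=\left(\frac{n}{2}-r\right)G_{n+r},\end{gathered}$$ so that, for instance, $\{G_{1/2},G_{-1/2}\}=2L_0$ and $\{G_{3/2},G_{-3/2}\}=2L_0+\tfrac{2}{3}{\bf{c}}$.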
Suppose that $U$ is an SVOA. We say that $U$ is an $N=1$ SVOA if there is an element ${{\bf \tau}}\in U_{3/2}$ such that the operators $G(n+\tfrac{1}{2})=\tau_{n+1}$ generate a representation of the Neveu–Schwarz superalgebra on $U$. We refer to such an element $\tau$ as a [*superconformal element*]{} for $U$. In general, a superconformal element for an SVOA $U$ is not unique and may not even exist, but for any superconformal $\tau$ we have that $\tfrac{1}{2}\tau_{0}\tau =\tfrac{1}{2}G(-\tfrac{1}{2})\tau$ is a conformal element. That is to say, the coefficients of $Y(\tfrac{1}{2}G(-\tfrac{1}{2})\tau,z)$ generate a representation of the Virasoro algebra on $U$. Given an SVOA $U$, we will always assume that any superconformal element for $U$ is chosen so that $\tfrac{1}{2}G(-\tfrac{1}{2})\tau={{\bf \omega}}$.
Note that the commutation relations for the Neveu–Schwarz superalgebra of central charge $c$ are equivalent to the following operator product expansions [@DixGinHarBB]. $$\begin{gathered}
Y({{\bf \omega}},z_1)Y({{\bf \omega}},z_2)\sim\frac{c/2}{(z_1-z_2)^{4}}+
\frac{2Y({{\bf \omega}},z_2)}{(z_1-z_2)^{2}}+
\frac{D_{z_2}Y({{\bf \omega}},z_2)}{(z_1-z_2)}\\
Y({{\bf \omega}},z_1)Y({{\bf \tau}},z_2)\sim
\frac{3}{2}\frac{Y({{\bf \tau}},z_2)}{(z_1-z_2)^{2}}+
\frac{D_{z_2}Y({{\bf \tau}},z_2)}{(z_1-z_2)}\\
Y({{\bf \tau}},z_1)Y({{\bf \tau}},z_2)\sim\frac{2c/3}{(z_1-z_2)^{3}}+
\frac{2Y({{\bf \omega}},z_2)}{(z_1-z_2)}\end{gathered}$$ Consequently, we have the following
\[prop:SConfCriterion\] Suppose $U$ is an SVOA with conformal element ${{\bf \omega}}$ and central charge $c$. Then $\tau\in (U)_{3/2}$ is a superconformal element for $U$ so long as $\tau_{2}\tau=\tfrac{2}{3}c{{\bf 1}}$, $\tau_{1}\tau=0$ and $\tau_{0}\tau={2}{{\bf \omega}}$.
Given an $N=1$ SVOA $U$ with superconformal element $\tau$ and conformal element $\omega=\tfrac{1}{2}\tau_0\tau$, we write ${\operatorname{Aut}}_{SVOA}(U)$ for the group of automorphisms of the SVOA structure on $U$, and we write ${\operatorname{Aut}}(U)$ for the group of automorphisms of the $N=1$ SVOA structure on $U$. That is, for $U$ an $N=1$ SVOA, ${\operatorname{Aut}}(U)$ denotes the subgroup of ${\operatorname{Aut}}_{SVOA}(U)$ comprised of automorphisms that fix $\tau$.
Adjoint operators {#Sec:AdjOps}
-----------------
Suppose that $U$ is a nice SVOA. For $M$ a module over $U$, let $M'$ denote the restricted dual of $M$, and let $\langle\cdot\,,\cdot\rangle_{M}$ be the natural pairing $M'\times
M\to {{\mathbb C}}$. We define the adjoint vertex operators $Y':U\otimes M'\to
M'\{z\}$ by requiring that $$\begin{gathered}
\langle Y'(u,z)w',w\rangle_M
=(-1)^n\left\langle w',Y(e^{zL(1)}
z^{-2L(0)}u,z^{-1})w\right\rangle_M\end{gathered}$$ for $u\in U_{n-1/2}\oplus U_n$ with $n\in {{\mathbb Z}}$, where $w'\in M'$ and $w\in M$. As in [@FHL] we have
The map $Y'$ equips $M'$ with a structure of $U$-module.
Let $\langle\cdot| \cdot\rangle:U\otimes U\to{{\mathbb C}}$ be a bilinear form on $U$ such that $\langle U_m\mid U_n\rangle\subset\{0\}$ unless $m=n$. Then there is a unique grading preserving linear map $\varphi:U\to U'$ determined by the formula $\langle\varphi(u),v\rangle_U=\langle u\mid v\rangle$, and $\varphi$ is a $U$-module equivalence if and only if $$\begin{gathered}
\left\langle Y(u,z)w_1\mid w_2\right\rangle
=\left\langle w_1\mid Y(e^{zL(1)}
z^{-2L(0)}u,z^{-1})w_2\right\rangle\end{gathered}$$ for all $u$, $w_1$ and $w_2$ in $U$. When this identity is satisfied we say that the bilinear form $\langle\cdot|
\cdot\rangle$ is an invariant form for $U$.
\[InvFrmIsSymm\] Suppose that $\langle\cdot| \cdot\rangle$ is an invariant form for $U$. Then it is symmetric.
Just as in the untwisted case, we can define the adjoint canonically twisted vertex operators. For $(M,Y)$ a canonically twisted $U$ module, we define operators $Y':U\otimes M'\to
M'\{z\}$ by requiring that $$\begin{gathered}
\langle Y'(u,z)w',w\rangle_{M}
=(-1)^n\langle w',Y(e^{zL(1)}
z^{-2L(0)}u,z^{-1})w\rangle_{M}\end{gathered}$$ for $u\in U_{n-1/2}\oplus U_n$ with $n\in{{\mathbb Z}}$, where $w'\in M'$, $w\in M$, and $\langle\cdot\,,\cdot\rangle_{M}$ is the natural pairing $M'\times M\to{{\mathbb C}}$. As in the untwisted case, we have
The map $Y'$ equips $M'$ with a structure of canonically twisted $U$-module.
Lattice SVOAs {#sec:SVOAstruc:LattSVOAs}
-------------
Let $L$ be a positive definite integral lattice, and recall that ${{\mathbb F}}$ denotes ${{\mathbb R}}$ or ${{\mathbb C}}$. Then the following standard procedure associates to $L$ an SVOA defined over ${{\mathbb F}}$, which we shall denote by $_{{{\mathbb F}}}V_L$. We refer the reader to [@FLM] for more details.
Let $_{{{\mathbb F}}}{\mathfrak{h}}={{\mathbb F}}\otimes_{{{\mathbb Z}}} L$ and let $_{{{\mathbb F}}}\hat{{\mathfrak{h}}}$ denote the Heisenberg Lie algebra described by $$\begin{gathered}
_{{{\mathbb F}}}\hat{{\mathfrak{h}}}=\coprod_{n\in{{\mathbb Z}}}
\,_{{{\mathbb F}}}{\mathfrak{h}}\otimes t^n\oplus {{\mathbb F}}c,\quad
[h(m),h'(n)]=m\langle h,h'\rangle\delta_{m+n,0}c,\quad
[h(m),c]=0.\end{gathered}$$ We denote by $_{{{\mathbb F}}}\hat{{\mathfrak{b}}}$ and $_{{{\mathbb F}}}\hat{{\mathfrak{b}}}'$, the (commutative) subalgebras of $_{{{\mathbb F}}}\hat{{\mathfrak{h}}}$ given by $$\begin{gathered}
_{{{\mathbb F}}}\hat{{\mathfrak{b}}}=\coprod_{n\in{{\mathbb Z}}_{\geq 0}}\,
_{{{\mathbb F}}}{\mathfrak{h}}\otimes t^n\oplus {{\mathbb F}}c,\qquad
_{{{\mathbb F}}}\hat{{\mathfrak{b}}}'=\coprod_{n\in{{\mathbb Z}}_{< 0}}\,
_{{{\mathbb F}}}{\mathfrak{h}}\otimes t^n.\end{gathered}$$ Let $\hat{L}$ be the unique up to equivalence extension of $L$ by a group $\langle\kappa\rangle$ with generator $\kappa$ of order two, such that the commutators in $\hat{L}$ satisfy $$\begin{gathered}
aba^{-1}b^{-1}=\kappa^{\langle\bar{a},\bar{b}\rangle+
\langle\bar{a},\bar{a}\rangle
\langle\bar{b},\bar{b}\rangle}\end{gathered}$$ where $a\mapsto \bar{a}$ denotes the natural homomorphism $\hat{L}\to L$. We have the following short exact sequence. $$\begin{gathered}
1\to\langle\kappa\mid\kappa^2=1\rangle\to\hat{L}
\xrightarrow{-}L\to 1\end{gathered}$$ Let ${{\mathbb F}}\{L\}$ denote the $\hat{L}$-module obtained by factoring ${{\mathbb F}}\hat{L}$ by the ideal generated by $\kappa+1$. We write $\iota(a)$ for the image of $a\in\hat{L}$ in ${{\mathbb F}}\{L\}$ under the composition of maps $\hat{L}\hookrightarrow{{\mathbb F}}\hat{L}\to{{\mathbb F}}\{L\}$. The space ${{\mathbb F}}\{L\}$ is again an algebra, and is linearly isomorphic to ${{\mathbb F}}L$. The algebra ${{\mathbb F}}\{L\}$ may be equipped with a structure of $_{{{\mathbb F}}}\hat{{\mathfrak{b}}}$-module as follows. $$\begin{gathered}
h(m)\cdot \iota(a)=0\;\;\text{for $m>0$},\qquad
h(0)\cdot \iota(a)=\langle h,\bar{a}\rangle \iota(a),\qquad
c\cdot \iota(a)=\iota(a).\end{gathered}$$ As an $_{{{\mathbb F}}}\hat{{\mathfrak{h}}}$-module, $_{{{\mathbb F}}}V_L$ is defined to be that induced from the $_{{{\mathbb F}}}\hat{{\mathfrak{b}}}$-module structure on ${{\mathbb F}}\{L\}$. $$\begin{gathered}
_{{{\mathbb F}}}V_L=U(_{{{\mathbb F}}}\hat{{\mathfrak{h}}})
\otimes_{U(_{{{\mathbb F}}}\hat{{\mathfrak{b}}})}{{\mathbb F}}\{L\}\end{gathered}$$ We identify ${{\mathbb F}}\{L\}$ with the subspace $1\otimes {{\mathbb F}}\{L\}$ of $_{{{\mathbb F}}}V_L$, and we set ${{\bf 1}}=1\otimes \iota(1)$. There is a natural isomorphism of $_{{{\mathbb F}}}\hat{{\mathfrak{b}}}'$-modules, $_{{{\mathbb F}}}V_L\simeq S(_{{{\mathbb F}}}\hat{{\mathfrak{b}}}')\otimes{{\mathbb F}}\{ L\}$.
Now we define the vertex operators on $_{{{\mathbb F}}}V_L$. Let $h\in\,_{{{\mathbb F}}}{\mathfrak{h}}$. We define generating functions $h(z)$ and $l(h,z)$ of operators on $_{{{\mathbb F}}}V_L$ by $$\begin{gathered}
h(z)=\sum_{n\in{{\mathbb Z}}}h(n)z^{-n-1},\qquad
l(h,z)=\sum_{n\in{{\mathbb Z}},\; n\neq 0}\frac{h(n)}{-n}z^{-n}\end{gathered}$$ Then for $h\in\, _{{{\mathbb F}}}{\mathfrak{h}}$ and $a\in\hat{L}$, the vertex operators associated to $\iota(a)=1\otimes \iota(a)$ and $h(-n-1)=h(-n-1)\otimes \iota(1)$ are given by $$\begin{gathered}
Y(h(-n-1),z)=D_z^{(n)}h(z),\qquad
Y(\iota(a),z)=:\exp\left(l(\bar{a},z)\right)az^{\bar{a}(0)}:\end{gathered}$$ respectively, where the colons denote the Bosonic normal ordering: that all operators $h(m)$ with $m\geq 0$ be commuted to the right of all other operators. The remaining vertex operators are determined by the requirement that $Y$ be a linear map, and that $$\begin{gathered}
Y(u_{-1}v,z)=:Y(u,z)Y(v,z):\quad\text{for
$u,v\in\,_{{{\mathbb F}}}V_L$.}\end{gathered}$$ If $\{h_i\}$ is a basis for $_{{{\mathbb F}}}{\mathfrak{h}}$ and $\{h'_i\}$ is the dual basis, let ${{\bf \omega}}=\tfrac{1}{2}\sum h_i(-1)h_i'(-1)$. Then ${{\bf \omega}}$ is independent of the choice of basis, and we have the following
The quadruple $(_{{{\mathbb F}}}V_L,Y,{{\bf 1}},{{\bf \omega}})$ is an SVOA and the rank of $_{{{\mathbb F}}}V_L$ coincides with the rank of $L$.
We refer to $_{{{\mathbb F}}}V_L$ as the SVOA over ${{\mathbb F}}$ associated to $L$ via the standard construction. The superspace decomposition of $_{{{\mathbb F}}}V_L$ is given by $_{{{\mathbb F}}}V_L=\,_{{{\mathbb F}}}V_{L_0}\oplus
\,_{{{\mathbb F}}}V_{L_1}$ where $L_0$ is the sublattice of $L$ consisting of elements with even norm squared, and $L_1$ is the (unique) coset of $L_0$ in $L$, consisting of elements with odd norm squared.
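For orientation we record the $L(0)$-grading of $_{{{\mathbb F}}}V_L$ explicitly; a standard computation with the conformal element ${{\bf \omega}}$ shows that $$\begin{gathered}
{\rm deg}\left(h_1(-n_1)\cdots h_k(-n_k)\otimes\iota(a)\right)
=n_1+\cdots+n_k+\tfrac{1}{2}\langle\bar{a},\bar{a}\rangle\end{gathered}$$ for $h_i\in\,_{{{\mathbb F}}}{\mathfrak{h}}$, $n_i>0$ and $a\in\hat{L}$. In particular, $(_{{{\mathbb F}}}V_L)_{1/2}$ is spanned by the $\iota(a)$ with $\langle\bar{a},\bar{a}\rangle=1$, and $_{{{\mathbb F}}}V_{L_1}$ is exactly the sum of the homogeneous subspaces of degree in ${{\mathbb Z}}+\tfrac{1}{2}$.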
### The involution ${\theta}$ for $_{{{\mathbb F}}}V_L$
The lattice $L$ admits a non-trivial involution that acts by $\alpha\mapsto-\alpha$ for $\alpha\in L$. This involution lifts naturally to automorphisms of $_{{{\mathbb F}}}{\mathfrak{h}}$ and $_{{{\mathbb F}}}\hat{{\mathfrak{h}}}$ and hence to $U(_{{{\mathbb F}}}\hat{{\mathfrak{h}}})$. We denote it by ${\theta}$. We may define an automorphism of ${{\mathbb F}}\{L\}$ by $$\begin{gathered}
\iota(a)\mapsto (-1)^{\langle\bar{a},\bar{a}\rangle/2+
\langle\bar{a},\bar{a}\rangle^2/2}\iota(a^{-1})\end{gathered}$$ for $a\in\hat{L}$. We denote it also by ${\theta}$. Recalling that $_{{{\mathbb F}}}V_L$ was constructed as $$\begin{gathered}
_{{{\mathbb F}}}V_L=U(_{{{\mathbb F}}}\hat{{\mathfrak{h}}})
\otimes_{_{{{\mathbb F}}}\hat{{\mathfrak{b}}}}{{\mathbb F}}\{L\}\end{gathered}$$ we may define an automorphism of $_{{{\mathbb F}}}V_L$ by ${\theta}\otimes{\theta}$ where the ${\theta}$ on the left is that defined for the left tensor factor of $_{{{\mathbb F}}}V_L$, and the one on the right is that just defined on the right tensor factor. Since all these automorphisms may be regarded as lifts of $-1$, we denote ${\theta}\otimes{\theta}$ also by ${\theta}$. Then ${\theta}$ is an automorphism of the VOA structure on $_{{{\mathbb F}}}V_L$, and we may refer to it as a lift to ${\operatorname{Aut}}(_{{{\mathbb F}}}V_L)$ of the $-1$ symmetry on $L$.
It is a result of [@DonGriHohFVOAsMM] that all lifts of $-1$ are conjugate under the action of ${\operatorname{Aut}}(_{{{\mathbb F}}}V_L)$.
The space $_{{{\mathbb F}}}V_L$ decomposes into eigenspaces for the action of ${\theta}$, and we express this decomposition as $_{{{\mathbb F}}}V_L=\,_{{{\mathbb F}}}V_L^0\oplus\,_{{{\mathbb F}}}V_L^1$ where $_{{{\mathbb F}}}V_L^k$ is the eigenspace with eigenvalue $(-1)^k$ for the action of ${\theta}$.
### Real form for $_{{{\mathbb F}}}V_L$ {#RealFormLatSVOA}
The adjoint operators on $_{{{\mathbb F}}}V_L$ determine an invariant bilinear form, which is given by $$\begin{gathered}
\langle u\mid v\rangle {{\bf 1}}={\rm Res}_{z=0}\,z^{-1}(-1)^nY(e^{zL(1)}
z^{-2L(0)}u,z^{-1})v\end{gathered}$$ when $u$ is in $(_{{{\mathbb F}}}V_L)_{n-1/2}$ or $(_{{{\mathbb F}}}V_L)_{n}$ for some $n\in{{\mathbb Z}}$. Consider the case that ${{\mathbb F}}={{\mathbb R}}$. Then the form $\langle\cdot\mid\cdot\rangle$ is not positive definite on $_{{{\mathbb R}}}V_L$. In fact, the form is positive definite on the subspace $_{{{\mathbb R}}}V_L^0$, and is negative definite on $_{{{\mathbb R}}}V_L^1$. Suppose now that we view $_{{{\mathbb R}}}V_L$ as embedded in $_{{{\mathbb C}}}V_L$ curtesy of the the natural inclusion ${{\mathbb R}}\hookrightarrow {{\mathbb C}}$. Let us set ${V}_L$ to be the ${{\mathbb R}}$ subspace of $_{{{\mathbb C}}}V_L$ described by $$\begin{gathered}
{V}_L=\,_{{{\mathbb R}}}V_L^0\oplus {{\bf i}}\,_{{{\mathbb R}}}V_L^1
=\{u+{{\bf i}}v\mid u\in\,_{{{\mathbb R}}}V_L^0,\,
v\in\,_{{{\mathbb R}}}V_L^1\}\end{gathered}$$ Then ${V}_L$ closes under the vertex operators $Y$ associated to $_{{{\mathbb C}}}V_L$, and the form $\langle\cdot\mid\cdot\rangle$ restricts to be positive definite on ${V}_L$. From now on we write ${V}_L$ for the real VOA with positive definite bilinear form obtained in this way, by restricting the form and vertex operator algebra structure from $_{{{\mathbb C}}}V_L$.
Clifford algebras {#sec:cliffalgs}
=================
The construction of SVOAs that we will use is based on the structure of Clifford algebra modules. In this section we recall some basic properties of Clifford algebras and we exhibit a construction of modules over finite dimensional Clifford algebras using doubly-even linear binary codes (see §\[Sec:Notation\]). In §\[sec:cliffalgs:SVOAs\] we recall the construction of SVOA module structure on modules over certain infinite dimensional Clifford algebras.
Clifford algebra structure {#sec:cliffalgs:struc}
--------------------------
In this section ${{\mathbb F}}$ denotes either ${{\mathbb R}}$ or ${{\mathbb C}}$. For ${\mathfrak{u}}$ an ${{\mathbb F}}$-vector space with non-degenerate symmetric bilinear form $\langle \cdot\,,\cdot\rangle$ we write ${{\rm Cliff}}({\mathfrak{u}})$ for the Clifford algebra over ${{\mathbb F}}$ generated by ${\mathfrak{u}}$. More precisely, we set ${{\rm Cliff}}({\mathfrak{u}})=T({\mathfrak{u}})/I({\mathfrak{u}})$ where $T({\mathfrak{u}})$ is the tensor algebra of ${\mathfrak{u}}$ over ${{\mathbb F}}$ with unit denoted ${\bf 1}$, and $I({\mathfrak{u}})$ is the ideal of $T({\mathfrak{u}})$ generated by all expressions of the form $u\otimes u+\langle u,u\rangle{\bf 1}$ for $u\in {\mathfrak{u}}$. The natural algebra structure on $T({\mathfrak{u}})$ induces an associative algebra structure on ${{\rm Cliff}}({\mathfrak{u}})$. The vector space ${\mathfrak{u}}$ embeds in ${{\rm Cliff}}({\mathfrak{u}})$, and when it is convenient we identify ${\mathfrak{u}}$ with its image in ${{\rm Cliff}}({\mathfrak{u}})$. We also write $\alpha$ in place of $\alpha{\bf
1}+I({\mathfrak{u}})\in{{\rm Cliff}}({\mathfrak{u}})$ for $\alpha\in{{\mathbb F}}$ when no confusion will arise. For $u\in {\mathfrak{u}}$ we have the relation $u^2=-|u|^2$ in ${{\rm Cliff}}({\mathfrak{u}})$. Polarization of this identity yields $uv+vu=-2\langle u,v\rangle$ for $u,v\in {\mathfrak{u}}$.
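For example, if $\{e_i\}$ is an orthonormal basis for ${\mathfrak{u}}$ then these relations read $$\begin{gathered}
e_i^2=-1,\qquad e_ie_j=-e_je_i\quad\text{for $i\neq j$,}\end{gathered}$$ so that the ordered products of distinct basis vectors span ${{\rm Cliff}}({\mathfrak{u}})$, which therefore has dimension $2^{\dim{\mathfrak{u}}}$ over ${{\mathbb F}}$.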
The linear transformation on ${\mathfrak{u}}$ which is $-1$ times the identity map lifts naturally to $T({\mathfrak{u}})$ and preserves $I({\mathfrak{u}})$, and hence induces an involution on ${{\rm Cliff}}({\mathfrak{u}})$ which we denote by ${\theta}$. The map ${\theta}$ is often referred to as the [*parity involution*]{}. We have ${\theta}(u_1\cdots u_k)=(-1)^ku_1\cdots
u_k$ for $u_1\cdots u_k\in{{\rm Cliff}}({\mathfrak{u}})$ with $u_i\in {\mathfrak{u}}$, and we write ${{\rm Cliff}}({\mathfrak{u}})={{\rm Cliff}}({\mathfrak{u}})^0\oplus {{\rm Cliff}}({\mathfrak{u}})^1$ for the decomposition into eigenspaces for ${\theta}$. Define a bilinear form on ${{\rm Cliff}}({\mathfrak{u}})$, denoted $\langle\cdot\,,\cdot\rangle$, by setting $\langle{\bf 1},{\bf 1}\rangle=1$, and requiring that for $u\in
{\mathfrak{u}}$, the adjoint of left multiplication by $u$ is left multiplication by $-u$. Then the restriction of $\langle\cdot\,,\cdot\rangle$ to ${\mathfrak{u}}$ agrees with the original form on ${\mathfrak{u}}$. $$\begin{gathered}
\langle u,u\rangle=-\langle{\bf 1},u^2\rangle
=-\langle{\bf 1},-|u|^2{\bf 1}\rangle
=|u|^2\end{gathered}$$ More generally, the adjoint of $u=u_1\cdots u_k$ for $u_i\in {\mathfrak{u}}$ is $(-1)^ku_k\cdots u_1$. The [*main anti-automorphism*]{} of ${{\rm Cliff}}({\mathfrak{u}})$ is the map we denote $\alpha$, which acts by sending $u_1\cdots u_k$ to $u_k\cdots u_1$ for $u_i\in{\mathfrak{u}}$.
Spin groups {#sec:cliffalgs:spin}
-----------
Let us write ${{\rm Cliff}}({\mathfrak{u}})^{\times}$ for the group of invertible elements in ${{\rm Cliff}}({\mathfrak{u}})$. For $x\in{{\rm Cliff}}({\mathfrak{u}})^{\times}$ and $a\in {{\rm Cliff}}({\mathfrak{u}})$, we set $x(a)=xax^{-1}$. We will define the Pinor and Spinor groups associated to ${\mathfrak{u}}$ slightly differently according as ${\mathfrak{u}}$ is real or complex: in the case that ${\mathfrak{u}}$ is real, we define the Pinor group $\textsl{Pin}({\mathfrak{u}})$ to be the subgroup of ${{\rm Cliff}}({\mathfrak{u}})^{\times}$ comprised of elements $x$ such that $x({\mathfrak{u}})\subset {\mathfrak{u}}$ and $\alpha(x)x=\pm 1$; in the case that ${\mathfrak{u}}$ is complex we define $\textsl{Pin}({\mathfrak{u}})$ to be the set of $x\in{{\rm Cliff}}({\mathfrak{u}})^{\times}$ such that $x({\mathfrak{u}})\subset{\mathfrak{u}}$ and $\alpha(x)x=1$. In both cases we define the Spinor group by setting ${\operatorname{\textsl{Spin}}}({\mathfrak{u}})=\textsl{Pin}({\mathfrak{u}})\cap{{\rm Cliff}}({\mathfrak{u}})^0$.
Let $x\in \textsl{Pin}({\mathfrak{u}})$. Then we have $\langle
x(u),x(v)\rangle=\langle u,v\rangle$ for $u,v\in {\mathfrak{u}}$, and thus the map $x\mapsto x(\cdot)$, which has kernel $\pm {\bf 1}$, realizes the Pinor group as a double cover of ${O}({\mathfrak{u}})$. (If $u\in{\mathfrak{u}}$ and $\langle u,u\rangle=1$, then $u(\cdot)$ is the orthogonal transformation of ${\mathfrak{u}}$ which is minus the reflection in the hyperplane orthogonal to $u$.) The image of ${\operatorname{\textsl{Spin}}}({\mathfrak{u}})$ under the map $x\mapsto x(\cdot)$ is just the group $SO({\mathfrak{u}})$.
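For a concrete instance of the parenthetical remark, take $u=e_1$ for $\{e_i\}$ an orthonormal basis of ${\mathfrak{u}}$. Then $e_1^{-1}=-e_1$, and $$\begin{gathered}
e_1(e_1)=e_1e_1e_1^{-1}=e_1,\qquad
e_1(e_j)=e_1e_je_1^{-1}=-e_j\quad\text{for $j\neq 1$,}\end{gathered}$$ which is indeed minus the reflection in the hyperplane orthogonal to $e_1$.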
In the case that ${\mathfrak{u}}$ is real with definite bilinear form, we have $\alpha(x)x=1$ for all $x\in{\operatorname{\textsl{Spin}}}({\mathfrak{u}})$, and the group ${\operatorname{\textsl{Spin}}}({\mathfrak{u}})$ is generated by the (well-defined) expressions $\exp(\lambda e_i e_j)\in{{\rm Cliff}}({\mathfrak{u}})^{\times}$ for $\lambda\in{{\mathbb R}}$ and $\{e_i\}$ an orthonormal basis of ${\mathfrak{u}}$. The Spinor group of the complexified space $_{{{\mathbb C}}}{\mathfrak{u}}$ is then generated by the $\exp(\lambda e_ie_j)$ for $\lambda\in{{\mathbb C}}$.
Clifford algebra modules {#sec:cliffalgs:mods}
------------------------
For the remainder of this section we suppose that ${\mathfrak{u}}$ is a finite dimensional real vector space with positive definite symmetric bilinear form, and also that ${\mathcal{E}}=\{e_i\}_{i\in{\Sigma}}$ is an orthonormal basis for ${\mathfrak{u}}$, indexed by a finite set ${\Sigma}$. For $S=(s_1,\ldots,s_k)\in{\Sigma}^{\times k}$, a $k$-tuple of elements from ${\Sigma}$ for any $k$, we write $e_S$ for the element $e_{s_1}e_{s_2}\cdots e_{s_k}$ in ${{\rm Cliff}}({\mathfrak{u}})$. We suppose that ${\Sigma}$ is equipped with some ordering. If $S=\{s_1,\ldots,s_k\}$ is a subset of ${\Sigma}$ (that is, an unordered subset), we denote by $\vec{S}$ the $k$-tuple given by $\vec{S}=(s_1,\ldots,s_k)$ just when $s_1<\cdots <s_k$, so that $e_{\vec{S}}=e_{s_1}\cdots e_{s_k}$. We then abuse notation to write $e_{S}$ for $e_{\vec{S}}$. In this way we obtain an element $e_{S}$ in ${{\rm Cliff}}({\mathfrak{u}})$ for any $S\subset{\Sigma}$. (We set $e_{\emptyset}={\bf 1}$.) This correspondence depends on the choice of ordering, but our discussion will be invariant with respect to this choice. Note that $e_{S}e_{R}=\pm e_{S+R}$ for any $S,R\subset{\Sigma}$, and the set $\{e_S\mid S\subset{\Sigma}\}$ furnishes an orthonormal basis for ${{\rm Cliff}}({\mathfrak{u}})$.
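The signs arising here are readily computed from the relations of §\[sec:cliffalgs:struc\]. For example, with ${\Sigma}=\{1,2,3\}$ and the natural ordering we have $$\begin{gathered}
e_{\{1,2\}}e_{\{2,3\}}=e_1e_2e_2e_3=-e_1e_3=-e_{\{1,3\}},\end{gathered}$$ since $e_2^2=-1$.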
We now obtain an ${{\mathbb F}}_2^{{\Sigma}}$-grading on ${{\rm Cliff}}({\mathfrak{u}})$ by decreeing that for $S\subset{\Sigma}$, the homogeneous subspace of ${{\rm Cliff}}({\mathfrak{u}})$ with degree $S$ is just the ${{\mathbb F}}$-span of the vector $e_{S}$. $$\begin{gathered}
{{\rm Cliff}}({\mathfrak{u}})=\bigoplus_{S\subset{\Sigma}}
{{\rm Cliff}}({\mathfrak{u}})^S,\;
{{\rm Cliff}}({\mathfrak{u}})^S={{\mathbb F}}e_S,\;
{{\rm Cliff}}({\mathfrak{u}})^S{{\rm Cliff}}({\mathfrak{u}})^R\subset{{\rm Cliff}}({\mathfrak{u}})^{S+R}.\end{gathered}$$ Since this grading depends on the choice of orthonormal basis ${\mathcal{E}}$, we will refer to it as the ${{\mathbb F}}_2^{{\mathcal{E}}}$-grading, and we refer to the homogeneous elements $e_S$ as ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous elements. A given subset of ${{\rm Cliff}}({\mathfrak{u}})$ is called ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous if all of its elements are ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous.
Suppose that $E$ is an ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous subgroup of ${\operatorname{\textsl{Spin}}}({\mathfrak{u}})$. Then $E$ consists of elements of the form $\pm e_C$ for $C\subset{\Sigma}$, and hence is finite, with exponent dividing four. Furthermore, the set of $C$ for which $\pm e_C$ is in $E$ determines a linear binary code on ${\Sigma}$. For $E$ an ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous subgroup of ${\operatorname{\textsl{Spin}}}({\mathfrak{u}})$, we write ${\mathcal{C}}(E)$ for the associated linear binary code on ${\Sigma}$. The following result is straightforward.
Suppose that $-1\notin E$ and ${\mathcal{C}}(E)$ is a doubly-even code. Then the map $E\to {\mathcal{C}}(E)$ such that $\pm e_S\mapsto S$ is an isomorphism of abelian groups, and the sub-algebra of ${{\rm Cliff}}({\mathfrak{u}})$ generated by $E$ is naturally isomorphic to the group algebra ${{\mathbb R}}E$.
Suppose now that $E$ is an ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous subgroup of ${\operatorname{\textsl{Spin}}}({\mathfrak{u}})$ such that $-1\notin E$ and ${\mathcal{C}}(E)$ is a self-dual doubly-even code. (Note that this forces ${\rm dim}({\mathfrak{u}})$ to be a multiple of eight.) Then we write ${{\rm CM}}({\mathfrak{u}})_E$ for the ${{\rm Cliff}}({\mathfrak{u}})$-module defined by ${{\rm CM}}({\mathfrak{u}})_{E}={{\rm Cliff}}({\mathfrak{u}})\otimes_{{{\mathbb R}}E}{{\mathbb R}}_{1}$ where ${{\mathbb R}}_1$ denotes a trivial $E$-module. Let us set $1_E={\bf 1}\otimes 1\in
{{\rm CM}}({\mathfrak{u}})_E$. Then ${{\rm CM}}({\mathfrak{u}})_E$ admits a bilinear form defined so that $\langle 1_{E},1_E\rangle=1$, and the adjoint to left multiplication by $u\in{\mathfrak{u}}\hookrightarrow {{\rm Cliff}}({\mathfrak{u}})$ is left multiplication by $-u$.
The ${{\rm Cliff}}({\mathfrak{u}})$-module ${{\rm CM}}({\mathfrak{u}})_E$ is irreducible, and a vector-space basis for ${{\rm CM}}({\mathfrak{u}})_E$ is naturally indexed by the elements of the co-code ${\mathcal{C}}(E)^{*}$. The bilinear form on ${{\rm CM}}({\mathfrak{u}})_E$ is non-degenerate.
We have $e_{S+C}{1}=\pm e_{S}{1}$ for any $S\subset{\Sigma}$ when $C\in{\mathcal{C}}(E)$. This shows that a basis for ${{\rm CM}}({\mathfrak{u}})_E$ is indexed by the elements of the co-code ${\mathcal{C}}(E)^{*}={\mathcal{P}}({\Sigma})/{\mathcal{C}}(E)$, and it follows that the irreducible submodules of ${{\rm CM}}({\mathfrak{u}})_E$ are indexed by the cosets of ${\mathcal{C}}(E)$ in its dual code ${\mathcal{C}}(E)^{\perp}=\{S\in{\mathcal{P}}({\Sigma})\mid |S\cap C|\equiv
0\pmod{2},\;\forall C\in{\mathcal{C}}(E) \}$. Since ${\mathcal{C}}(E)$ is self-dual, ${{\rm CM}}({\mathfrak{u}})_E$ is irreducible.
Note that the vector $1_E\in{{\rm CM}}({\mathfrak{u}})_E$ is such that $g1_E=1_E$ for all $g\in E$, and ${{\rm CM}}({\mathfrak{u}})_E$ is spanned by the $a1_E$ for $a\in {{\rm Cliff}}({\mathfrak{u}})$. We have the following
\[CliffModEquivsB\] Suppose that ${{\rm CM}}({\mathfrak{u}})_0$ is a non-trivial irreducible ${{\rm Cliff}}({\mathfrak{u}})$-module with a vector $1_0$ such that $g1_0=1_0$ for all $g\in E$. Then ${{\rm CM}}({\mathfrak{u}})_0$ is equivalent to ${{\rm CM}}({\mathfrak{u}})_E$, and a module equivalence is furnished by the map $\phi:{{\rm CM}}({\mathfrak{u}})_E\to{{\rm CM}}({\mathfrak{u}})_0$ defined so that $\phi(a1_E)=
a1_0$ for $a\in{{\rm Cliff}}({\mathfrak{u}})$.
Recall our assumption that ${\mathcal{C}}(E)$ is self-dual so that $2d={\rm
dim}({\mathfrak{u}})$ is divisible by eight, and $|{\mathcal{C}}(E)|=2^{d}$. In this case we have that ${{\rm Cliff}}({\mathfrak{u}})$ is isomorphic to $M_{2^{d}}({{\mathbb R}})$ so that there is a unique non-trivial irreducible ${{\rm Cliff}}({\mathfrak{u}})$-module up to equivalence and it has dimension $2^{d}$. It suffices then to show that $\phi$ is well defined, and this follows from the universal mapping property of the induced module ${{\rm CM}}({\mathfrak{u}})_E$.
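To illustrate the numerology in the smallest case, suppose that ${\rm dim}({\mathfrak{u}})=8$ and that ${\mathcal{C}}(E)$ is a copy of the extended Hamming code, the unique self-dual doubly-even code of length eight up to equivalence. Then $|{\mathcal{C}}(E)|=2^4$, the algebra ${{\rm Cliff}}({\mathfrak{u}})\cong M_{16}({{\mathbb R}})$ has dimension $2^8$, and ${{\rm CM}}({\mathfrak{u}})_E$ has dimension $2^8/2^4=16$, in agreement with the fact that the unique non-trivial irreducible module over $M_{16}({{\mathbb R}})$ is $16$ dimensional.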
We will sometimes wish to replace ${\mathfrak{u}}$ with its complexification $_{{{\mathbb C}}}{\mathfrak{u}}$ in the above, in which case we shall understand ${{\rm CM}}(_{{{\mathbb C}}}{\mathfrak{u}})_E$ to be the complexification $_{{{\mathbb C}}}{{\rm CM}}({\mathfrak{u}})_E$ of ${{\rm CM}}({\mathfrak{u}})_E$. Then ${{\rm CM}}(_{{{\mathbb C}}}{\mathfrak{u}})_E$ is an irreducible module over ${{\rm Cliff}}(_{{{\mathbb C}}}{\mathfrak{u}})$.
Clifford module construction of SVOAs {#sec:cliffalgs:SVOAs}
-------------------------------------
Let ${\mathfrak{u}}$ be a finite dimensional vector space over ${{\mathbb R}}$ with positive definite symmetric bilinear form. In this section we review the construction of SVOA structure on modules over certain infinite dimensional Clifford algebras associated to ${\mathfrak{u}}$. The construction is quite standard and one may refer to [@FFR] for example, for full details. Our setup is somewhat different from that in [@FFR] in that we prefer to be able to work over ${{\mathbb R}}$, and we must therefore use an alternative construction of the canonically twisted SVOA module, since a polarization of ${\mathfrak{u}}$ does not exist in this case. On the other hand, all one requires is an irreducible module over the (finite dimensional) Clifford algebra ${{\rm Cliff}}({\mathfrak{u}})$, and the arguments of [@FFR] then go through with only cosmetic changes.
So that the reader can translate between this section and the exposition in [@FFR], let us note that in [@FFR] they consider the case that ${\mathfrak{a}}$, say, is a complex vector space with non-degenerate bilinear form $\langle\cdot \,,\cdot \rangle_0$, and a polarization ${\mathfrak{a}}={\mathfrak{a}}^+\oplus{\mathfrak{a}}^-$ such that ${\mathfrak{a}}^{\pm}$ is spanned by vectors $a^{\pm}_i$, which satisfy $\langle a^{\pm}_i, a^{\mp}_j\rangle_0=\delta_{ij}$. They consider the Clifford algebra ${{\rm Cliff}}_0({\mathfrak{a}})$ defined by ${{\rm Cliff}}_0({\mathfrak{a}})=T({\mathfrak{a}})/I_0({\mathfrak{a}})$ where $I_0({\mathfrak{a}})$ is the ideal generated by elements of the form $u\otimes v+v\otimes u-\langle u,v\rangle_0
{\bf 1}$. In this section we take ${\mathfrak{u}}$ to be a real vector space of even dimension with positive definite bilinear form and orthonormal basis $\{e_i\}$. Let $d={\rm dim}({\mathfrak{u}})/2$, and in the complexification $_{{{\mathbb C}}}{\mathfrak{u}}$ of ${\mathfrak{u}}$ set $a^{\pm}_j=\tfrac{1}{2}({{\bf i}}e_j\mp e_{j+d})$. Then we have an identification of vector spaces ${\mathfrak{a}}=\,_{{{\mathbb C}}}{\mathfrak{u}}$. We also have $\langle a^{\pm}_i,a^{\mp}_j \rangle=-\tfrac{1}{2}\delta_{ij}$ so that $\{a^{\pm}_i,a^{\mp}_j\}=-2\langle
a^{\pm}_i,a^{\mp}_j\rangle = \delta_{ij}$ in ${{\rm Cliff}}(_{{{\mathbb C}}}{\mathfrak{u}})$, and setting $\langle \cdot\,,\cdot\rangle_0
=-2\langle\cdot\,,\cdot\rangle$ we have an identification of algebras ${{\rm Cliff}}_0({\mathfrak{a}})={{\rm Cliff}}(_{{{\mathbb C}}}{\mathfrak{u}})$.
We now proceed with the construction. We assume that ${\mathfrak{u}}$ admits an orthonormal basis $\{e_i\}_{i\in {\Sigma}}$ indexed by a finite set ${\Sigma}$. For simplicity we suppose that the dimension of ${\mathfrak{u}}$ is divisible by eight so that maximal self-orthogonal doubly-even codes on ${\Sigma}$ are self-dual. Let $\hat{{\mathfrak{u}}}$ and $\hat{{\mathfrak{u}}}_{{\theta}}$ denote the infinite dimensional inner product spaces described as follows. $$\begin{gathered}
\hat{{\mathfrak{u}}}=\coprod_{m\in{{\mathbb Z}}}{\mathfrak{u}}\otimes t^{m+1/2},\quad
\hat{{\mathfrak{u}}}_{{\theta}}=\coprod_{m\in{{\mathbb Z}}}{\mathfrak{u}}\otimes t^m,\\
\langle u\otimes t^r,v\otimes t^s\rangle
=\langle u,v\rangle \delta_{r+s,0},
\; \text{ for $u,v\in{\mathfrak{u}}$ and $r,s\in{\tfrac{1}{2}{{\mathbb Z}}}$.}\end{gathered}$$ We write $u(r)$ for $u\otimes t^r$ when $u\in{\mathfrak{u}}$ and $r\in{\tfrac{1}{2}{{\mathbb Z}}}$. We consider the Clifford algebras ${{\rm Cliff}}(\hat{{\mathfrak{u}}})$ and ${{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$. The inclusion of ${\mathfrak{u}}$ in $\hat{{\mathfrak{u}}}_{{\theta}}$ given by $u\mapsto u(0)$ induces an embedding of algebras ${{\rm Cliff}}({\mathfrak{u}})\hookrightarrow {{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$. For $S=(i_1,\ldots,i_k)$ an ordered subset of ${\Sigma}$ we write $e_S(r)$ for the element $e_{i_1}(r)\cdots e_{i_k}(r)$, which lies in ${{\rm Cliff}}(\hat{{\mathfrak{u}}})$ or ${{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$ according as $r$ is in ${{{\mathbb Z}}+\tfrac{1}{2}}$ or ${{\mathbb Z}}$. With this notation $e_S(0)$ coincides with the image of $e_S$ under the embedding ${{\rm Cliff}}({\mathfrak{u}})\hookrightarrow {{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$. Let $E$ be an ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous subgroup of ${{\rm Cliff}}({\mathfrak{u}})^{\times}$ such that ${\mathcal{C}}(E)$ is a self-dual doubly-even code on ${\Sigma}$.
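Polarizing the defining relations of these Clifford algebras, just as in §\[sec:cliffalgs:struc\], the modes $u(r)$ satisfy $$\begin{gathered}
u(r)v(s)+v(s)u(r)=-2\langle u,v\rangle\delta_{r+s,0}\end{gathered}$$ for $u,v\in{\mathfrak{u}}$, with $r,s\in{{{\mathbb Z}}+\tfrac{1}{2}}$ in ${{\rm Cliff}}(\hat{{\mathfrak{u}}})$ and $r,s\in{{\mathbb Z}}$ in ${{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$.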
We write ${\mathcal{B}}(\hat{{\mathfrak{u}}})$ for the subalgebra of ${{\rm Cliff}}(\hat{{\mathfrak{u}}})$ generated by the $u(m+\tfrac{1}{2})$ for $u\in{\mathfrak{u}}$ and $m\in{{\mathbb Z}}_{\geq 0}$. We write ${\mathcal{B}}(\hat{{\mathfrak{u}}}_{{\theta}})_E$ for the subalgebra of ${{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$ generated by $E\subset{{\rm Cliff}}({\mathfrak{u}})$, and the $u(m)$ for $u\in{\mathfrak{u}}$ and $m\in{{\mathbb Z}}_{> 0}$. Let ${{\mathbb R}}_1$ denote a one-dimensional module for either ${\mathcal{B}}(\hat{{\mathfrak{u}}})$ or ${\mathcal{B}}(\hat{{\mathfrak{u}}}_{{\theta}})_E$, spanned by a vector $1_E$, such that $u(r)1_E=0$ whenever $r\in{\tfrac{1}{2}{{\mathbb Z}}}_{>0}$, and such that $g(0)1_{E}=1_{E}$ for $g\in E$. We write $A({\mathfrak{u}})$ (respectively $A({\mathfrak{u}})_{{\theta},E}$) for the ${{\rm Cliff}}(\hat{{\mathfrak{u}}})$-module (respectively ${{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$-module) induced from the ${\mathcal{B}}(\hat{{\mathfrak{u}}})$-module structure (respectively ${\mathcal{B}}(\hat{{\mathfrak{u}}}_{{\theta}})_E$-module structure) on ${{\mathbb R}}_{1}$. $$\begin{gathered}
A({\mathfrak{u}})={{\rm Cliff}}(\hat{{\mathfrak{u}}})
\otimes_{{\mathcal{B}}(\hat{{\mathfrak{u}}})}{{\mathbb R}}_{1},\qquad
A({\mathfrak{u}})_{{\theta},E}={{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})
\otimes_{{\mathcal{B}}(\hat{{\mathfrak{u}}}_{{\theta}})_E}{{\mathbb R}}_{1}.\end{gathered}$$ We write ${\bf 1}$ for $1\otimes 1_{E}\in A({\mathfrak{u}})$, and we write ${\bf 1}_{{\theta}}$ for $1\otimes 1_{E}\in A({\mathfrak{u}})_{{\theta},E}$. When no confusion will arise, we simply write $A({\mathfrak{u}})_{{\theta}}$ in place of $A({\mathfrak{u}})_{{\theta},E}$.
The space $A({\mathfrak{u}})$ supports a structure of SVOA. In order to define the vertex operators we require the notion of [*fermionic normal ordering*]{} for elements in ${{\rm Cliff}}(\hat{{\mathfrak{u}}})$ and ${{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$. The fermionic normal ordering on ${{\rm Cliff}}(\hat{{\mathfrak{u}}})$ is the multi-linear operator defined so that for $u_i\in{\mathfrak{u}}$ and $r_i\in{{{\mathbb Z}}+\tfrac{1}{2}}$ we have $$:u_1(r_1)\cdots u_k(r_k):=
{\rm sgn}(\sigma)u_{\sigma 1}(r_{\sigma 1})
\cdots u_{\sigma k}(r_{\sigma k})$$ where $\sigma$ is any permutation of the index set $\{1,\ldots,k\}$ such that $r_{\sigma 1}\leq\cdots\leq r_{\sigma
k}$. For elements in ${{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$ the fermionic normal ordering is defined in steps by first setting $$:u_1(0)\cdots u_k(0):=\frac{1}{k!}\sum_{\sigma\in S_k}
{\rm sgn}(\sigma)u_{\sigma 1}(0)
\cdots u_{\sigma k}(0)$$ for $u_i\in {\mathfrak{u}}$. Then in the situation that $n_i\in{{\mathbb Z}}$ are such that $n_{i}\leq n_{i+1}$ for all $i$, and there are some $s$ and $t$ (with $1\leq s\leq t\leq k$) such that $n_j=0$ for $s\leq
j\leq t$, we set $$:u_1(n_1)\cdots u_k(n_k):=
u_{1}(n_{1})\cdots u_{ s-1}(n_{s-1}):u_s(0)\cdots u_t(0):
u_{t+1}(n_{t+1})\cdots u_{k}(n_k)$$ Finally, for arbitrary $n_i\in{{\mathbb Z}}$ we set $$:u_1(n_1)\cdots u_k(n_k):=
{\rm sgn}(\sigma):u_{\sigma 1}(n_{\sigma 1})
\cdots u_{\sigma k}(n_{\sigma k}):$$ where $\sigma$ is again any permutation of the index set $\{1,\ldots,k\}$ such that $n_{\sigma 1}\leq\cdots\leq n_{\sigma
k}$, and we extend the definition multi-linearly to ${{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$.
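As a simple illustration of these conventions, for $u,v\in{\mathfrak{u}}$ we have $$\begin{gathered}
:u(\tfrac{1}{2})v(-\tfrac{1}{2}):=-v(-\tfrac{1}{2})u(\tfrac{1}{2}),\qquad
:u(0)v(0):=\tfrac{1}{2}\left(u(0)v(0)-v(0)u(0)\right)=u(0)v(0)+\langle u,v\rangle,\end{gathered}$$ where the last equality uses the relation $u(0)v(0)+v(0)u(0)=-2\langle u,v\rangle$ in ${{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$.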
Now for $u\in{\mathfrak{u}}$ we define the generating function, denoted $u(z)$, of operators on $A({\mathfrak{u}})_{\Theta}=A({\mathfrak{u}})\oplus
A({\mathfrak{u}})_{{\theta}}$ by $u(z)=\sum_{r\in{\tfrac{1}{2}{{\mathbb Z}}}}u(r)z^{-r-1/2}$. Note that $u(r)$ acts as $0$ on $A({\mathfrak{u}})$ if $r\in {{\mathbb Z}}$, and acts as $0$ on $A({\mathfrak{u}})_{{\theta}}$ if $r\in {{{\mathbb Z}}+\tfrac{1}{2}}$. To an element $a\in
A({\mathfrak{u}})$ of the form $a=u_{1}(-m_1-\tfrac{1}{2})\cdots
u_{k}(-m_k-\tfrac{1}{2}){\bf 1}$ for $u_i\in {\mathfrak{u}}$ and $m_i\in
{{\mathbb Z}}_{\geq 0}$, we associate the operator valued power series $\overline{Y}(a,z)$, given by $$\begin{gathered}
\overline{Y}(a,z)=:D_z^{(m_1)}u_{1}(z)\cdots
D_z^{(m_k)}u_{k}(z):\end{gathered}$$ We define the vertex operator correspondence $$\begin{gathered}
Y(\cdot\,,z):A({\mathfrak{u}})\otimes A({\mathfrak{u}})_{\Theta}
\to A({\mathfrak{u}})_{\Theta}((z^{1/2}))\end{gathered}$$ by setting $Y(a,z)b=\overline{Y}(a,z)b$ when $b\in A({\mathfrak{u}})$, and by setting $Y(a,z)b=\overline{Y}(e^{\Delta_z}a,z)b$ when $b\in
A({\mathfrak{u}})_{{\theta}}$, where $\Delta_z$ is the expression defined by $$\begin{gathered}
\Delta_z=-\frac{1}{4}\sum_i\sum_{m,n\in{{\mathbb Z}}_{\geq 0}}C_{mn}
e_i(m+\tfrac{1}{2})e_i(n+\tfrac{1}{2})z^{-m-n-1}\\
C_{mn}=\frac{1}{2}\frac{(m-n)}{m+n+1}
\binom{-\tfrac{1}{2}}{m}\binom{-\tfrac{1}{2}}{n}\end{gathered}$$ Set ${{\bf \omega}}=-\tfrac{1}{4}\sum_i e_i(-\tfrac{3}{2})
e_i(-\tfrac{1}{2}){\bf 1} \in A({\mathfrak{u}})_2$. Then one has the following
The map $Y$ defines a structure of self-dual rational SVOA on $A({\mathfrak{u}})$ when restricted to $A({\mathfrak{u}})\otimes A({\mathfrak{u}})$. The Virasoro element is given by ${{\bf \omega}}$, and the rank is $\tfrac{1}{2}\dim({\mathfrak{u}})$. The map $Y$ defines a structure of ${\theta}$-twisted $A({\mathfrak{u}})$-module on $A({\mathfrak{u}})_{{\theta}}$ when restricted to $A({\mathfrak{u}})\otimes A({\mathfrak{u}})_{{\theta}}$.
The definition of $Y(a,z)b$ for $b\in A({\mathfrak{u}})_{{\theta}}$ appears quite complicated, but all we need to know about these operators is contained in the following
\[CliffTwOpsAWNTK\] Let $b\in A({\mathfrak{u}})_{{\theta}}$.
1. If $a={\bf 1}\in A({\mathfrak{u}})_0$ then $Y(a,z)b=b$.
2. If $a\in A({\mathfrak{u}})_1$ then $\Delta_za=0$ so that $Y(a,z)b=\overline{Y}(a,z)b$.
3. If $a\in A({\mathfrak{u}})_2$ then $\Delta_za=0$ and $Y(a,z)b=
\overline{Y}(a,z)b$ unless $a$ has non-trivial projection onto the span of the vectors $e_i(-\tfrac{3}{2})e_i(-\tfrac{1}{2}){{\bf 1}}$ for $i\in{\Sigma}$.
4. For $a=e_i(-\tfrac{3}{2})e_i(-\tfrac{1}{2}){{\bf 1}}$ we have $\Delta_za=-\tfrac{1}{4}z^{-2}$ and $\Delta_z^2a=0$ so that $Y(a,z)b=\overline{Y}(a,z)b-\tfrac{1}{4}bz^{-2}$ in this case.
As a corollary of Proposition \[CliffTwOpsAWNTK\] we have $L(0){\bf 1}_{{\theta}}=\tfrac{1}{16}{\rm dim}({\mathfrak{u}}){\bf 1}_{{\theta}}$ (this being the coefficient of $z^{-2}$ in $Y(\omega,z){\bf 1}_{{\theta}}$), and consequently the $L(0)$ grading on $A({\mathfrak{u}})_{\Theta}$ is given by $$\begin{gathered}
A({\mathfrak{u}})=\coprod_{n\in{\tfrac{1}{2}{{\mathbb Z}}}_{\geq 0}}A({\mathfrak{u}})_n,\quad
A({\mathfrak{u}})_{{\theta}}=\coprod_{n\in{\tfrac{1}{2}{{\mathbb Z}}}_{\geq 0}}
(A({\mathfrak{u}})_{{\theta}})_{h+n},\end{gathered}$$ where $h=\tfrac{1}{16}{\rm dim}({\mathfrak{u}})$. Given a specific choice of $E$, the embedding of ${{\rm Cliff}}({\mathfrak{u}})$ in ${{\rm Cliff}}(\hat{{\mathfrak{u}}}_{{\theta}})$ gives rise to an isomorphism of ${{\rm CM}}({\mathfrak{u}})_E$ with $(A({\mathfrak{u}})_{{\theta},E})_{h}$, and it will be convenient to consider these spaces as identified. We may have occasion to replace ${\mathfrak{u}}$ by its complexification $_{{{\mathbb C}}}{\mathfrak{u}}$ in the above; in such a situation we shall understand $A(_{{{\mathbb C}}}{\mathfrak{u}})_{\Theta}$ to be the complexified space $_{{{\mathbb C}}}A({\mathfrak{u}})_{\Theta}$.
Structure of ${{A^{f\natural}}}$ {#sec:strucafn}
================================
Construction {#CliffConst}
------------
Suppose that $\Omega$ is some finite set with cardinality $24$ and an arbitrary ordering. Let ${\mathfrak{l}}$ be a $24$ dimensional vector space over ${{\mathbb R}}$ with positive definite bilinear form, and let ${\mathcal{E}}=\{e_i\}_{i\in\Omega}$ be an orthonormal basis for ${\mathfrak{l}}$. The goal of this section is to show that the space ${{A^{f\natural}}}$ given by $$\begin{gathered}
{{A^{f\natural}}}=A({\mathfrak{l}})^0\oplus A({\mathfrak{l}})_{{\theta}}^0\end{gathered}$$ admits a structure of self-dual rational $N=1$ SVOA.
Let ${\mathcal{G}}\subset{\mathcal{P}}(\Omega)$ be a copy of the Golay code. Let $G$ be an ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous subgroup of ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ such that $G$ does not contain $-{\bf 1}\in{\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ and the associated code ${\mathcal{C}}(G)$ is ${\mathcal{G}}$. Courtesy of §\[sec:cliffalgs\] we have the ${{\rm Cliff}}({\mathfrak{l}})$-module ${{\rm CM}}({\mathfrak{l}})_G={{\rm Cliff}}({\mathfrak{l}})\otimes_{{{\mathbb R}}G}{{\mathbb R}}_1$, and this space can be used to give an explicit realization of the $A({\mathfrak{l}})$-module $A({\mathfrak{l}})_{{\theta}}$. From now on we set $A({\mathfrak{l}})_{{\theta}}=A({\mathfrak{l}})_{{\theta},G}$. Notice that the odd parity subspace of ${{A^{f\natural}}}$ is precisely $A({\mathfrak{l}})_{{\theta}}^0$. Since ${\mathfrak{l}}$ has dimension $24$, the $L(0)$-homogeneous subspace of $A({\mathfrak{l}})_{{\theta}}$ with minimal degree is $(A({\mathfrak{l}})_{{\theta}})_{3/2}$, and this space is identified with the ${{\rm Cliff}}({\mathfrak{l}})$-module ${{\rm CM}}({\mathfrak{l}})_G$. Note that ${\bf
1}_{{\theta}}\leftrightarrow 1_{G}$ under this identification. Also, the bilinear form on ${{A^{f\natural}}}$ coincides with that on ${{\rm CM}}({\mathfrak{l}})_G$ when restricted to $({{A^{f\natural}}})_{3/2}$, and in particular, is normalized so that $\langle {\bf 1}_{{\theta}}|{\bf 1}_{{\theta}}\rangle=1$.
We require a vertex operator correspondence $Y:{{A^{f\natural}}}\otimes{{A^{f\natural}}}\to{{A^{f\natural}}}((z))$ and as yet this map is defined only on $({{A^{f\natural}}})_{\bar{0}}\otimes({{A^{f\natural}}})_{\bar{0}}$ and on $({{A^{f\natural}}})_{\bar{0}}\otimes({{A^{f\natural}}})_{\bar{1}}$. Such a map $Y$ must satisfy skew-symmetry if it exists, so for $u\otimes v\in
({{A^{f\natural}}})_{\bar{1}}\otimes({{A^{f\natural}}})_{\bar{0}}$ we define $Y(u,z)v$ by $$\begin{gathered}
\label{YdefAtwOnA}
Y(u,z)v=e^{zL(-1)}Y(v,- z)u\end{gathered}$$ (since $|u||v|=0$ in this case). Suppose now that $u\otimes
v\in({{A^{f\natural}}})_{\bar{1}}\otimes ({{A^{f\natural}}})_{\bar{1}}$. Motivated by §\[Sec:AdjOps\] we define $Y(u,z)v$ by requiring that for any $w\in ({{A^{f\natural}}})_{\bar{0}}$ we should have $$\begin{gathered}
\label{AdjOfAtwOnAtw}
\langle Y(u,z)v\mid w\rangle=
(-1)^n\langle v\mid Y(e^{zL(1)}z^{-2L(0)}
u,z^{-1})w\rangle\end{gathered}$$ whenever $u\in ({{A^{f\natural}}})_{n-1/2}$ for $n\in{{\mathbb Z}}$. Now the operator on the right of (\[AdjOfAtwOnAtw\]) is defined by (\[YdefAtwOnA\]). We can use this later expression to rewrite (\[AdjOfAtwOnAtw\]) in terms of the operator $Y$ defined on $({{A^{f\natural}}})_{\bar{0}}\otimes{{A^{f\natural}}}$ in §\[sec:cliffalgs:SVOAs\], and doing so we obtain the following convenient working definition for the operator $Y$ on $({{A^{f\natural}}})_{\bar{1}}\otimes ({{A^{f\natural}}})_{\bar{1}}$. For $u\in
({{A^{f\natural}}})_{n-1/2}$ with $n\in{{\mathbb Z}}$, for $v\in({{A^{f\natural}}})_{\bar{1}}$ and $w\in({{A^{f\natural}}})_{\bar{0}}$ we have $$\begin{gathered}
\label{YdefAtwOnAtw}
\langle Y(u,z)v\mid w\rangle
=(-1)^n\langle e^{z^{-1}L(1)}v\mid
Y(w,- z^{-1})e^{zL(1)} z^{-2L(0)}
u\rangle\end{gathered}$$
\[AfnIsSVOA\] The map $Y:{{A^{f\natural}}}\otimes{{A^{f\natural}}}\to{{A^{f\natural}}}((z))$ defines a structure of rank $12$ self-dual rational SVOA on ${{A^{f\natural}}}$.
Let $_{{{\mathbb C}}}{{A^{f\natural}}}$ denote the complexification of ${{A^{f\natural}}}$. Then $(_{{{\mathbb C}}}{{A^{f\natural}}})_{\bar{0}}$ is a simple VOA of rank $12$, and $(_{{{\mathbb C}}}{{A^{f\natural}}})_{\bar{1}}$ is an irreducible module over $(_{{{\mathbb C}}}{{A^{f\natural}}})_{\bar{0}}$. Let us write $Y_{\bar{k}\bar{l}}$ for the restriction of $Y$ to $(_{{{\mathbb C}}}{{A^{f\natural}}})_{\bar{k}}\otimes\,
(_{{{\mathbb C}}}{{A^{f\natural}}})_{\bar{l}}$ for ${k},{l}\in\{{0},{1}\}$.
By the Boson-Fermion correspondence [@FreBF] (see also [@DoMaBF]) we have that $(_{{{\mathbb C}}}{{A^{f\natural}}})_{\bar{0}}$ is isomorphic to a lattice VOA $_{{{\mathbb C}}}V_{M_0}$ where $M_0$ is an even lattice of type $D_{12}$. The irreducible modules over a lattice VOA $_{{{\mathbb C}}}V_L$ for $L$ an even lattice are known to be indexed by the cosets of $L$ in its dual $L^*=\{u\in {{{\mathbb R}}}\otimes_{{{\mathbb Z}}} L\mid
\langle u,L\rangle\subset{{\mathbb Z}}\}$ [@DonVAsLats]. In particular, a lattice VOA is rational. Further, it is known that the fusion algebra associated to the modules over $_{{{\mathbb C}}}V_L$ coincides with the group algebra of $L^*/L$ in the natural way. (One may refer to [@DonLepGVAs] for a thorough treatment.) In the case that $L=M_0$, there are exactly three non-trivial cosets, and for any one of these, $M_0+\mu$ say, the set $M_0\cup (M_0+\mu)$ forms an integral lattice in $_{{{\mathbb R}}}L={{\mathbb R}}\otimes_{{{\mathbb Z}}}L$, since $M_0^*/M_0\cong{{\mathbb Z}}/2\times{{\mathbb Z}}/2$ has exponent two.
From this we conclude that $_{{{\mathbb C}}}{{A^{f\natural}}}$ is isomorphic to $_{{{\mathbb C}}}V_{M}$ for $M=M_0\cup (M_0+\mu)$ for some $\mu\in
M_0^*\setminus M_0$, that there exist unique up to scale intertwiners of types $\binom{M_0+\mu}{M_0+\mu\, M_0}$ and $\binom{M_0}{M_0+\mu\,M_0+\mu}$ for $_{{{\mathbb C}}}V_{M_0}$, and that these intertwiners are just those obtained by restricting the SVOA structure on $_{{{\mathbb C}}}V_{M}$. In particular, there is a unique structure of rank $12$ SVOA on $_{{{\mathbb C}}}{{A^{f\natural}}}$. On the other hand it is known from [@DonLiMasSmpCrt] for example, that the maps $Y_{\bar{1}\bar{0}}$ and $Y_{\bar{1}\bar{1}}$ defined by equations (\[YdefAtwOnA\]) and (\[YdefAtwOnAtw\]), respectively, yield intertwining operators of types $\binom{\bar{1}}{\bar{1}\,\bar{0}}
=\binom{M_0+\mu}{M_0+\mu\, M_0}$ and $\binom{\bar{0}}{\bar{1}\,\bar{1}} =\binom{M_0}{M_0+\mu\,M_0+\mu}$, respectively, for $(_{{{\mathbb C}}}{{A^{f\natural}}})_{\bar{0}}$. By uniqueness, they must coincide with those inherited from the SVOA structure on $_{{{\mathbb C}}}V_{M}$ up to some scalar factors, and in any case the map $Y$ defined above furnishes $_{{{\mathbb C}}}{{A^{f\natural}}}$ with a structure of rational rank $12$ SVOA. We have chosen scalars in such a way that ${{A^{f\natural}}}$ is a real form for $_{{{\mathbb C}}}{{A^{f\natural}}}$. One can check directly that $M$ is a self-dual lattice (given that $({{A^{f\natural}}})_{1/2}$ is trivial, $M$ must be a copy of the $D_{12}^+$ lattice — the unique self-dual lattice of rank $12$ with no vectors of unit norm [@CoS93 Ch.19]), and it follows from the above that $_{{{\mathbb C}}}V_{M}$ and $_{{{\mathbb C}}}{{A^{f\natural}}}$ are then self-dual SVOAs. This completes the proof of the proposition.
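As an aside, the lattices appearing in this proof can be made completely explicit (these are standard facts about the $D_{12}$ lattice, recalled only for concreteness): one may take $$\begin{gathered}
M_0=\{x\in{{\mathbb Z}}^{12}\mid x_1+\cdots+x_{12}\in 2{{\mathbb Z}}\},\qquad
M_0^*/M_0=\{[0],[v],[s],[c]\},\\
v=(1,0,\ldots,0),\qquad
s=(\tfrac{1}{2},\ldots,\tfrac{1}{2}),\qquad
c=(\tfrac{1}{2},\ldots,\tfrac{1}{2},-\tfrac{1}{2}),\end{gathered}$$ so that the coset $M_0+v$ has minimal norm $1$ while $M_0+s$ and $M_0+c$ have minimal norm $12\cdot\tfrac{1}{4}=3$; taking $\mu=s$ (or $\mu=c$) yields $M=M_0\cup(M_0+s)$, a copy of the self-dual lattice $D_{12}^+$ with no vectors of unit norm.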
The method used here to extend the vertex operator map from a VOA to the sum of the VOA and a module over it was given earlier in [@HuaXtnMoonVOA].
The following proposition gives a convenient criterion for when a vector in $({{A^{f\natural}}})_{3/2}$ is superconformal. In §\[SecUniq\] it will be shown that all such vectors are equivalent up to the action of ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$.
\[Prop:CodeCriForSC\] Suppose that $t\in({{A^{f\natural}}})_{3/2}$ is such that $\langle
t|t\rangle=8$ and $\langle e_C(0)t|t\rangle=0$ whenever $C\subset\Omega$ has cardinality two or four. Then $t$ is a superconformal vector for ${{A^{f\natural}}}$.
We should compute $t_{n}t$ for $n=0$, $n=1$ and $n=2$, and then compare with the results of Proposition \[prop:SConfCriterion\]. Using (\[YdefAtwOnAtw\]) and recalling $t\in ({{A^{f\natural}}})_{2-1/2}$ we find that for arbitrary $u\in{{A^{f\natural}}}$ we have $$\begin{gathered}
\begin{split}
\langle u| Y(t,z)t\rangle
=&\langle
Y(u,-z^{-1})
e^{zL(1)}z^{-2L(0)}t
|e^{z^{-1}L(1)}t\rangle\\
=&\langle Y(u,-z^{-1})t|
t\rangle z^{-3}
\end{split}\end{gathered}$$ Then for $t_nt$ we obtain $$\begin{gathered}
\label{tauAontauA}
\langle u|t_nt\rangle
={\rm Res}_{z=0}\langle
Y(u,-z^{-1}) t|t\rangle z^{n-3}
= \langle u_{-n+1}t|t\rangle
(-1)^{n-2}\end{gathered}$$ For $L(0)$-homogeneous $u\in {{A^{f\natural}}}$ the expression (\[tauAontauA\]) is zero unless $u\in ({{A^{f\natural}}})_{-n+2}$. In order to determine $t_nt$, we should compute $u_mt$ for various $u\in ({{A^{f\natural}}})^0$, and apply the equation (\[tauAontauA\]). To compute $u_mt$ we will use the results of Proposition \[CliffTwOpsAWNTK\].
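For completeness, the residue computation giving (\[tauAontauA\]) is the following: since $Y(u,-z^{-1})=\sum_m u_m(-z^{-1})^{-m-1}=\sum_m(-1)^{m+1}u_mz^{m+1}$, we have $$\begin{gathered}
{\rm Res}_{z=0}\langle Y(u,-z^{-1})t|t\rangle z^{n-3}
={\rm Res}_{z=0}\sum_m(-1)^{m+1}\langle u_mt|t\rangle z^{m+n-2}
=(-1)^{n-2}\langle u_{-n+1}t|t\rangle,\end{gathered}$$ only the term with $m=-n+1$ contributing to the residue.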
For $n=2$ the expression (\[tauAontauA\]) is zero unless $u\in({{A^{f\natural}}})_0={{\mathbb R}}{\bf 1}$, in which case we obtain $\langle {\bf
1}| t_2t\rangle = \langle{\bf 1}_{-1}t|t \rangle$. Since $Y({\bf
1},z)={\bf 1}$ and $\langle {\bf 1}|{\bf 1}\rangle=1$, we find that $t_2t=\langle t|t\rangle{\bf 1}$.
For the case that $n=1$ we should consider the $u$ in $({{A^{f\natural}}})_1$. Suppose $u=e_i(-\tfrac{1}{2})e_j(-\tfrac{1}{2}){{\bf 1}}$ for $i\neq j\in
\Omega$. Then $$\begin{gathered}
Y(u,z)t =\overline{Y}(u,z)t
=e_{ij}(0)tz^{-1}+\ldots\end{gathered}$$ By hypothesis we have that $e_{ij}(0)t$ is orthogonal to $t$, so $t_1t=0$.
Finally, when $n=0$ we are concerned with $u_{1}t$ for $u\in
({{A^{f\natural}}})_2$. The space $({{A^{f\natural}}})_2$ is spanned by the vectors of the form $e_i(-\tfrac{3}{2})e_j(-\tfrac{1}{2}){{\bf 1}}$ for $i,j\in \Omega$, and also by the $e_C(-\tfrac{1}{2}){{\bf 1}}$ for $C=\{i_1,i_2,i_3,i_4\}\subset\Omega$. For $u$ one of these vectors we have $Y(u,z)t=\overline{Y}(u,z) t=e_{C}(0)t z^{-2}+\ldots$ for $C\subset \Omega$ of size two or four unless $u=e_{i}(-\tfrac{3}{2})e_i(-\tfrac{1}{2}){{\bf 1}}$. In the former cases, $u_1t$ is orthogonal to $t$ by hypothesis, and in the latter case we have $$\begin{gathered}
Y(u,z)t=
\overline{Y}(u,z)t
-\tfrac{1}{4}tz^{-2} =-\tfrac{1}{4}tz^{-2}+\ldots\end{gathered}$$ The expression (\[tauAontauA\]) now reduces to $\langle
u|t_0t\rangle=-\tfrac{1}{4}\langle t|t\rangle=-2$, and we conclude that $t_0t=-\tfrac{1}{2} \sum_{\Omega} e_{i}(-\tfrac{3}{2})
e_i(-\tfrac{1}{2}){{\bf 1}}$ since we have $\langle u|u\rangle=4$ for $u=e_i(-\tfrac{3}{2})e_i(-\tfrac{1}{2}){{\bf 1}}$ for any $i$ in $\Omega$.
We have verified that $t_2t=8{{\bf 1}}$, $t_1t=0$ and $t_0t=2{{\bf \omega}}$. Since the rank of ${{A^{f\natural}}}$ is $12$ and $8=\tfrac{2}{3}\cdot 12$, an application of Proposition \[prop:SConfCriterion\] confirms that $t$ is superconformal for ${{A^{f\natural}}}$.
Set $\tau_A=\sqrt{8}{\bf 1}_{{\theta}}\in ({{A^{f\natural}}})_{3/2}$. Then $\langle
\tau_A|\tau_A\rangle=8$, and we have
\[tauAIsSC\] The vector $\tau_A$ is a superconformal vector for ${{A^{f\natural}}}$.
That $\tau_A$ satisfies the hypotheses of Proposition \[Prop:CodeCriForSC\] is a consequence of the fact that the Golay code ${\mathcal{G}}={\mathcal{C}}(G)$ has minimum weight $8$: for $C\subset\Omega$ of cardinality two or four, the vector $e_C(0)\tau_A$ is proportional to $e_C1_G$ under the identification above, and $e_C1_G$ is orthogonal to $1_G$ since $C$ is not a Golay codeword.
We record the results of Proposition \[AfnIsSVOA\] and Corollary \[tauAIsSC\] in the following
\[ThmCnstafn\] The quadruple $({{A^{f\natural}}},Y,{{\bf 1}},\tau_A)$ is a self-dual rational $N=1$ SVOA.
Symmetries
----------
In this section we show that the automorphism group of the $N=1$ SVOA structure on ${{A^{f\natural}}}$ is isomorphic to Conway’s largest sporadic group, ${\operatorname{\textsl{Co}}}_1$.
The operators $x_{0}$ for $x\in A({\mathfrak{l}})_1$ span a Lie algebra of type $D_{12}$ in ${\operatorname{End}}( A({\mathfrak{l}})_{\Theta})$, and the exponentials $\exp(x_{0})$ for $x\in A({\mathfrak{l}})_1$ generate a group $S$ say, which acts as $A({\mathfrak{l}})^0$-module automorphisms of $A({\mathfrak{l}})_{\Theta}=A({\mathfrak{l}})\oplus A({\mathfrak{l}})_{{\theta}}$. This group $S$ is isomorphic to the group ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$, and we may choose the isomorphism so that $\exp(x_{0})$ in $S$ corresponds to $\exp(\tfrac{1}{2}(ab-ba))$ in ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})<{{\rm Cliff}}({\mathfrak{l}})^{\times}$ when $x=a(-\tfrac{1}{2})b(-\tfrac{1}{2}){{\bf 1}}\in A({\mathfrak{l}})_1$ for some $a,b\in{\mathfrak{l}}$. The action of $S$ on $A({\mathfrak{l}})_{\Theta}$ commutes with the action of ${\theta}$, and so preserves the subspace ${{A^{f\natural}}}=A({\mathfrak{l}})^0\oplus A({\mathfrak{l}})_{{\theta}}^0$. The kernel of this action is the group of order $2$ generated by ${\theta}$. Taking the complexification $_{{{\mathbb C}}}{\mathfrak{l}}$ in place of ${\mathfrak{l}}$ in the above we obtain an action of the complex Lie group ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})$ on the complexified SVOA $_{{{\mathbb C}}}{{A^{f\natural}}}=A(_{{{\mathbb C}}}{\mathfrak{l}})^0\oplus
A(_{{{\mathbb C}}}{\mathfrak{l}})^0_{{\theta}}$. Let us write $_{{{\mathbb C}}}S$ for this copy of ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})$ generated by exponentials $\exp(x_0)$ with $x$ in $A(_{{{\mathbb C}}}{\mathfrak{l}})_1$.
The group $_{{{\mathbb C}}}S$ maps surjectively onto the group of SVOA automorphisms of $_{{{\mathbb C}}}{{A^{f\natural}}}$.
From the proof of Proposition \[AfnIsSVOA\] we may regard $_{{{\mathbb C}}}{{A^{f\natural}}}$ as the (complex) lattice SVOA $_{{{\mathbb C}}}V_{M}$ where $M$ is a lattice of type $D_{12}^+$. Let us write $M^0$ for the even sublattice of $M$ (of type $D_{12}$), and $M^1$ for the unique non-trivial coset of $M^0$ in $M$; we may write $_{{{\mathbb C}}}V_{M^0}\oplus\,_{{{\mathbb C}}}V_{M^1}$ for the superspace decomposition of $_{{{\mathbb C}}}V_M$. Let us write ${\sf
G}$ (not to be confused with the $G$ of §\[CliffConst\]) for the group of SVOA automorphisms of $_{{{\mathbb C}}}V_M$, and ${\sf G}^0$ for the group of VOA automorphisms of $_{{{\mathbb C}}}V_{M^0}$. Let ${\sf S}$ denote the image of $_{{{\mathbb C}}}S$ in ${\sf G}={\operatorname{Aut}}_{{\rm SVOA}}(_{{{\mathbb C}}}{{A^{f\natural}}})$. We wish to show that ${\sf S}={\sf G}$.
Any element of ${\sf G}$ preserves the superspace structure on $_{{{\mathbb C}}}V_M$, so we have a natural map $\phi:{\sf G}\to {\sf G}^0$. By a similar token, any element of ${\sf G}^0$ preserves the Lie algebra structure on the degree $1$ subspace of $_{{{\mathbb C}}}V_{M^0}$ (we denote this Lie algebra by ${\mathfrak{g}}$) so we have also a natural map ${\sf G}^0\to{\operatorname{Aut}}({\mathfrak{g}})$. In fact, this map is faithful and onto since $_{{{\mathbb C}}}V_{M^0}$ is generated by its subspace of degree $1$ elements, in the sense that we have $$\begin{gathered}
_{{{\mathbb C}}}V_{M^0}={\rm Span}_{{{\mathbb C}}}\left\{
x^1_{-n_1}x^2_{-n_2}\cdots x^r_{-n_r}{{\bf 1}}\mid \deg(x^i)=1,\; n_i\in{{\mathbb Z}}_{>0}\right\}\end{gathered}$$ so that any element $g\in{\operatorname{Aut}}({\mathfrak{g}})$ extends to an element of ${\operatorname{Aut}}(_{{{\mathbb C}}}V_{M^0})={\sf G}^0$ once we decree $$\begin{gathered}
g:x^1_{-n_1}x^2_{-n_2}\cdots x^r_{-n_r}{{\bf 1}}\mapsto
(gx^1)_{-n_1}(gx^2)_{-n_2}\cdots (gx^r)_{-n_r}{{\bf 1}}\end{gathered}$$ and any element of ${\sf G}^0$ that fixes ${\mathfrak{g}}$ fixes all of $_{{{\mathbb C}}}V_{M^0}$. Thus we may identify ${\operatorname{Aut}}({\mathfrak{g}})$ with ${\sf
G}^0$.
We claim that $\phi({\sf S})=\phi({\sf G})$. In [@DonNagAutsLattVOAs] it is proved that the automorphism group of a lattice VOA (for an even positive definite lattice, such as $M^0$) is generated by exponentials of zero-modes of degree $1$ elements and by lifts of automorphisms of the lattice. In our situation this means ${\sf G}^0={{\langle}}\phi({\sf S}),O(M^0){{\rangle}}$ where $O(M^0)$ denotes the group of automorphisms of $_{{{\mathbb C}}}V_{M^0}$ generated by lifts of elements of ${\operatorname{Aut}}(M^0)$. On the other hand, we know that the group ${\operatorname{Inn}}({\mathfrak{g}})$ of inner automorphisms of ${\mathfrak{g}}$ (this is just our group $\phi({\sf S})$) has index $2$ in ${\operatorname{Aut}}({\mathfrak{g}})$, since ${\mathfrak{g}}$ is a simple complex Lie algebra of type $D_{12}$. So there is some $x\in O(M^0)$ of order $2$, such that ${\sf G}^0=\phi({\sf S})\cup x\phi({\sf S})$, and either $\phi({\sf G})=\phi({\sf S})$, or $\phi({\sf G})={\sf G}^0$. Let $\bar{x}$ denote the canonical image of $x$ in ${\operatorname{Aut}}(M^0)$. The coset $x\phi({\sf S})$ corresponds to a so-called diagram automorphism of $D_{12}$, and $\bar{x}$ acts non-trivially on the coset space $(M^0)^*/M^0$ interchanging the two cosets with minimal norm $3$ (one of which is $M^1$). In particular, $\bar{x}$ does not preserve the lattice $M=M^0+M^1$, and thus $x$ cannot be extended to an automorphism of $_{{{\mathbb C}}}V_M$ (c.f. [@DonNagAutsLattVOAs Lemma 2.3]). We conclude that $\phi({\sf S})=\phi({\sf
G})$.
Next we claim that $\ker(\phi)$ is contained in ${\sf S}$. For suppose $g\in\ker(\phi)$. Then $g$ fixes all elements of ${\mathfrak{g}}$, and therefore commutes with the action of ${\sf S}$ on $(_{{{\mathbb C}}}V_M)_{3/2}={{\rm CM}}(_{{{\mathbb C}}}{\mathfrak{l}})_G^0$. The space ${{\rm CM}}(_{{{\mathbb C}}}{\mathfrak{l}})^0_G$ is irreducible for the action of ${\sf S}$, so $g$ acts as scalar multiplication by $\zeta\in{{\mathbb C}}$ say, on $(_{{{\mathbb C}}}V_M)_{3/2}$, and indeed, on all of $_{{{\mathbb C}}}V_{M^1}$ (since $_{{{\mathbb C}}}V_{M^1}$ is generated by the action of $_{{{\mathbb C}}}V_{M^0}$ on $(_{{{\mathbb C}}}V_M)_{3/2}$). Then for $x,y\in\, _{{{\mathbb C}}}V_{M^1}$, we have $g(x_{(n)}y)=(gx)_{(n)}(gy)=\zeta^2x_{(n)}y$, and also $g(x_{(n)}y)=x_{(n)}y$ since $x_{(n)}y$ lies in $_{{{\mathbb C}}}V_{M^0}$. It follows that $\zeta=\pm 1$ and $\ker(\phi)$ has order $2$. In the non-trivial case that $g|_{_{{{\mathbb C}}}V_{M^0}}={\operatorname{Id}}$ and $g|_{_{{{\mathbb C}}}V_{M^1}}=-{\operatorname{Id}}$, we have that $g$ is realized by the image of the element $-{\bf 1}\in S$ in ${\sf S}$. This proves the claim.
We have shown that $\phi({\sf S})=\phi({\sf G})$ and $\ker(\phi)<{\sf S}$, and it follows that ${\sf S}={\sf G}$, which is what we required.
From now on it will be convenient to regard ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ and ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})$ as groups of SVOA automorphisms of ${{A^{f\natural}}}$ and $_{{{\mathbb C}}}{{A^{f\natural}}}$, respectively.
Recall that the ordering on $\Omega$ is chosen so that $e_{\Omega}\in{\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ lies in $G$. We denote $e_{\Omega}$ also by ${\mathfrak{z}}$. Then the action of $e_{\Omega}$ on $_{{{\mathbb C}}}{{A^{f\natural}}}$ is trivial, the kernel of the map ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})\to{\operatorname{Aut}}_{{\rm
SVOA}}(_{{{\mathbb C}}}{{A^{f\natural}}})$ is $\{1,{{\mathfrak{z}}}\}$, and the full group of SVOA automorphisms of $_{{{\mathbb C}}}{{A^{f\natural}}}$ is ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})/{{\langle}}{{\mathfrak{z}}}{{\rangle}}$. Any automorphism of ${{A^{f\natural}}}$ extends to an automorphism of the complexification $_{{{\mathbb C}}}{{A^{f\natural}}}$, so ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})/{{\langle}}{{\mathfrak{z}}}{{\rangle}}$ contains the group of SVOA automorphisms of the real form ${{A^{f\natural}}}$. On the other hand, this latter group contains ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})/{{\langle}}{{\mathfrak{z}}}{{\rangle}}$ which is maximal compact in ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})/{{\langle}}{{\mathfrak{z}}}{{\rangle}}$. We conclude that ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})/{{\langle}}{{\mathfrak{z}}}{{\rangle}}$ is the full group of SVOA automorphisms of the real SVOA ${{A^{f\natural}}}$.
Let $F$ denote the subgroup of ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})$ that fixes $1_G\in{{\rm CM}}(_{{{\mathbb C}}}{\mathfrak{l}})^0_G\leftrightarrow (_{{{\mathbb C}}}{{A^{f\natural}}})_{3/2}$. Then the full group of automorphisms of the $N=1$ SVOA structure on $_{{{\mathbb C}}}{{A^{f\natural}}}$ is $F/{{\langle}}{{\mathfrak{z}}}{{\rangle}}$. Let us write ${\mathfrak{l}}'$ for the span of the vectors $u1_G\in{{\rm CM}}({\mathfrak{l}})_G$ for $u\in {\mathfrak{l}}$, and $_{{{\mathbb C}}}{\mathfrak{l}}'$ for the complexification of ${\mathfrak{l}}'$, regarded as a subspace of ${{\rm CM}}(_{{{\mathbb C}}}{\mathfrak{l}})_G$. Then $_{{{\mathbb C}}}{\mathfrak{l}}'$ has dimension $24$, and $F$ embeds naturally in $SO(_{{{\mathbb C}}}{\mathfrak{l}}')$ since $xu1_G=xux^{-1}1_G=x(u)1_G$ for $u\in \,_{{{\mathbb C}}}{\mathfrak{l}}$ and $x\in F$, and $x(_{{{\mathbb C}}}{\mathfrak{l}})\subset\,_{{{\mathbb C}}}{\mathfrak{l}}$ for $x\in{\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})$. We will now show
\[FHasC\] The group $F$ contains a group isomorphic to ${\operatorname{\textsl{Co}}}_0$.
Recall that the natural map ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})\to{\operatorname{\textsl{SO}}}(_{{{\mathbb C}}}{\mathfrak{l}})$ is denoted $x\mapsto x(\cdot)$. Recall that $G$ is an ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous lift of the Golay code ${\mathcal{G}}$ to ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ (see §\[sec:cliffalgs:mods\]), so that for each $C\in{\mathcal{G}}$ there is a unique $g_C\in G$ such that $g_C(\cdot)$ is $-1$ on $e_i$ for $i\in C$, and $+1$ on $e_i$ otherwise. Let $C_0$ be a subgroup of ${\operatorname{\textsl{SO}}}({\mathfrak{l}})$ isomorphic to ${\operatorname{\textsl{Co}}}_0$ such that $C_0$ contains $g(\cdot)$ for each $g\in G<{\operatorname{\textsl{Spin}}}({\mathfrak{l}})$. The Golay code construction of the Leech lattice given in [@ConLctXcptGps] for example, shows that this is possible.
Let $\hat{C}$ be the preimage of $C_0$ in ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$. The group ${\operatorname{\textsl{Co}}}_0$ has trivial Schur multiplier [@ATLAS] so there exists a group $C'$ in ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$, a subgroup of index $2$ in $\hat{C}$, such that the map $x\mapsto x(\cdot)$ restricts to an isomorphism of $C'$ with $C_0<{\operatorname{\textsl{SO}}}({\mathfrak{l}})$. Set $\hat{G}=\{g_C,-g_C\mid C\in
{\mathcal{G}}\}$, and set $G'=\hat{G}\cap C'$. Then we have $G'=\{\gamma_Cg_C\mid C\in {\mathcal{G}}\}$ where $C\mapsto\gamma_C$ is a map ${\mathcal{G}}\to\{\pm 1\}$ such that $\gamma_C\gamma_D=\gamma_{C+D}$. In particular, $C\mapsto \gamma_C$ is a homomorphism, and there must be some $S\subset\Omega$ such that we have $\gamma_C=(-1)^{{{\langle}}C,S{{\rangle}}}$ for all $C\in{\mathcal{G}}$. In other words, we have $G'=\{e_Sg{e_S}^{-1}\mid g\in G\}$.
Set $C=\{{e_S}^{-1}xe_S\mid x\in C'\}$. Then $C$ is isomorphic to ${\operatorname{\textsl{Co}}}_0$, and contains $G$. In particular, $C$ contains the central element ${{\mathfrak{z}}}$, and $C/{{\langle}}{{\mathfrak{z}}}{{\rangle}}$ must be isomorphic to ${\operatorname{\textsl{Co}}}_1$. The space ${{\rm CM}}({\mathfrak{l}})_G^0$ is then a $C/{{\langle}}{\mathfrak{z}}{{\rangle}}$-module of dimension $2048$ and since the only ${\operatorname{\textsl{Co}}}_1$ irreducibles with dimension less than $2048$ have dimension $1$, $276$, $299$ and $1771$ [@ATLAS], the space ${{\rm CM}}({\mathfrak{l}})_G^0$ must have a fixed point $t$ say, for the action of $C$. We may assume that $t$ has unit norm. Since $G$ is contained in $C$, the vector $t$ is also invariant for $G$, and this forces $t=1_G$ by Proposition 3.3. We conclude that $C$ is a subgroup of $F$ isomorphic to ${\operatorname{\textsl{Co}}}_0$, and this completes the proof.
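The arithmetic behind the fixed-point claim can also be checked by brute force. The following sketch (an illustration only, not part of the argument above) verifies that $2048$ is not a non-negative integer combination of $276$, $299$ and $1771$, so that any $2048$-dimensional module built from ${\operatorname{\textsl{Co}}}_1$-irreducibles of dimension less than $2048$ must contain a trivial summand.

```python
# Check that 2048 is not a sum of copies of 276, 299 and 1771, the dimensions of the
# non-trivial Co_1-irreducibles below 2048; a trivial summand is therefore forced.
dims = [276, 299, 1771]
target = 2048

def representable(n, parts):
    """True if n is a non-negative integer combination of the given parts."""
    reachable = [True] + [False] * n
    for m in range(1, n + 1):
        reachable[m] = any(m >= p and reachable[m - p] for p in parts)
    return reachable[n]

print(representable(target, dims))        # False
print(representable(target, dims + [1]))  # True; for instance 2048 = 1 + 276 + 1771
```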
The proof of Proposition \[FHasC\] shows that a copy of ${\operatorname{\textsl{Co}}}_0$ may be found even inside the intersection $F\cap{\operatorname{\textsl{Spin}}}({\mathfrak{l}})$.
The group $F$ is finite.
$F$ is a subgroup of the algebraic group ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})\cong{\operatorname{\textsl{Spin}}}_{24}({{\mathbb C}})$. The condition that a given vector be fixed by a linear transformation is polynomial, so $F$ too is algebraic, since it is by definition the subgroup fixing a vector in a representation of ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})$. At the same time, $F$ is a subgroup of ${\operatorname{\textsl{SO}}}(_{{{\mathbb C}}}{\mathfrak{l}}')\cong {\operatorname{\textsl{SO}}}_{24}({{\mathbb C}})$ containing the algebraic group ${\operatorname{\textsl{Co}}}_0$ by Proposition \[FHasC\]. Since the latter group acts irreducibly on $_{{{\mathbb C}}}{\mathfrak{l}}'$, we conclude that $F$ is reductive. We now check whether there is any non-trivial semisimple complex algebraic group or algebraic torus that can occur as a factor of the connected component of the identity in $F$. Any such group would have a non-trivial Lie algebra ${\mathfrak{k}}$ say, with an embedding in the Lie algebra ${\mathfrak{g}}$ of ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})$, which we may identify with the degree one subspace of $_{{{\mathbb C}}}{{A^{f\natural}}}$ (equipped with the bracket $[x,y]=x_0y$). Now for all $x\in{\mathfrak{k}}$ we have $\exp(x_0)1_G=1_G$, and this implies $x_01_G=0$ for some non-trivial $x\in{\mathfrak{k}}$. We claim that if $x\in{\mathfrak{g}}$ satisfies $x_01_G=0$ then $x=0$. For consider the map ${\mathfrak{g}}\to{{\rm CM}}(_{{{\mathbb C}}}{\mathfrak{l}})_G$ given by $x\mapsto x_01_G$, and write ${\mathfrak{g}}'$ for the image of ${\mathfrak{g}}$ under this map. Then the dimension of ${\mathfrak{g}}'$ is at most $276$. On the other hand ${\mathfrak{g}}'$ contains the span of the vectors $\{e_{ij}1_G\}$ for $i<j$, and since the Golay code has minimum weight $8$ these vectors are linearly independent, and we see that ${\mathfrak{g}}'$ has dimension not less than $276$. It follows that the map $x\mapsto x_01_G$ is a linear isomorphism from ${\mathfrak{g}}$ to ${\mathfrak{g}}'$, and in particular, the kernel is trivial. This verifies the claim. We conclude that $\dim(F)=0$, whence $F$ is finite.
We have shown that $F$ is a finite subgroup of ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})$ such that $F\cap{\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ contains a copy of ${\operatorname{\textsl{Co}}}_0$. Our last main task for this section is to show that $F$ itself is isomorphic to ${\operatorname{\textsl{Co}}}_0$. With this in mind, we offer the following proposition, the proof of which owes much to the methods used in Theorems 5.6 and 6.5 of [@NebRaiSloInvtsCliffGps]. In particular, we utilize the notion of primitive matrix group: a group $G\leq{\operatorname{\textsl{GL}}}(V)$, for $V$ a vector space, is said to be [*primitive*]{} if there is no non-trivial decomposition $V=V_1\oplus\cdots\oplus V_k$ into subspaces permuted by the action of $G$. Note that if $N$ is normal in $G$, then $G$ permutes the isotypic components of the restricted module $V|_N$, so that $V|_N$ must be multiple copies of a single irreducible representation for $N$ in the case that $G$ is primitive.
\[CMaxSubjFin\] The group ${\operatorname{\textsl{Co}}}_0$ is a maximal subgroup of ${\operatorname{\textsl{SO}}}_{24}({{\mathbb C}})$ subject to being finite.
Any compact subgroup of ${\operatorname{\textsl{SO}}}_n({{\mathbb C}})$ is realizable over ${{\mathbb R}}$ (c.f. [@SeRep §13.2]), so it suffices to show that ${\operatorname{\textsl{Co}}}_0$ is a maximal finite subgroup of ${\operatorname{\textsl{SO}}}_{24}({{\mathbb R}})$. Let $V$ denote a real vector space of dimension $24$, equipped with a non-degenerate symmetric bilinear form. Suppose that $F$ is a finite subgroup of ${\operatorname{\textsl{SO}}}(V)$ properly containing a copy $C$ of the group ${\operatorname{\textsl{Co}}}_0$. Then $C$ is not normal in $F$, since $C$ acts absolutely irreducibly on $V$, and if $C$ were normal in $F$ then $F/C$ would embed in the outer automorphism group of $C$, which is trivial [@ATLAS]. Let us write $Z$ for the center $\{\pm{\operatorname{Id}}\}$ of $C$ (and $F$).
We claim that $F$ has no non-trivial normal $p$-subgroups for $p$ odd, and the only non-trivial normal $2$-subgroup is $Z$. For suppose $N$ is a normal $p$-subgroup of $F$ for some prime $p$. Then $C_F(N)\cap C$ (we write $C_F(N)$ for the centralizer in $F$ of $N$) is normal in $C$ and contains $Z$, so that $C_F(N)\cap C$ is either $Z$ or $C$. In the former case we have that $C/Z\cong {\operatorname{\textsl{Co}}}_1$ is a subgroup of ${\operatorname{Aut}}(N)$. In the latter case, $N$ is centralized by (the absolutely irreducible action of) $C$, and hence must consist of scalar matrices. It follows that $N$ is trivial unless $p=2$, in which case $N$ is either trivial or $N=Z$. We suppose then that $N$ is a normal $p$-subgroup of $F$ such that ${\operatorname{Aut}}(N)$ contains a copy of ${\operatorname{\textsl{Co}}}_1$. The group $C$ acts primitively on $V$, and even on the complexification $_{{{\mathbb C}}}V={{\mathbb C}}\otimes_{{{\mathbb R}}}V$, and hence so does $F$. It follows that $_{{{\mathbb C}}}V|_N$ is an isotypic module for $N$; i.e. several copies of a single irreducible module $M$ say, for $N$. Since $N$ is by definition a subgroup of ${\operatorname{\textsl{SO}}}_{24}({{\mathbb R}})$, it follows that $M$ is the complexification of an irreducible $N$-module $_{{{\mathbb R}}}M$ say, defined over ${{\mathbb R}}$, and that the action of $N$ on $_{{{\mathbb R}}}M$ is faithful. Any $p$-group has non-trivial center, and a central subgroup of a group acts by scalar multiplications on any irreducible module for that group. We conclude that $p$ is not odd, since a $p$-group for odd $p$ has central elements which must act as multiplication by primitive (and non-real) $p$-th roots of unity. We see also that any abelian normal subgroup of $F$ is cyclic. The irreducible representations of a $p$-group are of $p$-power order, so $\deg(_{{{\mathbb R}}}M)$ is a power of $2$ dividing $24$. Without loss of generality, we suppose $\deg(_{{{\mathbb R}}}M)=8$. Note that $N$ can contain no noncyclic characteristic abelian subgroups, since such a subgroup would be a noncyclic normal abelian $p$-subgroup of $F$. A $p$-group with this property is said to be of [*symplectic type*]{}, and such groups are classified by a Theorem of P. Hall (c.f. [@AscFGT (23.9)]). In particular, there are no $2$-groups of symplectic type that both embed in ${\operatorname{\textsl{SO}}}_8({{\mathbb R}})$ and admit a non-trivial action by ${\operatorname{\textsl{Co}}}_1$ as automorphisms. The claim follows.
Now we seek a contradiction. Any finite group is realizable over a cyclotomic number field (c.f. [@SeRep Thm 24]). In fact we may assume that $F$ is a subgroup of $SO_{24}(K)$ where $K$ is a totally real abelian number field (e.g. $K={{\mathbb Q}}(\zeta_m+\zeta_m^{-1})$ for $m$ such that $x^m=1$ for all $x\in F$ and $\zeta_m=\exp(2\pi{{\bf i}}/m)$ — c.f. [@DreIndctnStrucThmsOrth Prop 5.6]). Let $K$ be a minimal such field, and let $R$ be the ring of integers in $K$. Then $F$ preserves an $RC$-lattice, and such a lattice is of the form $I\otimes_{{{\mathbb Z}}}{\Lambda}$ for ${\Lambda}$ a copy of the Leech lattice and $I$ a fractional ideal of $R$, since any $C$ invariant lattice in ${{\mathbb Q}}^{24}$ is isometric to ${\Lambda}$ (c.f. [@TieGIRsFGs]). It follows that $F$ preserves the lattice $R\otimes_{{{\mathbb Z}}}{\Lambda}$. (To see this, note that if $F$ preserves a lattice $L$, then it also preserves $aL$ for any $a\in K$. Also, if $F$ preserves lattices $L_1$ and $L_2$, then it preserves the sum $L_1+L_2$. Now take $L=I\otimes_{{{\mathbb Z}}}{\Lambda}$ for $I$ a fractional ideal of $R$, and let $\{y_i\}$ be a set of generators for the inverse fractional ideal. Then $R=\sum y_iI$, and $\sum y_iL=R\otimes_{{{\mathbb Z}}}{\Lambda}$ is also invariant for $F$.) We may now regard $F$ as a group of matrices with entries in the ring $R$. If $K={{\mathbb Q}}$ then $F=C$ and we are done. If not, then there is some rational prime $p$ that ramifies in $K$. Let ${\mathfrak{p}}$ be a prime ideal of $R$ lying above $p$, and let $\Gamma_{{\mathfrak{p}}}$ denote the subgroup of ${\operatorname{Gal}}(K/{{\mathbb Q}})$ consisting of automorphisms $\sigma$ such that $\sigma(a)\equiv a \pmod{{\mathfrak{p}}}$ for all $a\in R$. (This is the [*first inertia group*]{}. It stabilizes ${\mathfrak{p}}$, and has order equal to the ramification index of ${\mathfrak{p}}$ over $p$ — c.f. [@FroTayANT III:4]) The Galois group of $K$ over ${{\mathbb Q}}$ acts on $F$ by acting componentwise on matrices. Let $F_{{\mathfrak{p}}}=\{g\in F\mid g\equiv{\operatorname{Id}}\pmod{{\mathfrak{p}}}\}$, let $\sigma$ be a non-trivial element of $\Gamma_{{\mathfrak{p}}}$, and let $\phi:F\to
F_{{\mathfrak{p}}}$ be the map defined by $\phi(g)=g^{-1}\sigma(g)$ for $g\in F$. The group $F_{{\mathfrak{p}}}$ is a normal $p$-subgroup of $F$, and we have shown that such a group is trivial except possibly in the case that $p=2$. If $F_{{\mathfrak{p}}}$ is trivial then $\sigma$ fixes $F$, and this contradicts the minimality of $K$. So suppose $p=2$ and $F_{{\mathfrak{p}}}$ is the group $Z=\{\pm{\operatorname{Id}}\}$. Then $\phi$ is in fact a group homomorphism $F\to Z$ (since $g^{-1}\sigma(g)$ is now central for all $g\in F$). Since the image of $\phi$ is abelian, the derived subgroup $F^{(1)}$ of $F$ lies in the kernel of $\phi$, and is thus fixed by $\sigma$. If $F=F^{(1)}$ then $F$ is realizable over the subfield of $K$ fixed by $\sigma$, contradicting the minimality of $K$. So $F$ properly contains $F^{(1)}$, and the argument thus far shows that any finite subgroup of ${\operatorname{\textsl{SO}}}_{24}({{\mathbb R}})$ properly containing $C$, properly contains its own derived subgroup. Consider now the descending chain $F\geq F^{(1)}\geq
F^{(2)}\geq\cdots$ where $F^{(k+1)}$ is the derived subgroup of $F^{(k)}$. Each term contains $C$ since $F>C$ and $C=C^{(1)}$, and thus each containment $F^{(k)}\geq F^{(k+1)}$ is proper unless $F^{(k)}=C$. Since $F$ is finite, not all containments are proper, and thus we have $F^{(k)}=C$ for some $k$. Then $C$ is a characteristic subgroup of $F$, and in particular, is normal in $F$, and this is again a contradiction.
We conclude that ${\operatorname{\textsl{Co}}}_0$ is a maximal subgroup of ${\operatorname{\textsl{SO}}}_{24}({{\mathbb C}})$ subject to being finite.
We have established the following
\[Thm:PtStabIsCo0\] The subgroup of ${\operatorname{\textsl{Spin}}}(_{{{\mathbb C}}}{\mathfrak{l}})$ fixing $1_G\in {{\rm CM}}(_{{{\mathbb C}}}{\mathfrak{l}})_{G}$ is isomorphic to ${\operatorname{\textsl{Co}}}_0$.
Recall that ${\operatorname{Aut}}(_{{{\mathbb C}}}{{A^{f\natural}}}) =F/{{\langle}}{\mathfrak{z}}{{\rangle}}$. The group $\langle{\mathfrak{z}}\rangle$ is the center of $F$, and thus $F/{{\langle}}{\mathfrak{z}}{{\rangle}}$ is isomorphic to ${\operatorname{\textsl{Co}}}_1$. By construction this copy of ${\operatorname{\textsl{Co}}}_1$ is contained in ${\operatorname{Aut}}({{A^{f\natural}}})$. We have therefore established
\[ThmSymms\] There are isomorphisms of groups ${\operatorname{Aut}}(_{{{\mathbb C}}}{{A^{f\natural}}})\cong{\operatorname{Aut}}({{A^{f\natural}}})\cong{\operatorname{\textsl{Co}}}_1$.
It was noted in the Introduction that an action of ${\operatorname{\textsl{Co}}}_1$ on the SVOA underlying ${{A^{f\natural}}}$ was considered earlier in [@BorRybMMIII]. In fact, an action of ${\operatorname{\textsl{Co}}}_0$ (the perfect double cover of ${\operatorname{\textsl{Co}}}_1$) on the SVOA underlying ${{A^{f\natural}}}$ was also considered in [@BorRybMMIII], and in our setting, this action arises naturally by considering the action of the group $F$ on the object $A^{f\flat}$ given by $$\begin{gathered}
A^{f\flat}=A({\mathfrak{l}})^0\oplus A({\mathfrak{l}})_{{\theta}}^1\end{gathered}$$ where we realize $A({\mathfrak{l}})_{{\theta}}$ as $A({\mathfrak{l}})_{{\theta},G}$ with $G$ as in §\[CliffConst\]. We have seen that the group $F$ is isomorphic to the quasi-simple group ${\operatorname{\textsl{Co}}}_0$, and in contrast to the situation with ${{A^{f\natural}}}$ the central element of $F$ acts non-trivially on $A^{f\flat}$. The same method used in §\[CliffConst\] shows that $A^{f\flat}$ has a unique structure of SVOA, and also that ${{A^{f\natural}}}$ and $A^{f\flat}$ are isomorphic, as SVOAs. There is however no ${\operatorname{\textsl{Co}}}_0$ invariant vector in the degree $3/2$ subspace of $A^{f\flat}$, and hence no ${\operatorname{\textsl{Co}}}_0$ invariant $N=1$ structure on $A^{f\flat}$.
Uniqueness {#SecUniq}
==========
In this section we prove a uniqueness result for ${{A^{f\natural}}}$. In the first subsection we verify that any nice rational $N=1$ SVOA satisfying
- self-dual
- rank $12$
- no small elements (that is, trivial degree $1/2$ subspace)
is isomorphic to $_{{{\mathbb C}}}{{A^{f\natural}}}$ as an SVOA. To do this we first recall the modularity results for trace functions on VOAs due to Zhu (see [@ZhuPhd], [@ZhuModInv]), and their extension to the SVOA case given in [@HohnPhD]. Then we make use of some techniques from [@DonMasEfctCC] and [@DonMasHlmVOA], replacing VOA concepts with their SVOA analogues as necessary. The guiding principle that we adopt from these two papers is that one may use modular invariance results for a VOA $V$ to deduce properties about the Lie algebra structure on $V_1$, the degree one subspace of $V$.
In the second subsection we show that the $N=1$ structure on ${{A^{f\natural}}}$ is unique in the sense that if $\tau\in({{A^{f\natural}}})_{3/2}$ is a superconformal vector then there is some SVOA automorphism of ${{A^{f\natural}}}$ mapping $\tau$ to $\tau_A$.
SVOA structure {#svoa-structure}
--------------
Recall from §\[sec:SVOAstruc:SVOAs\] and §\[sec:SVOAMods\] the definitions of niceness and rationality for an SVOA. Recall also from §\[sec:SVOAMods\] that a rational SVOA has finitely many irreducible modules up to equivalence.
### Theta group {#sec:thetagp}
Let $\Gamma={\operatorname{\textsl{SL}}}(2,{{\mathbb Z}})$ and recall that the modular group $\bar{\Gamma}=\Gamma/\{\pm 1\}={\operatorname{\textsl{PSL}}}(2,{{\mathbb Z}})$ acts faithfully on the upper half plane ${{\mathbf h}}=\{\sigma+{{\bf i}}t\mid t>0\}\subset{{\mathbb C}}$, with the action generated by modular transformations $S$ and $T$ where $S:\tau\mapsto-1/\tau$ and $T:\tau\mapsto \tau+1$. We identify $\bar{\Gamma}$ with its image in the isometry group of ${{\mathbf h}}$ and set $\bar{\Gamma}_{\theta}=\langle S,T^2\rangle$. The compactification of the quotient space $\bar{\Gamma} \backslash {{\mathbf h}}$ is topologically a sphere, and the same is true for $\bar{\Gamma}_{\theta} \backslash
{{\mathbf h}}$. The space $\bar{\Gamma}_{\theta} \backslash {{\mathbf h}}$ has two cusps, with representatives $1$ and $\infty$. There is a unique holomorphic function on ${{\mathbf h}}$ that is invariant under $\bar{\Gamma}_{{\theta}}$, has a $q$ expansion of the form $q^{-1/2}+a+bq^{1/2}+cq+\ldots$, and vanishes at $1$. We denote this function by $J_{{\theta}}(\tau)$ since it is an analogue of the $J$ function, which generates the field of functions on the compactified curve $\bar{\Gamma}\backslash {{\mathbf h}}$. The function $J_{{\theta}}$ furnishes a bijective map from the compactification of $\bar{\Gamma}_{{\theta}}\backslash {{\mathbf h}}$ to the Riemann sphere ${{\mathbb C}}\cup\{\infty\}$. One has the following expression for $J_{{\theta}}(\tau)$. $$\begin{gathered}
\begin{split}
J_{{\theta}}(\tau)&=\frac{\eta(\tau)^{48}}
{\eta(\tau/2)^{24}\eta(2\tau)^{24}}\\
&=q^{-1/2}+24+276q^{1/2}+2048q+11202q^{3/2}
+49152q^{2}+ \cdots
\end{split}\end{gathered}$$ To see the behavior of $J_{{\theta}}$ at $1$, note that $TS\tau\to 1$ as $\tau\to \infty$. For $J_{{\theta}}|_{TS}$ we have $$\begin{gathered}
\begin{split}
J_{{\theta}}|_{TS}=J_{{\theta}}(-1/\tau+1)
&=-2^{12}\frac{\eta(2\tau)^{24}}
{\eta(\tau)^{24}}\\
&=-(4096q+98304q^{2}+1228800q^{3}+\ldots)
\end{split}\end{gathered}$$ confirming that $J_{{\theta}}$ vanishes as $\tau\to 1$. We write $\Gamma_{{\theta}}$ for the preimage of $\bar{\Gamma}_{{\theta}}$ in $\Gamma$.
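The $q$-expansions quoted above are easily verified by machine; the following sympy sketch (included only as a check, with $y$ standing for $q^{1/2}$) reproduces the stated coefficients.

```python
# Verify the q-expansions of J_theta and J_theta|_TS quoted above; y stands for q^(1/2).
import sympy as sp

y, q = sp.symbols('y q')
N = 8  # work modulo y^N (respectively q^N)

def trunc(expr, var, order):
    """Expand and discard all powers of var that are >= order."""
    e = sp.expand(expr)
    return sp.Add(*[e.coeff(var, k) * var**k for k in range(order)])

# J_theta = q^(-1/2) * prod_{n>=1} (1 + q^(n-1/2))^24; in terms of y the product is
# prod_{n>=1} (1 + y^(2n-1))^24.
ns = sp.Integer(1)
for n in range(1, N):
    ns = trunc(ns * (1 + y**(2*n - 1))**24, y, N)
print([ns.coeff(y, k) for k in range(6)])
# [1, 24, 276, 2048, 11202, 49152]: J_theta = q^(-1/2) + 24 + 276 q^(1/2) + 2048 q + ...

# J_theta|_TS = -2^12 eta(2 tau)^24 / eta(tau)^24 = -4096 q / prod_{n>=1} (1 - q^(2n-1))^24.
den = sp.Integer(1)
for n in range(1, N):
    den = trunc(den * (1 - q**(2*n - 1))**24, q, N)
print(sp.expand(sp.series(-4096 * q / den, q, 0, 4).removeO()))
# -(4096 q + 98304 q^2 + 1228800 q^3)
```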
### Modular Invariance {#sec:uniq:SVOA:modinv}
Suppose now that $(V,Y,{{\bf 1}},{{\bf \omega}})$ is a nice rational VOA. Following [@ZhuModInv] we may define a new VOA structure on the space $V$ as follows. We define the genus one VOA associated to $V$ to be the four-tuple $(V,Y[\;],{{\bf 1}},\tilde{{{\bf \omega}}})$ where $\tilde{{{\bf \omega}}}=(2\pi{{\bf i}})^2({{\bf \omega}}-\tfrac{c}{24}{{\bf 1}})$ and the linear map $Y[\;]:V\otimes V\to V((z))$ is defined so that $$\begin{gathered}
Y[u,z]=\sum_{n\in{{\mathbb Z}}}u_{[n]}z^{-n-1}
=Y(u,e^{2\pi{{\bf i}}z}-1)e^{{\rm deg}(u)2\pi{{\bf i}}z}\end{gathered}$$ for $u$ an $L(0)$-homogeneous element in $V$. The object thus defined is again a VOA and is isomorphic to $(V,Y,{{\bf 1}},{{\bf \omega}})$ [@ZhuModInv]. In particular the coefficients of $Y[\tilde{{{\bf \omega}}},z]$ define a representation of the Virasoro algebra with central charge $c$. We write $$\begin{gathered}
L[z]=Y[\tilde{{{\bf \omega}}},z]=\sum_{{{\mathbb Z}}}L[n]z^{-n-2}\end{gathered}$$ and for $n\in{{\mathbb Z}}$, we set $V_{[n]}=\{u\in V\mid L[0]u=nu\}$. Note that for $u$ an $L(0)$-homogeneous element in $V$ we have $$\begin{gathered}
\begin{split}
Y[u,z]=
&\sum_{n}u_n(e^{2\pi{{\bf i}}z}-1)^{-n-1}
e^{{\rm deg}(u)2\pi{{\bf i}}z}\\
=&\sum_{n}u_n
(2\pi{{\bf i}}z+\tfrac{1}{2}(2\pi{{\bf i}}z)^2+\ldots)^{-n-1}
e^{{\rm deg}(u)2\pi{{\bf i}}z}\\
=&\sum_{n}
(2\pi{{\bf i}})^{-n-1}u_nz^{-n-1}
(1+\tfrac{1}{2}2\pi{{\bf i}}z+\ldots)^{-n-1}
e^{{\rm deg}(u)2\pi{{\bf i}}z}
\end{split}\end{gathered}$$ and in particular, $u_{[n]}=(2\pi{{\bf i}})^{-n-1}u_n
+\sum_{k>0}c_ku_{n+k}$ for some constants $c_k\in{{\mathbb C}}$. If $u,v\in
V_1$ then $u_1v=\langle u|v\rangle{{\bf 1}}$ and $u_nv=0$ for $n>1$ so we have the following
\[NRu1vlemma\] Let $V$ be a nice rational VOA and let $u,v\in V_1$. Then $u_{[1]}v=-(4\pi^2)^{-1}\langle u|v\rangle{{\bf 1}}$.
Recall the Eisenstein series $G_2(\tau)$ given by $$\begin{gathered}
G_2(\tau)=\frac{\pi^2}{3}+\sum_{m\neq 0}\sum_n
\frac{1}{(m\tau+n)^2}\end{gathered}$$ The function $G_2(\tau)$ has a $q$ expansion which may be expressed in the form $$\begin{gathered}
G_2(\tau)=
\frac{\pi^2}{3}-8\pi^2
\sum_{n=1}^{\infty}\sigma_1(n)q^n\end{gathered}$$ where $\sigma_1(n)$ is the sum of the divisors of $n$. We denote the corresponding formal power series (that is, element of ${{\mathbb C}}[[q]]$) by $\tilde{G}_2(q)$.
We define a linear function $o(\cdot):V\to{\rm End}(V)$ by setting $o(u)=u_{{\rm deg}(u)-1}$ for $L(0)$-homogeneous $u\in V$. The following result is a special case of Proposition 4.3.5 in [@ZhuModInv].
\[NR\_ZhuProp\] Let $V$ be a nice rational SVOA and $M$ a finitely generated $V$-module. Then for $u,v\in V_1$ we have $$\begin{gathered}
{\sf tr}|_Mo(u)o(v)q^{L(0)}=
{\sf tr}|_Mo(u_{[-1]}v)q^{L(0)}
-\tilde{G}_{2}(q)
{\sf tr}|_M o(u_{[1]}v)q^{L(0)}\end{gathered}$$
Let $V$ be a nice rational SVOA and let $M$ be a finitely generated $V$-module. For an $n$-tuple $(u_1,\ldots,u_n)$ of $L(0)$-homogeneous elements in $V$ we define the following formal series. $$\begin{gathered}
\begin{split}
&\tilde{F}_M((u_1,x_1),\ldots,(u_n,x_n);q)\\
&=x_1^{{\rm deg}(u_1)}\cdots x_n^{{\rm deg}(u_n)}
{\sf tr}|_MY(u_1,x_1)\cdots Y(u_n,x_n)q^{L(0)}
\end{split}\end{gathered}$$ We extend the definition of $\tilde{F}_M$ to arbitrary $n$-tuples of elements from $V$ by linearity. As in [@ZhuModInv Th 4.2.1], one can show that this series $\tilde{F}_M$ converges to a holomorphic function in the domain $$\begin{gathered}
\{ (x_1,\ldots,x_n,q)\mid 1>|x_1|>\ldots>|x_n|>|q|\}\end{gathered}$$ and extends to a meromorphic function in the domain $$\begin{gathered}
\{ (x_1,\ldots,x_n,q)\mid x_i\neq 0,\,|q|<1\}\end{gathered}$$ We denote the meromorphic function so obtained by $F_M$. We substitute variables $x_i$ with $e^{2\pi{{\bf i}}z_i}$ and $q$ with $e^{2\pi{{\bf i}}\tau}$, and we set $$\begin{gathered}
T_M((u_1,z_1),\ldots,(u_n,z_n);\tau)
=q^{-c/24}F_M((u_1,x_1),\ldots,(u_n,x_n);q)\end{gathered}$$ Following [@ZhuModInv] and [@HohnPhD] we call $T_M((u_1,z_1), \ldots, (u_n,z_n); \tau)$ the $n$-point correlation function on the torus with parameter $\tau$ for the operators $Y(u_i,z_i)$ and the module $M$.
The function $T_M((u_1,z_1),\ldots,(u_n,z_n);\tau)$ is doubly periodic in each variable $z_i$ with periods $1$ and $2\tau$, and possible singularities only at the points $z_i=z_j+k+l\tau$ for $i\neq j$, $k,l\in{{\mathbb Z}}$. For $j\in\{1,\ldots,n\}$ and $u_j$ an $L(0)$-homogeneous element of $V$ we have $$\begin{gathered}
T_M(\ldots,(u_j,z_j+\tau), \ldots; \tau)=
(-1)^{p(u_j)}T_M(\ldots,(u_j,z_j), \ldots; \tau)\end{gathered}$$ For a permutation $\sigma\in S_n$ we have $$\begin{gathered}
T_M((u_1,z_1),\ldots,(u_n,z_n);\tau)=(-1)^w
T_M((u_{\sigma(1)},z_{\sigma(1)}),\ldots,
(u_{\sigma(n)},z_{\sigma(n)});\tau)\end{gathered}$$ where $(-1)^w$ is the sign of the permutation that $\sigma$ induces on those of the elements $u_i$ that lie in $V_{\bar{1}}$.
We denote by $T_M$ the mapping defined on the set $\bigcup_{n=1}^{\infty}((V\times {{\mathbb C}})^n\times{{\mathbf h}})$ that sends $((u_1,z_1),\ldots,(u_n,z_n);\tau)$ to $T_M((u_1,z_1),\ldots,(u_n,z_n);\tau)$.
Suppose now that $\{M^1,\ldots, M^r\}$ is a complete list of irreducible $V$-modules. The superconformal block on the torus associated to the SVOA $V$ is the ${{\mathbb C}}$-vector space spanned by the $r$ mappings $T_{M^i}$. We denote it by ${\rm SB}_V$.
The following result is an analogue for SVOAs of a celebrated theorem due to Zhu concerning the modularity properties of $n$-point correlation functions on the torus associated to the vertex operators on VOAs. As is indicated in [@HohnPhD], this analogue may be proven in a manner directly analogous to that of the VOA version given in [@ZhuModInv], and one should use the SVOA analogues of Zhu algebras defined in [@KacWanSVOAs].
\[Thm:thetagpinv\] Let $V$ be a nice rational SVOA and suppose that $\{M^1,\ldots,
M^r\}$ is a complete list of irreducible $V$-modules. Then the superconformal block on the torus associated to $V$ is $r$-dimensional and the functions $T_{M^i}$ form a basis. Moreover, there exists a representation $\rho$ of $\Gamma_{{\theta}}$ on ${\rm
SB}_V$ such that for Virasoro highest weight vectors $u_1,\ldots,u_n\in V$ the $n$-point correlation functions on the torus for the operators $Y(u_i,z_i)$ and the modules $M^i$ satisfy the following transformation property $$\begin{gathered}
\begin{split}
&T_{M^i}\left(
\left(u_1,\frac{z_1}{c\tau+d}\right),\ldots,
\left(u_n,\frac{z_n}{c\tau+d}\right);
\frac{a\tau+b}{c\tau+d}\right)\\
&\qquad=
(c\tau+d)^{\sum_k {\rm deg}(u_k)}
\sum_j\rho(A)_{ij}
T_{M^j}((u_1,z_1),\ldots,(u_n,z_n);\tau)
\end{split}\end{gathered}$$ where $A=\binom{a\;b}{c\;d}$ is an element of $\Gamma_{{\theta}}$ and $(\rho(A)_{ij})$ is the matrix representing $\rho(A)\in{\rm
End}({\rm SB}_V)$ with respect to the basis $\{T_{M^i}\}$.
In the case that $n=1$ the function $T_M((u,z);\tau)$ is elliptic in the variable $z$ and without poles, and is therefore constant with respect to $z$. We may therefore set $T_M(u;\tau)= T_M((u,z);\tau)$. Note that $T_M(u;\tau)={\sf tr}|_Mo(u)q^{L(0)-c/24}$, and in particular $T_M({{\bf 1}};\tau)={\sf tr}|_Mq^{L(0)-c/24}$.
\[SDNR\_wtkModFrm\] Let $V$ be a self-dual nice rational SVOA. Then ${\rm SB}_V$ is one dimensional, and for $u\in V_{\bar{0}}$ a Virasoro highest weight vector with ${\rm deg}(u)=k$, the function ${\sf
tr}|_Vo(u)q^{L(0)-c/24}$ is a weight $k$ modular form on $\bar{\Gamma}_{{\theta}}$, possibly with character.
Corollary \[SDNR\_wtkModFrm\] now appears as a special case of the more general Theorem 3 of [@DonZhaMdltyOrbSVOA] which incorporates also $g$-twisted SVOA modules for $g$ in a finite group of automorphisms of a suitable SVOA $V$.
We apply Corollary \[SDNR\_wtkModFrm\] immediately with $u={{\bf 1}}$ in order to determine the character of an SVOA satisfying our hypotheses.
\[Prop:VChar\] Suppose that $V$ is a self-dual nice rational SVOA of rank $12$. Then we have $$\begin{gathered}
\begin{split}
{\sf tr}|_Vq^{L(0)-c/24}&=J_{{\theta}}(\tau)
+\dim(V_{1/2})-24\\
&=q^{-1/2}+\dim(V_{1/2})+276q^{1/2}
+2048q+11202q^{3/2}+\ldots
\end{split}\end{gathered}$$ for the character of $V$.
Let us set $f(\tau)={\sf tr}|_Vq^{L(0)-c/24}$. By hypothesis, $f(\tau)$ admits a Fourier expansion of the form $q^{-1/2}+\sum_{n\geq 0}f_nq^{n/2}$ with all the $f_n$ non-negative integers. By Corollary \[SDNR\_wtkModFrm\] we know that $f(\tau)$ is holomorphic on ${{\mathbf h}}$ and is invariant for the action of $\Gamma_{{\theta}}$. It follows that $f(\tau)=P(J_{{\theta}})/Q(J_{{\theta}})$ for some polynomials $P(X),Q(X)\in {{\mathbb C}}[X]$, with $\deg(P)=\deg(Q)+1$, and we may assume that $P$ and $Q$ are both monic and have no common factors. The function $J_{{\theta}}(\tau)$ is a surjective map from ${{\mathbf h}}$ to ${{\mathbb C}}\setminus\{0\}$, so that for $f(\tau)$ to be holomorphic we must have $Q(X)=X^m$ for some $m$. Then we have $$\begin{gathered}
\label{eqn:fisLauJ}
f(\tau)=J_{{\theta}}+a_{m}+a_{m-1}J_{{\theta}}^{-1}+\cdots+
a_{0}J_{{\theta}}^{-m}\end{gathered}$$ for $P(X)=X^{m+1}+a_mX^m+\cdots+a_0$ with $a_0\neq 0$ unless possibly if $m=0$. We claim that $m=0$ and $a_m=a_0=\dim(V_{1/2})-24$. Certainly, $a_m=\dim(V_{1/2})-24$, since the first two terms of (\[eqn:fisLauJ\]) determine the first two Fourier coefficients of $f(\tau)$. Let us write $J_{{\theta}}(\tau)^{-d}=\sum_n r_{-d}(n)q^{n/2}$. Then the sequence $\{r_{-d}(n)\}_n$ alternates in sign when $d$ is positive, as can be seen from the following identity. $$\begin{gathered}
J_{{\theta}}(\tau)^{-d}
=\frac{\eta({\tau}/{2})^{24d}\eta(2\tau)^{24d}}{\eta(\tau)^{48d}}
=q^{d/2}\prod_{n\geq 0}\frac{1}{(1+q^{n+1/2})^{24d}}\end{gathered}$$ We see also from this that for $d>0$, the value of $|r_{-d}(n)|$ is the number of partitions of $n-d$ into odd parts with $24d$ colors. The asymptotic behavior of such functions is described by Theorem 1 of [@MeiAsympPtns], and we will quote this result presently. The value of $r_1(n)$ is the number of partitions of $n+1$ into odd parts of $24$ colors without replacement, and for the asymptotics of this function we refer to Proposition 1 of [@HwaLimitThmsIntPtns]. The result is that we have $$\begin{gathered}
r_1(n)\sim C_1\frac{e^{2\pi\sqrt{n}}}{n^{3/4}}\quad\text{and}
\quad
|r_{-d}(n)|\sim
C_{-d}\frac{e^{2\pi\sqrt{2d}\sqrt{n}}}{n^{3/4}}\quad
\text{for $d>0$,}\end{gathered}$$ for some constants $C_k$. Evidently, when $m>0$ the growth of the $|r_{-m}(n)|$ outstrips that of the $|r_{-d}(n)|$ for $-1\leq d\leq m-1$. Since the $r_{-m}(n)$ alternate in sign and $a_0\neq 0$, infinitely many of the $f_n$ would then be negative. In particular, the $f_n$ can all be non-negative integers only if $m=0$.
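To spell out the smallest case of this last step (an illustration only): if $m=1$ then $$\begin{gathered}
f(\tau)=J_{{\theta}}+a_{1}+a_0J_{{\theta}}^{-1},\qquad
f_n=r_1(n)+a_0r_{-1}(n)\quad\text{for $n\geq 1$,}\end{gathered}$$ and since the $r_{-1}(n)$ alternate in sign while $|r_{-1}(n)|/r_1(n)\to\infty$, infinitely many of the $f_n$ would be negative whatever the sign of $a_0\neq 0$; the same comparison disposes of every $m>0$.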
As demonstrated in [@DonZhaMdltyOrbSVOA], one may recover a modular invariance under the full modular group for the trace functions associated to an SVOA by considering canonically twisted modules together with untwisted modules. The following result is a special case of Theorem 1 of [@DonZhaMdltyOrbSVOA] where we take $V$ to be self-dual, and $G$ to be the group of SVOA automorphisms generated by the canonical automorphism of $V$. Recall from §\[sec:SVOATwMods\] that if $V$ is a self-dual rational $C_2$-cofinite SVOA, then $V_{\sigma}$ denotes the unique $\sigma$-stable $\sigma$-twisted $V$-module, and $\sigma$-stable here means that $V_{\sigma}$ admits a compatible action by $\sigma$.
\[prop:FullModInvSVOA\] Let $V$ be a self-dual rational $C_2$-cofinite SVOA. Let $w\in V$ such that $w\in V_{[k]}$ for some $k$. Then for $\gamma\in\Gamma$, we have $$\begin{gathered}
{\sf tr}|_{V}o(w)q^{L(0)-c/24}|_{\gamma}
=(c\tau+d)^k\rho(\gamma)
{\sf tr}|_{W}o(w)\sigma^{1+b+d}q^{L(0)-c/24}\end{gathered}$$ for some $\rho(\gamma)\in{{\mathbb C}}$ independent of $w$, where $W=V_{\sigma}$ if $\sigma^{1+a+c}=\sigma$, and $W=V$ otherwise, and $\gamma$ is the matrix $\left(\begin{array}{cc}
a & b \\
c & d
\end{array}\right)$.
In Proposition \[Prop:VChar\] we determined that the character of a self-dual nice rational SVOA of rank $12$ is $J_{{\theta}}(\tau)+\dim(V_{1/2})-24$. Applying Proposition \[prop:FullModInvSVOA\] with $\gamma=TS$ (so that $(a,b,c,d)=(1,-1,1,0)$) we find that the character ${\sf
tr}|_{V_{\sigma}}q^{L(0)-c/24}$ of the canonically twisted module $V_{\sigma}$ over such an SVOA is just $\alpha
(J_{{\theta}}|_{TS}+\dim(V_{1/2})-24)$ for some $\alpha\in{{\mathbb C}}$. Recalling from §\[sec:thetagp\] that the $q$ expansion of $J_{{\theta}}|_{TS}$ involves only positive integer powers of $q$, we have
\[prop:twmodbnd\] Let $V$ be a self-dual nice rational SVOA of rank $12$ with $V_{1/2}=0$. Then $(V_{\sigma})_n$ vanishes unless $n>0$ and $n\in{{{\mathbb Z}}+\tfrac{1}{2}}$.
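As a check of the exponents in the application of Proposition \[prop:FullModInvSVOA\] made above (a routine verification, recorded for convenience): with $T=\binom{1\;1}{0\;1}$ and $S=\binom{0\;-1}{1\;0}$ we have $$\begin{gathered}
TS=\left(\begin{array}{cc}
1 & -1 \\
1 & 0
\end{array}\right),\qquad
\sigma^{1+a+c}=\sigma^{3}=\sigma,\qquad
\sigma^{1+b+d}=\sigma^{0}=1,\end{gathered}$$ so the proposition indeed relates the character of $V$ at $TS\tau$ to the ordinary trace over the canonically twisted module $W=V_{\sigma}$, with no insertion of $\sigma$.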
### Structure of $V_1$
Our plan now is to study the structure of the Lie algebra on $V_1$ for $V$ satisfying suitable hypotheses. By the end of this section, knowledge of $V_1$ will determine $V$ uniquely under the conditions we consider. Much of our method follows that employed in certain sections of [@DonMasEfctCC] and [@DonMasHlmVOA], and is a manifestation of the principle established there that modular invariance for an SVOA $V$ can be used to make strong conclusions about the structure of $V_1$.
\[Prop:N=1NiceBiFm\] Suppose that $V$ is a nice $N=1$ SVOA with $V_{1/2}=0$. Then $V$ has a unique non-degenerate invariant bilinear form.
By the results of [@SchVASStgs], the space of invariant bilinear forms on $V$ is in natural correspondence with the space $V_0/L(1)V_1$. We have that $V_0$ is one dimensional by hypothesis, so it remains to show that $L(1)V_1=0$. From the commutation relations of the Neveu–Schwarz superalgebra we have $G(\tfrac{1}{2})^2=L(1)$ so that $L(1)V_1\subset
G(\tfrac{1}{2})V_{1/2}$. Since $V_{1/2}=0$ the result follows.
From now on we assume $V$ to be a nice rational $N=1$ SVOA with $V_{1/2}=0$. Then the import of Proposition \[Prop:N=1NiceBiFm\] is that $V_{\bar{0}}$ is a strongly rational VOA in the sense of [@DonMasEfctCC]. In particular Theorem 1 of [@DonMasEfctCC] yields the following
\[Thm:V1Red\] The Lie algebra $V_1$ is reductive.
Since $V_1$ is reductive, the Lie rank of $V_1$ is well defined. Suppose in addition now that $V$ is self-dual. We will follow the technique used to prove Theorem 2 in [@DonMasEfctCC] to establish the following
\[Thm:RnkBndsLiernk\] The Lie rank of $V_1$ is bounded above by ${\rm rank}(V)$.
We set $c={\rm rank}(V)$. Let ${\mathfrak{h}}$ be a maximal abelian subalgebra of $V_1$ consisting of semisimple elements. The Lie rank of $V_1$ is the dimension of ${\mathfrak{h}}$ and we denote this value by $l$. The bilinear form ${{\langle}}\cdot|\cdot{{\rangle}}$ restricts to be non-degenerate on ${\mathfrak{h}}$, and thus the vertex operators $Y(h,z)$ for $h\in{\mathfrak{h}}$ generate an affine Lie algebra $\hat{{\mathfrak{h}}}$, and we can decompose $V$ as $$\begin{gathered}
\label{Frm:TensDecompV}
V=M(1)\otimes\Omega_V\end{gathered}$$ where $M(1)\simeq S(h_{-m}\mid h\in{\mathfrak{h}},\,m>0)$ is the Heisenberg VOA of rank $l$ associated to the space ${\mathfrak{h}}$, and $\Omega_V$ is the vacuum space consisting of vectors $u\in V$ such that $h_mu=0$ for all $h\in{\mathfrak{h}}$ and $m>0$. Both factors on the right hand side of (\[Frm:TensDecompV\]) are invariant under the action of $L(0)$, so that (\[Frm:TensDecompV\]) holds even as a decomposition of $L(0)$-graded spaces. Taking the trace of $q^{L(0)-c/24}$ on each side of (\[Frm:TensDecompV\]), multiplying both sides by $\eta(q)^c$ and noting that ${\sf
tr}|_{M(1)}q^{L(0)}=q^{l/24}\eta(q)^{-l}$ we have $$\begin{gathered}
\label{Frm:HolmMFrm}
\eta(q)^c{\sf tr}|_Vq^{L(0)-c/24}
=q^{(l-c)/24}\eta(q)^{c-l}
{\sf tr}|_{\Omega_V}q^{L(0)}\end{gathered}$$ The expression on the left hand side of (\[Frm:HolmMFrm\]) is a holomorphic modular form on $\Gamma_{{\theta}}$ (of weight $c/2$). It is a classical result that the Fourier coefficients $r(n)$ say, of such a function grow at most polynomially in $n$. On the other hand, the Fourier coefficients of $\eta(q)^{-s}$ grow like $n^{-s/4-3/4}\exp(\pi\sqrt{2s/3}\sqrt{n})$ whenever $s>0$ (c.f. [@MeiAsympPtns Thm 1]); since ${\sf tr}|_{\Omega_V}q^{L(0)}$ has non-negative coefficients and constant term $1$, the right hand side of (\[Frm:HolmMFrm\]) would have coefficients of super-polynomial growth if $c-l<0$, so we must have $c-l\geq 0$. This is what we required to show.
Certainly one expects Theorem \[Thm:RnkBndsLiernk\] to hold not just for the case that $V$ is self-dual, but the present result is sufficiently strong for our interests.
The following proposition is an analogue for self-dual rational SVOAs of rank $12$ of Corollary 2.3 in [@DonMasHlmVOA], and we repeat here the method of proof used there.
\[SDNR\_KFrmProp\] Suppose that $V$ is a self-dual nice rational $N=1$ SVOA of rank $12$ with $V_{1/2}=0$. Then the Killing form on $V_1$ satisfies $\kappa(\cdot\,,\cdot)=44{{\langle}}\cdot|\cdot{{\rangle}}$.
By Proposition \[Prop:VChar\] we have ${\sf
tr}|_Vq^{L(0)-c/24}=J_{{\theta}}(\tau)-24$, and in particular, $\dim(V_1)=276$. Now we apply Proposition \[NR\_ZhuProp\] to $V$ with $M=V$ and use Lemma \[NRu1vlemma\] to rewrite the conclusion as $$\begin{gathered}
\label{PreKillingFrmEqn}
{\sf tr}|_Vo(u)o(v)q^{L(0)-1/2}=
{\sf tr}|_Vo(u_{[-1]}v)q^{L(0)-1/2}
+\frac{\langle u|v\rangle}{4\pi^2}\tilde{G}_{2}(q)
{\sf tr}|_V q^{L(0)-1/2}\end{gathered}$$ The leading term of the second summand on the right hand side of (\[PreKillingFrmEqn\]) is therefore $\tfrac{1}{12}\langle
u|v\rangle q^{-1/2}$, but the leading term on the left hand side of (\[PreKillingFrmEqn\]) is $\kappa(u,v)q^{1/2}$ where $\kappa(\cdot\,,\cdot)$ is the Killing form on $V_1$. We conclude that the leading term of the first summand on the right hand side of (\[PreKillingFrmEqn\]) is $-\tfrac{1}{12}\langle u|v\rangle
q^{-1/2}$. For $u,v\in V_1$ set $X_{u,v}(\tau)={\sf
tr}|_Vo(u_{[-1]}v)q^{L(0)-c/24}$. We claim that $X_{u,v}(\tau)=\tfrac{1}{6}{{\langle}}u|v{{\rangle}}qD_q{\sf
tr}|_Vq^{L(0)-c/24}$. This is certainly true if ${{\langle}}u|v{{\rangle}}=0$. Suppose not then, and note firstly that $X_{u,v}(\tau)$ is a weight $2$ modular form for $\bar{\Gamma}_{{\theta}}$ by Corollary \[SDNR\_wtkModFrm\], and secondly that $X_{u,v}(\tau)
=-\tfrac{1}{12}{{\langle}}u|v{{\rangle}}q^{-1/2}+0+aq^{1/2}+\ldots$ for some $a\in{{\mathbb C}}$ by the above. Applying Proposition \[prop:FullModInvSVOA\] with $w=u_{[-1]}v$ and $\gamma=TS$, we find $$\begin{gathered}
{X_{u,v}(-1/\tau+1)}{\tau^{-2}}
=\alpha\,{\sf tr}|_{V_{\sigma}}
o(u_{[-1]}v)q^{L(0)-c/24}\end{gathered}$$ for some $\alpha$ in ${{\mathbb C}}$, and this $q$ series belongs to ${{\mathbb C}}[[q]]$ by Proposition \[prop:twmodbnd\]. From this we note thirdly, that $X_{u,v}(\tau)$ is holomorphic at the cusp represented by $1$. We claim that these three properties determine $X_{u,v}(\tau)$ uniquely, for if $X'(\tau)$ is another such function, then $Z(\tau)=X_{u,v}(\tau)-X'(\tau)$ is a weight two modular form for $\bar{\Gamma}_{{\theta}}$ that is holomorphic at both cusps, and vanishes at $\infty$. The space of weight $2$ modular forms that are holomorphic at both cusps is spanned by the theta function of the lattice ${{\mathbb Z}}^4$ (c.f. [@RanMdlrFrmsFns]), but this function does not vanish at $\infty$, and the claim follows. It is easy to check that $\tfrac{1}{6}{{\langle}}u|v{{\rangle}}qD_q{\sf
tr}|_Vq^{L(0)-c/24}$ satisfies the three properties of $X_{u,v}(\tau)$ so we may rewrite (\[PreKillingFrmEqn\]) as follows. $$\begin{gathered}
\begin{split}
{\sf tr}|_Vo(u)o(v)q^{L(0)-1/2}
=&
\frac{\langle u|v\rangle}{6}
qD_q{\sf tr}|_Vq^{L(0)-1/2}
+\frac{\langle u|v\rangle}{4\pi^2}\tilde{G}_2(q)
{\sf tr}|_Vq^{L(0)-1/2}\\
=&
\frac{\langle u|v\rangle}{6}
(-\tfrac{1}{2}q^{-1/2}+\tfrac{1}{2}276q^{1/2}+\ldots)\\
&+\frac{\langle u|v\rangle}{4\pi^2}
(\tfrac{\pi^2}{3}-8\pi^2q+\ldots)
(q^{-1/2}+276q^{1/2}+\ldots)
\end{split}\end{gathered}$$ Equating the coefficients of $q^{1/2}$ on each side we have $\kappa(\cdot\,,\cdot)= 44\langle\cdot|\cdot\rangle$.
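Explicitly, the coefficient of $q^{1/2}$ on the right hand side of the last display is $$\begin{gathered}
\frac{\langle u|v\rangle}{6}\cdot 138
+\frac{\langle u|v\rangle}{4\pi^2}\left(\frac{\pi^2}{3}\cdot 276-8\pi^2\right)
=(23+21)\langle u|v\rangle
=44\langle u|v\rangle,\end{gathered}$$ while on the left hand side it is $\kappa(u,v)$, as noted at the beginning of the proof.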
\[UniqSVOA\] Let $V$ be a self-dual nice rational $N=1$ SVOA with rank $12$ and $V_{1/2}=0$. Then $V$ is isomorphic to $_{{{\mathbb C}}}{{A^{f\natural}}}$ as an SVOA.
By Proposition \[Prop:N=1NiceBiFm\] the bilinear form defined by the adjoint operators is non-degenerate, and Theorems \[Thm:V1Red\] and \[Thm:RnkBndsLiernk\] show that $V_1$ is a reductive Lie algebra with Lie rank bounded above by $12$. From Proposition \[SDNR\_KFrmProp\] we find that $V_1$ is of dimension $276$ and the Killing form $\kappa(\cdot\,,\cdot)$ on $V_1$ satisfies $\kappa(\cdot\,,\cdot)=44{{\langle}}\cdot|\cdot{{\rangle}}$. In particular, the Killing form is non-degenerate, and $V_1$ is a semi-simple Lie algebra.
Suppose then that ${\mathfrak{g}}$ is a simple component of $V_1$ with level $k$ and dual Coxeter number $h$. By the main theorem of [@DonMasItgbtyVOAs] we have that $k$ is an integer. Suppose that $(\cdot\,,\cdot)$ is the bilinear form on ${\mathfrak{g}}$ normalized so that $(\alpha,\alpha)=2$ for a long root $\alpha$. Then we have ${{\langle}}u|v{{\rangle}}=k(u,v)$ for $u,v \in{\mathfrak{g}}$, and thus also $\kappa(u,v)=44k(u,v)$ for $u,v \in{\mathfrak{g}}$. Taking $u=v=\alpha$ we obtain $h/k=22$ since $\kappa(\alpha,\alpha)=4h$. This argument is independent of the choice of simple component and so the relation $h/k=22$ must hold for each simple component. By inspection the only possibility then is that ${\mathfrak{g}}$ is of type $D_{12}$ with level $k=1$.
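The inspection can be mechanized. The following sketch (an illustration only, using the standard dual Coxeter numbers and dimensions of the simple Lie algebras) confirms that, subject to Lie rank at most $12$ and dimension at most $276$, the only solution of $h=22k$ with $k$ a positive integer is $D_{12}$ at level $1$.

```python
# List the simple Lie algebras g with dual Coxeter number h = 22k for an integer level
# k >= 1, subject to rank(g) <= 12 and dim(g) <= 276.  Tuples are (type, rank, h, dim).
def simple_algebras(max_rank=30):
    for n in range(1, max_rank + 1):
        yield ('A', n, n + 1, n * (n + 2))
        if n >= 2:
            yield ('B', n, 2 * n - 1, n * (2 * n + 1))
        if n >= 3:
            yield ('C', n, n + 1, n * (2 * n + 1))
        if n >= 4:
            yield ('D', n, 2 * n - 2, n * (2 * n - 1))
    yield from [('G', 2, 4, 14), ('F', 4, 9, 52),
                ('E', 6, 12, 78), ('E', 7, 18, 133), ('E', 8, 30, 248)]

for typ, rank, h, dim in simple_algebras():
    for k in (1, 2, 3):
        if h == 22 * k and rank <= 12 and dim <= 276:
            print(typ, rank, 'level', k)  # prints only: D 12 level 1
```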
Thus we find that $V_1$ is a semisimple Lie algebra of type $D_{12}$, and the VOA $V_{\bar{0}}$ is isomorphic to the lattice VOA $V_{M_0}$ for $M_0$ a copy of the $D_{12}$ lattice. It follows then that the SVOA $V$ is isomorphic to a lattice VOA $V_M$ for some positive definite integral lattice $M=M_0\cup M_1$. Since $V$ is self-dual of rank $12$, $M$ is self-dual of rank $12$, and the fact that $V_{1/2}=0$ implies that $M$ has no vectors of unit length. There is one such lattice up to isomorphism; namely, the lattice $D_{12}^+$. We conclude that any self-dual nice rational SVOA with rank $12$ and trivial degree $1/2$ subspace is isomorphic to $V_{M}$, where $M$ is a copy of the lattice $D_{12}^+$. From the proof of Proposition \[AfnIsSVOA\] we see that the SVOA $_{{{\mathbb C}}}{{A^{f\natural}}}$ is also such an object, and this completes the proof of the theorem.
We sketch here an alternative approach to Theorem \[UniqSVOA\] that was described to us by Gerald Höhn. Suppose that $V$ is as in the statement of Theorem \[UniqSVOA\], and let us write $U_0$ for a copy of the lattice VOA associated to the lattice of type $D_4$. This VOA has three irreducible modules beyond itself; we pick one of them and denote it $U_1$. We then set $W=U_0\otimes
V_{\bar{0}}\oplus U_1\otimes V_{\bar{1}}$, so that $W$ is a module for the VOA $U_0\otimes V_{\bar{0}}$ with only integral weights. Using knowledge of the fusion of modules for $V_{\bar{0}}$ and $U_0$ it can be shown that the VOA structure extends in a unique way from $U_0\otimes V_{\bar{0}}$ to the whole space. Then $W$ is a self-dual VOA of rank $16$, and one may invoke Theorem 2 of [@DonMasHlmVOA] to conclude that $W$ is a lattice VOA $W_L$ for $L$ one of the two self-dual lattices of rank $16$; namely, $E_8\oplus E_8$ or $D_{16}^+$. One then shows that the $D_4$ lattice VOA $U_0$ can only be embedded in $W_L$ in such a way that $V_0$ must also be a lattice VOA, and the lattice must be of type $D_{12}$.
$N=1$ structure {#sec:Uniq_N=1Struc}
---------------
We now wish to demonstrate that the $N=1$ structure on ${{A^{f\natural}}}$ is unique. More precisely, suppose that $t\in {{\rm CM}}({\mathfrak{l}})_G^0$ satisfies $\langle e_St,t\rangle=0$ for any $S\subset\Omega$ with $0<w(S)\leq 4$. Citing Proposition \[Prop:CodeCriForSC\] as justification, we call such a vector superconformal. We wish to show that if $t$ is superconformal with $|t|=1$ then $t=x1_G$ for some $x\in{\operatorname{\textsl{Spin}}}({\mathfrak{l}})$. This will be achieved in Theorem \[ThmUniq\], after we establish a few preliminary lemmas. For the benefit of the reader we now include a few words about the idea behind the proof of this theorem.
### Strategy {#Sec:UniqStrat}
Our strategy is the following. Suppose that $t\in{{\rm CM}}({\mathfrak{l}})_G^0$ is a superconformal vector of unit norm, and define a function $f_t:{\operatorname{\textsl{Spin}}}({\mathfrak{l}})\to [-1,1]$ by $f_t(x)=\langle x1_G,t\rangle$. Since the bilinear form on ${{\rm CM}}({\mathfrak{l}})_G^0$ is non-degenerate we are done as soon as we find an $x\in {\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ such that $f_t(x)=1$. We will show that for any $x\in {\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ with $f_t(x)<1$ there exists some $x'\in {\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ such that $f_t(x')>f_t(x)$. The function $f$ is certainly continuous and ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ is compact, so showing that $f_t$ can always be made closer to $1$ suffices to show that $f_t$ attains the value $1$. In fact, since $xt$ is superconformal for $x\in {\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ whenever $t$ is, it suffices to show only that for any superconformal $t$ there is some $x\in
{\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ such that $f_{t}(x)>f_t({\bf 1})$ where ${\bf 1}$ denotes the identity in ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$. Thus the following results up to and including Theorem \[ThmUniq\] are dedicated to showing that for any superconformal $t$ in ${{\rm CM}}({\mathfrak{l}})_G^0$ there is some $x\in{\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ such that ${{\langle}}x1_G,t{{\rangle}}>{{\langle}}1_G,t{{\rangle}}$.
We next recall some facts about the Golay co-code, and then introduce some useful notation and terminology before presenting Propositions \[Prop:dbevcntn\] and \[Prop:dbevcntnlift\], which will be the main tools we use to implement the stated strategy. A superconformal vector $t$ may be regarded as an element of the unit ball in $2048$ dimensional space, and Proposition \[Prop:dbevcntn\] provides a way of regarding $t$ as an element of the unit ball in $2048/|\Gamma|$ dimensional space via a kind of linearization over the cosets of certain subgroups $\Gamma<{\mathcal{G}}^*$. The Proposition \[Prop:dbevcntnlift\] is a generalization of this result which arises essentially because ${\mathcal{G}}^*$ has many distinct lifts to ${{\mathbb F}}_2^{\Omega}$. We sometimes refer to Propositions \[Prop:dbevcntn\] and \[Prop:dbevcntnlift\] as the coset contraction results.
### Golay co-code {#Sec:Golayco-code}
Recall that the Golay co-code is the space ${\mathcal{G}}^*={{\mathbb F}}_2^{\Omega}/{\mathcal{G}}$, and recall the co-weight function $w^*$ on ${{\mathbb F}}_2^{\Omega}$ from §\[Sec:Notation\]. We write $X\mapsto \bar{X}$ for the canonical map ${{\mathbb F}}_2^{\Omega}\to{\mathcal{G}}^*$. The Golay code corrects three errors and the range of the co-weight function is the set $\{0,1,2,3,4\}$. Let $X\in{{\mathbb F}}_2^{\Omega}$. We say that $X$ and $\bar{X}$ are co-even if $2|w^*(X)$, and we say that $X$ and $\bar{X}$ are doubly co-even if $4|w^*(X)$.
If $w^*(X)=2$ then there is a unique pair of points $i,j\in\Omega$ such that $X+{\mathcal{G}}=\{ij\}+{\mathcal{G}}$. If $w(X)=w^*(X)=4$ then there are exactly five weight $8$ words (octads) in ${\mathcal{G}}$ containing $X$, and we have $w(Y)=w^*(Y)=4$ and $\bar{X}=\bar{Y}$ just when $X+Y$ is one of these. Thus for $\bar{X}\in{\mathcal{G}}^*$ with $w^*(X)=4$, the six sets of cardinality four in ${{\mathbb F}}_2^{\Omega}$ that lift $\bar{X}$ constitute a partition of $\Omega$ into six disjoint four-element sets. Such a partition is called a sextet, and the four-element sets in a given sextet are called tetrads.
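These counts are easily verified; the following fragment (again Python, and again only an illustration) recovers the number of sextets and the order of the co-code.

\begin{verbatim}
from math import comb

# Cosets of co-weight at most 3 have unique minimal-weight representatives
# (the Golay code corrects three errors), while each co-weight 4 coset is
# represented by the six tetrads of a sextet.
classes = {0: 1, 1: 24, 2: comb(24, 2), 3: comb(24, 3), 4: comb(24, 4) // 6}
print(classes)                         # {0: 1, 1: 24, 2: 276, 3: 2024, 4: 1771}
print(sum(classes.values()) == 2**12)  # True: |G*| = 2^24 / 2^12 = 4096
\end{verbatim}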
Let $\bar{T},\bar{T}'\in{\mathcal{G}}^*$ with co-weight $4$, and let $S=\{T_i\}$ and $S'=\{T_i'\}$ be the sextets determined by $\bar{T}$ and $\bar{T}'$ respectively. Then there are essentially four different ways that the sextets $S$ and $S'$ can overlap.
1. $|T_i\cap T_j'|\in\{0,4\}$ for all $i,j$.
2. $|T_i\cap T_j'|\in\{0,1,3\}$ for all $i,j$.
3. $|T_i\cap T_j'|\in\{0,1,2\}$ for all $i,j$.
4. $|T_i\cap T_j'|\in\{0,2\}$ for all $i,j$.
The first case is just the case that $S$ and $S'$ are the same sextet. The second case is the case that $\bar{T}+\bar{T}'$ has co-weight $2$. In the third and fourth cases $\bar{T}+\bar{T}'$ has co-weight $4$, and the last case is distinguished by the property that any lift of the set $\{\bar{T},\bar{T}'\}$ is isotropic. We refer to the sextets $S$ and $S'$ as commuting sextets when $|T_i\cap T_j'|$ is even for all $i$ and $j$, and we refer to them as non-commuting otherwise. In this setting, we refer to tetrads $T_0$ and $T_0'$ as commuting when the corresponding sextets $\{T_i\}$ and $\{T_i'\}$ are commuting, and we refer to them as non-commuting otherwise.
### Co-code lifts
Let $\Delta$ be a subset of ${{\mathbb F}}_2^{\Omega}$ such that the map ${{\mathbb F}}_2^{\Omega}\to {\mathcal{G}}^*$ induces a bijection $\Delta\leftrightarrow\bar{\Delta}=\{\bar{X}\mid X\in\Delta\}$. We then call $\Delta$ a lift of $\bar{\Delta}$ to ${{\mathbb F}}_2^{\Omega}$. If further we have that $w(X)=w^*(X)$ for all $X\in\Delta$, we say that $\Delta$ is a balanced lift of $\bar{\Delta}$. Recall that the space ${{\mathbb F}}_2^{\Omega}$ admits a bilinear form ${{\mathbb F}}_2^{\Omega}\times{{\mathbb F}}_2^{\Omega}\to{{\mathbb F}}_2$ defined so that ${{\langle}}X,Y{{\rangle}}=|X\cap Y|\pmod{2}$. In some cases a subset $\bar{\Delta}<{\mathcal{G}}^*$ has a lift $\Delta\subset{{\mathbb F}}_2^{\Omega}$ such that ${{\langle}}X,Y{{\rangle}}\equiv 0$ for any $X,Y\in\Delta$, and we call such a lift isotropic. Suppose that $\Delta$ is doubly co-even and balanced. We then say that $\Delta$ is commuting if the sextets determined by any pair of tetrads in $\Delta$ are commuting in the sense of §\[Sec:Golayco-code\].
Suppose now that $\Sigma$ is a balanced lift of $({\mathcal{G}}^*)^0$, the even part of ${\mathcal{G}}^*$. Then the set $\Sigma$ is in natural bijective correspondence with $({\mathcal{G}}^*)^0$, and the group structure on the latter may be lifted via this correspondence so as to define a group structure on the former. We denote this group operation by $\dotplus$, so that $X\dotplus Y=Z$ just when $X+Y+Z\in{\mathcal{G}}$. Note that the bilinear form on ${{\mathbb F}}_2^{\Omega}$ is not bilinear with respect to $\dotplus$, so that in general ${{\langle}}A\dotplus B,X{{\rangle}}\neq {{\langle}}A,X{{\rangle}}+{{\langle}}B,X{{\rangle}}$ for example. A multiplicative $2$-cocycle $\sigma$ with values in $\{\pm 1\}$ is defined on the group $\Sigma=(\Sigma,\dotplus)$ by requiring that $e_Xe_Y1_G=\sigma(X,Y)e_{X\dotplus Y}1_G$ for $X,Y\in\Sigma$.
\[prop:2cocycRHsymm\] We have $$\begin{gathered}
\sigma(X,X)=(-1)^{w^*(X)/2}\\
\sigma(X,Y)=(-1)^{{{\langle}}X,Y{{\rangle}}}\sigma(Y,X)\\
\sigma(X,Y)=\sigma(X,X)\sigma(X,X\dotplus Y)\end{gathered}$$ for all $X,Y\in\Sigma$. In particular $\sigma(X,Y)=\sigma(X,X\dotplus Y)$ for all $Y\in\Sigma$ just when $X$ is doubly co-even.
For any $X,Y\in\Sigma$ we have $e_Xe_Y1_G=(-1)^{{{\langle}}X,Y{{\rangle}}}e_Ye_X1_G$, and this implies $\sigma(X,Y)=(-1)^{{{\langle}}X,Y{{\rangle}}}\sigma(Y,X)$. Left multiplying both sides of $e_Xe_Y1_G
=\sigma(X,Y)e_{X\dotplus Y}1_G$ by $e_X$ yields $\sigma(X,Y)
e_Xe_{X\dotplus Y}1_G =e_X^2e_Y1_G$. On the other hand $e_Xe_{X\dotplus Y}1_G=\sigma(X,X\dotplus Y)e_Y1_G$. Since $e_X^2=\sigma(X,X)$, this verifies the third identity. For the first identity, note that reordering the factors of $e_X^2$ gives $e_X^2=(-1)^{w(X)(w(X)-1)/2}\prod_{i\in X}e_i^2=(-1)^{w^*(X)/2}$, since $w(X)=w^*(X)$ is even for $X\in\Sigma$, so that the product of the squares $e_i^2=\pm 1$ is $1$ and the reordering sign is $(-1)^{w(X)/2}$. The last statement of the proposition follows by combining the first and third identities.
When $\Sigma$ is a lift of $({\mathcal{G}}^*)^0$, the set $\{e_X1_G|X\in\Sigma\}$ constitutes an orthonormal basis for ${{\rm CM}}({\mathfrak{l}})_G^0$. We then have $t=\sum_{\Sigma}t_Xe_X1_G$ for unique $t_X\in{{\mathbb R}}$ such that $\sum t_X^2=1$ when $t$ is a unit vector in ${{\rm CM}}({\mathfrak{l}})_G^0$. Note that $f_t({\bf
1})=t_{\emptyset}$. We write ${\rm supp}(t)$ for the set of $\bar{X}\in{\mathcal{G}}^*$ such that $t_X\neq 0$.
One way to obtain a balanced lift of $({\mathcal{G}}^{*})^0$ is the following. Choose an element in $\Omega$ and denote it by $\infty$. Let $\Delta=\Delta_0\cup\Delta_2\cup\Delta_4$ where $\Delta_0$ contains just the empty set, $\Delta_2$ is the set of pairs of elements from $\Omega$, and $\Delta_4$ is the set of subsets of $\Omega$ of size four containing $\infty$. Then $\Delta$ is a balanced lift of $({\mathcal{G}}^*)^0$. A doubly co-even subgroup $\Gamma<\Delta$ is then isotropic just when it is commuting.
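Note that $|\Delta|=|\Delta_0|+|\Delta_2|+|\Delta_4|=1+\binom{24}{2}+\binom{23}{3}=1+276+1771=2048$, in agreement with the fact that the vectors $e_X1_G$ for $X$ in a balanced lift of $({\mathcal{G}}^*)^0$ furnish an orthonormal basis for the $2048$-dimensional space ${{\rm CM}}({\mathfrak{l}})_G^0$.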
### Coset contraction
Let $\Sigma$ be a balanced lift of $({\mathcal{G}}^*)^0$, suppose that $\bar{\Gamma}$ is a subgroup of ${\mathcal{G}}^*$, and suppose that $W\in\Sigma$ is chosen so that the coset $\bar{W}+\bar{\Gamma}$ of $\bar{\Gamma}$ in ${\mathcal{G}}^*$ is doubly co-even. Suppose also that the corresponding lift $W\dotplus \Gamma\subset\Sigma$ obtained by restriction from $\Sigma$ is isotropic. Then the $2$-cocycle $\sigma$ is symmetric on $W\dotplus \Gamma$. Let $\chi:\Sigma\to
\pm 1$ be a function such that $$\begin{gathered}
\label{form:chicond}
\chi(A\dotplus W)\sigma(A\dotplus W,Z)=\chi(Z)\chi(A\dotplus
W\dotplus Z),\quad \forall A\in\Gamma,\;
Z\in\Sigma.\end{gathered}$$ Then by Proposition \[prop:2cocycRHsymm\] we have $\sigma(A\dotplus W ,Z)=\sigma(A\dotplus W,A\dotplus W\dotplus Z)$ so that the invariance under swapping $Z$ with $A\dotplus W\dotplus
Z$ is evident for both sides of the expression (\[form:chicond\]). The assumption that $W\dotplus \Gamma$ be isotropic ensures that (\[form:chicond\]) can be satisfied for $Z$ in $W\dotplus \Gamma$.
In the case that $W\dotplus \Gamma=\Gamma$ the condition (\[form:chicond\]) implies that the restriction $\chi|_{\Gamma}$ of $\chi$ to $\Gamma$ is a $1$-cocycle with coboundary $\sigma|_{\Gamma\times\Gamma}$ since we have $\sigma(A\dotplus B,A) =\sigma(A,B)$ for $A,B \in\Gamma$ when $\Gamma$ is isotropic. The values of $\chi$ on a coset $Z\dotplus
\Gamma$ of $\Gamma$ in $\Sigma$ are determined by those on $\Gamma$ together with $\chi(Z)$ since $\chi(Z\dotplus
A)=\chi(A)\sigma(A,Z)/\chi(Z)$. Any two such functions $\chi:\Sigma\to\pm 1$ therefore differ by a single element of $\Gamma^*={\rm Hom}(\Gamma,\pm 1)$ on each coset of $\Gamma$ in $\Sigma$. Recall that the maps $\bar{X}\mapsto(-1)^{{{\langle}}D,X{{\rangle}}}$ for $D\in{\mathcal{G}}$ exhaust the homomorphisms ${\mathcal{G}}^*\to\pm 1$. Indeed, $(-1)^{{{\langle}}D,X{{\rangle}}}$ is independent of the choice of lift $X$ for $\bar{X}$, and thus any homomorphism $(\Sigma,\dotplus)\to
\pm 1$ is of the form $X\mapsto (-1)^{{{\langle}}D,X{{\rangle}}}$ for some $D\in{\mathcal{G}}$.
To a function $\chi$ satisfying (\[form:chicond\]) and to any given $Z\in \Sigma$ we associate the element $u_{\chi,Z}
=\sum_{A\in\Gamma} \chi(Z\dotplus A) e_{Z\dotplus A}$ in ${{\rm Cliff}}({\mathfrak{l}})$, and for ease of notation we set $u_{\chi}=u_{\chi,\emptyset}$. The following proposition is our main tool for classifying superconformal vectors in ${{\rm CM}}({\mathfrak{l}})_G^0$.
\[Prop:dbevcntn\] Let $t=\sum_{\Sigma}t_Xe_X1_G$ be superconformal with $|t|=1$. Let $\bar{\Gamma}$ be a subgroup of ${\mathcal{G}}^*$ and suppose that $\bar{W}\in{\mathcal{G}}^*$ is chosen so that $\bar{W}+ \bar{\Gamma}$ is doubly co-even. Suppose also that $W\dotplus\Gamma \subset\Sigma$ is an isotropic lift of $\bar{W}+\bar{\Gamma}$. Then for $\chi:\Sigma\to \pm 1$ satisfying (\[form:chicond\]) and for ${\mathsf T}$ any transversal of $\Gamma$ in $\Sigma$ we have $$\begin{gathered}
\label{eqn:cosetlinz}
\sum_{Z\in{\mathsf T}}
{{\langle}}u_{\chi,Z}1_G,t{{\rangle}}{{\langle}}u_{\chi,W\dotplus Z}1_G,t{{\rangle}}={{\langle}}u_{\chi,W}t,t{{\rangle}}=\begin{cases} 1&\text{if $W\in\Gamma$,}\\
0&\text{if $W\notin\Gamma$.}
\end{cases}\end{gathered}$$ In particular, for the case that ${\rm supp}(t)$ is a doubly co-even group with an isotropic lift $\Gamma$ we have $\sum_{A\in\Gamma} \chi(A)t_A=\pm 1$ for any $1$-cocycle $\chi:\Gamma\to \pm 1$ with coboundary $\sigma|_{\Gamma\times\Gamma}$.
Let us consider the expression ${{\langle}}u_{\chi}t,t{{\rangle}}$. Since $t$ is superconformal, we have ${{\langle}}e_Xt,t{{\rangle}}=0$ whenever $w(X)<8$, so that ${{\langle}}u_{\chi}t,t{{\rangle}}=\chi(\emptyset){{\langle}}t,t{{\rangle}}=1$. On the other hand we have $$\begin{gathered}
\begin{split}
u_{\chi}t&=\sum_{A\in\Gamma, Z\in\Sigma}
\chi(A)t_Ze_Ae_Z1_G\\
&=\sum_{A\in\Gamma,Z\in \Sigma}
\chi(A)\sigma(A,Z)
t_Ze_{Z\dotplus A}1_G\\
&=\sum_{A\in\Gamma,Z\in\Sigma}
\chi(Z)\chi(Z\dotplus A)
t_Ze_{Z\dotplus A}1_G
\end{split}\end{gathered}$$ and then $1={{\langle}}u_{\chi}t,t{{\rangle}}=\sum \chi(Z)\chi(Z\dotplus A)
t_Zt_{Z\dotplus A}$. From the fact that ${{\langle}}u_{\chi,Z}1_G,t{{\rangle}}=\sum_{A\in\Gamma}\chi(Z\dotplus A)t_{Z\dotplus
A}$ we see that the left hand side of (\[eqn:cosetlinz\]) coincides with ${{\langle}}u_{\chi}t,t{{\rangle}}$, and the equation (\[eqn:cosetlinz\]) follows. This handles the case that $W\in\Gamma$, and the case that $W\notin\Gamma$ is similar. The case that all $t_X$ vanish for $X\notin\Gamma$ then yields $(\sum_{\Sigma}\chi(X)t_X)^2=1$, and the last part follows from this.
Suppose that $\Sigma$ and $\Sigma'$ are balanced lifts of ${\mathcal{G}}^*$ to ${{\mathbb F}}_2^{\Omega}$. We denote the group operations on $\Sigma$ and $\Sigma'$ both by $\dotplus$. There is a correspondence $\Sigma\leftrightarrow \Sigma'$ such that we have $X\leftrightarrow X'$ if and only if $\bar{X}=\bar{X}'\in{\mathcal{G}}^*$. We then have $X\dotplus Y=Z$ in $\Sigma$ just when $X'\dotplus Y'=Z'$ in $\Sigma'$. For $X\in
\Sigma$ we have ${{\langle}}X,X'{{\rangle}}=0$ and $X+X'\in{\mathcal{G}}$. Define $\varrho:{\mathcal{G}}^*\to\pm 1$ so that $\varrho_{\bar{X}}e_Xe_{X'}1_G
=1_G$. We abuse notation to write $\varrho_X$ for $\varrho_{\bar{X}}$ whenever $X\in\Sigma$. Let $\sigma'$ denote the multiplicative $2$-cocycle on $(\Sigma',\dotplus)$ such that $e_{X'}e_{Y'}1_G=\sigma'(X',Y')e_{Z'}1_G$ for $Z'=X'\dotplus Y'$. The following lemma gives the relationship between $\sigma$ and $\sigma'$.
\[Lem:sigrelsig’\] For $X,Y\in\Sigma$ and $Z=X\dotplus Y$ we have $$\begin{gathered}
\sigma'(X',Y')=(-1)^{{{\langle}}X+X',Y{{\rangle}}}
\varrho_X\varrho_Y\varrho_Z
\sigma(X,Y)\end{gathered}$$
As before we assume that $W\dotplus \Gamma$ is doubly co-even and isotropic, and we assume now the same for $W'\dotplus\Gamma'$. We also assume that ${{\langle}}X',Y{{\rangle}}={{\langle}}X,Y'{{\rangle}}$ for all $X,Y\in
W\dotplus \Gamma$. Given $\chi:\Sigma\to\pm 1$ satisfying (\[form:chicond\]), we define $\chi':\Sigma'\to\pm 1$ by setting $\chi'(X')=\psi(X)\chi(X)$ for $X'\in\Sigma'$ where $\psi:\Sigma\to \pm 1$ is chosen to satisfy $$\begin{gathered}
\label{Form:LiftTrans}
\psi(X)
(-1)^{{{\langle}}X+X',Z{{\rangle}}}
\varrho_{X}\varrho_Z
\varrho_{X\dotplus Z}
=\psi(Z)\psi(X\dotplus Z),\quad
\forall X\in W\dotplus\Gamma, Z\in\Sigma.\end{gathered}$$ Then from Lemma \[Lem:sigrelsig’\] we obtain that $\chi'(X')\sigma'(X',Z') =\chi'(Z')\chi'(X'\dotplus Z')$ whenever $X'\in W'\dotplus \Gamma'$ and $Z'\in\Sigma'$. Now we may apply Proposition \[Prop:dbevcntn\] with $\Sigma'$ in place of $\Sigma$, and $\chi'$ in place of $\chi$, and we obtain the following generalization of that proposition.
\[Prop:dbevcntnlift\] Let $\Sigma$, $\Sigma'$ be balanced lifts of ${\mathcal{G}}^*$, and suppose that $\Gamma<\Sigma$ and $W\in\Sigma$ are chosen so that both $W\dotplus \Gamma$ and $W'\dotplus \Gamma'$ are doubly co-even and isotropic. Suppose also that ${{\langle}}X',Y{{\rangle}}={{\langle}}X,Y'{{\rangle}}$ for all $X,Y\in W\dotplus\Gamma$. Then for $\chi:\Sigma\to\pm 1$ satisfying (\[form:chicond\]), for $\psi:\Sigma\to \pm 1$ satisfying (\[Form:LiftTrans\]), and for ${\mathsf T}$ a transversal of $\Gamma$ in $\Sigma$, we have $$\begin{gathered}
\sum_{Z\in{\mathsf T}}
{{\langle}}u_{\chi'',Z}1_G,t{{\rangle}}{{\langle}}u_{\chi'',W\dotplus Z}1_G,t{{\rangle}}=\begin{cases}
1&\text{if $W\in\Gamma$,}\\
0&\text{if $W\notin\Gamma$.}
\end{cases}\end{gathered}$$ where $\chi'':\Sigma\to \pm 1$ is given by $\chi''(X)=\varrho_X\psi(X)\chi(X)$, and $u_{\chi'',Z}$ is defined by $u_{\chi'',Z} =\sum_{A\in\Gamma}\chi''(A\dotplus Z)e_{A\dotplus
Z}$ for $Z\in\Sigma$.
### Superconformal vectors
We now embark upon the task of realizing the strategy summarized in §\[Sec:UniqStrat\]; that is, the task of showing that for any superconformal $t\in{{\rm CM}}({\mathfrak{l}})_G^0$ there is some $x$ in ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ such that $f_t(x)>t_{\emptyset}$. In practice, we treat all possible unit vectors $t$ on a case by case basis using the coset contraction results to narrow down the possibilities for the coefficients $t_X$ of $t=\sum_{\Sigma}t_Xe_X1_G$ (given a balanced lift $\Sigma$ of $({\mathcal{G}}^*)^0$ to ${{\mathbb F}}_2^{\Omega}$) that can make $t$ superconformal. In the course of doing so we find superconformal vectors in the ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ orbit of $1_G$ other than those of the form $\pm e_X1_G$ for $X\in{{\mathbb F}}_2^{\Omega}$, or $\exp(re_X)1_G$ for $r\in{{\mathbb R}}$ and $w(X)=2$, and ultimately we find that $t$ either has projection on one of these vectors exceeding $t_{\emptyset}$ or cannot be superconformal.
Suppose that $|t_X|>t_{\emptyset}$ for some $X\in\Sigma$. Then we have ${{\langle}}x1_G,t{{\rangle}}=\sigma(X,X)t_X$ for $x=e_X$. Multiplying by $-1$ if necessary, we have $f_t(x)>t_{\emptyset}$. Thus from now on we may suppose that $t_{\emptyset}\geq|t_X|$ for all $X\in\Sigma$.
Suppose that $t_X\neq 0$ for some $X$ with $w^*(X)=2$. Setting $\exp(re_X)=\cos(r)+\sin(r)e_X$ we have ${{\langle}}\exp(re_X)1_G,t{{\rangle}}=
(1-\tfrac{1}{2}r^2)t_{\emptyset}-rt_X+o(r^2)$ so that $f_t(x)>t_{\emptyset}$ for $x=\exp(re_X)$ and suitably chosen $r$. From now on we assume that $t_X=0$ whenever $w^*(X)=2$. That is, we may assume that ${\rm supp}(t)$ is a doubly co-even subset of ${\mathcal{G}}^*$.
Suppose that ${\rm supp}(t)$ is contained in a doubly co-even subgroup $\bar{\Gamma}$ of ${\mathcal{G}}^*$ and suppose that $\bar{\Gamma}$ has a balanced isotropic lift $\Gamma$. We may assume that $\Sigma$ is a balanced lift of $({\mathcal{G}}^*)^0$ containing $\Gamma$. Since it is useful, we now state the following result, which is obtained by direct application of Proposition \[Prop:dbevcntnlift\] to our present situation.
\[Prop:dbevsupplift\] Let $\bar{\Gamma}$ be a doubly co-even subgroup of ${\mathcal{G}}^*$ and suppose that $\Gamma$ and $\Gamma'$ are balanced isotropic lifts of $\bar{\Gamma}$ such that ${{\langle}}A',B{{\rangle}}={{\langle}}A,B'{{\rangle}}$ for all $A,B\in \Gamma$. Then for $\chi:\Gamma\to\pm 1$ a $1$-cocycle with coboundary $\sigma|_{\Gamma\times\Gamma}$, and for $\psi:\Gamma\to
\pm 1$ satisfying $$\begin{gathered}
\psi(A)
(-1)^{{{\langle}}A+A',B{{\rangle}}}
\varrho_{A}\varrho_B
\varrho_{A\dotplus B}
=\psi(B)\psi(A\dotplus B)\end{gathered}$$ for all $A,B\in\Gamma$ we have ${{\langle}}u_{\chi''}1_G,t{{\rangle}}^2=1$ where $\chi'':\Gamma\to \pm 1$ is given by $\chi''(A)=\varrho_A\psi(A)\chi(A)$, and $u_{\chi''}=\sum_{A\in\Gamma}\chi''(A)e_{A}$.
The requirement that $\bar{\Gamma}$ be doubly co-even is quite strong. Any maximal doubly co-even subgroup of ${\mathcal{G}}^*$ has order $16$ or $32$, and every doubly co-even subgroup of order $16$ or less has an isotropic lift. A convenient way to generate doubly co-even subgroups of ${\mathcal{G}}^*$ is the following. Choose a weight $12$ word in ${\mathcal{G}}$ (a dodecad), and partition the $12$ non-zero coordinates into six pairs $\{A_i\}$. Then the $15$ elements $\bar{A}_i+\bar{A}_j \in{\mathcal{G}}^*$ are the non-trivial elements in a doubly co-even subgroup $\bar{\Gamma}$ say, of ${\mathcal{G}}^*$ of order $16$. Furthermore, the set $\{\emptyset,A_i+A_j\}$ furnishes a balanced isotropic lift of $\Gamma$. For some partitions there is a sextet $S=\{T_i\}$ such that each pair $A_i$ is contained in a tetrad of $S$. In this case, the addition of one of the $T_i$ extends $\Gamma$ to be a balanced isotropic lift of a doubly co-even $32$ group in ${\mathcal{G}}^*$.
Suppose then that ${\rm supp}(t)$ is contained in a two group $\bar{\Gamma}$ with balanced lift $\Gamma=\{\emptyset,A\}$. Then since $t$ is a unit vector we have $t_{\emptyset}^2+t_{A}^2=1$. On the other hand from Proposition \[Prop:dbevsupplift\] we have $(t_{\emptyset}\pm t_A)^2= 1$ and thus $t_{\emptyset}+t_A=\pm 1$ and $t_{\emptyset}-t_A=\pm 1$, so that $t_{\emptyset}$ and $t_A$, being half the sum and half the difference of two signs, both lie in $\{0,\pm 1\}$. The only solutions are therefore those in which one of $t_{\emptyset}$, $t_A$ is $\pm 1$ and the other is $0$. Since we have assumed $t_{\emptyset}\geq |t_X|$ for all $X\in\Sigma$, we have $t_{\emptyset}=1$ and $t=1_G$.
Suppose now that ${\rm supp}(t)$ is contained in a four group $\bar{\Gamma}$ with balanced isotropic lift $\Gamma=\{\emptyset,A,B,C\}$. Similar to before we have $t_{\emptyset}^2+t_A^2+t_B^2+t_C^2=1$. A function $\chi_0:\Gamma\to\pm 1$ suitable for an application of Proposition \[Prop:dbevsupplift\] may be given arbitrary values on generators of $\Gamma$, and then the remaining values are determined by $\sigma$. For example, we may set $\chi_0(\emptyset)=\chi_0(A)=\chi_0(B)=1$ and $\chi_0(C)=\sigma(A,B)\chi_0(A)\chi_0(B)=\sigma(A,B)$. Any other suitable $1$-cocycle $\chi$ differs from $\chi_0$ by some element of $\Gamma^*$, and Proposition \[Prop:dbevsupplift\] (with $\Sigma'=\Sigma$) now yields $$\begin{gathered}
\label{Form:dbevsuppcntnon4}
t_{\emptyset}+\mu(A)t_A+\mu(B)t_B
+\sigma(A,B)\mu(C)t_C=\pm 1\end{gathered}$$ where $\mu$ is any homomorphism $\mu:\Gamma\to\pm 1$. Summing over (\[Form:dbevsuppcntnon4\]) for various choices of $\mu$ we find that $t_X\pm t_Y\in\{0,\pm 1\}$ for each $X,Y\in\Gamma$, and then $t_X\in\{0,\pm \tfrac{1}{2},\pm 1\}$ for each $X\in\Gamma$. Thus if one of the $t_X$ vanishes then the remaining $t_Y$ must lie in $\{0,\pm 1\}$, and then $t=\pm e_Y1_G$ for some $Y\in\Gamma$. If one of the $t_X$ is $\pm \tfrac{1}{2}$ then the remaining $t_Y$ must lie in $\{\pm \tfrac{1}{2}\}$ and there is some restriction on the signs: any two solutions differ by an element of $\Gamma^*$, and one solution is given by $s=\tfrac{1}{2}(1+e_A+e_B-\sigma(A,B)e_C)1_G$. Taking $\Sigma'$ different from $\Sigma$ we can say more: if there is some lift $\Gamma'$ of $\bar{\Gamma}$ such that ${{\langle}}X',Y{{\rangle}}\neq 0$ for some $X,Y\in\Gamma$, then we obtain another equation like (\[Form:dbevsuppcntnon4\]) with signs not differing by an element of $\Gamma^*$, and this extra restriction is enough to rule out the possibility that any $t_X$ has norm $\tfrac{1}{2}$. Such a lift $\Gamma'$ exists just when two of the sextets corresponding to $\{\bar{A},\bar{B},\bar{C}\}$ are non-commuting (see §\[Sec:Golayco-code\]). We are left with the question of whether or not a vector $s\in{{\rm CM}}({\mathfrak{l}})_G^0$ of the form $s =\tfrac{1}{2}(1
+\mu(A)e_A+\mu(B)e_B-\sigma(A,B)\mu(C)e_C)1_G$ with $\mu\in\Gamma^*$ is in the ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ orbit containing $1_G$ given that the sextets in $\bar{\Gamma}$ are commuting. The answer is affirmative, as the following lemma demonstrates.
\[Lem:sc4gp\] Suppose ${\Gamma}=\{\emptyset,A,B,C\}$ is a balanced lift of a commuting four group in ${\mathcal{G}}^*$, and $$\begin{gathered}
s =\frac{1}{2}\left(1 +\mu(A)e_A +\mu(B)e_B -\sigma(A,B)
\mu(C)e_C\right)1_G\end{gathered}$$ for some $\mu\in\Gamma^*$. Then $s$ is in the ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ orbit containing $1_G$.
For $S\in{\mathcal{G}}$ let $g_S=\pm e_S$ with the sign chosen so that $g_S\in G$. For any given $S\in{\mathcal{G}}$ either ${{\langle}}S,X{{\rangle}}=0$ for all $X\in\Gamma$, or there is a unique non-trivial $X\in\Gamma$ such that ${{\langle}}S,X{{\rangle}}=0$. We define a group $G'=\{g_S'\mid
S\in{\mathcal{G}}\}< {\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ by setting $g_S'=s_Xe_Xg_S$ when $X$ is the unique non-trivial element of $\Gamma$ such that ${{\langle}}S,X{{\rangle}}=0$, and setting $g_S'=g_S$ when ${{\langle}}S,X{{\rangle}}=0$ for all $X\in\Gamma$. Then a simple computation shows that $g_S's=s$ for all $S\in{\mathcal{G}}$. The group $G'$ is an ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous subgroup of ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ whose associated code is doubly-even self-dual and has no short roots. In other words, $G'$ is a lift of a Golay code on $\Omega$. Noting that both $G$ and $G'$ contain the volume element $e_{\Omega}$, it follows from the uniqueness of the Golay code that there is some coordinate permutation in ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ that sends $s$ to $1_G$.
The method illustrated above for the case that $\bar{\Gamma}$ has order four is a model for the cases of higher order. For this reason we will summarize only the results for the higher order cases that we need, and refrain from burdening the reader with all details.
\[Lem:sc8gp\] In the case that ${\rm supp}(t)$ is contained in an eight group $\bar{\Gamma}$ with balanced isotropic lift $\Gamma= {{\langle}}A,B,C{{\rangle}}$, either ${\rm supp}(t)$ is contained in a commutative four group, or $\Gamma$ is totally commutative and $t$ is of the form $$\begin{gathered}
t=\frac{1}{4}\left(3
+\mu(A)e_A+\mu(B)e_B+\mu(C)e_C
+\mu(A\dotplus B)\sigma(A,B)e_{A\dotplus B}
+\ldots\right)1_G\end{gathered}$$ for some $\mu\in\Gamma^*$, and $t$ belongs to the ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ orbit containing $1_G$.
\[Lem:sc16gp\] In the case that ${\rm supp}(t)$ is contained in a $16$ group $\bar{\Gamma}$ with balanced isotropic lift $\Gamma$, either ${\rm
supp}(t)$ is contained in some commutative eight group, or there is a dodecad in ${\mathcal{G}}$ and a partition $P$ of its non-trivial coordinates into six pairs $P=\{A_0,\ldots,A_5\}$ such that $\Gamma=\{\emptyset,A_{ij}\}$ where we write $A_{ij}$ for the tetrad $A_i+A_j\in{{\mathbb F}}_2^{\Omega}$. We may assume that $e_{A_0}e_{A_1}\cdots e_{A_5}\in G$. Then $\sigma(A_{ij},A_{ik})=-1$ and $\sigma(A_{ij},A_{kl})=1$ for distinct $i,j,k,l$. The vector $t$ is then of the form $$\begin{gathered}
t=\frac{1}{4}
\left( 1+\mu(A_{01})e_{A_{01}}
+\ldots
+\mu(A_{04})e_{A_{04}}
-\mu(A_{12})e_{A_{12}}
-\cdots-\mu(A_{45})e_{A_{45}}
\right)1_G\end{gathered}$$ for some $\mu\in\Gamma^*$, and $t$ belongs to the ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ orbit containing $1_G$.
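As a quick check on Lemmata \[Lem:sc8gp\] and \[Lem:sc16gp\], the displayed vectors do have unit norm, assuming (as the notation suggests) that the ellipses run over the remaining non-trivial elements of $\Gamma$ with coefficients $\pm\tfrac{1}{4}$: the squared norms are $(3^2+7)/4^2=1$ and $16/4^2=1$ respectively.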
\[Lem:sc32gp\] In the case that ${\rm supp}(t)$ is contained in a $32$ group with balanced isotropic lift, either ${\rm supp}(t)$ is contained in some $16$ group with isotropic lift, or $f_t(x)>t_{\emptyset}$ for $x1_G=s$ a superconformal vector with ${\rm supp}(s)$ contained in a doubly co-even $16$ group with isotropic lift.
We must now treat the case that ${\rm supp}(t)$ is not contained in any doubly co-even group with isotropic lift. We remind the reader that we assume $t_{\emptyset}\geq |t_X|$ for all $X\in\Sigma$, and $t_X=0$ whenever $w^*(X)=2$. We claim that for such $t$, either $f_t(x)>t_{\emptyset}$ for some $x1_G=s$ with $s$ supported on a doubly co-even group with isotropic lift as given in Lemmata \[Lem:sc4gp\] through \[Lem:sc16gp\], or $t$ is not superconformal.
So let us assume that $f_t(x)\leq t_{\emptyset}$ for any $x\in{\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ such that $x1_G=s$ for $s$ one of the superconformal vectors appearing in Lemmata \[Lem:sc4gp\] through \[Lem:sc16gp\]. This condition amounts to putting upper bounds on the moduli of the coefficients $t_X$ that are non-zero. For example, take $s$ and $\Gamma$ as in Lemma \[Lem:sc4gp\]. Then ${{\langle}}s,t{{\rangle}}\leq t_{\emptyset}$ for all $\mu\in\Gamma^*$ is equivalent to the inequalities $$\begin{gathered}
0\leq \frac{1}{2}\left(
t_{\emptyset}+\mu(A)t_A+\mu(B)t_B
+\mu(C)\sigma(A,B)t_C\right),\quad
\forall\mu\in\Gamma^*,\end{gathered}$$ which in turn imply that the smallest of $|t_A|$, $|t_B|$ and $|t_C|$ is not greater than $\tfrac{1}{4}$, given that all are non-zero. Also, one can construct elements $x\in{\operatorname{\textsl{Spin}}}({\mathfrak{l}})$ of the form $x=\exp(\theta_1e_{X_1})\cdots\exp(\theta_ke_{X_k})$ for $w(X_i)=2$ such that $f_t(x)>t_{\emptyset}$ so long as not all the non-vanishing $t_X$ in $t$ are too small. On the other hand, Proposition \[Prop:dbevcntn\] applied in the case that $W\notin
\Gamma$ can be used to show that the non-vanishing of some co-weight $4$ coefficients $t_X$ implies the non-vanishing of others. The simplest result of this kind is the following.
\[Lem:FourGpCond\] If $t_A\neq 0$ then there is some $B\in\Sigma$ such that $t_Bt_{A\dotplus B}\neq 0$.
We take $\Gamma=\{\emptyset\}$ and $W=A$ in Proposition \[Prop:dbevcntn\]. We then obtain $$\begin{gathered}
0=\langle e_{A}t,t\rangle
=\sum_{X\in\Sigma}\left\langle
t_Xe_Ae_X1_G,t\right\rangle
=2t_{\emptyset}t_A+
\sum_{X\in\Sigma\setminus\emptyset,A}
\sigma(A,X)t_Xt_{A\dotplus X}\end{gathered}$$ and this implies the claim.
We require to show then that the coefficients $t_X$ for $w^*(X)=4$ cannot all be too small. Since the non-vanishing of some co-weight $4$ coefficients implies the non-vanishing of others, let us consider the extreme case that $t_X\neq 0$ for all $X\in\Sigma$ with $w^*(X)=4$. Suppose that $t_Z=\varepsilon$ is the greatest among these (we may assume $t_Z>0$) so that $\varepsilon\geq|t_X|$ for all $X\in\Sigma$ with $w^*(X)=4$. Then since $\sum t_X^2=1$ we have $t_{\emptyset}>t_{\emptyset}^2> 1-N\varepsilon^2$ where $N=1771$ is the number of co-weight $4$ elements in ${\mathcal{G}}^*$. On the other hand Proposition \[Prop:dbevcntn\] with $\Gamma=\{\emptyset\}$ and $W=Z$ yields $0={{\langle}}e_Zt,t{{\rangle}}$, and we then have $$\begin{gathered}
0={{\langle}}e_Zt,t{{\rangle}}> 2(1-N\varepsilon^2)\varepsilon
-M\varepsilon^2\end{gathered}$$ where $M$ is the number of $X\in\Sigma$ with $w^*(X)=4$ such that $w^*(Z\dotplus X)=4$. We have $2(1-N\varepsilon^2)\varepsilon
-M\varepsilon^2>0$ (that is, a contradiction) just when $2>M\varepsilon+2N\varepsilon^2$, so that $\varepsilon$ cannot be smaller than $1/\sqrt{2N}$ for example. In this way we find that any $t$ which does not satisfy $f_t(x)>t_{\emptyset}$ for some superconformal $x1_G=s$ already constructed is not superconformal. That is, we have the following
\[Thm:UniqOrb\] The superconformal vectors in ${{\rm CM}}({\mathfrak{l}})_G$ form a single orbit under the action of ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$. This orbit contains $1_G$.
From Theorems \[UniqSVOA\] and \[Thm:UniqOrb\] we deduce the following characterization of ${{A^{f\natural}}}$.
\[ThmUniq\] Let $V$ be a self-dual nice rational $N=1$ SVOA with rank $12$ and $V_{1/2}=0$. Then $V$ is isomorphic to $_{{{\mathbb C}}}{{A^{f\natural}}}$ as an $N=1$ SVOA.
For $x\in{\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$ let us write ${\rm tr}|_{24}x$ for the trace of $x$ in the representation of ${\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$ on ${{\mathbb R}}^{24}$. We have ${\rm tr}|_{24}(e_X)=24-2n$ when $w(X)=n$. Combining Theorems \[Thm:PtStabIsCo0\] and \[Thm:UniqOrb\] we obtain the following characterization of the group ${\operatorname{\textsl{Co}}}_0$.
\[Thm:Co0Chrztn\] Let $M$ be a spin module for ${\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$ and let $t\in M$ such that ${{\langle}}xt,t{{\rangle}}=0$ whenever $x\in{\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$ is an involution with ${\rm tr}|_{24}x=16$. Then the subgroup of ${\operatorname{\textsl{Spin}}}_{24}({{\mathbb R}})$ fixing $t$ is isomorphic to ${\operatorname{\textsl{Co}}}_0$.
Structure of ${{V^{f\natural}}}$ {#LatConst}
================================
In this section we summarize the construction of ${{V^{f\natural}}}$, mentioned in the introduction, and we indicate how to construct an explicit isomorphism with ${{A^{f\natural}}}$.
Lattice $N=1$ SVOAs
-------------------
There is a standard construction which assigns an $N=1$ SVOA to a positive definite integral lattice, and we summarize that construction now. Suppose that $L$ is a positive definite integral lattice. Let ${{\mathbb F}}$ be ${{\mathbb R}}$ or ${{\mathbb C}}$, and recall from §\[sec:SVOAstruc:LattSVOAs\], the SVOA $_{{{\mathbb F}}}V_L$ associated to $L$ via the standard construction. Let $_{{{\mathbb F}}}{\mathfrak{a}}
={{\mathbb F}}\otimes_{{{\mathbb Z}}}L$, and let us denote $A(_{{{\mathbb F}}}{\mathfrak{a}})$ by $_{{{\mathbb F}}}A_L$. Define $_{{{\mathbb F}}}V^f_L$ to be the tensor product of SVOAs $$\begin{gathered}
_{{{\mathbb F}}}V^f_L=\,_{{{\mathbb F}}}A_L\otimes\,_{{{\mathbb F}}}V_L\end{gathered}$$ For ${{\mathbb F}}={{\mathbb C}}$ the SVOA $_{{{\mathbb C}}}V^f_L$ admits a natural structure of $N=1$ SVOA. To define the superconformal element, let $h_i$ be an orthonormal basis of $_{{{\mathbb C}}}{\mathfrak{h}}={{\mathbb C}}\otimes_{{{\mathbb Z}}} L$ and let $e_i$ be the corresponding elements of $_{{{\mathbb C}}}{\mathfrak{a}}$ under the identification $_{{{\mathbb C}}}{\mathfrak{a}}=\,_{{{\mathbb C}}}{\mathfrak{h}}={{\mathbb C}}\otimes_{{{\mathbb Z}}} L$. Then we set $\tau$ to be the element in $(_{{{\mathbb C}}}V^f_L)_{3/2}$ given by $$\tau=\frac{{{\bf i}}}{\sqrt{8}}\sum_ie_i(-\tfrac{1}{2})h_i(-1){{\bf 1}}$$ where we suppress the tensor product from our notation. From [@KapOrlCTorus], [@SchVASStgs] for example, we have the following
The element $\tau$ is a superconformal vector for $_{{{\mathbb C}}}V^f_L$. In particular, $_{{{\mathbb C}}}V^f_L$ admits a natural structure of $N=1$ SVOA.
Just as in §\[RealFormLatSVOA\] we can obtain a real form ${V}_{L}^f$ for $_{{{\mathbb C}}}V^f_L$ by setting ${V}^f_L=
{_{{{\mathbb R}}}A_L}\otimes {V}_L$. Noting that ${{\bf i}}h_i(-1)\in{{\bf i}}{_{{{\mathbb R}}}V^1_L}\subset {V}_L$ we see that $\tau\in{V}^f_L$, and the $N=1$ structure on $_{{{\mathbb C}}}V_L^f$ restricts so as to furnish an $N=1$ structure on ${V}_L^f$.
For simplicity, let us suppose that the rank of $L$ is even. By the Boson-Fermion correspondence [@FreBF], [@DoMaBF], we have an isomorphism of SVOAs $_{{{\mathbb C}}}A_L\cong{_{{{\mathbb C}}}V_{{{\mathbb Z}}^n}}$ where $n={\rm rank}(L)/2$, so that $_{{{\mathbb C}}}A_L$ is self-dual as an SVOA. The tensor product $_{{{\mathbb C}}}V^f_L$ is therefore isomorphic to a lattice SVOA $_{{{\mathbb C}}}V_{{{\mathbb Z}}^n\oplus L}$. The lattice ${{\mathbb Z}}^n\oplus
L$ is self-dual just when $L$ is, so we conclude that the $N=1$ SVOA associated to any self-dual lattice is self-dual as an SVOA. More generally, the irreducible $_{{{\mathbb C}}}V^f_L$ modules are indexed by the cosets of $L$ in its dual.
The case that $L=E_8$
---------------------
From now on we take $L$ to be a lattice of $E_8$ type so that $_{{{\mathbb C}}}V_L^f$ is a realization of the $N=1$ SVOA associated to the $E_8$ lattice. Since $L$ is a self-dual lattice, $_{{{\mathbb C}}}V^f_L$ is a self-dual $N=1$ SVOA. The idea is that the $N=1$ SVOA $_{{{\mathbb C}}}{{V^{f\natural}}}$ should be a ${{\mathbb Z}}/2$-orbifold of $_{{{\mathbb C}}}V^f_L$. More particularly, we wish to define the space underlying $_{{{\mathbb C}}}{{V^{f\natural}}}$ to be $(_{{{\mathbb C}}} V^f_L)^0 \oplus (_{{{\mathbb C}}}V^f_L)_{{\theta}}^0$ where ${\theta}$ is a suitably chosen involution on $_{{{\mathbb C}}}V_L^f$, the space $(_{{{\mathbb C}}}V^f_L)_{{\theta}}$ is a ${\theta}$-twisted $_{{{\mathbb C}}}V_L^f$-module, and the superscripts outside the brackets indicate ${\theta}$-fixed points. In order to construct $_{{{\mathbb C}}}{{V^{f\natural}}}$ we must therefore specify the involution ${\theta}$, and construct a ${\theta}$-twisted module. This will be the objective of the next two subsections.
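Note that $_{{{\mathbb C}}}V^f_L$ has central charge $\tfrac{1}{2}{\rm rank}(L)+{\rm rank}(L)=4+8=12$, matching that of ${{A^{f\natural}}}$, but that its degree $1/2$ subspace is non-trivial, being spanned by the $8$ free fermions of $_{{{\mathbb C}}}A_L$; these vectors are odd for the parity involution, and it is the passage to the ${{\mathbb Z}}/2$-orbifold just described that removes them.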
Twisting
--------
The SVOA $_{{{\mathbb C}}}V^f_L$ admits an automorphism ${\theta}$ given by ${\theta}={\theta}_f\otimes{\theta}_b$ where ${\theta}_f$ denotes the parity involution on $_{{{\mathbb C}}}A_L$, and ${\theta}_b$ denotes a lift of $-1$ on $L$ to ${\operatorname{Aut}}(_{{{\mathbb C}}}V_L)$. Observe that both ${\theta}_f$ and ${\theta}_b$ may be regarded as a lift of $-1$ on $L$. We have ${\theta}({{\bf \tau}}_V)={{\bf \tau}}_V$, so ${\theta}$ is an automorphism of the $N=1$ structure on $_{{{\mathbb C}}}V^f_L$. Also, the real form of $_{{{\mathbb C}}}V^f_L$ is just $$\begin{gathered}
{V}^f_L=
\left\{u+{{\bf i}}v\mid u\in(_{{{\mathbb R}}}V^f_L)^0,\;
v\in(_{{{\mathbb R}}}V^f_L)^1\right\}\subset\,_{{{\mathbb C}}}V^f_L\end{gathered}$$ where $_{{{\mathbb F}}}V^f_L=(_{{{\mathbb F}}}V^f_L)^0\oplus (_{{{\mathbb F}}}V^f_L)^1$ indicates the decomposition into ${\theta}$-eigenspaces.
A ${\theta}$-twisted $_{{{\mathbb C}}}V^f_L$-module $(_{{{\mathbb C}}}V^f_L)_{{\theta}}$ is of the form $$\begin{gathered}
(_{{{\mathbb C}}}V^f_L)_{{\theta}}=(_{{{\mathbb C}}}A_L)_{{\theta}_f}
\otimes(_{{{\mathbb C}}}V_L)_{{\theta}_b}\end{gathered}$$ where $(_{{{\mathbb C}}}A_L)_{{\theta}_f}=A(_{{{\mathbb C}}}{\mathfrak{a}})_{{\theta}_f}$ is a canonically twisted $_{{{\mathbb C}}}A_L$-module and may be constructed as in §\[sec:cliffalgs:SVOAs\], and $(_{{{\mathbb C}}}V_L)_{{\theta}_b}$ is a ${\theta}_b$-twisted module over $_{{{\mathbb C}}}V_L$. There is a well known method for constructing ${\theta}_b$-twisted $_{{{\mathbb C}}}V_L$ modules for ${\theta}_b$ a lift of $-1$ on $L$, and one may refer to [@FLM] for a thorough treatment. It turns out that for the case we are interested in there is a simpler approach using only modules over lattice VOAs, and this in turn can be viewed from the point of view of Clifford module SVOAs. Such an approach is convenient for our purpose.
Recall that the lattice $L$ contains a sublattice of the form $K\oplus K$ where $K$ is a lattice of $D_4$ type. Let $K^*$ denote the dual lattice to $K$, and let $K_{\gamma}$ for $\gamma\in\Gamma=\{0,1,{\omega},{\bar{\omega}}\}$ be an enumeration of the cosets of $K$ in $K^*$. We decree that $K_0=K$. The remaining cosets $K_{\gamma}$ for $\gamma\neq 0$ are permuted by automorphisms of $K^*$ preserving $K$, and for this reason it is natural to regard $\Gamma$ as a copy of the field of order $4$. We may assume that the lattice $L$ decomposes as $L=\bigcup_{\Gamma} K_{\gamma}\oplus
K_{\gamma}$ into cosets of $K\oplus K$. Then the VOA $_{{{\mathbb C}}}V_L$ has a decomposition $$\begin{gathered}
\label{KDecompVL}
_{{{\mathbb C}}}V_L=\bigoplus_{\gamma\in\Gamma}
\,_{{{\mathbb C}}}V_{K_{\gamma}}
\otimes\,_{{{\mathbb C}}}V_{K_{\gamma}}\end{gathered}$$ The VOA $_{{{\mathbb C}}}V_K$, being a VOA of $D_4$ type, may be realized using Clifford module SVOAs, and similarly for its modules $_{{{\mathbb C}}}V_{K_{\gamma}}$. In fact we may take $_{{{\mathbb C}}}V_K$ to be a copy of $A(_{{{\mathbb C}}}{\mathfrak{a}})^0$, and then $\bigoplus_{\Gamma}\,
_{{{\mathbb C}}}V_{K_{\gamma}}$ is isomorphic as an $A(_{{{\mathbb C}}}{\mathfrak{a}})^0$-module to the space $A(_{{{\mathbb C}}}{\mathfrak{a}})\oplus
A(_{{{\mathbb C}}}{\mathfrak{a}})_{{\theta}_f}$.
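As a consistency check on the number of cosets, recall that the $D_4$ lattice has determinant $4$ while $E_8$ is unimodular, so that $[L:K\oplus K]^2=\det(K\oplus K)/\det(L)=16$; thus $K\oplus K$ has index $4$ in $L$, in agreement with $|\Gamma|=4$.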
The corresponding construction of $_{{{\mathbb C}}}V_L$, using $A(_{{{\mathbb C}}}{\mathfrak{a}})^0$-modules in place of $_{{{\mathbb C}}}V_K$-modules, was achieved in [@FFR]. Indeed, they provided more than this, describing certain twisted modules over $_{{{\mathbb C}}}V_L$ and proving that the direct sum of these twisted and untwisted structures may be equipped with a certain generalization of VOA structure; namely para-VOA structure. It turns out that the twisting involutions considered in [@FFR] are conjugate to the involution ${\theta}_b$ under the action of ${\operatorname{Aut}}(_{{{\mathbb C}}}V_L)\cong E_8({{\mathbb C}})$, and in particular, we may use one of these in place of ${\theta}_b$. Before describing precisely the involution we will use, we set up some new notation, and recall the relevant results of [@FFR].
Clifford construction of $_{{{\mathbb C}}}V_L$
----------------------------------------------
Recall that $_{{{\mathbb F}}}{\mathfrak{a}}={{\mathbb F}}\otimes_{{{\mathbb Z}}}L$ for ${{\mathbb F}}={{\mathbb R}}$ or ${{\mathbb C}}$. The extended Hamming code is the unique up to equivalence doubly-even self-dual code of length $8$. Let $\Pi$ be some set of cardinality $8$, and let ${\mathcal{H}}$ be a copy of the extended Hamming code, which we regard at once as a subset of ${\mathcal{P}}(\Pi)$, and as a subspace of ${{\mathbb F}}_2^{\Pi}$. Let ${\mathcal{E}}=\{e_i\}_{i\in\Pi}$ be an orthonormal basis for ${_{{{\mathbb R}}}{\mathfrak{a}}}\subset {_{{{\mathbb C}}}{\mathfrak{a}}}$, and let $H$ be an ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous lift of ${\mathcal{H}}$ to ${\operatorname{\textsl{Spin}}}(_{{{\mathbb R}}}{\mathfrak{a}})$. We may then realize a ${\theta}_f$-twisted $A(_{{{\mathbb F}}}{\mathfrak{a}})$-module explicitly by setting $A(_{{{\mathbb F}}}{\mathfrak{a}})_{{\theta}_f}= A(_{{{\mathbb F}}}{\mathfrak{a}})_{{\theta}_f,H}$. We define $_{{{\mathbb F}}}U_0$ to be the VOA $A(_{{{\mathbb F}}}{\mathfrak{a}})^0$ and we enumerate the $_{{{\mathbb F}}}U_0$-modules $_{{{\mathbb F}}}U_{\gamma}$ for $\gamma\in\Gamma$ by setting $$\begin{gathered}
_{{{\mathbb F}}}U_0=A(_{{{\mathbb F}}}{\mathfrak{a}})^0,\quad
_{{{\mathbb F}}}U_1=A(_{{{\mathbb F}}}{\mathfrak{a}})^1,\quad
_{{{\mathbb F}}}U_{{\omega}}=A(_{{{\mathbb F}}}{\mathfrak{a}})_{{\theta}_f}^0,\quad
_{{{\mathbb F}}}U_{{\bar{\omega}}}=A(_{{{\mathbb F}}}{\mathfrak{a}})_{{\theta}_f}^1.\end{gathered}$$ Then $_{{{\mathbb C}}}U_0$ is isomorphic to the $D_4$ lattice VOA, and the $_{{{\mathbb C}}}U_{\gamma}$ are its irreducible modules. We set $_{{{\mathbb F}}}U=\bigoplus_{\gamma\in\Gamma} {_{{{\mathbb F}}}U_{\gamma}}$. From [@FFR] we have $_{{{\mathbb C}}}U_0$-module intertwining operators $I_{\gamma\delta}: {_{{{\mathbb C}}}U_{\gamma}}\otimes {_{{{\mathbb C}}}U_{\delta}}
\to {_{{{\mathbb C}}}U_{\gamma+\delta}}((z^{1/2}))$ such that the map $I=(I_{\gamma\delta}): {_{{{\mathbb C}}}U}\otimes {_{{{\mathbb C}}}U} \to
{_{{{\mathbb C}}}U}((z^{1/2}))$ furnishes $_{{{\mathbb C}}}U$ with a structure of para-VOA. We refer the reader to [@FFR] for detailed information about para-VOAs, and we note here that the restriction of $I$ to $_{{{\mathbb C}}}U_0\oplus {_{{{\mathbb C}}}U_{\gamma}}$ equips that space with a structure of SVOA for any $\gamma\neq 0$.
For $k\in\{1,2,3\}$ let $_{{{\mathbb F}}}{\mathfrak{a}}^k$ be a copy of the space $_{{{\mathbb F}}}{\mathfrak{a}}$ with orthonormal basis ${\mathcal{E}}^k=\{e_i^k\}_{i\in\Pi}$, and let $_{{{\mathbb F}}}U^k$ be a copy of the space $_{{{\mathbb F}}}U$. Suppose we define spaces $_{{{\mathbb F}}}W_L$ and $_{{{\mathbb F}}}W_L'$ by setting $$\begin{gathered}
_{{{\mathbb F}}}{W}_L =\bigoplus_{\gamma\in\Gamma}
{_{{{\mathbb F}}}U^2_{\gamma}} \otimes
{_{{{\mathbb F}}}U^3_{\gamma}},\qquad
_{{{\mathbb F}}}W_L' =\bigoplus_{\gamma\in\Gamma}
{_{{{\mathbb F}}}U^2_{\gamma}}\otimes
{_{{{\mathbb F}}}U^3_{\gamma+\omega}}.\end{gathered}$$ The main result from [@FFR] that we will use is that the para-VOA structure on $_{{{\mathbb C}}}U$ induces a VOA structure on $_{{{\mathbb C}}}W_L$ isomorphic to $_{{{\mathbb C}}}V_L$, and induces a structure of ${\theta}_b'$-twisted $_{{{\mathbb C}}}W_L$-module on $_{{{\mathbb C}}}W_L'$, where ${\theta}_b'=1\otimes{\theta}_f$. Furthermore, we may assume that the isomorphism is chosen so that the action of ${\theta}_b'$ on $_{{{\mathbb C}}}W_L$ corresponds to that of ${\theta}_b$ on $_{{{\mathbb C}}}V_L$. Indeed, a Cartan subalgebra of $_{{{\mathbb C}}}({W}_L)_1$ is spanned by the ${{\bf i}}e^2_i(-\tfrac{1}{2}) e^3_i(-\tfrac{1}{2})$ for ${i\in\Pi}$, and ${\theta}_b'$ acts as $-1$ on this space.
From now on we will regard the VOAs $_{{{\mathbb C}}}W_L$ and $_{{{\mathbb C}}}V_L$ as identified via some VOA isomorphism such that ${\theta}_b$ corresponds to ${\theta}_b'$, and we will write $_{{{\mathbb C}}}V_L$ in place of $_{{{\mathbb C}}}W_L$ and ${\theta}_b$ in place of ${\theta}_b'$. Then for a ${\theta}_b$-twisted $_{{{\mathbb C}}}V_L$-module we may take $(_{{{\mathbb C}}}V_L)_{{\theta}_b}={_{{{\mathbb C}}}W_L'}$. Note that ${\theta}_b$ acts naturally on the ${\theta}_b$-twisted module $(_{{{\mathbb C}}}V_L)_{{\theta}_b}$. A real form ${V}_L$ for $_{{{\mathbb C}}}V_L$ may be described by ${V}_L={_{{{\mathbb R}}}W_L}=\bigoplus_{\Gamma} {_{{{\mathbb R}}}U_{\gamma}^2} \otimes
{_{{{\mathbb R}}}U_{\gamma}^3}$.
We may now express the spaces $_{{{\mathbb C}}}V_L^f$ and $(_{{{\mathbb C}}}V_L^f)_{{\theta}}$ in the following way as sums of tensor products of the $_{{{\mathbb C}}}U^k_{\gamma}$. $$\begin{gathered}
_{{{\mathbb C}}}{V}^f_L
=({_{{{\mathbb C}}}U^1_0}\oplus {_{{{\mathbb C}}}U^1_{1}})\otimes
\left(\bigoplus_{\Gamma} {_{{{\mathbb C}}}U^2_{\gamma}}
\otimes {_{{{\mathbb C}}}U^3_{\gamma}}
\right)\\
({_{{{\mathbb C}}}{V}^f_L})_{{\theta}}
=({_{{{\mathbb C}}}U^1_{\omega}}\oplus
{_{{{\mathbb C}}}U^1_{\bar{\omega}}})\otimes
\left(\bigoplus_{\Gamma} {_{{{\mathbb C}}}U^2_{\gamma}}
\otimes {_{{{\mathbb C}}}U^3_{\gamma+\omega}}\right)\end{gathered}$$ We obtain real forms ${V}_L^f$ and $({V}_L^f)_{{\theta}}$ by replacing ${{\mathbb C}}$ with ${{\mathbb R}}$ in the subscripts of all the $_{{{\mathbb C}}}U^k_{\gamma}$. Note that the super-conformal element $\tau_V\in{_{{{\mathbb C}}}V^f_L}$ may now be written in the following form. $$\begin{gathered}
\tau_V=-\frac{1}{\sqrt{8}}\sum_{\Pi}
e^1_i(-\tfrac{1}{2})
e^2_i(-\tfrac{1}{2})
e^3_i(-\tfrac{1}{2}){{\bf 1}}\end{gathered}$$
One can see that the Clifford module SVOA $A({\mathfrak{u}})$ has an $N=1$ structure whenever ${\rm dim}({\mathfrak{u}})$ is divisible by $3$.
We now define the space $_{{{\mathbb C}}}V^{f\natural}$ and its real form $V^{f\natural}$ as follows. $$\begin{gathered}
_{{{\mathbb C}}}V^{f\natural}=(_{{{\mathbb C}}}V^f_L)^0\oplus
(_{{{\mathbb C}}}V^f_L)_{{\theta}}^0,\quad
V^{f\natural}=({V}^f_L)^0\oplus
({V}^f_L)^0_{{\theta}}.\end{gathered}$$ Then in terms of the $_{{{\mathbb F}}}U^k_{\gamma}$ we have $$\begin{gathered}
_{{{\mathbb C}}}{{V^{f\natural}}}=
\bigoplus_{
\substack{\gamma_k\in\{0,\omega\}\\
\sum\gamma_k=0}}{_{{{\mathbb C}}}U_{\gamma_1\gamma_2\gamma_3}}
\oplus
\bigoplus_{
\substack{\gamma_k\in\{1,\bar{\omega}\}\\
\sum\gamma_k=1}}{_{{{\mathbb C}}}U_{\gamma_1\gamma_2\gamma_3}}\end{gathered}$$ where we use an abbreviated notation to write ${_{{{\mathbb C}}}U_{\gamma_1\gamma_2\gamma_3}}$ for ${_{{{\mathbb C}}}U^1_{\gamma_1}}\otimes {_{{{\mathbb C}}}U^2_{\gamma_2}} \otimes
{_{{{\mathbb C}}}U^3_{\gamma_3}}$, and there is a similar expression for the real form ${{V^{f\natural}}}$ obtained by replacing ${{\mathbb C}}$ with ${{\mathbb R}}$ in the subscripts of the $_{{{\mathbb C}}}U^k_{\gamma}$. By a similar argument to that used in [@FFR] to equip $_{{{\mathbb C}}}W_L$ with VOA structure via the para-VOA structure on $_{{{\mathbb C}}}U$, one may also equip the spaces $_{{{\mathbb C}}}V^{f\natural}$ and $V^{f\natural}$ with $N=1$ SVOA structure. Since our main focus is to study the $N=1$ SVOA ${{V^{f\natural}}}$ via its realization ${{A^{f\natural}}}$, we omit a verification of this claim and proceed directly to the task of indicating how one may arrive at an $N=1$ SVOA isomorphism between ${{V^{f\natural}}}$ and ${{A^{f\natural}}}$.
Isomorphism
-----------
We will concentrate on finding a correspondence between the real $N=1$ SVOAs ${{V^{f\natural}}}$ and ${{A^{f\natural}}}$. Recall that ${{V^{f\natural}}}$ may be described as follows. $$\begin{gathered}
\label{vfnExpInUs}
\begin{split}
{{V^{f\natural}}}=\bigoplus_{
\substack{\gamma_k\in\{0,\omega\}\\
\sum\gamma_k=0}}
{_{{{\mathbb R}}}U_{\gamma_1\gamma_2\gamma_3}}
\oplus
\bigoplus_{
\substack{\gamma_k\in\{1,\bar{\omega}\}\\
\sum\gamma_k=1}}
{_{{{\mathbb R}}}U_{\gamma_1\gamma_2\gamma_3}}
\end{split}\end{gathered}$$ On the other hand, the space underlying ${{A^{f\natural}}}$ is described as $A({\mathfrak{l}})^0\oplus A({\mathfrak{l}})_{{\theta}}^0$ where ${\mathfrak{l}}$ is real vector space of dimension $24$, and in particular, for a suitable identification of ${\mathfrak{l}}$ with $\bigoplus{_{{{\mathbb R}}}{\mathfrak{a}}^k}$, we may identify $_{{{\mathbb R}}}U_{000}=A(_{{{\mathbb R}}}{\mathfrak{a}}^1)^0\otimes
A(_{{{\mathbb R}}}{\mathfrak{a}}^2)^0\otimes A(_{{{\mathbb R}}}{\mathfrak{a}}^3)^0$ with a subspace of $A(\bigoplus{_{{{\mathbb R}}}{\mathfrak{a}}^k})^0=A({\mathfrak{l}})^0$. As a sum of modules over this subVOA, ${{A^{f\natural}}}$ admits the following description. $$\begin{gathered}
\label{afnExpInUs}
\begin{split}
\bigoplus_{
\substack{\gamma_k\in\{0,1\}\\
\sum\gamma_k=0}}
{_{{{\mathbb R}}}U_{\gamma_1\gamma_2\gamma_3}}
\oplus
\bigoplus_{
\substack{\gamma_k\in\{\omega,\bar{\omega}\}\\
\sum\gamma_k=\omega}}
{_{{{\mathbb R}}}U_{\gamma_1\gamma_2\gamma_3}}
\end{split}\end{gathered}$$ Thus it is evident that our method of constructing ${{V^{f\natural}}}$ has almost delivered us an isomorphism with ${{A^{f\natural}}}$ already. We require to find some way of interchanging $1$ with ${\omega}$ in the subscripts on the right hand side of (\[vfnExpInUs\]), and to do so we will invoke the results of [@FFR] once more. It is well known that the type $D_4$ Lie algebra admits an $S_3$ group of outer automorphisms that has the effect of permuting transitively the three inequivalent irreducible non-adjoint $D_4$ modules. As shown in [@FFR] this action extends to the corresponding VOA modules, and applying the outer automorphism that preserves the spaces $U_0$ and $U_{\bar{\omega}}$, and interchanges $U_1$ with $U_{{\omega}}$ simultaneously to each tensor factor on the right hand side of (\[vfnExpInUs\]), we obtain an isomorphism of ${{V^{f\natural}}}$ with an $N=1$ SVOA ${{{V^{f\natural}}}}'$ whose underlying $_{{{\mathbb R}}}U_{000}$-module structure is as in (\[afnExpInUs\]). $$\begin{gathered}
{{{V^{f\natural}}}}'=\bigoplus_{
\substack{\gamma_k\in\{0,1\}\\
\sum\gamma_k=0}}
{_{{{\mathbb R}}}U_{\gamma_1\gamma_2\gamma_3}}
\oplus
\bigoplus_{
\substack{\gamma_k\in\{\omega,\bar{\omega}\}\\
\sum\gamma_k=\omega}}
{_{{{\mathbb R}}}U_{\gamma_1\gamma_2\gamma_3}}\end{gathered}$$ We have seen that the spaces $({{V^{f\natural}}}')_{\bar{0}}$ and $({{A^{f\natural}}})_{\bar{0}}$ are isomorphic VOAs due to the fact that $A({\mathfrak{l}})=A(\bigoplus_k {_{{{\mathbb R}}}{\mathfrak{a}}^k})$ and $\bigotimes_k
A(_{{{\mathbb R}}}{\mathfrak{a}}^k)$ are naturally isomorphic. Similarly, $({{V^{f\natural}}}')_{\bar{1}}$ is naturally isomorphic to a canonically-twisted $A({\mathfrak{l}})$-module, and the same is true for $({{A^{f\natural}}})_{\bar{1}}$ by construction. The difference between $({{V^{f\natural}}}')_{\bar{1}}$ and $({{A^{f\natural}}})_{\bar{1}}$ is that the former may be naturally identified with the $A({\mathfrak{l}})^0$-module $A({\mathfrak{l}})_{{\theta},\tilde{H}}^0$ where $\tilde{H}$ is an ${{\mathbb F}}_{2}^{{\mathcal{E}}}$-homogeneous lift of a direct sum of three copies of the Hamming code ${\mathcal{H}}^{\oplus 3}$, and the latter is realized as the $A({\mathfrak{l}})^0$-module $A({\mathfrak{l}})_{{\theta},G}^0$ for $G$ a lift of the Golay code ${\mathcal{G}}$. Canonically twisted modules over $A({\mathfrak{l}})$ are unique up to isomorphism, so we can be assured that $({{V^{f\natural}}}')_{\bar{1}}$ and $({{A^{f\natural}}})_{\bar{1}}$ are isomorphic as $A({\mathfrak{l}})^0$-modules, and the proof of Theorem \[AfnIsSVOA\] shows that the SVOA structures on ${{V^{f\natural}}}'$ and ${{A^{f\natural}}}$ are essentially unique.
What is perhaps not so clear is whether or not the $N=1$ structures on ${{V^{f\natural}}}'$ and ${{A^{f\natural}}}$ coincide. Recall the following description of the superconformal element in ${{V^{f\natural}}}$. $$\begin{gathered}
\tau_V=-\frac{1}{\sqrt{8}}\sum_{\Pi}
e^1_i(-\tfrac{1}{2})
e^2_i(-\tfrac{1}{2})
e^3_i(-\tfrac{1}{2}){{\bf 1}}\in {_{{{\mathbb R}}}U_{111}}\end{gathered}$$ Since ${\mathcal{H}}$ may be defined as a quadratic residue code, it is convenient to take $\Pi$ to be the points of the projective line over ${{\mathbb F}}_7$ so that $\Pi=\{\infty,0,1,2,3,4,5,6\}$ say. Then we may choose the isomorphism ${{V^{f\natural}}}\to{{V^{f\natural}}}'$ in such a way that the image $\tau_V'$ of $\tau_V$ in ${{V^{f\natural}}}'$ is the following. $$\begin{gathered}
\tau_V'=-\frac{1}{\sqrt{8}}\sum_{\Pi}
e^1_{i\infty}
e^2_{i\infty}
e^3_{i\infty}1_{\tilde{H}}
\in (A({\mathfrak{l}})_{{\theta},\tilde{H}}^0)_{3/2}\end{gathered}$$ On the other hand, the superconformal element in ${{A^{f\natural}}}$ is given by $\tau_A=1_G\in (A({\mathfrak{l}})_{{\theta},G}^0)_{3/2}$. The spaces $(A({\mathfrak{l}})_{{\theta},\tilde{H}})_{3/2}$ and $(A({\mathfrak{l}})_{{\theta},G})_{3/2}$ are different realizations of the spin module over ${{\rm Cliff}}({\mathfrak{l}})$, and we may assume that $\tilde{H}$ and $G$ are ${{\mathbb F}}_2^{{\mathcal{E}}}$-homogeneous subgroups of ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$. In the notation of §\[sec:cliffalgs:mods\] the spaces $(A({\mathfrak{l}})_{{\theta},\tilde{H}})_{3/2}$ and $(A({\mathfrak{l}})_{{\theta},G})_{3/2}$ are identified with ${{\rm CM}}({\mathfrak{l}})_{\tilde{H}}$ and ${{\rm CM}}({\mathfrak{l}})_{G}$, respectively. These two modules are equivalent and irreducible as ${\rm
Cliff}({\mathfrak{l}})$-modules, and in particular, the action of ${\rm
Cliff}({\mathfrak{l}})$ on any non-zero vector generates the entire module in each case. The vector $\tau_A=1_G$ in ${{\rm CM}}({\mathfrak{l}})_{G}$ is determined by the property that $g1_G=1_G$ for any $g\in G$. It is a remarkable fact that the correspondence between ${\mathfrak{l}}$ and $\bigoplus_k{_{{{\mathbb R}}}{\mathfrak{a}}}^k$ may be chosen in such a way that $\tau_V$ also satisfies the property $g\tau_V=\tau_V$ for any $g\in
G$. Consequently we obtain explicit ${\rm Cliff}({\mathfrak{l}})$-module and ${\operatorname{\textsl{Spin}}}({\mathfrak{l}})$-module equivalences between ${{\rm CM}}({\mathfrak{l}})_{\tilde{H}}$ and ${{\rm CM}}({\mathfrak{l}})_{G}$ such that $\tau_V'$ corresponds to $\tau_A$. Using this isomorphism we can construct an explicit $A({\mathfrak{l}})$-module equivalence between $A({\mathfrak{l}})_{{\theta},\tilde{H}}$ and $A({\mathfrak{l}})_{{\theta},G}$. This is the last piece of information needed to construct an isomorphism of $N=1$ SVOAs ${{V^{f\natural}}}'\to{{A^{f\natural}}}$, and consequently we obtain
\[ThmIsom\] There is an isomorphism of $N=1$ SVOAs ${{V^{f\natural}}}\xrightarrow{\sim}
{{A^{f\natural}}}$.
McKay–Thompson series {#sec:MTseries}
=====================
In this section we consider the McKay–Thompson series associated to elements of ${\operatorname{\textsl{Co}}}_1$ acting on ${{A^{f\natural}}}$.
Series via ${{A^{f\natural}}}$
------------------------------
Let $g\in {\operatorname{\textsl{Co}}}_0$, and suppose that $g^m=1$. Then there are unique integers $p_k$ for $k|m$ such that for $\det(g-x)$, the characteristic polynomial of $g$, we have $\det(g-x)=\prod_{k|m}(1-x^k)^{p_k}$. This data can be expressed using a kind of formal permutation notation as $\prod_{k|m}k^{p_k}$, and this expression is called the [*frame shape*]{} for $g$. Recall $\eta(\tau)$, the Dedekind eta function (\[Dedetafun\]), and let $\phi(\tau)$ be the function on the upper half plane given by $$\begin{gathered}
\phi(\tau)=\frac{\eta(\tau/2)}{\eta(\tau)}
=q^{-1/48}\prod_{n=0}^{\infty}(1-q^{n+1/2})\end{gathered}$$ For $g\in {\operatorname{\textsl{Co}}}_0$ with frame shape $\prod k^{p_k}$, we set $$\begin{gathered}
\phi_g(\tau)=\prod_{k|m}\phi(k\tau)^{p_k},\quad
\eta_g(\tau)=\prod_{k|m}\eta(k\tau)^{p_k}.\end{gathered}$$ The group ${\operatorname{\textsl{Co}}}_0$ has, up to equivalence, unique irreducible representations of dimensions $1$, $24$, $276$, $2024$ and $1771$ [@ATLAS]. With $N$ any one of these numbers, we write $\chi_N$ for the trace function $\chi_N:{\operatorname{\textsl{Co}}}_0\to{{\mathbb C}}$ on an irreducible ${\operatorname{\textsl{Co}}}_0$-module of dimension $N$. Let us also write $\chi_{G}$ for the trace function on the ${\operatorname{\textsl{Co}}}_0$-module ${{\rm CM}}({\mathfrak{l}})_G$. (Recall ${{\rm CM}}({\mathfrak{l}})_G^0=({{A^{f\natural}}})_{3/2}$.) We have $\chi_{G}=\chi_1
+\chi_{24}+\chi_{276} +\chi_{2024} +\chi_{1771}$, and the following
\[ThmChars\] For $\bar{g}\in {\operatorname{Aut}}({{A^{f\natural}}})$, let $\pm g$ be the preimages of $\bar{g}$ in $SO_{24}({{\mathbb R}})$. Then we have $$\begin{gathered}
\label{FrmChars}
\mathsf{tr}|_{{{A^{f\natural}}}}gq^{L(0)-c/24}=
\frac{1}{2}\left(\phi_g(\tau)+\phi_{-g}(\tau)\right)
+\frac{1}{2}(\chi_{G}(g)\eta_{-g}(\tau)
+\chi_{G}(-g)\eta_{g}(\tau))\end{gathered}$$
Suppose that $g\in {\operatorname{\textsl{Co}}}_0$ is of order $m$ and has frame shape $\prod_{k|m}k^{p_k}$. Then $g^{-1}$ has the same frame shape. Let $\{f_i\}_{i=1}^{24}$ be a basis for $_{{{\mathbb C}}}{\mathfrak{l}}={{\mathbb C}}\otimes_{{{\mathbb R}}}{\mathfrak{l}}$ consisting of eigenvectors of $g$ with eigenvalues $\{\xi_i\}_{i=1}^{24}$. Then we have $$\begin{gathered}
\label{charpolyfm}
\det(g-x)=\prod_i(\xi_i-x)= \prod_{k|m}(1-x^k)^{p_k}\end{gathered}$$ and we note also that $\sum_{k|m}kp_k=24$.
Recall that ${{A^{f\natural}}}$ may be described as ${{A^{f\natural}}}=A({\mathfrak{l}})^0\oplus
A({\mathfrak{l}})^0_{{\theta}}$ where $A({\mathfrak{l}})$ is the Clifford module SVOA associated to a $24$-dimensional inner product space ${\mathfrak{l}}$, and $A({\mathfrak{l}})_{{\theta}}$ is a canonically twisted $A({\mathfrak{l}})$-module. It is not hard to derive the following expressions for the trace of $g$ on the complexified spaces $_{{{\mathbb C}}}A({\mathfrak{l}})$ and $_{{{\mathbb C}}}A({\mathfrak{l}})_{{\theta}}$. $$\begin{gathered}
{\sf tr}|_{_{{{\mathbb C}}}A({\mathfrak{l}})}(-g)q^{L(0)-c/24}
=q^{-1/2}\prod_{n\geq 0}\prod_i
(1-\xi_iq^{n+1/2})\\
{\sf tr}|_{_{{{\mathbb C}}}A({\mathfrak{l}})_{{\theta}}}(-g)q^{L(0)-c/24}
=\chi_{G}(-g)q\prod_{n\geq 0}\prod_i
(1-\xi_iq^{n+1})\end{gathered}$$ Substituting $q^r$ for $x$ in (\[charpolyfm\]) and using the fact that $\prod_i\xi_i=1$ we obtain $$\begin{gathered}
\prod_i(1-\xi_iq^r)=\prod_{k|m}(1-(q^{k})^r)^{p_k}.\end{gathered}$$ Then for ${\sf tr}|_{_{{{\mathbb C}}}A({\mathfrak{l}})}(-g)q^{L(0)-c/24}$ for example, we have $$\begin{gathered}
\begin{split}
{\sf tr}|_{_{{{\mathbb C}}}A({\mathfrak{l}})}(-g)q^{L(0)-c/24}
&=q^{-1/2}\prod_{n\geq 0}\prod_{i}
(1-\xi_iq^{n+1/2})\\
&=\prod_{k|m}\left(
q^{-kp_k/48}\prod_{n\geq 0}
(1-(q^k)^{n+1/2})^{p_k}\right)
=\phi_{g}(\tau)
\end{split}\end{gathered}$$ and similarly, we obtain ${\sf tr}|_{_{{{\mathbb C}}} A({\mathfrak{l}})_{{\theta}}}
(-g)q^{L(0)-c/24}=\chi_G(-g)\eta_{g}(\tau)$. To compute the traces of $\bar{g}\in{\operatorname{\textsl{Co}}}_1={\operatorname{\textsl{Co}}}_0/\{\pm 1\}$ on $A({\mathfrak{l}})^0$ we should average over the traces of its preimages $g$ and $-g$ on $A({\mathfrak{l}})$, and similarly for the trace of $\bar{g}$ on $A({\mathfrak{l}})^0_{{\theta}}$. This completes the verification of (\[FrmChars\]).
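The eta products appearing in (\[FrmChars\]) are straightforward to expand numerically. The following Python sketch (an illustration added here, not part of the argument) computes the $q$-expansions of $\eta_g$ and $\phi_g$ directly from a frame shape, tracking exponents in units of $q^{1/48}$ so that the fractional powers occurring in $\phi$ stay integral. It assumes all exponents $p_k$ are non-negative; frame shapes of ${\operatorname{\textsl{Co}}}_0$ with negative $p_k$ would additionally require series inversion.

    # Sketch: q-expansions of eta_g and phi_g from a frame shape {k: p_k}.
    # Exponents are stored in units of 1/48, so each factor eta(k*tau)
    # contributes a prefactor exponent 2*k and each phi(k*tau) a prefactor
    # exponent -k.
    from collections import defaultdict

    PREC = 48 * 6  # keep terms up to q^6

    def mul(a, b):
        """Multiply truncated series stored as {exponent_in_48ths: coeff}."""
        c = defaultdict(int)
        for ea, ca in a.items():
            for eb, cb in b.items():
                if ea + eb <= PREC:
                    c[ea + eb] += ca * cb
        return dict(c)

    def product_part_eta(k):
        """prod_{n>=1} (1 - q^(k*n)), truncated."""
        s, n = {0: 1}, 1
        while 48 * k * n <= PREC:
            s = mul(s, {0: 1, 48 * k * n: -1})
            n += 1
        return s

    def product_part_phi(k):
        """prod_{n>=0} (1 - q^(k*(n+1/2))), truncated."""
        s, n = {0: 1}, 0
        while 24 * k * (2 * n + 1) <= PREC:
            s = mul(s, {0: 1, 24 * k * (2 * n + 1): -1})
            n += 1
        return s

    def eta_like_product(frame_shape, part, prefactor):
        s, shift = {0: 1}, 0
        for k, p in frame_shape.items():
            assert p >= 0, "negative exponents would need series inversion"
            shift += prefactor * k * p
            for _ in range(p):
                s = mul(s, part(k))
        return {e + shift: c for e, c in s.items() if c}

    def eta_g(frame_shape):   # prod over k of eta(k*tau)^{p_k}
        return eta_like_product(frame_shape, product_part_eta, 2)

    def phi_g(frame_shape):   # prod over k of phi(k*tau)^{p_k}
        return eta_like_product(frame_shape, product_part_phi, -1)

    # Frame shape 1^24 (the identity of Co_0): eta_g is eta(tau)^24, whose
    # expansion begins q - 24*q^2 + 252*q^3 - ...
    print(sorted((e / 48, c) for e, c in eta_g({1: 24}).items())[:3])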
Acknowledgement {#acknowledgement .unnumbered}
===============
The author is grateful to Richard Borcherds, John Conway, Gerald Höhn, Atsushi Matsuo, Kiyokazu Nagatomo, Marcus Rosellen and Olivier Schiffmann for interesting and useful discussions. The author is also grateful to Jia-Chen Fu for lending a patient and critical ear to many ideas. The author thanks Gerald Höhn for filling a gap in the original treatment of Theorem \[UniqSVOA\], and is extremely grateful to the referees, for suggesting many improvements upon earlier versions. Finally, the author wishes to thank his advisor Igor Frenkel for suggesting this project, and for providing invaluable guidance and encouragement throughout its completion.
[CCN[[$^{+}$]{}]{}85]{}
M. Aschbacher. , volume 10 of [*Cambridge Studies in Advanced Mathematics*]{}. Cambridge University Press, Cambridge, second edition, 2000.
Richard E. Borcherds and Alex J. E. Ryba. Modular [M]{}oonshine. [II]{}. , 83(2):435–459, 1996.
J. H. Conway, R. T. Curtis, S. P. Norton, R. A. Parker, and R. A. Wilson. . Oxford University Press, Eynsham, 1985. Maximal subgroups and ordinary characters for simple groups, With computational assistance from J. G. Thackray.
J. H. Conway and S. P. Norton. Monstrous moonshine. , 11(3):308–339, 1979.
J. H. Conway. A group of order [$8,315,553,613,086,720,000$]{}. , 1:79–88, 1969.
J. H. Conway. Three lectures on exceptional groups. In [*Finite simple groups (Proc. Instructional Conf., Oxford, 1969)*]{}, pages 215–247. Academic Press, London, 1971.
Conway and [N.J.A.]{} Sloane. . Springer-Verlag, New York, second edition, 1993.
L. Dixon, P. Ginsparg, and J. Harvey. Beauty and the beast: superconformal symmetry in a [M]{}onster module. , 119(2):221–241, 1988.
Chongying Dong, Robert L. Griess, Jr., and Gerald H[ö]{}hn. Framed vertex operator algebras, codes and the [M]{}oonshine module. , 193(2):407–448, 1998.
Chongying Dong and James Lepowsky. , volume 112 of [*Progress in Mathematics*]{}. Birkhäuser Boston Inc., Boston, MA, 1993.
Chongying Dong, Haisheng Li, and Geoffrey Mason. Simple currents and extensions of vertex operator algebras. , 180(3):671–707, 1996.
Chongying Dong, Haisheng Li, and Geoffrey Mason. Twisted representations of vertex operator algebras. , 310(3):571–600, 1998.
Chongying Dong and Geoffrey Mason. Nonabelian orbifolds and the boson-fermion correspondence. , 163(3):523–559, 1994.
Chongying Dong and Geoffrey Mason. Holomorphic vertex operator algebras of small central charge. , 213(2):253–266, 2004.
Chongying Dong and Geoffrey Mason. Rational vertex operator algebras and the effective central charge. , (56):2989–3008, 2004.
Chongying Dong and Geoffrey Mason. Integrability of [$C\sb 2$]{}-cofinite vertex operator algebras. , pages Art. ID 80468, 15, 2006.
Chongying Dong and Kiyokazu Nagatomo. Automorphism groups and twisted modules for lattice vertex operator algebras. In [*Recent developments in quantum affine algebras and related topics (Raleigh, NC, 1998)*]{}, volume 248 of [*Contemp. Math.*]{}, pages 117–133. Amer. Math. Soc., Providence, RI, 1999.
Chongying Dong. Vertex algebras associated with even lattices. , 161(1):245–265, 1993.
Andreas W. M. Dress. Induction and structure theorems for orthogonal representations of finite groups. , 102(2):291–325, 1975.
Chongying Dong and Zhongping Zhao. Modularity in orbifold theory for vertex operator superalgebras. , 260(1):227–256, 2005.
Alex J. Feingold, Igor B. Frenkel, and John F. X. Ries. , volume 121 of [*Contemporary Mathematics*]{}. American Mathematical Society, Providence, RI, 1991.
Igor B. Frenkel, Yi-Zhi Huang, and James Lepowsky. On axiomatic approaches to vertex operator algebras and modules. , 104(494):viii+64, 1993.
Igor B. Frenkel, James Lepowsky, and Arne Meurman. A moonshine module for the [M]{}onster. In [*Vertex operators in mathematics and physics (Berkeley, Calif., 1983)*]{}, volume 3 of [*Math. Sci. Res. Inst. Publ.*]{}, pages 231–273. Springer, New York, 1985.
Igor Frenkel, James Lepowsky, and Arne Meurman. , volume 134 of [ *Pure and Applied Mathematics*]{}. Academic Press Inc., Boston, MA, 1988.
I. B. Frenkel. Two constructions of affine [L]{}ie algebra representations and boson-fermion correspondence in quantum field theory. , 44(3):259–327, 1981.
A. Fr[ö]{}hlich and M. J. Taylor. , volume 27 of [*Cambridge Studies in Advanced Mathematics*]{}. Cambridge University Press, Cambridge, 1993.
Gerald H[ö]{}hn. , volume 286 of [*Bonner Mathematische Schriften \[Bonn Mathematical Publications\]*]{}. Universität Bonn Mathematisches Institut, Bonn, 1996. Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, 1995.
Yi-Zhi Huang. A nonmeromorphic extension of the [M]{}oonshine module vertex operator algebra. In [*Moonshine, the Monster, and related topics (South Hadley, MA, 1994)*]{}, volume 193 of [*Contemp. Math.*]{}, pages 123–148. Amer. Math. Soc., Providence, RI, 1996.
Hsien-Kuei Hwang. Limit theorems for the number of summands in integer partitions. , 96(1):89–126, 2001.
Anton Kapustin and Dmitri Orlov. Vertex algebras, mirror symmetry, and [D]{}-branes: the case of complex tori. , 233(1):79–136, 2003.
Victor Kac and Weiqiang Wang. Vertex operator superalgebras and their representations. In [*Mathematical aspects of conformal and topological field theories and quantum groups (South Hadley, MA, 1992)*]{}, volume 175 of [ *Contemp. Math.*]{}, pages 161–191. Amer. Math. Soc., Providence, RI, 1994.
G[ü]{}nter Meinardus. Asymptotische [A]{}ussagen über [P]{}artitionen. , 59:388–398, 1954.
Gabriele Nebe, E. M. Rains, and N. J. A. Sloane. The invariants of the [C]{}lifford groups. , 24:99–122, 2001.
Robert A. Rankin. . Cambridge University Press, Cambridge, 1977.
Nils R. Scheithauer. Vertex algebras, [L]{}ie algebras, and superstrings. , 200(2):363–403, 1998.
Jean-Pierre Serre. , volume 42 of [ *Graduate Texts in Mathematics*]{}. Springer-Verlag, New York, second edition, 1977.
Pham Huu Tiep. Globally irreducible representations of finite groups and integral lattices. , 64(1):85–123, 1997.
Yongchang Zhu. . PhD thesis, Yale University, 1990.
Yongchang Zhu. Modular invariance of characters of vertex operator algebras. , 9(1):237–302, 1996.
---
abstract: 'The Chord distributed hash table (DHT) is well-known and often used to implement peer-to-peer systems. Chord peers find other peers, and access their data, through a ring-shaped pointer structure in a large identifier space. Despite claims of proven correctness, i.e., eventual reachability, previous work has shown that the Chord ring-maintenance protocol is not correct under its original operating assumptions. Previous work has not, however, discovered whether Chord could be made correct under the same assumptions. The contribution of this paper is to provide the first specification of correct operations and initialization for Chord, an inductive invariant that is necessary and sufficient to support a proof of correctness, and two independent proofs of correctness. One proof is informal and intuitive, and applies to networks of any size. The other proof is based on a formal model in Alloy, and uses fully automated analysis to prove the assertions for networks of bounded size. The two proofs complement each other in several important ways.'
author:
-
bibliography:
- 'proved.bib'
title: |
Reasoning about Identifier Spaces:\
How to Make Chord Correct
---
Introduction {#sec:intro}
============
Peer-to-peer systems are distributed systems featuring decentralized control, self-organization of similar nodes, fault-tolerance, and scalability. The best known peer-to-peer system is Chord, which was first presented in a 2001 SIGCOMM paper [@chord-sigcomm]. This paper was the fourth-most-cited paper in computer science for several years (according to Citeseer), and won the 2011 SIGCOMM Test-of-Time Award.
The Chord protocol maintains a network of nodes that can reach each other despite the fact that autonomous nodes can join the network, leave the network, or fail at any time. The nodes of a Chord network have identifiers in an [*m*]{}-bit identifier space, and reach each other through pointers in this identifier space. Because the network structure is based on adjacency in the identifier space, and $2^{m} - 1$ is adjacent to 0, the structure of a Chord network is a ring.
A Chord network is used to maintain a distributed hash table (DHT), which is a key-value store in which the keys are also identifiers in the same [*m*]{}-bit space. In turn, the hash table can be used to implement shared file storage, group directories, and many other purposes. Chord has been implemented many times, and used to build large-scale applications such as BitTorrent. And the continuing influence of Chord is easy to trace in more recent systems such as Dynamo [@dynamo].
The basic correctness property for Chord is eventual reachability: given ample time and no further joins, departures, or failures, the protocol can repair all defects in the ring structure. If the protocol is not correct in this sense, then some nodes of a Chord network will become permanently unreachable from other nodes. The introductions of the original Chord papers [@chord-sigcomm; @chord-ton] say, “Three features that distinguish Chord from many other peer-to-peer lookup protocols are its simplicity, provable correctness, and provable performance.” An accompanying PODC paper [@chord-podc] lists invariants of the ring-maintenance protocol.
The claims of simplicity and performance are certainly true. The Chord algorithms are far simpler and more completely specified than those of other DHTs, such as Pastry [@pastry], Tapestry [@tapestry], CAN [@CAN], and Kademlia [@kademlia]. Operations are fast because there are no atomic operations requiring locking of multiple nodes, and even queries are minimized.
Unfortunately, the claim of correctness is not true. The original specification with its original operating assumptions does not have eventual reachability, and [*not one*]{} of the seven properties claimed to be invariants in [@chord-podc] is actually an invariant [@chord-ccr]. This was revealed by modeling the protocol in the Alloy language and checking its properties with the Alloy Analyzer [@alloy-book].
The principal contribution of this paper is to provide the first specification of a version of Chord that is as efficient as the original, correct under reasonable operating assumptions, and actually proved correct. The new version corrects all the flaws that were revealed in [@chord-ccr], as well as some additional ones. The proof provides a great deal of insight into how rings in identifier spaces work, and is backed up by a formal, analyzable model.
Although other researchers have found problems with Chord implementations [@chord-nontrans; @mace; @crystalball], they have not discovered any problems with the specification of Chord. Although other researchers have verified properties of DHTs [@chord-sweden; @pastry-proof], they have not considered failures, which are by far the most difficult part of the problem. Other work on verifiable ring maintenance operations [@ringtop] uses multi-node atomic operations, which are avoided by Chord.
Some motivations and possible benefits of this work are presented below. They are categorized according to the audience or constituency that would benefit.
[*For those who implement Chord or rely on a Chord implementation:*]{} It seems obvious that they should have a precise and correct specification to follow. They should also know the invariant for Chord, as dynamic checking of the invariant is a design principle for enhancing DHT security [@sitmorris].
Critics of this work have claimed that all the flaws in original Chord are either obvious and fixed by all implementers, or extremely unlikely to cause trouble during Chord execution. It is a fact that some implementations retain original flaws, citing [@overlog] not because it is a bad implementation, but simply because the code is published and readable. Concerning whether the flaws cause real trouble or not, Chord implementations are certainly reported to have been unreliable. It is in the nature of distributed systems that failures are difficult to diagnose, and no one knows (or at least tells) what is really going on. Any means to increasing the reliability of distributed systems, especially without sacrificing efficiency, is an unmixed blessing.
[*For those interested in building more robust or more functional peer-to-peer systems based on Chord:*]{} Due to its simplicity and efficiency, it is an attractive idea to extend original Chord with stronger guarantees and additional properties. Work has already been done on protection against malicious peers [@awerbuch-robust; @chord-byz; @sechord], key consistency and data consistency [@scatter], range queries [@rangequeries], and atomic access to replicated data [@atomicchord; @etna].
For those who build on Chord, and reason about Chord behavior, their reasoning should have a sound foundation. Previous research on augmenting and strengthening Chord, as referenced above, relies on ambiguous descriptions of Chord and unsubstantiated claims about its behavior. These circumstances can lead to misunderstandings about how Chord works, as well as to unsound reasoning. For example, the performance analysis in [@chord-churn] makes the assumption that every operation of a particular kind makes progress according to a particular measure, which is easily seen to be false [@chord-ccr].
[*For those interested in encouraging application of formal methods:*]{} This project has already had an impact, as developers at Amazon credit the discovery of Chord flaws [@chord-ccr] with convincing them that formal methods can be applied productively to real distributed systems [@amazon].
The proof of correctness is also turning out to be an important case study. In this paper there are two independent proofs, one informal and one by model checking. The informal proof applies to networks of any size, and provides deep insight into how and why the protocol works. The Alloy model with its automated checking applies only to networks of bounded size, and offers limited insight, but it is an indispensable backup to the informal proof because it guards against human error. Also, it was an indispensable precursor to finding the general proof, because it indicated which theorems were likely to be true.
For those interested in formal proofs, the Alloy-only proof in [@chord-arxiv] has been used as a test case for the Ivy proof system [@ivy], and the new proof given here is being used as a test case for the Verdi proof system [@verdi].
Finally, there are other possible uses for ring-shaped pointer structures in large identifier spaces ([*e.g.,*]{} [@awerbuch-hyperring; @CAN]). The reasoning about identifier spaces used in this paper may also be relevant to other work of this kind.
The paper begins with an overview of Chord using the revised, correct ring-maintenance operations (Section \[sec:overview\]), and a specification of these new operations (Section \[sec:spec\]). Although the specification is pseudocode for immediate accessibility, it is a paraphrase of the formal model in Alloy.
Correct operations are necessary but not sufficient. It is also necessary to initialize a network correctly. Original Chord is initialized with a network of one node, which is not correct, and Section \[sec:initialization\] shows why. This section also introduces the inductive invariant for the proof, because a Chord network can safely be initialized in any state that satisfies the invariant.
Summarizing the previous two sections, Section \[sec:diff\] compares the revised Chord protocol with the original version, explaining how they differ. Together Sections \[sec:initialization\] and \[sec:diff\] present most of the problems with original Chord reported in [@chord-ccr] (as well as previously unreported ones). The problems are not presented first because they make more sense when explained along with their underlying nature and how to remove them.
The proof of correctness is largely based on reasoning about ring structures in identifier spaces. Section \[sec:idspace\] presents some useful theorems about these spaces and shows how they apply to Chord. The actual proof in Section \[sec:proof\] follows a fairly conventional outline. Section \[sec:alloy\] discusses the formal model and model-checked version of the proof.
![Ideal (left) and valid (right) networks. Members are represented by their identifiers. Solid arrows are successor pointers.[]{data-label="fig:valid"}](valid.pdf)
Overview of correct Chord {#sec:overview}
=========================
Every member of a Chord network has an identifier (assumed unique) that is an [*m*]{}-bit hash of its IP address. Every member has a [*successor list*]{} of pointers to other members. The first element of this list is the [*successor*]{}, and is always shown as a solid arrow in the figures. Figure \[fig:valid\] shows two Chord networks with [*m*]{} = 6, one in the ideal state of a ring ordered by identifiers, and the other in the valid state of an ordered ring with appendages. In the networks of Figure \[fig:valid\], key-value pairs with keys from 31 through 37 are stored in member 37. While running the ring-maintenance protocol, a member also acquires and updates a [*predecessor*]{} pointer, which is always shown as a dotted arrow in the figures.

The ring-maintenance protocol is specified in terms of four operations, each of which is executed by a member and changes only the state of that member. In executing an operation, the member often queries another member or sequence of members, then updates its own pointers if necessary. The specification of Chord assumes that inter-node communication is bidirectional and reliable, so we are not concerned with Chord behavior when inter-node communication fails.
A node becomes a member in a [*join*]{} operation. A member is also referred to as a [*live node*]{}. When a member joins, it contacts some existing member to look up a member that is near to it in identifier space, and gets a successor list from that nearby member. The first stage of Figure \[fig:join\] shows successor and predecessor pointers in a section of a network where 10 has just joined.
When a member [*stabilizes*]{}, it learns its successor’s predecessor. It adopts the predecessor as its new successor, provided that the predecessor is closer in identifier order than its current successor. Because a member must query its successor to stabilize, this is also an opportunity for it to update its successor list with information from the successor. Members schedule their own stabilize operations, which should be periodic.
Between the first and second stages of Figure \[fig:join\], 10 stabilizes. Because its successor’s predecessor is 7, which is not a better successor for 10 than its current 19, this operation does not change the successor of 10.
After stabilizing (regardless of the result), a node notifies its successor of its identity. This causes the notified member to execute a [*rectify*]{} operation. The rectifying member adopts the notifying member as its new predecessor if the notifying member is closer in identifier order than its current predecessor, or if its current predecessor is dead. In the third stage of Figure \[fig:join\], 10 has notified 19, and 19 has adopted 10 as its new predecessor.
In the fourth stage of Figure \[fig:join\], 7 stabilizes, which causes it to adopt 10 as its new successor. In the last stage 7 notifies and 10 rectifies, so the predecessor of 10 becomes 7. Now the new member 10 is completely incorporated into the ring, and all the pointers shown are correct.
The protocol requires that a member or live node always responds to queries in a timely fashion. A node ceases to be a member in a [*fail*]{} operation, which can represent failure of the machine, or the node’s silently leaving the network. A member that has failed is also referred to as a [*dead node*]{}. The protocol also requires that, after a member fails, it no longer responds to queries from other members. With this behavior, members can detect the failure of other members perfectly by observing whether they respond to a query before a timeout occurs.
Failures can produce gaps in the ring, which are repaired during stabilization. As a member attempts to query its successor for stabilization, it may find that its successor is dead. In this case it attempts to query the next member in its successor list and make this its new successor, continuing through the list until it finds a live successor.
There is an important operating assumption that successor lists are long enough, and failures are infrequent enough, so that a member is never left with no live successor in its list. Put another way, this is a fairness assumption about the relative rates of failures (which create dead entries in successor lists) and stabilizations (which replace dead entries with live ones).
As in the original Chord papers [@chord-sigcomm; @chord-ton], we wish to define a correctness property of eventual reachability: given ample time and no further disruptions, the ring-maintenance protocol can repair defects so that every member of a Chord network is reachable from every other member. Note that a network with appendages (nodes 50, 53, 63, 9 on the right side of Figure \[fig:valid\]) cannot have full reachability, because an appendage cannot be reached by a member that is not in the same appendage.
A network is [*ideal*]{} when each pointer is globally correct. For example, on the right of Figure \[fig:valid\], the globally correct successor of 48 is 50 because it is the nearest member in identifier order. Because the ring-maintenance protocol is supposed to repair all imperfections, and because it is given ample time to do all the repairs, the correctness criterion can be strengthened slightly, to: [*In any execution state, if there are no subsequent join or fail operations, then eventually the network will become ideal and remain ideal.*]{}
Defining a member’s [*best successor*]{} as its first successor pointing to a live node (member), a [*ring member*]{} is a member that can reach itself by following the chain of best successors. An [*appendage member*]{} is a member that is not a ring member. Of the seven invariants presented in [@chord-podc] (and all violated by original Chord), the following four are necessary for correctness.
- There must be a ring, which means that there must be a non-empty set of ring members ([*AtLeastOneRing*]{}).
- There must be no more than one ring, which means that from each ring member, every other ring member is reachable by following the chain of best successors ([*AtMostOneRing*]{}).
- On the unique ring, the nodes must be in identifier order ([*OrderedRing*]{}).
- From each appendage member, the ring must be reachable by following the chain of best successors ([*ConnectedAppendages*]{}).
If any of these properties is violated, there is a defect in the structure that the ring-maintenance protocol cannot repair, and some members will be permanently unreachable from some other members. It follows that any inductive invariant must imply these properties.
The Chord papers define the lookup protocol, which is used to find the member primarily responsible for a key, namely the ring member with the smallest identifier greater than or equal to the key. The lookup protocol is not discussed further here. Chord papers also define the maintenance and use of finger tables, which greatly improve lookup speed by providing pointers that cross the ring like chords of a circle. Because finger tables are an optimization and they are built from successor lists, correctness does not depend on them.
Specification of ring-maintenance operations {#sec:spec}
============================================
Identifiers and node state {#sec:state}
--------------------------
There is a type [*Identifier*]{} which is a string of [*m*]{} bits. Implicitly, whenever a member transmits the identifier of a member, it also transmits its IP address so that the recipient can reach the identified member. The pair is self-authenticating, as the identifier must be the hash of the IP address according to a chosen function.
The Boolean function [*between*]{} is used to test the order of identifiers. Because identifier order wraps around at zero, it is meaningless to test the order of two identifiers—each precedes and succeeds the other. This is why [*between*]{} has three arguments:
Boolean function between (n1,nb,n2: Identifier)
{ if (n1 < n2) return ( n1 < nb && nb < n2 )
  else return ( n1 < nb || nb < n2 )
}
For [*nb*]{} to be [*between n1*]{} and [*n2*]{}, it must be equal to neither. Further properties of identifier spaces are presented in Section \[sec:idspace\].
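For concreteness, these tests can be rendered as a small executable sketch (hypothetical Python, not part of the specification), with identifiers treated as integers in the range $[0, 2^m)$; the non-strict variant anticipates the predicate [*includedIn*]{} used in Section \[sec:idspace\].

    # Sketch of the circular-order tests; identifiers are plain integers.
    def between(n1, nb, n2):
        """True if nb lies strictly between n1 and n2 in circular order."""
        if n1 < n2:
            return n1 < nb < n2
        return n1 < nb or nb < n2

    def included_in(n1, nb, n2):
        """Like between, but nb may equal either boundary."""
        if n1 < n2:
            return n1 <= nb <= n2
        return n1 <= nb or nb <= n2

    # With m = 6 (identifiers 0..63), 60 is between 50 and 7 because the
    # interval wraps around zero, but it is not between 7 and 50.
    assert between(50, 60, 7) and not between(7, 60, 50)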
Each node that is a member of a Chord network has the following state variables:
myIdent: Identifier;
prdc: Identifier;
succList: list Identifier; // length is r
where [*myIdent*]{} is the hash of its IP address, and [*prdc*]{} is the node’s predecessor. [*succList*]{} is the node’s entire successor list; the head of this list is its [*first successor*]{} or simply its [*successor*]{}. The parameter [*r*]{} is the fixed length of all successor lists.
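As an illustration only, this per-member state might be represented as follows (a hypothetical Python sketch; the field names and the choice $r = 3$ are assumptions of the sketch, not part of the specification).

    from dataclasses import dataclass, field
    from typing import List

    R = 3  # the length r of every successor list (illustrative choice)

    @dataclass
    class NodeState:
        my_ident: int              # hash of the node's IP address
        prdc: int                  # predecessor pointer
        succ_list: List[int] = field(default_factory=list)  # always length R

        @property
        def successor(self) -> int:
            """The head of the successor list, i.e., the first successor."""
            return self.succ_list[0]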
Maintaining a shared-state abstraction {#sec:shared}
--------------------------------------
Reasoning about Chord requires reasoning about the global state, so the protocol must maintain the abstraction of a shared, global state. The algorithmic steps of the protocol must behave as if atomic and interleaved. In each algorithmic step, a node reads the state of at most one other node, and modifies only its own state.
In an implementation, a node reads the state of another node by querying it. If the node does not respond within a time parameter [*t*]{}, then it is presumed dead. If the node does respond, then the atomic step associated with the query is deemed to occur at the instant that the queried node responds with information about its own state.
To maintain the shared-state abstraction, the querying node must obey the following rules:
- The querying node does not know the instant that its query is answered; it only knows that the response was sent some time after it sent the query. So the querying node must treat its own state, between the time it sends the query and the time it finishes the step by updating its own state, as undefined. The querying node cannot respond to queries about its state from other nodes during this time.
- If the querying node is delaying response to a query because it is waiting for a response to its own query, it must return interim “response pending” messages so that it is not presumed dead.
- If a querying node is waiting for a response, and is queried by another node just to find out if it is alive or dead, it can respond immediately. This is possible because the response does not contain any information about its state.
This covers all possibilities except that of a deadlock due to circular waiting for query responses. Freedom from deadlock is covered in the proof of correctness in Section \[sec:proof\].
Join and fail operations {#sec:joinfail}
------------------------
When a node is not a member of a Chord network, it has no Chord state variables, and does not respond to queries from Chord members. To join a Chord network, a node must first calculate its own Chord identifier [*myIdent*]{}. It must also know some member of the network—it does not even matter whether it is a ring member or appendage—and must ask the member to use the lookup protocol to find a member [*newPrdc*]{} such that [*between (newPrdc, myIdent, head(newPrdc.succList))*]{}.
Provided with this information, the node joins in a single atomic step, by executing the following pseudocode:
// Join step
// newPrdc has value from previous lookup
newPrdc: Identifier;
query newPrdc for newPrdc.succList;
if (query returns before timeout) {
   succList = newPrdc.succList;
   prdc = newPrdc;
}
else abort;
If the query fails then [*newPrdc*]{} has died, and the node has no choice but to try joining again later.
A fail operation is also a single atomic step. When a member node fails or leaves a Chord network, it deletes its Chord state variables and ceases to respond to queries. Fortunately, the proof of correctness shows that a node can re-join safely even if other nodes still have pointers to it from its former episode of membership.
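As an illustration of the join step above, it can be sketched as a pure function of the joining node's identifier, the looked-up [*newPrdc*]{}, and a query primitive that returns the queried member's successor list or a timeout indication; all names here are hypothetical, not part of the specification.

    def join(my_ident, new_prdc, query):
        """Single atomic join step; returns the new member's state or None."""
        succ_list = query(new_prdc)        # ask newPrdc for its succList
        if succ_list is None:              # timeout: newPrdc has died
            return None                    # abort; try joining again later
        return {"my_ident": my_ident,
                "prdc": new_prdc,
                "succ_list": list(succ_list)}  # adopt newPrdc's list as-is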
Stabilize and rectify operations {#sec:stabilize}
--------------------------------
A stabilize operation may require a sequence of steps. First, the stabilizing node executes a [*StabilizeFromSuccessor*]{} step:
// StabilizeFromSuccessor step
// newSucc not initialized
newSucc: Identifier;
query head(succList) for
   head(succList).prdc and
   head(succList).succList;
if (query returns before timeout) {
   // successor live, adopt its list as mine
   succList =
      append (
         head(succList),
         butLast(head(succList).succList)
      );
   newSucc = head(succList).prdc;
   if (between(myIdent,newSucc,head(succList)))
      // predecessor may be a better successor
      next step is StabilizeFromPredecessor;
   // else stabilization is complete
}
// successor is dead, remove from succList
else {
   succList =
      append(tail(succList),last(succList)+1);
   next step is StabilizeFromSuccessor again;
}
First the node queries its successor for its successor’s predecessor and successor list. If this query times out, then the node’s successor is presumed dead. The node removes the dead successor from its successor list and does another [*StabilizeFromSuccessor*]{} step.[^1] We know that eventually it will find a live successor in its list, because of the operating assumption (from Section \[sec:overview\]) that successor lists are long enough so that each list contains at least one live node.
Once the node has contacted a live successor, it adopts its successor list (all but the last entry) as its own second and later successors. It then tests the successor’s predecessor to see if it might be a better first successor. If so, the node then executes a [*StabilizeFromPredecessor*]{} step. If not, the stabilization operation is complete.


The [*StabilizeFromPredecessor*]{} step is simple. The node queries its potential new successor for its successor list. If the new successor is live, the node adopts it and its successor list. If not, nothing changes. Either way, the stabilization operation is complete.
// StabilizeFromPredecessor step
// newSucc value came from previous step
newSucc: Identifier;
query newSucc for newSucc.succList;
if (query returns before timeout)
   // new successor is live, adopt it
   succList =
      append(newSucc,butLast(newSucc.succList));
// else new successor is dead, no change
At the completion of each stabilization operation, regardless of the result, the stabilizing node sends a message to its successor notifying the successor of its presence as a predecessor. On receiving this notification, a node executes a single-step rectify operation, which may allow it to improve its predecessor pointer.
// Rectify step
// newPrdc value came from notification
newPrdc: Identifier;
if (between (prdc, newPrdc, myIdent))
   // newPrdc presumed live
   prdc = newPrdc;
else {
   query prdc to see if live;
   if (query returns before timeout)
      no change;
   // live newPrdc better than dead old one
   else prdc = newPrdc;
};
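To make the interplay of join, stabilize, notify, and rectify concrete, the following toy Python sketch replays the scenario of Figure \[fig:join\] in a single process, with $r = 2$ and no failures, so the liveness checks and timeout branches of the specification are omitted. The identifiers and the driver at the end are illustrative assumptions, not part of the specification.

    def between(n1, nb, n2):
        if n1 < n2:
            return n1 < nb < n2
        return n1 < nb or nb < n2

    nodes = {}   # ident -> {"prdc": ident, "succ": [ident, ident]}

    def stabilize(n):
        me = nodes[n]
        succ = nodes[me["succ"][0]]                       # query the successor
        me["succ"] = [me["succ"][0]] + succ["succ"][:-1]  # adopt its list
        cand = succ["prdc"]
        if between(n, cand, me["succ"][0]):               # better successor?
            me["succ"] = [cand] + nodes[cand]["succ"][:-1]
        notify(me["succ"][0], n)                          # always notify

    def notify(n, new_prdc):                              # rectify step at n
        me = nodes[n]
        if between(me["prdc"], new_prdc, n) or me["prdc"] not in nodes:
            me["prdc"] = new_prdc

    # A section of an ordered ring: ... 3 -> 7 -> 19 -> 26 ...
    for ident, prdc, succ in [(3, 63, [7, 19]), (7, 3, [19, 26]),
                              (19, 7, [26, 31]), (26, 19, [31, 37])]:
        nodes[ident] = {"prdc": prdc, "succ": list(succ)}

    # Node 10 joins with newPrdc = 7 (found by lookup) and copies 7's list.
    nodes[10] = {"prdc": 7, "succ": list(nodes[7]["succ"])}

    stabilize(10)  # 19's predecessor (7) is not better; 10 notifies 19
    stabilize(7)   # 19's predecessor is now 10; 7 adopts 10 and notifies it
    assert nodes[7]["succ"] == [10, 19] and nodes[10]["prdc"] == 7
    assert nodes[10]["succ"] == [19, 26] and nodes[19]["prdc"] == 10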
Initialization and invariant {#sec:initialization}
============================
An [*inductive invariant*]{} is an invariant with the property that if the system satisfies the invariant before any event, then the system can be proved to satisfy the invariant after the event. By induction, if the system’s initial state satisfies the invariant, then all system states satisfy the invariant.
Original Chord initializes a network with a single member that is its own successor, [*i.e.,*]{} the initial network is a ring of size 1. This is not correct, as shown in Figure \[fig:init\] with successor lists of length 2. Appendage nodes 62 and 37 start with both list entries equal to 48. Then 48 fails, leaving members 62 and 37 with insufficient information to find each other.
Clearly the spirit of the operating assumption in Section \[sec:overview\] is that the chosen length of successor lists should provide enough redundancy to ensure safe operation. But we can hardly expect the successor lists to work if the redundancy is thrown away by filling them with duplicate entries. This is the problem with Figure \[fig:init\]—that 62 and 37 have no real redundancy in their successor lists, so one failure disconnects them from the network.
For members of a network with successor list length $r$ to enjoy full redundancy, each member must have $r$ distinct entries in its successor list. For this to be possible, the network must have at least $r + 1$ members, and the inductive invariant must imply that this is so.
The inductive invariant for Chord is the result of a very long and arduous search, some of which is described in [@chord-arxiv]. As one indication of the difficulty, the invariant must imply that the network has a minimum size, yet all operations are local, and no member knows how many other members there are.
As another indication of the difficulty, consider Figure \[fig:wrap\], which is a counterexample to a trial invariant consisting of the conjunction of [*AtLeastOneRing, AtMostOneRing, OrderedRing, ConnectedAppendages, NoDuplicates,*]{} and [*OrderedSuccessorLists*]{}. Again $r = 2$. Let an [*extended successor list*]{} be the concatenation of a node with its successor list. [*NoDuplicates*]{} has the obvious meaning that the entries in any extended successor list are distinct. [*OrderedSuccessorLists*]{} says that for any ordered sublist [*\[x, y, z\]*]{} drawn from a node’s extended successor list, whether the sublist is contiguous or not, [*between \[x, y, z\]*]{} holds.
In Figure \[fig:wrap\], the first stage satisfies the trial invariant, having duplicate-free and ordered extended successor lists such as [*\[52, 3, 45\]*]{} and [*\[45, 20, 31\]*]{}. The appendage node 45 does not merge into the ring at the correct place, but that is part of normal Chord operation (see [@chord-ccr]). The second successor of ring node 52 points outside the ring, but that is also part of normal Chord operation. It is also part of normal Chord operation that 45 changes from being an appendage node to a ring member just because 3 fails. In the case shown in the figure, the result of all these quirks combined is that the ring becomes disordered.
The final invariant is much simpler than the earlier invariant used in [@chord-arxiv]. It also has the major advantage of not requiring an extra operating assumption that is difficult to implement. It was discovered in the process of finding a general informal proof of the assertions verified automatically for networks of bounded size.
To explain the real invariant, we must introduce the concept of a [*principal node*]{}. A principal node is a member that is not skipped by any member’s extended successor list. For example, if 30 is a principal node, then [*\[30, 34, 39\]*]{} and [*\[27, 30, 34\]*]{} and [*\[21, 27, 29\]*]{} can all be extended successor lists, but [*\[27, 29, 34\]*]{} cannot be, because 30 is between 29 and 34, and would therefore be skipped.
The real invariant is the conjunction of only two properties, [*OneLiveSuccessor*]{} and [*SufficientPrincipals*]{}. [*OneLiveSuccessor*]{} simply says that every successor list has at least one live entry. [*SufficientPrincipals*]{} says that the number of principal members is greater than or equal to $r + 1$, where $r$ is the length of successor lists.
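As an executable paraphrase of these two properties (a hypothetical Python sketch in which a state is a dict mapping each live member to its successor list, and identifiers absent from the dict are dead), the conjuncts can be checked as follows.

    def between(n1, nb, n2):
        if n1 < n2:
            return n1 < nb < n2
        return n1 < nb or nb < n2

    def esl(state, n):
        """Extended successor list: the member prepended to its own list."""
        return [n] + state[n]

    def is_principal(state, p):
        """p is skipped by no contiguous pair of any extended successor list."""
        return all(not between(x, p, y)
                   for n in state
                   for x, y in zip(esl(state, n), esl(state, n)[1:]))

    def one_live_successor(state):
        return all(any(s in state for s in succs) for succs in state.values())

    def sufficient_principals(state, r):
        return sum(is_principal(state, p) for p in state) >= r + 1

    def invariant(state, r):
        return one_live_successor(state) and sufficient_principals(state, r)

Note that both checks are global: they need every member's successor list, a point taken up again in Section \[sec:diff\].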
The proofs in Section \[sec:idspace\] will show that this deceptively simple invariant implies all of [*AtLeastOneRing, AtMostOneRing, OrderedRing, ConnectedAppendages, NoDuplicates,*]{} and [*OrderedSuccessorLists*]{}. Needless to say, it also implies that the network has a minimum size. (Note that the first stage of Figure \[fig:wrap\] has no principal members, so the figure is not a counterexample to the real invariant.)
A typical Chord network has $r$ from 2 to 5, so the set of principals need only have 3 to 6 nodes. Nevertheless, the existence of these few nodes protects the correctness of a network with millions of members. They wield great and mysterious powers!
Comparison of the versions {#sec:diff}
==========================
The [*join, stabilize,*]{} and [*notified*]{} operations of the original protocol are defined as pseudocode in [@chord-sigcomm] and [@chord-ton]. These papers do not provide details about failure recovery. The only published paper with pseudocode for failure recovery is [@chord-podc], where failure recovery is performed by the [*reconcile, update,*]{} and [*flush*]{} operations. The following table shows how operations of the two versions correspond. Although [*rectify*]{} in the new version is similar to [*notified*]{} in the old version, it seems more consistent to use an active verb form for its name.
[**old**]{}                        [**new**]{}
---------------------------------  -------------
join + reconcile                   join
stabilize + reconcile + update     stabilize
notified + flush                   rectify
In both old and new versions of Chord, members schedule their own maintenance operations except for [*notified*]{} and [*rectify*]{}, which occur when a member is notified by a predecessor. Although the operations are loosely expected to be periodic, scheduling is not formally constrained. As can be seen from the table, multiple smaller operations from the old version are assembled into larger new operations. This ensures that the successor lists of members are always fully populated with $r$ entries, rather than having missing entries to be filled in by later operations. An incompletely populated successor list might lose (to failure) its last live successor. If the successor list belongs to an appendage member, this would mean that the appendage can no longer reach the ring, which is a violation of [*ConnectedAppendages*]{} [@chord-ccr].
Another systematic change from the old version to the new is that, before incorporating a pointer to a node into its state, a member checks that it is live. This prevents cases where a member replaces a pointer to a live node with a pointer to a dead one. A bad replacement can also cause a successor list to have no live successor. If the successor list belongs to a ring member, this will cause a break in the ring, and a violation of [*AtLeastOneRing*]{}. Together these two systematic changes also prevent scenarios in which the ring becomes disordered or breaks into two rings of equal size (violating [*OrderedRing*]{} or [*AtMostOneRing*]{}, respectively [@chord-ccr]).
A third systematic change was necessary because the old version does not say anything precise about communication between nodes, and does not say anything at all about atomic steps and maintaining a shared-state abstraction. The new operations are specified in terms of atomic steps, and the rules for maintaining a shared-state abstraction are stated explicitly.
The other major difference is the initialization, as discussed in Section \[sec:initialization\].
In addition to these systematic changes, a number of small changes were made. Some were due to problems detected by Alloy modeling and analysis of the original version. Others were required to ensure that, after each atomic step of a stabilize operation, the global state satisfies the invariant.
These differences do not change the efficiency of Chord operations in any significant way. Checking some pointers to make sure they point to live nodes (new version) requires more queries than in the old version. On the other hand, in the old version stabilize, reconcile, and update operations are all separate, and can all entail queries. In this respect the old version requires more queries than the new version.
There is an additional bonus in the new version for implementers. Consider what happens when a member node fails, recovers, and wishes to rejoin, all of which could occur within a short period of time. It was previously thought necessary for the node to wait until all previous references to its identifier had been cleared away (with high probability), because obsolete pointers could make the state incorrect. This wait was included in the first Chord implementation [@excuses]. Yet the wait is unnecessary, as Chord is provably correct even with obsolete pointers.
In the spirit of [@sitmorris], it is a good security practice to monitor that invariants are satisfied. Both the conjuncts of the inductive invariant are global, and thus unsuitable for local monitoring. The right properties to monitor are [*NoDuplicates*]{} and [*OrderedSuccessorLists*]{}, which can be checked on individual successor lists. These are properties that must be true for Chord networks of any size.
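The corresponding local check, which a node could run on its own extended successor list, can be sketched as follows (hypothetical Python; identifiers as integers).

    from itertools import combinations

    def between(n1, nb, n2):
        if n1 < n2:
            return n1 < nb < n2
        return n1 < nb or nb < n2

    def no_duplicates(esl):
        return len(esl) == len(set(esl))

    def ordered_successor_list(esl):
        """Every sublist [x, y, z], contiguous or not, is in circular order."""
        return all(between(x, y, z) for x, y, z in combinations(esl, 3))

    def local_check(my_ident, succ_list):
        esl = [my_ident] + succ_list
        return no_duplicates(esl) and ordered_successor_list(esl)

    assert local_check(48, [50, 53, 58])
    assert not local_check(48, [58, 50, 53])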
Although the new initialization with $r + 1$ principal nodes may not be inefficient, it is certainly more difficult to implement than initialization of original Chord. An alternative approach might be to start the network with a single node, and monitor the network as a whole until it has $r + 1$ principal nodes. For example, all nodes might send their successor lists (whenever there is a change) around the ring, to be collected and checked by the single original node. Once the original node sees a sufficient set of principal nodes, it could send a signal around the ring that monitoring is no longer necessary. This scheme is discussed further in Section \[sec:preservingbase\].
Reasoning about ring structures in identifier spaces {#sec:idspace}
====================================================
Theorems about identifier spaces
--------------------------------
An identifier space is built from a finite totally-ordered set by adding the concept that its greatest element is less than its smallest element. This makes identifier order circular, so the identifier space itself can be thought of as a ring.
The viewpoint of this paper is that identifier spaces have less structure than algebraic rings. Algebraic rings are generalizations of integer arithmetic, with operators such as sum and product that combine quantities. In Chord identifiers are not quantities, and it makes no sense to add or multiply them. This is in contrast to the formalization of Pastry [@pastry-proof], where distance in the identifier space is assumed to be meaningful and is used in the protocol.
In this section definitions and theorems about identifier spaces are presented in the Alloy syntax. In the Alloy model the concepts of identifier and node (potential network member) are conflated, so that [Node]{} is declared as a totally ordered set upon which an identifier space is built. Details about the Alloy model and bounded verification can be found in Section \[sec:alloy\]. These theorems have been proven for unbounded identifier spaces using merely substitution and simplification.
Section \[sec:state\] already introduced the Boolean function [*between,*]{} defined in Alloy as:
pred between[n1,nb,n2: Node] {
lt[n1,n2] => ( lt[n1,nb] && lt[nb,n2] )
else ( lt[n1,nb] || lt[nb,n2] ) }
where lt, &&, and || are the notations for less than (in the total ordering), logical and, and logical or, respectively. The definition has the form of an if-then-else expression.
Here is a simple theorem in Alloy syntax:
assert AnyBetweenAny {
all disj n1,n2: Node | between[n1,n2,n1] }
[*AnyBetweenAny*]{} says that for any distinct (disjoint) [*n1*]{} and [*n2*]{}, [*n2*]{} is between [*n1*]{} and [*n1*]{}.
For proofs, we also need a different predicate [*includedIn*]{}, which is like [*between*]{} except that the included identifier can be equal to either of the boundary identifiers:
pred includedIn[n1,nb,n2: Node] {
lt[n1,n2] => ( lte[n1,nb] && lte[nb,n2] )
else ( lte[n1,nb] || lte[nb,n2] ) }
In the [*AnyIncludedInAny*]{} theorem, the two arguments need not be disjoint:
assert AnyIncludedInAny {
all n1,n2: Node | includedIn[n1,n2,n1] }
A very useful theorem allows us to reason about the fact or assumption that [*between*]{} does [*not*]{} hold.
assert IncludedReversesBetween {
all disj n1,n2: Node, nb: Node |
! between[n1,nb,n2]
<=> includedIn[n2,nb,n1] }
Provided that the boundaries of an interval are distinct, if an identifier [*nb*]{} cannot be found in the portion of the identifier space from [*n1*]{} to [*n2*]{} (exclusive), then it must be found in the portion of the identifier space from [*n2*]{} to [*n1*]{} (inclusive).
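Although the theorems are proved for unbounded identifier spaces by substitution and simplification, they are also easy to confirm exhaustively for a small space. The following sketch (plain Python, separate from the Alloy analysis of Section \[sec:alloy\]) checks all three assertions for $m = 4$.

    N = 16                      # identifiers 0..15, i.e., m = 4
    ids = range(N)

    def between(n1, nb, n2):
        if n1 < n2:
            return n1 < nb < n2
        return n1 < nb or nb < n2

    def included_in(n1, nb, n2):
        if n1 < n2:
            return n1 <= nb <= n2
        return n1 <= nb or nb <= n2

    # AnyBetweenAny
    assert all(between(n1, n2, n1) for n1 in ids for n2 in ids if n1 != n2)
    # AnyIncludedInAny
    assert all(included_in(n1, n2, n1) for n1 in ids for n2 in ids)
    # IncludedReversesBetween
    assert all((not between(n1, nb, n2)) == included_in(n2, nb, n1)
               for n1 in ids for n2 in ids if n1 != n2 for nb in ids)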
Theorems about successor lists
------------------------------
This section introduces definitions and theorems about a second kind of ring. Successor lists whose entries are identifiers (in the first kind of ring) are used to create ring-shaped networks (the second kind of ring). A number of terms concerning successor lists in network states were introduced briefly in Section \[sec:initialization\]. For clarity, they will be redefined here.
An [*extended successor list*]{} (ESL) is a successor list with the node that owns it prepended to the list. The length of an ESL is $r + 1$.
A [*principal node*]{} is a member that is not skipped by any ESL. That is, for all principal nodes [*p*]{}, there is no contiguous pair [*\[x, y\]*]{} in any ESL such that [*between \[x, p, y\]*]{}.
The property [*OneLiveSuccessor*]{} holds in a state if every member has at least one live successor.
The property [*SufficientPrincipals*]{} holds in a state if the number of principal nodes is greater than or equal to $r + 1$.
The property [*Invariant*]{} is the conjunction of [*OneLiveSuccessor*]{} and [*SufficientPrincipals*]{}.
The property [*NoDuplicates*]{} holds in a state if no ESL has multiple copies of the same entry.
The property [*OrderedSuccessorLists*]{} holds in a state if for all sublists [*\[x, y, z\]*]{} of ESLs, whether contiguous sublists or not, [*between \[x, y, z\]*]{}.
The remainder of this section proves that [*Invariant*]{} implies the successor-list properties [*NoDuplicates*]{} and [*OrderedSuccessorLists*]{}.
In any ring structure whose state is maintained in successor lists, [*Invariant*]{} implies [*NoDuplicates*]{}.
[*Proof:*]{}
Contrary to the theorem, assume that there is a network state for which [*Invariant*]{} is true and [*NoDuplicates*]{} is false. Then some node has an extended successor list with the form [*\[ ..., x, ..., x, ... \]*]{} for some identifier [*x*]{}.
From [*AnyBetweenAny*]{}, for all principal nodes [*p*]{} distinct from [*x, between \[x, p, x\]*]{}. Because of the definition of principal nodes, all of the principal nodes distinct from [*x*]{} must be listed in the ellipsis between the two occurrences of [*x*]{} in the successor list.
From [*SufficientPrincipals*]{}, the portion of the extended successor list [*\[x, ..., x\]*]{} must have length at least $r + 2$, because there are at least $r$ principal nodes distinct from [*x*]{}. But the length of the entire extended successor list is $r + 1$, which yields a contradiction. $\Box$
If we visualize an identifier space as a ring ordered clockwise, then an ESL is a path that touches the ring wherever the ESL has an entry (as in Figure \[fig:osl\]). This proof shows that the existence of a minimum number of nodes that cannot be skipped by ESLs prevents paths from wrapping around the identifier space, which is a major cause of trouble.
In any ring structure whose state is maintained in successor lists, [*Invariant*]{} implies [*OrderedSuccessorLists*]{}.
[*Proof:*]{}
Contrary to the theorem, assume that there is a network state for which [*Invariant*]{} is true and [*OrderedSuccessorLists*]{} is false. Then some node has an ESL with the form [*\[ ..., x, ..., y, ..., z, ... \]*]{} where [*! between \[x, y, z\]*]{}. From the previous theorem, [*x, y,*]{} and [*z*]{} are all distinct.
From [*IncludedReversesBetween*]{}, [*includedIn \[z, y, x\]*]{}. So the disordered ESL segment [*\[x, ..., y, ..., z\]*]{} wraps around the identifier ring (see Figure \[fig:osl\]), touching the identifier space first at [*x*]{}, passing by [*z*]{}, touching at [*y*]{}, passing by [*x*]{} again, then finally touching at [*z*]{}.
![The dashed line depicts the identifier space. The solid arrows show the path around the identifier space of a segment of an ESL [*\[x, ..., y, ..., z\].*]{}[]{data-label="fig:osl"}](osl.pdf)
The maximum length of the disordered ESL segment is $r + 1$. From [*SufficientPrincipals*]{}, every entry in it must be a principal node, as there are at least $r + 1$ principals, and every principal must be included. From this and [*NoDuplicates*]{}, no entry can be duplicated within this segment.
So [*z*]{} is a principal node, but [*z*]{} is skipped between [*x*]{} and [*y*]{}, which is a contradiction. $\Box$

This proof continues the theme that ESLs must not wrap around the identifier ring, showing that it causes disorder in a successor list. If a disordered successor list became part of the network structure, then the network ring would be disordered, violating [*OrderedRing*]{} from Section \[sec:overview\].
Theorem about networks built on successor lists {#sec:theorem3}
-----------------------------------------------
This section is concerned with proving that [*Invariant*]{} implies the four necessary properties introduced in Section \[sec:overview\].
A network member’s [*best successor*]{} or [*bestSucc*]{} is the first live node in its successor list.
A [*ring member*]{} is a network member that can be reached by following the chain of best successors beginning at itself.
An [*appendage member*]{} is a network member that is not a ring member.
The property [*AtLeastOneRing*]{} holds in a state if there is at least one ring member.
The property [*AtMostOneRing*]{} holds in a state if, from every ring member, it is possible to reach every other ring member by following the chain of best successors beginning at itself.
The property [*OrderedRing*]{} holds in a state if on the unique ring, the nodes are in identifier order. That is, if nodes [*n1*]{} and [*n2*]{} are ring members, and [*n2*]{} is the best successor of [*n1*]{}, then there is no other ring member [*nb*]{} such that [*between \[n1, nb, n2\]*]{}.
The property [*ConnectedAppendages*]{} holds in a state if, from every appendage member, a ring member can be reached by following the chain of best successors beginning at itself.
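These definitions translate directly into an executable check (a hypothetical Python sketch; a state is a dict from each live member to its successor list, and the example network at the end is an illustrative assumption, unrelated to the figures).

    def between(n1, nb, n2):
        if n1 < n2:
            return n1 < nb < n2
        return n1 < nb or nb < n2

    def best_succ(state, n):
        """First live entry of n's successor list, or None."""
        return next((s for s in state[n] if s in state), None)

    def ring_members(state):
        """Members that reach themselves by following best successors."""
        ring = set()
        for n in state:
            seen, cur = set(), n
            while cur is not None and cur not in seen:
                seen.add(cur)
                cur = best_succ(state, cur)
                if cur == n:
                    ring.add(n)
                    break
        return ring

    def check_properties(state):
        ring = ring_members(state)

        def cycle_from(n):              # best-successor cycle through n
            seen, cur = [], n
            while cur not in seen:
                seen.append(cur)
                cur = best_succ(state, cur)
            return set(seen)

        def reaches_ring(n):
            cur = n
            while cur is not None and cur not in ring:
                cur = best_succ(state, cur)
            return cur in ring

        return {
            "AtLeastOneRing": bool(ring),
            "AtMostOneRing": all(cycle_from(n) == ring for n in ring),
            "OrderedRing": all(not between(n, nb, best_succ(state, n))
                               for n in ring for nb in ring),
            "ConnectedAppendages": all(reaches_ring(n)
                                       for n in state if n not in ring),
        }

    # A ring 30 -> 37 -> 42 -> 48 -> 58 -> 63 -> 30 with appendages 9 and 53
    # (successor lists shortened to length 1 for brevity).
    state = {30: [37], 37: [42], 42: [48], 48: [58], 58: [63], 63: [30],
             53: [58], 9: [37]}
    assert all(check_properties(state).values())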
In any ring structure whose state is maintained in successor lists, [*Invariant*]{} implies [*AtLeastOneRing*]{}, [*AtMostOneRing*]{}, [*OrderedRing*]{}, and [*ConnectedAppendages*]{}.
[*Proof:*]{}
The best-successor relation [*bestSucc*]{} is a binary relation on network members. We define from it a relation [*splitBestSucc*]{} that is the same except that every principal node [*p*]{} is replaced by two nodes $p_s$ and $p_d$, where $p_s$ ([*s*]{} for source) is in the domain of the relation but not the range, and $p_d$ ([*d*]{} for destination) is in the range of the relation but not the domain. Figure \[fig:splitbestsucc\] displays as graphs the [*bestSucc*]{} and [*splitBestSucc*]{} relations for the same network. It is possible to deduce many properties of the [*splitBestSucc*]{} graph, as follows:
\(1) From [*Invariant*]{}, every member has a best successor. So the only nodes with no outgoing edges are $p_d$ nodes representing principal members only as [*being*]{} best successors.
\(2) $p_s$ nodes have no incoming edges, as they represent principal nodes only as [*having*]{} best successors. There can be other nodes with no incoming edges, because there can be members that are no member’s successor.
[*Note:*]{} The next few points concern maximal paths in the [*splitBestSucc*]{} graph. These are paths beginning at nodes with no incoming edges. By definition, they can only end at $p_d$ nodes, and can have no internal nodes representing principal nodes.
\(3) Just as a successor list does not skip principal nodes, a maximal path of best successors does not skip principal nodes. That is because an adjacent pair [*\[x, y\]*]{} in a chain of best successors is taken from the successor list of [*x*]{}, and the only possible entries between [*x*]{} and [*y*]{} in the successor list are dead entries.
\(4) A maximal path is acyclic. Contrary to this statement, assume that the path contains a cycle [*x leads to x*]{}. Then this path skips all principal nodes, which is a contradiction of the fact that paths have no internal nodes representing principal nodes.
From (1-4), we know that the graph of [*splitBestSucc*]{} is an inverted forest (a “biological” forest, with roots on the bottom and leaves on the top). Each tree is rooted at a $p_d$ node.
\(5) A maximal path is ordered by identifiers. Contrary to this statement, let the path contain [*\[x, ..., y, ..., z\]*]{} where not [*between \[x, y, z\]*]{}. Because the path is acyclic, [*x, y,*]{} and [*z*]{} are all distinct.
From [*IncludedReversesBetween*]{}, [*includedIn \[z, y, x\]*]{}. So the disordered path segment [*\[x, ..., y, ..., z\]*]{} wraps around the identifier ring exactly as depicted in Figure \[fig:osl\]. Note that a path in a [*splitBestSucc*]{} graph is not the same as an ESL, which was the original subject of Figure \[fig:osl\], but shares some properties with it.
From the figure, we can see that [*x*]{} cannot be a principal node $p_s$, because it is skipped by the path segment from [*y*]{} to [*z*]{}. Also [*z*]{} cannot be a principal node $p_d$, because it is skipped by the path segment from [*x*]{} to [*y*]{}. Also [*y*]{} cannot be a principal node, by definition, because it is interior to a [*splitBestSucc*]{} path. So this disordered path segment skips [*every*]{} principal node in identifier space, which contradicts (3).
[*Note:*]{} The next few points concern the [*splitBestSucc*]{} relation restricted to $p_s$ and $p_d$ nodes.
\(6) Every $p_s$ node is a leaf of exactly one tree. It must be a leaf of some tree, because it begins a path of best successors that must end at a $p_d$ node. It cannot be a leaf of more than one tree, because no node has more than one best successor.
\(7) Every tree rooted at a $p_d$ node has exactly one leaf that is a $p_s$ node. It cannot have two such leaves $p1_s$ and $p2_s$, because the source principal closest to the destination principal would be skipped by the path of the furthest source principal.
It must have at least one such leaf $p_s$. Contrary to this statement, imagine that it does not. Then the principal node $pc_s$ closest to $p_d$ in reverse identifier order begins a path that leads to some other principal destination, skipping $p_d$ in identifier order, which is a contradiction.
[*Summary:*]{}
\(6) and (7) show that the [*bestSucc*]{} relation restricted to principal nodes is a bijection. In terms of [*splitBestSucc*]{}, the ring is formed by the concatenation of the unique maximal paths, one from each tree in the forest, starting at $p_s$ nodes. This proves [*AtLeastOneRing*]{} and [*AtMostOneRing*]{}. From (5) we know that the ring is ordered by identifiers, so [*OrderedRing*]{} holds. All the nodes not on these unique maximal paths are appendage members, and each has a path to a principal node on the ring, so [*ConnectedAppendages*]{} holds. $\Box$
Proof of Chord correctness {#sec:proof}
==========================
This section presents the proof of the theorem given in Section \[sec:overview\]:
In any execution state, if there are no subsequent join or fail operations, then eventually the network will become ideal and remain ideal.
The most important part of this theorem is knowing that [*Invariant*]{} holds in all states, because this property and the properties it implies are the ones that all Chord users can count on at all times. We do not expect churn (joins and failures) to ever stop long enough for a network to become ideal. Rather, this part of the theorem simply tells us that the repair algorithm always makes progress, and cannot get into unproductive loops.
Establishing the invariant {#sec:invariant}
--------------------------
First it is necessary to prove that [*Invariant,*]{} which is true of any initial state, is preserved by every atomic step of the protocol.
We begin with a failure step, because it requires a constraint based on the operating assumption in Section \[sec:overview\]: a member cannot fail if it would leave another member with no live successor. In other words, failure steps preserve the property of [*OneLiveSuccessor*]{} by operating assumption. No other kind of step can violate [*OneLiveSuccessor*]{}.
The other conjunct of [*Invariant*]{} is [*SufficientPrincipals*]{}, which says that the number of principal nodes must be at least $r + 1$. Rectify operations cannot violate this property, as they do not affect successor lists. In this section we will show that failure steps of non-principal nodes, join steps, [*StabilizeFromSuccessor*]{} steps, and [*StabilizeFromPredecessor*]{} steps do not cause principal nodes to become skipped in successor lists. This is the only way that they could violate [*SufficientPrincipals.*]{} The remaining case, that of failures of principal nodes, will be discussed in the next section.
Failure of a non-principal member [*m*]{} causes the disappearance of [*m*]{}’s successor list. But only being skipped in a successor list can make a node non-principal, so the disappearance of [*m*]{}’s successor list cannot make another node non-principal.
In a successful join, the new ESL created is [*\[myIdent, newPrdc.succList\]*]{}. We know that there is no principal node between [*myIdent*]{} and [*head (newPrdc.succList)*]{}, because at the time of the query there is no principal node between [*newPrdc*]{} and [*head (newPrdc.succList)*]{}, and [*myIdent*]{} is between those two. We also know that [*newPrdc.succList*]{} cannot skip a principal node, by definition.
There are two cases in a [*StabilizeFromSuccessor*]{} step where a successor list is altered. In the first case the new ESL is a concatenation of pieces of the ESLs of the stabilizing node and its first successor, joined where they overlap at the first successor. Since neither of the original ESLs can skip a principal node, their overlap cannot, either.
In the second case a dead entry is removed from the stabilizing node’s list, which cannot cause it to skip a principal. This leaves an empty space at the end which is temporarily padded with the last real entry plus one. This is the only value choice that preserves the invariant by guaranteeing that no principal node is skipped by accident. It does not matter whether the artificial entry points to a real node or not, as it will be gone by the time that the stabilization operation is complete.
There is only one case in a [*StabilizeFromPredecessor*]{} step where a successor list is altered. The new ESL created is [*\[myIdent, newSucc, butLast (newSucc.succList)\]*]{}. In the previous [*StabilizeFromSuccessor*]{} step, this node tested that [*between \[myIdent, newSucc, head (succList)\]*]{}. This node cannot make any other changes to its successor list between that step and this [*StabilizeFromPredecessor*]{} step, so it is still true. Therefore we know that there is no principal node between [*myIdent*]{} and [*newSucc*]{}, because there is no principal node between [*myIdent*]{} and [*head (succList)*]{}, and [*newSucc*]{} is between those two. We also know that [*\[newSucc, butLast (newSucc.succList)\]*]{} cannot skip a principal node, because it is part of the ESL of [*newSucc*]{}.
Failure of principal nodes {#sec:preservingbase}
--------------------------
It is very clear from Section \[sec:idspace\] that potential problems in a Chord network would be caused by disordered successor lists and paths of best successors, and that disorder is equivalent to wrapping around the identifier space. It is equally clear that principal nodes are anchor points that prevent disorder, and that there must be at least $r + 1$ of them to make sure that no successor list wraps around the identifier space. This is why a Chord network must be initialized to have $r + 1$ principal nodes.
Apart from initialization, a member of a Chord network becomes a principal node when it has been a member long enough so that every node that should know about it does know about it. More specifically, it should appear in the successor lists of its $r$ predecessors, which will happen after a sequence of $r$ stabilizations in which each predecessor learns about the node from its successor.
It is extremely important that, as Section \[sec:invariant\] showed, none of the operations or steps of operations discussed there can demote a node from principal to non-principal. In other words, the [*only*]{} action that can reduce the size of the set of principal nodes is the failure of a principal node itself.
As a Chord network grows and matures, a significant fraction of its nodes will be members long enough to become principals. This means that the number of principal nodes is proportional to the size of the network; once the network is large enough there is no possibility that [*SufficientPrincipals*]{} will be violated. Section \[sec:diff\] presented the idea of global monitoring of small Chord networks as a way to implement initialization with $r + 1$ principal nodes. It is a simple change to continue monitoring until the number of principal nodes has reached some multiple of $r$, after which the network is safe.
Queries have no circular waits
------------------------------
Section \[sec:shared\] explained how inter-node queries must be organized to maintain a shared-state abstraction. Sometimes a node must delay answering a query because it is waiting for the answer to its own query, which raises the specter of deadlock due to circular waiting.
Note that a rectify step only queries to see if a node is still alive, and does not read any of the node’s state. Queries like these can always be answered immediately, so cannot cause waiting.
Note also that a join step requires a query, but no other node can be querying a node that has not joined yet. So the joining node, also, cannot be part of a circular wait.
This leaves queries due to the two stabilization steps, which are always directed to first successors or potential first successors. This means that, if there is a circular wait due to queries, it must encompass the entire ring. This possibility is sufficiently remote to ignore.
Proving progress {#sec:progress}
----------------
This section shows that in a network satisfying [*Invariant*]{}, if there are no join or fail operations, then eventually the network will become ideal—meaning that all its pointers are globally correct—and remain ideal.
Progress proceeds in a sequence of phases. In the first phase, all leading dead entries are removed from successor lists, so that every member’s first successor is its best successor. Every time a member with a leading dead entry begins stabilization, it first executes a [*StabilizeFromSuccessor*]{} step, which will remove the leading dead entry. It will continue executing [*StabilizeFromSuccessor*]{} steps until all the leading dead entries are removed. Eventually all members will stabilize (this is an operating assumption), after which all leading dead entries will be removed from all successor lists.
Needless to say, these effective [*StabilizeFromSuccessor*]{} steps can be interleaved with other stabilize and rectify operations. However, rectify operations do not change successor lists. Even if a stabilization operation causes a node to change its successor, the steps are carefully designed so that the node will not change its successor to a dead entry. So, in the absence of failures, eventually all first successors will be best successors, and will remain so.
In the second phase, which can proceed concurrently with or subsequent to the first phase, all first successors and predecessors become correct. Let $s$ be the current size of the network (number of members). This number is only changed by join and fail operations, and not by repair operations, so it remains the same throughout a repair-only phase as hypothesized by the theorem. The error of a first successor or of a predecessor is defined as 0 if it points to the globally correct member (in the sense of identifier order), 1 if it points to the next-most-correct member, . . . $s - 1$ if it points to the least globally correct member, and $s$ if it points to a dead node.
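The error metric is easy to state operationally. The following small sketch (a hypothetical Python helper, not part of the model) computes the error of a successor pointer in a given network state; a predecessor error is analogous, ranking by counter-clockwise distance.

```python
# Illustrative sketch of the error metric for a successor pointer: 0 if the pointer is
# globally correct, k if it points to the (k+1)-th best choice, and s if it is dead.

def successor_error(owner, pointer, live_members, n):
    """live_members: the current members; n: size of the identifier space."""
    s = len(live_members)
    if pointer not in live_members:
        return s                                          # dead pointer: maximal error
    # rank live members by clockwise distance from owner; the owner itself ranks last
    ranked = sorted(live_members, key=lambda m: ((m - owner) % n) or n)
    return ranked.index(pointer)                          # 0 = globally correct successor

# In a network {1, 10, 24, 33} over identifier space 64, node 1 pointing to 24 has error 1
# (the globally correct first successor of 1 is 10).
print(successor_error(1, 24, [1, 10, 24, 33], 64))
```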
Whenever there is a merge in the [*bestSucc*]{} or [*splitBestSucc*]{} graph (see Figure \[fig:splitbestsucc\]), there are two nodes [*n1*]{} and [*n2*]{} with successors merging at [*n3*]{}, and for some choice of symbolic names, [*between \[n1, n2, n3\]*]{}. There are three cases: (1) [*n3.prdc*]{} (the current predecessor of [*n3*]{}) is better (has a smaller error) than [*n2*]{}, meaning that [*between \[n2, n3.prdc, n3\]*]{}; (2) [*n3.prdc*]{} is [*n2*]{}; (3) [*n3.prdc*]{} is worse (has a larger error) than [*n2*]{}, meaning that [*between \[n3.prdc, n2, n3\]*]{}. In each of these three cases there is a sequence of enabled operations that will reduce the cumulative error in the network, as follows:
In case (1), either [*n1*]{} or [*n2*]{} stabilizes, adopting [*n3.prdc*]{} as its successor and reducing the error of its successor. When the stabilizing node notifies [*n3.prdc*]{} and [*n3.prdc*]{} rectifies, it will change its predecessor pointer if and only if the change reduces error.

In case (2), [*n1*]{} stabilizes, adopting [*n2*]{} as its successor and reducing the error of its successor. When [*n1*]{} notifies [*n2*]{} and [*n2*]{} rectifies, it will change its predecessor pointer if and only if the change reduces error.

In case (3), [*n2*]{} stabilizes, which will not change its successor, but will have the effect of notifying [*n3*]{}. When [*n3*]{} rectifies, it will reduce the error of its predecessor by changing it to [*n2*]{}.
These cases show that, as long as there is a merge in the [*bestSucc*]{} graph, some operation or operations are enabled that will reduce the cumulative error of successor and predecessor pointers. Equally important, all operations are carefully designed so that a change never increases the error. At the same time, some of these operations will reduce the number of merges. For example, in Figure \[fig:splitbestsucc\], let the merge of 24 and 33 at 35 be an example of Case 2. When 24 changes its successor to 33, which is not currently the successor of any node, the total number of merges is reduced.
As the network is finite, eventually there will be no merges in the [*bestSucc*]{} graph, which means that every node is a ring member. Because the ring is always ordered, the errors of all successors will be 0. The errors of all predecessors will also be 0, because whenever a successor pointer reaches its final value by stabilization, it notifies its successor. That node will update its predecessor pointer, and will never again change it, because no other candidate value can be superior. This is the completion of the second phase.
In the third and final phase, after all first successors are correct, the tails of all successor lists become correct (if they are not already). Let the error of a successor list of length $r$ be defined as the length of its suffix beginning at the first entry that is not globally correct. At the beginning of this phase the maximum error of every successor list is $r - 1$, as the first entry is guaranteed correct.
Let [*n2*]{} be the successor of [*n1*]{}, and let the error of [*n2*]{}’s successor list be $e$. When [*n1*]{} stabilizes, the error of its successor list becomes [*max*]{}$(e - 1,0)$, as it is adopting [*n2*]{}’s successor list, after first prepending a correct entry ([*n2*]{}) and dropping an entry at the end. Thus improvements to successor lists propagate backward in identifier order. In the worst case, after a backward chain of $r - 1$ stabilizations, the successor list of the last node of the chain will be globally correct. The correct list will continue propagating backward, leaving correctness in its wake. $\Box$
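The bookkeeping of this last phase can be illustrated with a tiny sketch (hypothetical Python, with illustrative names): when a node adopts its successor's list, the incorrect suffix shrinks by one entry, which is exactly the [*max*]{}$(e - 1,0)$ step above.

```python
# Illustrative sketch of the phase-3 update and of the successor-list error.

def adopt_successor_list(n2, n2_succ_list):
    # n1 prepends its (already correct) first successor n2 and drops the last entry
    return [n2] + n2_succ_list[:-1]

def list_error(succ_list, correct_list):
    # length of the suffix starting at the first entry that is not globally correct
    for i, (entry, correct) in enumerate(zip(succ_list, correct_list)):
        if entry != correct:
            return len(succ_list) - i
    return 0

# With r = 3, if n2 = 10's list [24, 33, 7] has error 1 (7 should be 50), then the list
# adopted by n1 = 1, namely [10, 24, 33], has error 0: max(e - 1, 0).
print(list_error(adopt_successor_list(10, [24, 33, 7]), [10, 24, 33]))
```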
The Alloy model and bounded verification {#sec:alloy}
========================================
As introduced in Section \[sec:intro\], there is an Alloy model including specification of the operations, correctness properties, and assertions of the proof.[^2] The reasons for using Alloy for this purpose can be found in [@compare].
The Alloy proof is direct rather than insightful. For example, there are assertions of all the theorems in Section \[sec:idspace\]. The Alloy Analyzer uses exhaustive enumeration to verify automatically that the theorems are true for all model instances up to some size bounds (see below). But unlike Section \[sec:idspace\], this verification gives no insight into why the theorems are true.
The Alloy proof treats progress somewhat differently from Section \[sec:progress\]. The model defines enabling predicates for all operation cases, where an enabling predicate is true if and only if a step or sequence of steps is enabled and will change the state of the network if it occurs. An assertion states that if a network is not ideal, some operation is enabled that will change the state. Another assertion states that if a network is ideal, no operation will change the state. Thus the model does not include the metric part of Section \[sec:progress\], which shows that every state change reduces error.
The model is and has been an indispensable part of this research, for two reasons: First, it protects against human error in the long informal proof. Second, it was a necessary tool for getting to the proof. Without long periods of model exploration, it would not have been possible to discover that the obvious invariants are not sufficient, nor to discover an invariant that is. Without the formal model and automated verification, one wastes too much time trying to prove assertions that are not true.
The model is analyzed for all instances with $r \leq 3$ and $n \leq 9$, where $n$ is the size of the identifier/node space. For the largest instances, the possible number of nodes is more than twice the sufficient number of principal nodes.
It is worth noting what experimenting with models and bounds is like. With $r = 2$, many new counterexamples (to the current draft model) were found by increasing the number of nodes from 5 to 6, and no new counterexamples were ever found by increasing the number of nodes from 6 to 7 or more. No new counterexamples were ever found by increasing $r$ from 2 to 3. This makes $r = 3$ and $n = 9$ seem more than adequate.
Conclusion
==========
The Chord ring-maintenance protocol is interesting in several ways. The design is extraordinary in its achievement of consistency and fault-tolerance with such simplicity, so little synchronization overhead, and such weak assumptions of fairness. Unlike most protocols, which work according to self-evident principles, it is quite difficult to understand how and why Chord works.
Now that our understanding of the protocol has a firm foundation, it should be possible to exploit this knowledge to improve peer-to-peer networks further. If these efficient networks become more robust, they may find a whole new generation of applications. For example, it may be possible to weaken the rather strong assumptions about failure detection. It is certainly possible to enhance security just by checking local invariants, and it may be possible to improve or verify more rigorously enhancements such as protection against malicious peers [@awerbuch-robust; @chord-byz; @sechord], key consistency and data consistency [@scatter], range queries [@rangequeries], and atomic access to replicated data [@atomicchord; @etna].
As a case study in practical verification, the Chord project illustrates the value of a variety of techniques. Simple analysis for bug-finding [@chord-ccr], fully automated verification through bounded model-checking [@chord-arxiv], and informal mathematical proof, all had important roles to play.
Acknowledgments {#acknowledgments .unnumbered}
===============
Helpful discussions with Bharath Balasubramanian, Ernie Cohen, Patrick Cousot, Gerard Holzmann, Daniel Jackson, Arvind Krishnamurthy, Leslie Lamport, Gary Leavens, Pete Manolios, Annabelle McIver, Jay Misra, Andreas Podelski, Emina Torlak, Natarajan Shankar, and Jim Woodcock have contributed greatly to this work.
[^1]: The empty place in the successor list is filled with an artificial entry at the end, created by adding one to the last real entry. The reason for this entry will be made clear by the proof.
[^2]: <http://www.research.att.com/~pamela> $>$ How to Make Chord Correct.
---
abstract: 'During the last years, through the combined effort of the insight coming from physical intuition and computer simulation, and the exploitation of rigorous mathematical methods, the main features of the mean field Sherrington-Kirkpatrick spin glass model have been firmly established. In particular, it has been possible to prove the existence and uniqueness of the infinite volume limit for the free energy, and its Parisi expression, in terms of a variational principle involving a functional order parameter. Even the expected property of ultrametricity for the infinite volume states seems to be near to a complete proof. The main structural feature of this model, and of related models, is the deep phenomenon of spontaneous replica symmetry breaking (RSB), discovered by Parisi many years ago. By expanding on our previous work, the aim of this paper is to investigate a general frame where replica symmetry breaking is embedded in a kind of mechanical scheme of the Hamilton-Jacobi type. Here, the analog of the “time” variable is a parameter characterizing the strength of the interaction, while the “space” variables parametrize quantitatively the broken replica symmetry pattern. Starting from the simple cases, where annealing or replica symmetry is assumed, we build up a progression of dynamical systems, with an increasing number of space variables, which allow us to weaken the effect of the potential in the Hamilton-Jacobi equation as the level of symmetry breaking is increased. This new machinery allows us to work out mechanically the general $K$-step RSB solutions, in a different interpretation with respect to the replica trick, and makes their properties, such as existence and uniqueness, easily accessible.'
author:
- 'Adriano Barra [^1] Aldo Di Biasio[^2] Francesco Guerra[^3]'
title: '**Replica symmetry breaking in mean field spin glasses through Hamilton-Jacobi technique**'
---
Introduction
============
In the past twenty years the statistical mechanics of disordered systems has gained an ever increasing weight as a powerful framework by which to analyze the world of complexity [@bara] [@amit] [@bouchaud] [@CG] [@MPV] [@science] [@parisi]. The basic model of this field of research is the Sherrington-Kirkpatrick model [@sk] (SK) for a spin glass, on which several methods of investigation have been tested along these years [@alr] [@ass] [@barra1] [@bovier] [@contucci] [@CLT] [@quadratic] [@talaRSB] [@talaHT]. The first method developed has been the [*replica trick*]{} [@sk2][@parisi2] which, in a nutshell, consists in expressing the quenched average of the logarithm of the partition function $Z(\beta)$ in the form $\mathbb{E}\ln Z(\beta) = \lim_{n\rightarrow 0}\mathbb{E}(Z(\beta)^n-1)/n$. Since the averages are easily calculated for integer values of $n$, the problem is to find the right analytic continuation allowing, in some way, to evaluate the $n \rightarrow 0$ limit, at least for the case of large systems. Such an analytic continuation is extremely complex, and many efforts have been necessary to examine this problem in the light of theoretical physics tools, such as symmetries and their breaking [@parisi3][@parisi4]. In this scenario a solution has been proposed by Parisi, with the well known Replica Symmetry Breaking scheme (RSB), which solves the SK model by showing a peculiar “picture” of the organization of the underlying microstructure of this complex system [@MPV], as well as conferring a key role to the replica-trick method itself [@challenge]. The physical relevance, and deep beauty, of the results obtained in the frame of the replica trick have prompted a wealth of further research, in particular toward the objective of developing rigorous mathematical tools for the study of these problems. Let us recall, very schematically, some of the results obtained along these lines. Ergodic behavior has been confirmed in [@comets][@CLT], the lack of self-averaging for the order parameter has been shown in [@pastur], the existence of the thermodynamic limit in [@limterm], the universality with respect to the couplings' distribution in [@carmona], the correctness of the Parisi expression for the free energy in [@g3][@t4], the critical behavior in [@barra3], the constraints on the free overlap fluctuations in [@ac][@gg], and many other contributions have been developed, even giving rise to textbooks (see for instance [@bovierbook][@hertz][@challenge]). Very recently, new investigations on ultrametricity started ([@arguin][@arguin2]) and allowed strong statements about the latter [@panchenko], highlighting as a consequence the need for techniques to prove the uniqueness of the Parisi solution, step by step. In this paper we merge two other techniques, the broken replica symmetry bound [@g3] and the Hamilton-Jacobi method [@sum-rules][@barra2][@io1], so as to obtain a unified and stronger mathematical tool to work out free energies at various levels of RSB, whose properties are easily available as consequences of simple analogies with purely mechanical systems [@io1]. We stress that within this framework the improvement of the free energy obtained by increasing the number of replica symmetry breaking steps is transparent. In this first paper we present the method in full detail and pedagogically apply it to recover the annealed and the replica symmetric solutions; then we work out the first level of RSB and show how to obtain the $1$-RSB Parisi solution together with its properties.
The paper is organized as follows: In Section ($2$) the SK model is introduced together with the related statistical mechanics definitions. In Section ($3$) the broken replica mechanical analogy is outlined in full detail (minor calculations are reported in the Appendix), while Sections ($4,5,6$) are respectively dedicated to the annealed, the replica symmetric and the $1$-RSB solutions of the SK model within our approach. Section ($7$) deals with the properties of the solutions and Section ($8$) is left for outlooks and conclusions.
The Sherrington-Kirkpatrick mean field spin glass
=================================================
The generic configuration of the Sherrington-Kirkpatrick model [@sk; @sk2] is determined by the $N$ Ising variables $\sigma_i=\pm1$, $i=1,2,\ldots,N$. The Hamiltonian of the model, in some external magnetic field $h$, is $$\label{SK} H_N(\sigma,h;J)=-\frac1{\sqrt N} \sum_{1 \leq i < j
\leq N} J_{ij} \sigma_i\sigma_j- h\sum_{1 \leq i \leq N} \sigma_i.$$ The first term in (\[SK\]) is a long range random two body interaction, while the second represents the interaction of the spins with the magnetic field $h$. The external quenched disorder is given by the $N(N-1)/2$ independent and identically distributed random variables $J_{ij}$, defined for each pair of sites. For the sake of simplicity, denoting the average over this disorder by $\mathbb{E}$, we assume each $J_{ij}$ to be a centered unit Gaussian with averages $$\mathbb{E}(J_{ij})=0,\quad \mathbb{E}(J_{ij}^2)=1.$$ For a given inverse temperature[^4] $\beta$, we introduce the disorder dependent partition function $Z_{N}(\beta,h;J)$, the quenched average of the free energy per site $f_{N}(\beta,h)$, the associated averaged normalized log-partition function $\alpha_N(\beta,h)$, and the disorder dependent Boltzmann-Gibbs state $\omega$, according to the definitions $$\begin{aligned}
\label{Z}
Z_N(\beta,h;J)&=&\sum_{\sigma}\exp(-\beta H_N(\sigma,h; J)),\\
\label{f}
-\beta f_N(\beta,h)&=&\frac1N \mathbb{E}\ln Z_N(\beta,h)=\alpha_N(\beta,h),\\
\label{state}
\omega(A)&=&Z_N(\beta,h;J)^{-1}\sum_{\sigma}A(\sigma)\exp(-\beta
H_N(\sigma,h;J)),\end{aligned}$$ where $A$ is a generic function of $\sigma$.
Let us now introduce the important concept of replicas. Consider a generic number $n$ of independent copies of the system, characterized by the spin configurations $\sigma^{(1)}, \ldots ,
\sigma^{(n)}$, distributed according to the product state $$\Omega=\omega^{(1)} \times \omega^{(2)} \times \dots \times
\omega^{(n)},$$ where each $\omega^{(\alpha)}$ acts on the corresponding $\sigma^{(\alpha)}_i$ variables, and all are subject to the [*same*]{} sample $J$ of the external disorder. These copies of the system are usually called [*real replicas*]{}, to distinguish them from those appearing in the [*replica trick*]{} [@MPV], which requires a limit towards zero number of replicas ($n\to0$) at some stage.
The overlap between two replicas $a,b$ is defined according to $$\label{overlap} q_{ab}(\sigma^{(a)},\sigma^{(b)})={1\over N}
\sum_{1 \leq i \leq N}\sigma^{(a)}_i\sigma^{(b)}_i,$$ and satisfies the obvious bounds $$-1\le q_{ab}\le 1.$$ For a generic smooth function $A$ of the spin configurations on the $n$ replicas, we define the averages $\langle A \rangle$ as $$\label{medie}
\langle A \rangle = \mathbb{E}\Omega
A\left(\sigma^{(1)},\sigma^{(2)},\ldots,\sigma^{(n)}\right),$$ where the Boltzmann-Gibbs average $\Omega$ acts on the replicated $\sigma$ variables and $\mathbb{E}$ denotes, as usual, the average with respect to the quenched disorder $J$.
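As a concrete numerical illustration of definitions (\[Z\]) and (\[f\]) (not part of the analytical treatment that follows; the chosen $N$, sample size and helper names are assumptions), the quenched pressure can be estimated by exact enumeration for small systems.

```python
# Illustrative sketch: quenched pressure alpha_N(beta, h) of the SK model for small N,
# by exact enumeration of the 2^N spin configurations and an average over disorder samples.

import numpy as np

def alpha_N(beta, h, N=8, samples=200, seed=0):
    rng = np.random.default_rng(seed)
    spins = 2*((np.arange(2**N)[:, None] >> np.arange(N)) & 1) - 1   # all configurations
    logZ = []
    for _ in range(samples):
        J = np.triu(rng.standard_normal((N, N)), 1)                  # couplings J_ij, i < j
        pair_term = np.einsum('ci,ij,cj->c', spins, J, spins) / np.sqrt(N)
        field_term = h * spins.sum(axis=1)
        logZ.append(np.log(np.sum(np.exp(beta*(pair_term + field_term)))))
    return np.mean(logZ) / N

# At high temperature this approaches the annealed value log 2 + beta^2/4 (see Section 4).
print(alpha_N(beta=0.8, h=0.0), np.log(2) + 0.8**2/4)
```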
Thermodynamics through a broken replica mechanical analogy
==========================================================
Having introduced the model, let us briefly discuss the plan we are going to follow. In the broken replica symmetry bound (BRSB) [@g3] it has been shown that the Parisi solution is a bound for the true free energy (the opposite bound has been achieved in [@t4]). This has been done by introducing a suitable recursive interpolating scheme that we are going to recall hereafter. In the Hamilton-Jacobi technique [@sum-rules], instead, it has been shown, by introducing a simple two-parameter interpolating function, how to recover the replica symmetric solution through a mechanical analogy, offering as a sideline a simple prescription, once the bridge to mechanics was achieved, to prove the uniqueness of the replica symmetric solution. The main result of this paper is that the two approaches can be merged, so that even the recursive interpolating structure of the BRSB obeys a particular Hamilton-Jacobi description. This result has both theoretical and practical advantages: the former is a clear bridge between improving approximations of the free energy and increasing levels of RSB, the latter is a completely autonomous mechanical tool by which to obtain solutions at various RSB steps in further models. The task is however not trivial: the motion is no longer on a $1+1$ Euclidean space-time as in [@sum-rules] but lives in $K+1$ dimensions, so that momenta and a mass matrix need to be introduced.
To start showing the whole procedure, let us introduce the following *Boltzmannfaktor* $$\label{bfaktor}
B(\{\sigma\} ; \bold{x},t) = \exp \left( \sqrt{\frac{t}{N}}
\sum_{(ij)} J_{ij} \sigma_i \sigma_j + \sum_{a=1}^{K} \sqrt{x_a}
\sum_i J_i^a \sigma_i \right)$$ where both the $J_{ij}$s and the $J_i^a$s are standard Gaussian random variables $\mathcal{N}[0,1]$ i.i.d. The $t$ parameter and each of the $x_a$ maybe tuned in $\mathbb{R}^+$. We will use both the symbol $\bold{x}$ as well as $(x_1,...,x_K)$ to label the $K$ interpolating real parameters coupling the one body interactions. $K$ represents the dimensions, corresponding to the RSB steps in replica trick. Let us denote via $E_a$ each of the averages with respect to each of the $J_a$’s and $E_0$ the one with respect to the whole $J_{ij}$ random couplings. Through eq. (\[bfaktor\]) we are allowed to define the following partition function $\tilde{Z}_N(t; x_1, \dots, x_K)$ and, iteratively, all the other BRSB approximating functions for $a = 0, \dots, K$: $$\begin{aligned}
Z_K & \equiv & \tilde{Z}_N = \sum_{\sigma}B(\{ \sigma \};\bold{x},t), \\
& \dots & \nonumber \\
Z_{a-1}^{m_a} & \equiv & E_a \left( Z_{a}^{m_a} \right), \\
& \dots & \nonumber \\
Z_0^{m_1} & \equiv & E_1 \left( Z_{1}^{m_1} \right).
\end{aligned}$$ We need further to introduce the following interpolating function $$\label{alfa}
\tilde{\alpha}_N(t; x_1, \dots, x_K) \equiv \frac{1}{N} E_0 \log
Z_0,$$ and define, for $a=1, \dots, K$, the random variables $$f_a \equiv \frac{Z_a^{m_a}}{E_a \left( Z_a^{m_a}
\right)},$$ and the generalized states $$\tilde{\omega}_a(.) \equiv E_{a+1} \dots E_K \left( f_{a+1} \dots f_K \omega (.)
\right),$$ all in complete analogy with the “broken” prescriptions of [@g3]. Of course the corresponding replicated states $\Omega_a$ are immediately generalized with respect to each of the states $\tilde{\omega}_a$ introduced above.
Overall, for $a=0, \dots, K$, we further need the averages $$\langle.\rangle_a \equiv E \left( f_{1} \dots f_a \tilde{\Omega}_a (.) \right).$$
While it is clear that, when evaluated at $t=\beta^2$ and $\bold{x}=0$, our interpolating function $\tilde{\alpha}(t,\bold{x})$ reproduces the definition of the quenched free energy, when evaluated at $t=0$ (with a proper choice of the $\bold{x}$ parameters that we are going to show), it reproduces the Parisi trial solution $f(q=0, y=h)$ at the given $K$ level of RSB: $$\begin{aligned}
& & \tilde{\alpha}_N (t=0; x_1, \dots, x_K) = \nonumber \\
& & = \frac{1}{N} \log
\left[
E_1 \dots
\left[
E_K \left( \sum_{\sigma} \exp \left(\sum_{a=1}^K \sqrt{x_a} \sum_i J_i^a \sigma_i \right) \right)^{m_K}
\right]^{\frac{1}{m_K}}
\dots \right]^{\frac{1}{m_1}}.
\end{aligned}$$ Even though far from trivial, this is an essential feature of mean field behavior even in the disordered framework; in fact, in the thermodynamic limit the connected correlations inside pure states should go to zero, reducing the two-body problem to a (collection of) one-body, or better “high temperature”, models, whose partition function factorizes: $$\sum_{\sigma} \exp \left(\sum_{a=1}^K \sqrt{x_a} \sum_i J_i^a \sigma_i \right) =
2^N \prod_i \cosh \left( \sum_{a=1}^K \sqrt{x_a} J_i^a
\right),$$ such that, averaging over the $J_i^K$, we get $$\begin{aligned}
&&E_K \left(
\sum_{\sigma} \exp \left(
\sum_{a=1}^K \sqrt{x_a} \sum_i J_i^a \sigma_i
\right)
\right)^{m_K} = \nonumber \\
&&=
2^{N m_K} \prod_i
\int d\mu (z_K) \cosh^{m_K} \left(
\sum_{a=1}^{K-1} \sqrt{x_a} J_i^a + z_K \sqrt{x_K}
\right),
\end{aligned}$$ and so on. Even including the external field $h$, which is again encoded in a one-body interaction and is simply added inside the hyperbolic cosine, we get $$\begin{aligned}
\tilde{\alpha}_N (t&=&0; x_1, \dots, x_K) = \log 2 + \nonumber \\
&+&
\log \left[
\int d\mu(z_1) \dots
\left[
\int d\mu(z_K)
\cosh^{m_K} \left(\sum_{a=1}^K \sqrt{x_a} z_a + \beta h \right)
\right]^{\frac{1}{m_K}}
\dots \right]^{\frac{1}{m_1}}. \nonumber
\end{aligned}$$ In the case where $x_a = \beta^2 (q_a - q_{a-1})$ the second term coincides exactly with the solution of the Parisi equation [@MPV].
Let us now define $S(t,\bold{x})$ as the Principal Hamilton Function (PHF) for our problem: $$S(t; x_1, \dots, x_K) =
2 \left(
\alpha (t; x_1, \dots, x_K) - \frac{1}{2} \sum_{a=1}^K x_a - \frac{1}{4}t
\right).$$ As proved in the Appendix, the $(x,t)$-streaming of $S(t; x_1,
\dots, x_K)$ are then $$\begin{aligned}
\label{dts}
\partial_t S(t; x_1, \dots, x_K) & = &
-\frac{1}{2} \sum_{a=0}^K (m_{a+1} - m_a) \langle q_{12}^2 \rangle_a, \\
\label{das}
\partial_a S(t; x_1, \dots, x_K) & = &
-\frac{1}{2} \sum_{b=a}^K (m_{b+1} - m_b) \langle q_{12}
\rangle_b.
\end{aligned}$$ It is then possible to introduce a Hamilton-Jacobi structure for $S(x,t)$, which implicitly defines a potential $V(t; x_1, \dots, x_K)$, so as to write $$\label{hj}
\partial_t S(t,\bold{x}) + \frac{1}{2} \sum_{a,b=1}^K \partial_a S \, (M^{-1})_{ab} \, \partial_b S + V(t,\bold{x}) = 0.$$ The kinetic term reads off as $$\begin{aligned}
T & \equiv & \frac{1}{2} \sum_{a,b=1}^K \partial_a S (t; x_1, \dots, x_K) \, (M^{-1})_{ab} \, \partial_b S (t; x_1, \dots, x_K) \nonumber \\
&=& \frac{1}{2} \sum_{a,b=1}^K (M^{-1})_{ab} \,
\sum_{c \geq a}^K \sum_{d \geq b}^K
(m_{c+1} - m_c) \langle q_{12} \rangle_c (m_{d+1} - m_d) \langle q_{12} \rangle_d \nonumber \\
&=& \frac{1}{2} \sum_{c,d=1}^K D_{cd} \,
(m_{c+1} - m_c) \langle q_{12} \rangle_c (m_{d+1} - m_d) \langle q_{12} \rangle_d, \nonumber \\
\end{aligned}$$ where we defined $$D_{cd} \equiv \sum_{a=1}^c \sum_{b=1}^d (M^{-1})_{ab}.$$ By imposing on the mass matrix the condition $$\label{dcdm}
D_{cd} (m_{c+1} - m_c) = \delta_{cd}$$ we obtain the expression $$\begin{aligned}
T & = & \frac{1}{2} \sum_{c=1}^K (m_{c+1}- m_c) \langle q_{12} \rangle_c^2 \\
\label{kin}
& = & \frac{1}{2} \sum_{c=0}^K (m_{c+1}- m_c) \langle q_{12} \rangle_c^2
- \frac{1}{2} (m_1 - m_0) \langle q_{12}
\rangle_0^2.
\end{aligned}$$
Condition (\[dcdm\]) determines the elements of the inverse of the mass matrix $M^{-1}$. In particular we stress that it is symmetric and tridiagonal: the nonzero entries lie on the diagonal and immediately next to it, with $(M^{-1})_{a, a+1} = (M^{-1})_{a+1, a}$: $$\begin{aligned}
(M^{-1})_{11} & = & \frac{1}{m_2-m_1}, \\
\label{invmaa}
(M^{-1})_{a, a} & = & \frac{1}{m_{a+1}-m_a} + \frac{1}{m_a-m_{a-1}}, \\
(M^{-1})_{a, a +1} & = & -
\frac{1}{m_{a+1}-m_a},
\end{aligned}$$ all the others being zero. The elements of the mass matrix $M$ are determined by the equation $$\sum_b M_{ab} (M^{-1})_{bc} = \delta_{ac},$$ and it is immediate to verify that the following representation holds: $$M_{ab} = 1 - m_{(a \vee b)} = 1 - m_{\max(a,b)}.$$ With this expression for the matrix elements, by substituting eqs. (\[dts\]) and (\[kin\]) into (\[hj\]) we obtain the expression for the potential, so that overall $$\begin{aligned}
\partial_t S(t; x_1, \dots, x_K) &+& \frac{1}{2} \sum_{a,b=1}^K \partial_a S \, (M^{-1})_{ab} \, \partial_b S + V(t; x_1, \dots, x_K) = 0, \\
V(t; x_1, \dots, x_K) &=& \frac{1}{2} \sum_{a=0}^K (m_{a+1} - m_a)
(\langle q_{12}^2 \rangle_a - \langle q_{12} \rangle_a^2)
+ \frac{1}{2} (m_1 - m_0) \langle q_{12}
\rangle_0^2. \nonumber
\end{aligned}$$ Once the mechanical analogy is built, it is however prohibitive to solve the problem as it stands (i.e. to integrate the equations of motion); instead we propose an iterative scheme that mirrors the replica symmetry breaking one: at first, by choosing $K=1$, we work out the free field solution (we impose $V(t,\bold{x})=0$) and we recover the annealed expression for the free energy. This is consistent with neglecting the potential, as the latter turns out to be the averaged squared overlap. Then, rather than treating the source perturbatively, we enlarge our Euclidean space by considering $K=2$. Again we work out the free field solution to obtain the replica-symmetric expression for the free energy, consistently with neglecting the potential; in fact the source we neglect, this time, is the variance of the overlap: a much better approximation with respect to $K=1$. We go further explicitly by considering the $K=3$ case and we get the $1$-RSB solution in the same way (and so on). Interestingly, we discover that there is a one-to-one correspondence between the steps of replica symmetry breaking in the replica trick and the Euclidean dimensions in the broken replica mechanical analogy. The latter, however, incorporates in a single scheme even the annealed and the replica symmetric solutions.
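Before moving to the explicit cases, we note that the representation $M_{ab} = 1 - m_{\max(a,b)}$ quoted above is easy to verify numerically; the following sketch (illustrative only, with an arbitrary sequence $0 = m_1 < \dots < m_{K+1} = 1$) builds the tridiagonal $M^{-1}$ from its explicit elements and inverts it.

```python
# Illustrative check that the inverse of the tridiagonal matrix (invmaa) is M_ab = 1 - m_max(a,b).

import numpy as np

ms = np.array([0.0, 0.2, 0.5, 0.8, 1.0])          # m_1, ..., m_{K+1}, with m_{K+1} = 1 and K = 4
K = len(ms) - 1
Minv = np.zeros((K, K))
for i in range(K):                                 # i = a - 1 is the 0-based row index
    Minv[i, i] = 1/(ms[i+1] - ms[i]) + (1/(ms[i] - ms[i-1]) if i > 0 else 0.0)
    if i + 1 < K:
        Minv[i, i+1] = Minv[i+1, i] = -1/(ms[i+1] - ms[i])

M_expected = 1 - ms[np.maximum.outer(np.arange(K), np.arange(K))]
print(np.allclose(np.linalg.inv(Minv), M_expected))   # True
```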
$K=1$, Annealed free energy
===========================
Let us now recover some properties of disordered thermodynamics by studying the $K=1$ case, so as to show how the solution of the free problem coincides with the annealed expression. We assume $x(q)=m_1=1$ in the whole interval $[0,1]$. We show now that, within our approach, this implies a reduction in the number of degrees of freedom in which the Hamilton-Jacobi action lives, such that the PHF depends on $t$ only.
The dynamics involves a $1+1$ Euclidean space-time such that $$Z_1 \equiv Z_K \equiv \tilde{Z}_N \equiv
\sum_{\sigma} \exp \left(
\sqrt{t/N}H_N(\sigma;J) + \sqrt{x} \sum_i J_i \sigma_i \right).$$ $Z_0$ is consequently given by $$Z_0 \equiv E_1 Z_1 =
\exp \left(\frac{N}{2}x \right)
\sum_{\sigma} \exp \left(\sqrt{t/N}H_N(\sigma;J) \right).$$ This implies that the interpolating function depends on $x$ linearly and additively: $$\tilde{\alpha}(t,x) = \frac{x}{2}
+ \frac{1}{N} E_0 \log \sum_{\sigma} \exp
\left( \sqrt{t/N}H_N(\sigma;J)\right).$$ The $x$-derivative of $\tilde{\alpha}(t,\bold{x})$ is immediate, while for the $t$-derivative we can use the general expressions previously obtained (cf. eqs. (\[dts\],\[das\])): $$\begin{aligned}
\partial_t \tilde{\alpha} & = & \frac{1}{4}
\left[ 1 - \langle q_{12}^2 \rangle_0 \right],\\
\partial_x \tilde{\alpha} & = & \frac{1}{2}.
\end{aligned}$$ As a straightforward but interesting consequence, the PHF does not depend on $x$, and we get $$\begin{aligned}
S(t,x) & = & 2 \tilde{\alpha}(t,x) - x - \frac{t}{2} \nonumber \\
& = & \frac{2}{N} E_0 \log \sum_{\sigma} \exp
\left( \sqrt{t/N}H_N(\sigma;J) \right)
- \frac{t}{2}, \\
\partial_t S & = & - \frac{1}{2} \langle q_{12}^2
\rangle_0,\\
\partial_x S & \equiv & v(t,x) = 0, \\
\end{aligned}$$ where $v(t,x)$ defines the velocity field, which is identically zero, such that $x(t) \equiv x_0$. In this simplest case, the potential is trivially minus the $t$-derivative of $S(t,\bold{x})$ (the averaged squared overlap): $$V(t,\bold{x}) = \frac{1}{2} \langle q_{12}^2 \rangle_0.$$
Now we want to deal with the solution of the statistical mechanics problem. As we neglect the source (we are imposing $\langle q_{12}^2 \rangle_0=0$), we can take the initial value for $S(x,t)$, since the latter must be constant over the whole space-time, $$\bar{S} = S (0) = 2 \log 2,$$ and, consequently, we can write the solution of the problem as $$\bar{\alpha}(t, x) = \log 2 + \frac{x}{2} + \frac{t}{4}.$$ At this point it is straightforward to obtain statistical mechanics by setting $t = \beta^2$ and $x = 0$: $${\alpha}_N(\beta) = \log 2 + \frac{\beta^2}{4},$$ which is exactly the annealed free energy.
$K=2$, Replica symmetric free energy
====================================
In this section, by adding another degree of freedom to our mechanical analogy, we want to reproduce the replica symmetric solution of the statistical mechanics problem. We deal with $K=2$. The order parameter is now taken as $$\label{xqbar}
x(q) = x_{\bar{q}}(q) =
\left\{
\begin{array}{ll}
0 & \mbox{if } q \in [0, \bar{q}), \\
1 & \mbox{if } q \in [\bar{q}, 1].
\end{array}
\right.$$ So
$$\begin{aligned}
& & q_1 = \bar{q}, \quad q_2 = q_K \equiv 1 \\
& & m_0 = m_1 = 0, \quad m_2 = m_K = 1, \quad m_3 = m_{K+1} \equiv 1.
\end{aligned}$$
The auxiliary partition function depends on $t$ and on the two spatial coordinates $x_1$ and $x_2$: $$\tilde{Z}_N (t; x_1, x_2) \equiv
\sum_{\sigma} \exp \left(\sqrt{t/N}H_N(\sigma;J)
+ \sqrt{x_1} \sum_i J_i^1 \sigma_i + \sqrt{x_2} \sum_i J_i^2
\sigma_i\right),$$ and with the latter, recursively, we obtain $Z_0$. $$\begin{aligned}
Z_K & \equiv & Z_2 \equiv \tilde{Z}_N, \\
Z_1 & \equiv & \left(E_2 Z_2^{m_2} \right)^{\frac{1}{m_2}} = E_2 Z_2, \\
Z_0 & = & \left(E_1 Z_1^{m_1} \right)^{\frac{1}{m_1}}.
\end{aligned}$$ The function $Z_1$ can be immediately evaluated by standard Gaussian integration as $$Z_1 = \exp \left( N \frac{x_2}{2} \right)
\sum_{\sigma} \exp \left(
\sqrt{t/N}H_N(\sigma;J)
+ \sqrt{x_1} \sum_i J_i^1 \sigma_i \right).$$ Concerning the function $Z_0$ we can write $$\begin{aligned}
\left(E_1 Z_1^{m_1} \right)^{\frac{1}{m_1}} & = & \exp \left[
\frac{1}{m_1} \log E_1 \left[
\exp \left( m_1 \log Z_1 \right)
\right] \right] \\
& = & \exp \left[
\frac{1}{m_1} \log E_1 \left[
1 + m_1 \log Z_1 + o (m_1^2)
\right] \right] \\
& = & \exp \left[
\frac{1}{m_1} \left[
m_1 E_1 \log Z_1 + o (m_1^2)
\right] \right] \\
& = & \exp E_1 \log Z_1 + o (m_1),
\end{aligned}$$ and consequently $$Z_0 = \exp E_1 \log Z_1.$$ In this case, our interpolating function reads off as $$\label{alfars}
\tilde{\alpha}(t, x_1, x_2) = \frac{x_2}{2} +
\frac{1}{N} E_0 E_1 \log \left[
\sum_{\sigma} \exp \left(\sqrt{t/N}H_N(\sigma;J)
+ \sqrt{x_1} \sum_i J_i^1 \sigma_i \right)
\right].$$ Again, by using the general formulas given above (cf. eqs. (\[dts\],\[das\])), we get for the derivatives $$\begin{aligned}
\partial_t \tilde{\alpha} & = & \frac{1}{4} \left[1 - \langle q_{12}^2 \rangle_1 \right], \\
\partial_{x_1} \tilde{\alpha} & = & \frac{1}{2} \left[1 - \langle q_{12} \rangle_1 \right], \\
\partial_{x_2} \tilde{\alpha} & = & \frac{1}{2}.
\end{aligned}$$ Evaluating our function at $t=0, x_1=x_1^0, x_2=x_2^0$ we easily find $$\tilde{\alpha}(0; x_1^0, x_2^0) =
\frac{x_2^0}{2}
+ \log 2
+ \int d \mu(z) \, \log \cosh (\sqrt{x_1^0} \, z).$$
Let us introduce now the $K=2$ PHF $$S(t; x_1, x_2) = 2 \left( \tilde{\alpha} - \frac{x_1}{2}
- \frac{x_2}{2}- \frac{t}{4} \right),$$ together with its derivatives $$\begin{aligned}
\partial_t S & = & - \frac{1}{2} \langle q_{12}^2 \rangle_1, \\
\partial_{x_1} S & = & v_1 (t, x_1) = - \langle q_{12} \rangle_1, \\
\partial_{x_2} S & = & 0.
\end{aligned}$$ We observe that, even in this case, there is no true dependence on one of the spatial variables ($x_2$): this is due to the constant value $m_K = m_2 = 1$ of the order parameter on the last interval, so that the corresponding Gaussian field can be integrated out immediately in $Z_2$, giving the pre-factor $\exp(\frac{1}{2} N x_2)$. As a consequence, we can forget about the mass matrix, as there is no true multidimensional space. Let us write down the Hamilton-Jacobi equation $$\partial_t S(t, x_1)
+ \frac{1}{2} \left( \partial_{x_1} S (t, x_1) \right)^2
+ V (t, x_1) = 0.$$ The potential is given by the function $$V(t, x_1) = \frac{1}{2}
\left( \langle q_{12}^2 \rangle_1 - \langle q_{12} \rangle_1^2
\right),$$ where $$\langle q_{12}^2 \rangle_1 =
E_0 E_1 f_1 \Omega_1 (q_{12}^2) =
E_0 E_1 f_1 \frac{1}{N^2} \sum_{ij}
\left( E_2 f_2 \omega (\sigma_i \sigma_j) \right)^2.$$ When taking $x_1 = 0$ and $t = \beta^2$ the variance of the overlap becomes the source of the streaming. $$V(\beta^2, 0) = \frac{1}{2}
\left( \langle q_{12}^2 \rangle - \langle q_{12} \rangle^2 \right).$$
As usual in our framework, we kill the source (i.e. $V(t,\bold{x})=0$), and obtain for the velocity $$\bar{q}(x_1^0) \equiv - v_1(0, x_1^0)
= \int d \mu (z) \, \tanh^2 (z \sqrt{x_1^0}).$$ This is the well known self-consistency relation of Sherrington and Kirkpatrick, namely $$\bar{q}(\beta) =
\int d \mu (z) \, \tanh^2 (\beta \sqrt{\bar{q}}
z).$$ The free field solution of the Hamilton-Jacobi equation is then the solution in a particular point (and of course the choice is $
\bar{S} (0, x_1^0)$ which requires only a one-body evaluation) plus the integral of the Lagrangian over the time (which is trivially built by the kinetic term alone when considering free propagation). Overall the solution reads off as $$\bar{S} (t, x_1) =
\bar{S} (0, x_1^0)
+ \frac{1}{2} \bar{q}^2(x_1^0) t,$$ by which statistical mechanics is recovered as usual, obtaining for the pressure $$\bar{\alpha} (t; x_1, x_2) =
\log 2 + \int d \mu (z) \, \log \cosh (\sqrt{x_1^0}z)
+ \frac{t}{4} ( 1 - \bar{q} )^2
+ \frac{x_2}{2},$$ which corresponds exactly to the replica-symmetric solution once evaluated at $x_1=x_2=0$ and $t=\beta^2$, noticing that $0 = x_1(t) = x_1^0-\bar{q}t$. Within our description it is not surprising that the replica symmetric solution is a better description than the annealed one. In fact, while annealing is obtained by neglecting the whole squared overlap $\langle q_{12}^2\rangle$ as a source term, the replica symmetric solution is obtained by neglecting only its variance. Of course, neither the former nor the latter need correspond to the true solution. However, we see that increasing the Euclidean dimensions (the RSB steps in the replica framework) corresponds to lessening the potential in the Hamilton-Jacobi framework, and consequently to reducing the error of the free field approximation with respect to the true solution.
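As a numerical illustration (not part of the analytical treatment; the quadrature order, the simple fixed-point iteration and the chosen $\beta$ are assumptions), the replica symmetric fixed point and pressure at $h=0$ can be computed by approximating the Gaussian measure $d\mu$ with Gauss-Hermite quadrature.

```python
# Illustrative sketch: replica symmetric fixed point and pressure at h = 0,
#   qbar = int dmu(z) tanh^2(beta sqrt(qbar) z),
#   alpha = log 2 + int dmu(z) log cosh(beta sqrt(qbar) z) + (beta^2/4)(1 - qbar)^2.

import numpy as np
from numpy.polynomial.hermite_e import hermegauss

z, w = hermegauss(80)
w = w / np.sqrt(2*np.pi)                  # weights of the unit Gaussian measure dmu(z)

def replica_symmetric(beta, iters=500):
    q = 0.5
    for _ in range(iters):
        q = np.sum(w * np.tanh(beta*np.sqrt(q)*z)**2)
    alpha = (np.log(2) + np.sum(w * np.log(np.cosh(beta*np.sqrt(q)*z)))
             + beta**2/4 * (1 - q)**2)
    return q, alpha

print(replica_symmetric(beta=1.5))        # qbar > 0 only below the critical temperature
```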
$K=3$, $1$-RSB free energy
==========================
The simplest expression of $x(q)$ which breaks replica symmetry is obtainable when considering $K=3$,
$$\begin{aligned}
&& 0 = q_0 < q_1 < q_2 < q_3 = 1, \\
&& 0 = m_1 < m_2 \equiv m < m_3 = 1.
\end{aligned}$$
With this choice for the parametrization of $x(q)$ the solution of the Parisi equation
$$\partial_q f + \frac{1}{2} \left( \partial^2_y f + x(q) \left( \partial_y f \right)^2 \right) = 0$$
is given by
$$\begin{aligned}
f(0, h; x, \beta) & = & \frac{1}{m} \int d\mu(z_1) \, \log \int d\mu(z_2) \, \cosh^m \left[ \beta \left( \sqrt{q_1}\, z_1 + \sqrt{q_2 - q_1}\, z_2 + h \right) \right] + \nonumber \\
& + & \frac{\beta^2}{2} (1 - q_2),
\end{aligned}$$
and, using a label $P$ to emphasize that we are considering the Parisi prescription, the pressure becomes
$$\begin{aligned}
\alpha_P(\beta, h; x) & = & \log 2 + f(0, h; x, \beta) - \frac{\beta^2}{2} \int_0^1 q\, x(q)\, dq \\
\label{eq:alfa}
& = & \log 2 - \frac{\beta^2}{4} \left[ (m-1)\, q_2^2 - 1 - m\, q_1^2 + 2 q_2 \right] + \\
& + & \frac{1}{m} \int d\mu(z_1) \, \log \int d\mu(z_2) \, \cosh^m \left[ \beta \left( \sqrt{q_1}\, z_1 + \sqrt{q_2 - q_1}\, z_2 + h \right) \right].
\end{aligned}$$
Now we want to see how it is possible to obtain this solution by analyzing the geodesics of our free mechanical propagation in $3+1$ dimensions.
Let us define
$$\tilde{Z}_N (t; x_1, x_2, x_3) \equiv \sum_{\sigma} \exp \left( \sqrt{t/N}\, H_N(\sigma;J) + \sum_{a=1}^3 \sqrt{x_a} \sum_i J_i^a \sigma_i \right),$$
by which
$$\begin{aligned}
Z_3 & \equiv & Z_K \equiv \tilde{Z}_N, \\
Z_2 & = & E_3 Z_3 = \exp \left( N \frac{x_3}{2} \right) \sum_{\sigma} \exp \left( \sqrt{t/N}\, H_N(\sigma;J) + \sqrt{x_1} \sum_i J_i^1 \sigma_i + \sqrt{x_2} \sum_i J_i^2 \sigma_i \right), \\
Z_1 & = & \left( E_2 Z_2^m \right)^{1/m}, \\
Z_0 & = & \left( E_1 Z_1^{m_1} \right)^{1/m_1} = \exp \left( E_1 \log Z_1 \right).
\end{aligned}$$
For the interpolating function we get in this way
$$\begin{aligned}
&& \tilde{\alpha}_N (t; x_1, x_2, x_3) \equiv \frac{1}{N} E_0 \log Z_0 = \nonumber \\
&& = \frac{x_3}{2} + \frac{1}{Nm} E_0 E_1 \log E_2 \left[ \sum_{\sigma} \exp \left( \sqrt{t/N}\, H_N(\sigma;J) + \sqrt{x_1} \sum_i J_i^1 \sigma_i + \sqrt{x_2} \sum_i J_i^2 \sigma_i \right) \right]^m,
\end{aligned}$$
while for the derivatives we can use the general formulas, so as to obtain
$$\begin{aligned}
\partial_t \tilde{\alpha} & = & \frac{1}{4} \left[ 1 - m \langle q_{12}^2 \rangle_1 - (1-m) \langle q_{12}^2 \rangle_2 \right], \\
\partial_{x_1} \tilde{\alpha} & = & \frac{1}{2} \left[ 1 - m \langle q_{12} \rangle_1 - (1-m) \langle q_{12} \rangle_2 \right], \\
\partial_{x_2} \tilde{\alpha} & = & \frac{1}{2} \left[ 1 - (1-m) \langle q_{12} \rangle_2 \right], \\
\partial_{x_3} \tilde{\alpha} & = & \frac{1}{2}.
\end{aligned}$$
Then we need to evaluate the interpolating function at the starting time:
$$\begin{aligned}
\tilde{\alpha}_N (0; x_1^0, x_2^0, x_3^0) & = & \frac{x_3^0}{2} + \log 2 + \nonumber \\
& + & \frac{1}{m} \int d\mu(z_1) \, \log \int d\mu(z_2) \, \cosh^m \left( \sqrt{x_1^0}\, z_1 + \sqrt{x_2^0}\, z_2 \right).
\end{aligned}$$
The $K=3$ PHF, as usual and as previously explained for the $K=1,2$ cases, does not depend on the last coordinate (i.e. $x_3$), such that we can ignore it when studying the properties of the solution:
$$\begin{aligned}
S(t; x_1, x_2) & = & \frac{2}{Nm}\, E_0 E_1 \log E_2 \left[ \sum_{\sigma} \exp \left( \sqrt{t/N}\, H_N(\sigma;J) + \sqrt{x_1} \sum_i J_i^1 \sigma_i + \sqrt{x_2} \sum_i J_i^2 \sigma_i \right) \right]^m \nonumber \\
& - & x_1 - x_2 - t/2,
\end{aligned}$$
and the derivatives, implicitly defining the momenta (labeled by $p_1, p_2$), are given by
$$\begin{aligned}
\partial_t S & = & - \frac{m}{2} \langle q_{12}^2 \rangle_1 - \frac{1-m}{2} \langle q_{12}^2 \rangle_2, \\
\partial_1 S & \equiv & p_1 (t; x_1, x_2) = -m \langle q_{12} \rangle_1 - (1-m) \langle q_{12} \rangle_2, \\
\partial_2 S & \equiv & p_2 (t; x_1, x_2) = -(1-m) \langle q_{12} \rangle_2.
\end{aligned}$$
The kinetic energy consequently turns out to be
$$T = \frac{m}{2} \langle q_{12} \rangle_1^2 + \frac{1-m}{2} \langle q_{12} \rangle_2^2,$$
and the potential, which we are going to neglect as usual, is given by
$$V(t; x_1, x_2) = \frac{m}{2} \left( \langle q_{12}^2 \rangle_1 - \langle q_{12} \rangle_1^2 \right) + \frac{1-m}{2} \left( \langle q_{12}^2 \rangle_2 - \langle q_{12} \rangle_2^2 \right).$$
By having two spatial degrees of freedom, the mass matrix now has a $2 \times 2$ structure:
$$M^{-1} = \left( \begin{array}{cc} \frac{1}{m} & -\frac{1}{m} \\ -\frac{1}{m} & \frac{1}{m(1-m)} \end{array} \right), \qquad
M = \left( \begin{array}{cc} 1 & 1-m \\ 1-m & 1-m \end{array} \right).$$
Note that the eigenvalues of the mass matrix are always positive for $m \in (0,1)$. We can now determine the velocity field:
$$\begin{aligned}
v_1(t; x_1, x_2) & = & \sum_{b=1}^2 (M^{-1})_{1b}\, p_b = - \langle q_{12} \rangle_1, \\
v_2(t; x_1, x_2) & = & \sum_{b=1}^2 (M^{-1})_{2b}\, p_b = \langle q_{12} \rangle_1 - \langle q_{12} \rangle_2.
\end{aligned}$$
So we have all the ingredients for studying the free field solution (the one we get by neglecting the source). In this case the equations of motion are
$$\begin{aligned}
x_1 (t) & = & x_1^0 - \langle q_{12} \rangle_1(0; x_1^0, x_2^0)\, t \equiv x_1^0 - \bar{q}_1 t, \\
x_2 (t) & = & x_2^0 + \left( \langle q_{12} \rangle_1(0; x_1^0, x_2^0) - \langle q_{12} \rangle_2(0; x_1^0, x_2^0) \right) t \equiv x_2^0 + (\bar{q}_1 - \bar{q}_2)\, t,
\end{aligned}$$
and we can see that $\bar{q}_1$ and $\bar{q}_2$ satisfy the self-consistency relations in agreement with the replica trick predictions
$$\begin{aligned}
\label{eq:q_1}
\bar{q}_1 & = & \int d\mu(z) \left[ \frac{\int d\mu(y)\, \Theta^m(z,y)\, \tanh \left( \sqrt{x_1^0}\, z + \sqrt{x_2^0}\, y \right)}{D(z)} \right]^2, \\
\label{eq:q_2}
\bar{q}_2 & = & \int d\mu(z) \, \frac{\int d\mu(y)\, \Theta^m(z,y)\, \tanh^2 \left( \sqrt{x_1^0}\, z + \sqrt{x_2^0}\, y \right)}{D(z)}, \\
\Theta(z, y) & = & \cosh \left( \sqrt{x_1^0}\, z + \sqrt{x_2^0}\, y \right), \\
D(z) & = & \int d\mu(y)\, \Theta^m(z, y).
\end{aligned}$$
The PHF is obtained in coherence with the previous cases and obeys
$$\bar{S} (t; x_1, x_2) = \bar{S} (0; x_1^0, x_2^0) + T(0; x_1^0, x_2^0)\, t,$$
by which
$$\bar{\alpha}(t; x_1, x_2, x_3) - \frac{x_1}{2} - \frac{x_2}{2} - \frac{t}{4} = \bar{\alpha}(0; x_1^0, x_2^0, x_3^0) - \frac{x_1^0}{2} - \frac{x_2^0}{2} + \frac{1}{2}\, T (0; x_1^0, x_2^0)\, t,$$
and, remembering that
$$\begin{aligned}
x_1 - x_1^0 & = & -\bar{q}_1 t, \\
x_2 - x_2^0 & = & (\bar{q}_1 - \bar{q}_2)\, t,
\end{aligned}$$
we get the thermodynamic pressure in the space-time coordinates:
$$\begin{aligned}
\bar{\alpha}(t; x_1, x_2, x_3) & = & \frac{x_3}{2} + \log 2 - \frac{t}{4} \left[ -1 + 2 \bar{q}_2 - m \bar{q}_1^2 - (1-m) \bar{q}_2^2 \right] \nonumber \\
& + & \frac{1}{m} \int d\mu(z_1) \, \log \int d\mu(z_2) \, \cosh^m \left( \sqrt{x_1^0}\, z_1 + \sqrt{x_2^0}\, z_2 \right).
\end{aligned}$$
In order to get the statistical mechanics result, as usual, we need to evaluate the latter at $t = \beta^2$, $x_1=x_2=x_3=0$, from which $x_1^0 = \bar{q}_1 t$ and $x_2^0 = (\bar{q}_2 - \bar{q}_1) t$, recovering once again (\[eq:alfa\]).
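A numerical sketch of this construction may be useful (illustrative only; the quadrature order, the simple fixed-point iteration and the chosen $\beta$, $m$ are assumptions): the coupled equations (\[eq:q\_1\]), (\[eq:q\_2\]) are iterated at the self-consistent point $x_1^0 = \beta^2 \bar{q}_1$, $x_2^0 = \beta^2 (\bar{q}_2 - \bar{q}_1)$, and the resulting $\bar{q}_1, \bar{q}_2$ are plugged into (\[eq:alfa\]) at $h=0$.

```python
# Illustrative sketch: 1-RSB self-consistency (eq:q_1), (eq:q_2) and trial pressure (eq:alfa)
# at h = 0, evaluated at the self-consistent point x1_0 = beta^2 q1, x2_0 = beta^2 (q2 - q1).

import numpy as np
from numpy.polynomial.hermite_e import hermegauss

z, w = hermegauss(60)
w = w / np.sqrt(2*np.pi)                          # unit Gaussian measure dmu

def one_rsb(beta, m, iters=300):
    q1, q2 = 0.1, 0.6
    Z, Y = np.meshgrid(z, z, indexing='ij')       # outer variable z_1, inner variable z_2
    for _ in range(iters):
        xi = beta*(np.sqrt(q1)*Z + np.sqrt(max(q2 - q1, 1e-12))*Y)
        theta_m = np.cosh(xi)**m
        D = np.sum(w*theta_m, axis=1)             # D(z) = int dmu(y) Theta^m(z, y)
        q1 = np.sum(w*(np.sum(w*theta_m*np.tanh(xi), axis=1)/D)**2)
        q2 = np.sum(w*(np.sum(w*theta_m*np.tanh(xi)**2, axis=1)/D))
    alpha = (np.log(2)
             - beta**2/4*((m - 1)*q2**2 - 1 - m*q1**2 + 2*q2)
             + np.sum(w*np.log(D))/m)
    return q1, q2, alpha

print(one_rsb(beta=1.5, m=0.5))
```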
Properties of the $K=1,2,3$ free energies
=========================================
In the previous sections, we obtained solutions for the Hamilton-Jacobi equation in the $K=1, 2, 3$ cases, without saying anything about uniqueness. For $K=1$, the annealed case, there is no true motion so it is clear that there is just a single straight trajectory, identified by the initial point $x_0 = x$, intersecting the generic point $(x,t)$, with $x, t > 0$.
In the $K=2$ problem, well studied in [@sum-rules], one can show uniqueness by observing that the function $t(x_0)$, representing the point at which the trajectory intersects the $x$-axis, is a monotonically increasing function of the initial point $x_0$, so that, given $x, t > 0$, there is a unique point $x_0$ (and velocity $\bar{q}(x_0)$, of course) from which the trajectory starts.
For $K=3$, the problem becomes much more complicated, because we now have to consider motion in a three dimensional Euclidean space, proving that, given the generic point $(x_1, x_2, t)$, with $x_1>0$, $x_2>0$, $t>0$, there exists a unique trajectory passing through $(x_1, x_2)$ at time $t$.
So let us consider the functions
$$\begin{aligned}
F(x_1, t; x_1^0, x_2^0) & \equiv & x_1 - x_1^0 + \bar{q}_1(x_1^0, x_2^0)\, t, \\
G(x_2, t; x_1^0, x_2^0) & \equiv & x_2 - x_2^0 + \bar{q}_2(x_1^0, x_2^0)\, t - \bar{q}_1(x_1^0, x_2^0)\, t.
\end{aligned}$$
These functions vanish at the points corresponding to the solutions of the equations of motion, and in particular for all the $A_t \equiv (x_1=0, x_2=0, t>0; \ x_1^0=0, x_2^0=0)$. Labeling with $\partial_1$ and $\partial_2$ the partial derivatives with respect to $x_1^0$ and $x_2^0$, Dini's implicit function theorem tells us that if the determinant of the Jacobian matrix
$$\label{hessiano}
\left| \begin{array}{cc} \partial_1 F & \partial_2 F \\ \partial_1 G & \partial_2 G \end{array} \right|$$
is different from zero in a neighborhood of $A_t$, then we can express $x_1^0$ and $x_2^0$ as functions of $x_1$, $x_2$ and $t$ in such a neighborhood. This means that the initial point and the velocities, which depend on it, are uniquely determined by $x_1$, $x_2$ and $t$ via the equations of motion.
Calculating the determinant we find
$$\begin{aligned}
& & (-1 + \partial_1 \bar{q}_1 t)\, (-1 + \partial_2 \bar{q}_2 t - \partial_2 \bar{q}_1 t) - (\partial_2 \bar{q}_1 t)\, (\partial_1 \bar{q}_2 t - \partial_1 \bar{q}_1 t) = \nonumber \\
\label{hessiano1}
& = & 1 + (\partial_2 \bar{q}_1 - \partial_1 \bar{q}_1 - \partial_2 \bar{q}_2)\, t + (\partial_1 \bar{q}_1\, \partial_2 \bar{q}_2 - \partial_2 \bar{q}_1\, \partial_1 \bar{q}_2)\, t^2,
\end{aligned}$$
so we should require, for all $x_1^0>0$ and $x_2^0>0$, the discriminant
$$\Delta \equiv (\partial_2 \bar{q}_1 - \partial_1 \bar{q}_1 - \partial_2 \bar{q}_2)^2 - 4\, (\partial_1 \bar{q}_1\, \partial_2 \bar{q}_2 - \partial_2 \bar{q}_1\, \partial_1 \bar{q}_2)$$
to be negative; otherwise, in case $\Delta \geq 0$, the zeros
$$t_{\pm} = \frac{-(\partial_2 \bar{q}_1 - \partial_1 \bar{q}_1 - \partial_2 \bar{q}_2) \pm \sqrt{\Delta}}{2\, (\partial_1 \bar{q}_1\, \partial_2 \bar{q}_2 - \partial_2 \bar{q}_1\, \partial_1 \bar{q}_2)}$$
correspond to non-invertibility points.
The expression we obtain for the determinant is quite intractable; however, we can show uniqueness in a neighborhood of the initial point $x_1^0=0$, $x_2^0=0$. The motion starting from this point has zero velocity, and we saw that it gives the high temperature solution for the mean field spin glass model. Remembering that the transition to the low temperature phase is continuous, we can expand the Jacobian determinant for small values of $x_1^0$ and $x_2^0$ and observe that, for $x_1=0$, $x_2=0$, $t=\beta^2$, the equations of motion become
$$\begin{aligned}
x_1^0 & = & \beta^2 \bar{q}_1, \\
x_2^0 & = & \beta^2 (\bar{q}_2 - \bar{q}_1).
\end{aligned}$$
When $x_1^0 \rightarrow 0$ and $x_2^0 \rightarrow 0$ we have also $\bar{q}_1 \rightarrow 0$ and $\bar{q}_2 \rightarrow 0$, so we have an expansion close to the critical point (which is the only region where the control of the unstable $1$-RSB solution makes sense for the SK model, the latter being $\infty$-RSB).
For $\bar{q}_1$ and $\bar{q}_2$ we have, retaining terms up to second order,
$$\begin{aligned}
\label{q1approx}
\bar{q}_1(x_1^0, x_2^0) & \simeq & x_1^0 - 2(1-m)\, x_1^0 x_2^0 - 2 (x_1^0)^2, \\
\label{q2approx}
\bar{q}_2(x_1^0, x_2^0) & \simeq & x_1^0 + x_2^0 + m\, x_2^0\, (x_2^0 + 2 x_1^0),
\end{aligned}$$
and consequently
$$\begin{aligned}
\label{d1q1approx}
\partial_1 \bar{q}_1(x_1^0, x_2^0) & \simeq & 1 - 2(1-m)\, x_2^0 - 4\, x_1^0, \\
\label{d2q1approx}
\partial_2 \bar{q}_1(x_1^0, x_2^0) & \simeq & -2(1-m)\, x_1^0, \\
\label{d1q2approx}
\partial_1 \bar{q}_2(x_1^0, x_2^0) & \simeq & 1 + 2 m\, x_2^0, \\
\label{d2q2approx}
\partial_2 \bar{q}_2(x_1^0, x_2^0) & \simeq & 1 + 2 m\, x_1^0 + 2 m\, x_2^0.
\end{aligned}$$
Substituting in (\[hessiano1\]) we find, to first order in $x_1^0$ and $x_2^0$,
$$\begin{aligned}
& & 1 - 2 \left[ 1 - x_1^0 - (1-2m)\, x_2^0 \right] t \nonumber \\
& & + \left[ 1 - 2 x_1^0 - 2(1-2m)\, x_2^0 \right] t^2,
\end{aligned}$$
and, for $x_1^0 = x_2^0 = 0$ (which corresponds to expanding the velocities up to first order in $x_1^0$ and $x_2^0$), we simply obtain $(1-t)^2$. This means that in a neighborhood of $x_1 = x_2 = 0$ we have uniqueness, provided that we are not exactly at the critical point $t = \beta^2 = 1$.
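This last step is easy to double check symbolically (a small sketch, assuming the sympy library is available): plugging the approximations (\[q1approx\]), (\[q2approx\]) into the determinant and setting $x_1^0 = x_2^0 = 0$ indeed gives $(1-t)^2$.

```python
# Illustrative symbolic check of the small-(x1, x2) behaviour of the Jacobian determinant.

import sympy as sp

x1, x2, t, m = sp.symbols('x1 x2 t m')
q1 = x1 - 2*(1 - m)*x1*x2 - 2*x1**2               # approximation (q1approx)
q2 = x1 + x2 + m*x2*(x2 + 2*x1)                   # approximation (q2approx)
d1q1, d2q1 = sp.diff(q1, x1), sp.diff(q1, x2)
d1q2, d2q2 = sp.diff(q2, x1), sp.diff(q2, x2)

det = (-1 + d1q1*t)*(-1 + d2q2*t - d2q1*t) - d2q1*t*(d1q2*t - d1q1*t)
print(sp.factor(det.subs({x1: 0, x2: 0})))        # -> (t - 1)**2
```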
Outlooks and conclusions
========================
In this paper we have enlarged the previously investigated Hamilton-Jacobi structure for the free energy in the thermodynamics of complex systems (tested on the paradigmatic SK model) by merging this approach with the broken replica symmetry bound technique. At the mathematical level the main achievement is the development of a new method which is autonomously able to produce the various steps of replica symmetry breaking (the counterpart of those of the replica trick). At the physical level this method clearly highlights why increasing the number of RSB steps improves the resulting thermodynamics: each increment diminishes the error of the free field approximation for the propagation, in a Euclidean space-time, of an enlarged free energy, which recovers the proper one of statistical mechanics in a particular, well defined, limit. The main achievement is thus paving an alternative way to understand the RSB phenomenon. However, when increasing the steps of RSB (making smaller the potential we neglect, and so smaller the error) there is a price to pay: each step of replica symmetry breaking enlarges by one dimension the space in which the mechanical action moves. As a consequence the full RSB theory should live on a Hilbert space: this still deserves more analysis; however, the method is already clear and several applications may now stem from it: for example, the P-spin model above the Gardner critical temperature could be solved exactly, as well as a consistent part of the plethora of models born in the statistical mechanics of disordered systems whose solutions imply only one step of RSB. We plan to investigate both the $K\to\infty$ limit, to complete the theory, and its simpler immediate applications.
Appendix: Streaming of the interpolating function $\tilde{\alpha}(t,\bold{x})$ {#appendix-streaming-of-the-interpolating-function-tildealphatboldx .unnumbered}
==============================================================================
In this section we show in full detail how to obtain the streaming of the interpolating function (\[alfa\]).
The $t$-streaming of the interpolating function $\tilde{\alpha}(t,\bold{x})$ is given by the following formula:
$$\label{dtalfa}
\partial_t \tilde{\alpha}_N(\bold{x},t) = \frac{1}{4} \left(1
- \sum_{a=0}^{K}(m_{a+1} - m_a) \langle q_{12}^2(\bold{x},t) \rangle_a
\right).$$
To get this result, let us start by $$\label{dtalfa1}
\partial_t \tilde{\alpha}_N(\bold{x},t) = \frac{1}{N} E_0 Z_0^{-1}(\bold{x},t) \partial_t
Z_0(\bold{x},t),$$ and, as it is straightforward to show that $$\label{dtza}
Z_a^{-1}(\bold{x},t)\partial_t Z_a(\bold{x},t) = E_{a+1} \left( f_{a+1} Z_{a+1}^{-1}(\bold{x},t)\partial_t Z_{a+1}(\bold{x},t)
\right),$$ by iteration we get $$\label{dtz0}
Z_0^{-1}(\bold{x},t) \partial_t Z_0(\bold{x},t) = E_1 \dots E_K (f_1 \dots f_K Z_K^{-1}(\bold{x},t)\partial_t Z_K(\bold{x},t)).$$ The $t$-derivative of $Z_K$ is then given by $$\label{dtzk}
Z_K^{-1}(\bold{x},t) \partial_t Z_K(\bold{x},t) = \frac{1}{4 \sqrt{tN}}
\sum_{ij} J_{ij} \omega (\sigma_i \sigma_j),$$ from which $$\label{dtz01}
Z_0^{-1}(\bold{x},t) \partial_t Z_0(\bold{x},t) = \frac{1}{4 \sqrt{tN}}
\sum_{ij} E \left( f_1 \dots f_K J_{ij} \omega (\sigma_i \sigma_j)
\right),$$ where we denoted by $E$ the global average over all the random variables, as there is no danger of confusion. All the terms in the sum can be worked out by integrating by parts: $$\begin{aligned}
\label{gaussparti}
E \left( f_1 \dots f_K J_{ij} \omega (\sigma_i \sigma_j) \right)
& = &
\sum_{a=1}^K E \left(
f_1 \dots \partial_{J_{ij}}f_a \dots f_K \omega (\sigma_i \sigma_j) \right) \nonumber \\
\quad & + & E \left( f_1 \dots f_K \partial_{J_{ij}} \omega (\sigma_i \sigma_j) \right).
\end{aligned}$$ So we need to calculate the explicit expression of the derivatives with respect to $J_{ij}$ of both $f_a$ as well as $\omega
(\sigma_i \sigma_j)$. For the latter, it is easy to check that $$\label{djijomega}
\partial_{J_{ij}} \omega (\sigma_i \sigma_j)
= \sqrt{\frac{t}{N}} \left( 1 - \omega^2 (\sigma_i \sigma_j)
\right),$$ while for the $f_a$’s we have $$\partial_{J_{ij}} f_a = m_a f_a \left(Z_a^{-1}(\bold{x},t) \partial_{J_{ij}} Z_a(\bold{x},t) \right)
- m_a f_a E_a f_a \left(Z_a^{-1}(\bold{x},t) \partial_{J_{ij}} Z_a(\bold{x},t)
\right).$$ By using the analogy of (\[dtza\]) we get $$\begin{aligned}
Z_a^{-1}(\bold{x},t) \partial_{J_{ij}} Z_a(\bold{x},t)
& = & E_{a+1} \dots E_K \left( f_{a+1} \dots f_K Z_K^{-1} \partial_{J_{ij}} Z_K\right) \nonumber \\
\label{djijza}
& = & \sqrt{\frac{t}{N}} \tilde{\omega}_a (\sigma_i
\sigma_j),
\end{aligned}$$ such that $$\label{djijfa}
\partial_{J_{ij}} f_a = m_a f_a \sqrt{\frac{t}{N}}
\left(\tilde{\omega}_a (\sigma_i \sigma_j) - \tilde{\omega}_{a-1} (\sigma_i \sigma_j) \right).$$ Substituting (\[djijomega\]) and (\[djijfa\]) into (\[gaussparti\]) we obtain $$\begin{aligned}
E \left( f_1 \dots f_K J_{ij} \omega (\sigma_i \sigma_j) \right)
& = & \sqrt{\frac{t}{N}} \sum_{a=1}^K m_a \left[ E \left(f_1 \dots f_a
\tilde{\omega}_a (\sigma_i \sigma_j) \dots f_K \omega (\sigma_i \sigma_j) \right) \right. \nonumber \\
& - & \left. E \left(f_1 \dots f_{a-1} \tilde{\omega}_{a-1} (\sigma_i \sigma_j)
\dots f_K \omega (\sigma_i \sigma_j) \right) \right] \nonumber \\
& + & \sqrt{\frac{t}{N}} E \left( f_1 \dots f_K (1 - \omega^2 (\sigma_i \sigma_j)) \right).
\end{aligned}$$ Overall, an explicit expression for the eq. (\[dtalfa1\]) is given by $$\begin{aligned}
\partial_t \tilde{\alpha} & = & \frac{1}{4N^2} \sum_{a=1}^K \sum_{ij} m_a \left[
E_0 \dots E_a f_1 \dots f_a \tilde{\omega}_a( \sigma_i \sigma_j)
E_{a+1} \dots E_K f_{a+1} \dots f_K \omega( \sigma_i \sigma_j) \right. \nonumber \\
& - & \left. E_0 \dots E_{a-1} f_1 \dots f_{a-1} \tilde{\omega}_{a-1}( \sigma_i \sigma_j)
E_{a} \dots E_K f_{a} \dots f_K \omega( \sigma_i \sigma_j) \right] \nonumber \\
&
\label{dtalfa2}
+ & \frac{1}{4N^2} E f_1 \dots f_K \sum_{ij} (1 - \omega^2(\sigma_i \sigma_j)).
\end{aligned}$$ Once the overlap is introduced, we can write the result: $$\begin{aligned}
\partial_t \tilde{\alpha} & = & \frac{1}{4} \sum_{a=1}^K m_a
(\langle q_{12}^2 \rangle_a - \langle q_{12}^2 \rangle_{a-1})
+ \frac{1}{4}(1 - \langle q_{12}^2 \rangle_K) \nonumber \\
&& = \frac{1}{4}\left( \sum_{a=1}^K m_a \langle q_{12}^2 \rangle_a
- \sum_{a=0}^K m_{a+1} \langle q_{12}^2 \rangle_a
+ m_{K+1}\langle q_{12}^2 \rangle_K + 1 - \langle q_{12}^2 \rangle_K\right) \nonumber \\
&& = \frac{1}{4} \left(1
- \sum_{a=0}^{K}(m_{a+1} - m_a) \langle q_{12}^2 \rangle_a \right).
\end{aligned}$$
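The last equality is just an Abel-type resummation; a minimal numerical sketch (ours, not part of the paper) checks it, assuming the standard conventions of the broken-replica interpolation, $m_0=0$ and $m_{K+1}=1$:

```python
# Numerical check of the resummation used in the last step above, assuming the
# standard conventions m_0 = 0 and m_{K+1} = 1 of the broken-replica scheme.
import numpy as np

rng = np.random.default_rng(0)
K = 5
m = np.concatenate(([0.0], np.sort(rng.uniform(0, 1, K)), [1.0]))  # m_0 .. m_{K+1}
q2 = rng.uniform(0, 1, K + 1)                                      # <q_12^2>_a, a = 0..K

lhs = sum(m[a] * (q2[a] - q2[a - 1]) for a in range(1, K + 1)) + 1 - q2[K]
rhs = 1 - sum((m[a + 1] - m[a]) * q2[a] for a in range(0, K + 1))
print(np.isclose(lhs, rhs))   # -> True
```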
Now let us focus on the $x$-streaming of the interpolating function $\tilde{\alpha}(t,\bold{x})$ and show that it is given by the following formula:
$$\label{daalfa}
\partial_a \tilde{\alpha}_N(\bold{x},t) = \frac{1}{2} \left(1
- \sum_{b=a}^{K}(m_{b+1} - m_b) \langle q_{12}(\bold{x},t) \rangle_b
\right).$$
In analogy with the $t$-streaming we have $$\begin{aligned}
\label{daalfa1}
\partial_a \tilde{\alpha}_N(\bold{x},t) & = & \frac{1}{N} E_0 Z_0^{-1}(\bold{x},t) \partial_a Z_0(\bold{x},t), \\
\label{daza}
Z_b^{-1}(\bold{x},t)\partial_a Z_b(\bold{x},t) & = & E_{b+1} \left( f_{b+1} Z_{b+1}^{-1}(\bold{x},t)\partial_a Z_{b+1}(\bold{x},t) \right), \\
\label{daz0}
\Rightarrow Z_0^{-1}(\bold{x},t) \partial_a Z_0(\bold{x},t) & = & E_1 \dots E_K (f_1 \dots f_K Z_K^{-1}(\bold{x},t)\partial_a Z_K(\bold{x},t)), \\
\label{dazk}
Z_K^{-1}(\bold{x},t) \partial_a Z_K(\bold{x},t) & = & \frac{1}{2 \sqrt{x_a}}
\sum_{i} J_{i}^a \tilde{\omega} (\sigma_i),
\end{aligned}$$ by which $$\label{daalfa2}
\partial_a \tilde{\alpha} = \frac{1}{N} \frac{1}{2\sqrt{x_a}}
\sum_{i} E \left( f_1 \dots f_K J_{i}^a \tilde{\omega} (\sigma_i) \right).$$ Again by integrating by parts we have $$\label{gausspartia}
\partial_a \tilde{\alpha} = \frac{1}{N} \frac{1}{2\sqrt{x_a}} \sum_{i=1}^N \left[
\sum_{b=1}^K E \left( f_1\dots \partial_{J_i^a}f_b \dots f_K
\tilde{\omega}(\sigma_i) \right) \right.
\left.
+ E \left( f_1 \dots f_K \partial_{J_i^a} \tilde{\omega} (\sigma_i) \right)\right].$$ Let us work out the derivatives with respect to $J_{i}^a$, remembering that the $Z_b$’s, and consequently the $f_b$’s, do not depend on $J_i^{b+1}, \dots, J_i^K$. $$\label{djiaf}
\partial_{J_i^a} f_b =
\left\{
\begin{array}{ll}
0 & \mbox{if } a > b \\
m_a f_a \left(Z_a^{-1}(\bold{x},t) \partial_{J_i^a} Z_a (\bold{x},t) \right) & \mbox{if } a = b \\
m_b f_b \left(Z_b^{-1}(\bold{x},t) \partial_{J_i^a} Z_b(\bold{x},t) \right)
- m_b f_b E_b f_b \left(Z_b^{-1}(\bold{x},t) \partial_{J_i^a} Z_b(\bold{x},t) \right) & \mbox{if } a < b.
\end{array}
\right.$$ The same recursion relationship holds in this case as well: $$Z_b^{-1}(\bold{x},t)\partial_{J_i^a} Z_b(\bold{x},t) =
E_{b+1} \dots E_K \left( f_{b+1} \dots f_K Z_{K}^{-1}(\bold{x},t)\partial_{J_i^a} Z_{K}(\bold{x},t)
\right).$$ Furthermore $$Z_K^{-1}(\bold{x},t) \partial_{J_{i}^a} Z_K (\bold{x},t) =
\sqrt{x_a} \tilde{\omega}(\sigma_i),$$ from which we get $$Z_b^{-1}(\bold{x},t) \partial_{J_{i}^a} Z_b (\bold{x},t)
= \sqrt{x_a} \tilde{\omega}_b (\sigma_i).$$ Consequently, eq.s (\[djiaf\]) can be written as $$\label{djiaf1}
\partial_{J_i^a} f_b =
\left\{
\begin{array}{ll}
0 & \mbox{if } a > b \\
m_a f_a \sqrt{x_a} \tilde{\omega}_a (\sigma_i) & \mbox{if } a = b \\
\sqrt{x_a} m_b f_b \left( \tilde{\omega}_b (\sigma_i)
-\tilde{\omega}_{b-1} (\sigma_i) \right) & \mbox{if } a < b.
\end{array}
\right.$$ The last thing missing is evaluating the derivative of the state $$\label{djiaomega}
\partial_{J_{i}^a} \omega (\sigma_i)
= \sqrt{x_a} \left( 1 - \omega^2 (\sigma_i)
\right),$$ so that, via the overlap, we can write the analogous expression for the generalized states. Substituting eqs. (\[djiaomega\]) and (\[djiaf1\]), once expressed via overlaps, into (\[gausspartia\]) we obtain eq. (\[daalfa\]).
Acknowledgements {#acknowledgements .unnumbered}
================
The authors are pleased to thank Pierluigi Contucci, Giuseppe Genovese, Sandro Graffi, Isaac Perez-Castillo and Tim Rogers for useful discussions. The work of AB is supported by the Smart-Life Project grant.
[^1]: Dipartimento di Fisica, Sapienza Università di Roma
[^2]: Dipartimento di Fisica, Università di Parma
[^3]: Dipartimento di Fisica, Sapienza Università di Roma & Istituto Nazionale di Fisica Nucleare, Sezione di Roma $1$
[^4]: Here and in the following, we set the Boltzmann constant $k_{\rm B}$ equal to one, so that $\beta=1/(k_{\rm B} T)=1/T$.
---
abstract: 'We present results from a [*Chandra*]{} observation of the core region of the nearby X-ray bright galaxy cluster AWM 7. There are blob-like substructures, seen in the energy band 2–10 keV, within 10 kpc ($20''''$) of the cD galaxy NGC 1129, and the brightest sub-peak has a spatial extent of more than 4 kpc. We also notice that the central soft X-ray peak is slightly offset from the optical center by 1 kpc. These structures have no correlated features in the optical, infrared, or radio bands. The energy spectrum of the hard sub-peak indicates a temperature higher than 3 keV with a metallicity of less than 0.3 solar, or a power-law spectrum with photon index $\sim 1.2$. A hardness ratio map and a narrow Fe-K band image jointly indicate two Fe-rich blobs symmetrically located around the cD galaxy, in the direction perpendicular to the sub-peak direction. On larger scales ($r<60$ kpc), the temperature gradually drops from 4 keV to 2 keV toward the cluster center and the metal abundance rises steeply to a peak of 1.5 solar at $r \approx 7$ kpc. These results indicate that a dynamical process is going on in the central region of AWM 7, which probably creates heated gas blobs and drives metal injection.'
author:
- 'T. Furusho, N. Y. Yamasaki, and T. Ohashi'
title: Chandra observation of the core of the galaxy cluster AWM 7
---
INTRODUCTION
============
Recent [*Chandra*]{} and [*XMM-Newton*]{} data, with their advanced imaging and spectroscopic capabilities, have revealed rather complex features in the cores of rich clusters, including the brightest objects such as the M87, Centaurus, and Perseus clusters. The newly emerging properties of the cluster core are roughly summarized into three aspects: the complex brightness structure often correlated with radio lobes, the lack of cool gas with temperatures below 1–2 keV, and the interesting metallicity profiles sometimes characterized by high-metallicity rings. The [*Chandra*]{} images show both filamentary structures and holes that coincide with radio lobes in some clusters. Clear examples are the Perseus cluster [@fabian00; @sfs02] and Hydra A [@mcnam00; @david01]. The lack of the cool gas expected from radiative cooling raises the problem of the heat source and heating mechanism [e.g. @fabian01; @hans02], which is still an unsolved question. Various possible heat sources have been proposed, but none of the existing models seems to explain the observed thermal structure of the cluster gas in a plausible way. The metal-abundance profile simply increases toward the center in M 87 [@matsu02], while the metallicity profiles in the Perseus, Centaurus [@sf02] and Abell 2199 [@johns02] clusters show a peak at a certain radius from the cluster center. The precise origin of these “high-metallicity rings” is not understood yet.
AWM 7 is a nearby, X-ray bright cluster at a redshift of $z=0.0173$ [@kg00]. The ROSAT data show a moderate cooling flow of $\sim$ 60 M$_{\odot}$ yr$^{-1}$, and a temperature drop at the center associated with the central cD galaxy, NGC1129 [@nb95]. The ASCA mapping observation revealed an isothermal intracluster medium (ICM) with a temperature of 4 keV [@furu01], and a Mpc-scale abundance gradient from the central 0.5 solar to about 0.2 solar in the outer region [@ezawa97]. These results suggest that this cluster as a whole has been in a dynamically relaxed state. The ICM distribution, however, is not circularly symmetric but elliptically elongated in the east-west direction. The X-ray peak of the cD galaxy in the PSPC image is shifted from the center of the whole cluster by 30 kpc, and from the optical center of the cD galaxy by 3 kpc [@nb95; @kg00]. AWM 7 therefore is thought to be still in an early stage in the cD cluster evolution. No detection of CO emission from the cD galaxy was reported with an upper limit for the mass of molecular gas to be $<4\times10^8
M_{\odot}$ [@fuji00]. The NRAO image at 1.4 GHz does not show any significant radio emission from the cD galaxy [@burns80].
In this paper, we report on the high spatial resolution X-ray image of AWM 7, along with the temperature and abundance profiles. This relatively young cD cluster is the best target to look into the processes taking place during the evolution of cD clusters and to make comparison with other systems.
We use $H_0=50$ km s$^{-1}$ Mpc$^{-1}$ and $q_0 = 0.5$, indicating 29 kpc for $1'$ and 0.49 kpc for $1''$ at AWM 7. The solar number abundance of Fe relative to H is taken as $4.68 \times 10^{-5}$ [@ag89].
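For reference, these scales follow directly from the adopted cosmology; an illustrative check using the standard Mattig relation for $q_0=1/2$ gives:

```python
# Illustrative check of the adopted scales, using the Mattig relation for
# q0 = 1/2: d_L is the luminosity distance, d_A the angular-diameter distance.
import math

c, H0, z = 2.998e5, 50.0, 0.0173                  # km/s, km/s/Mpc, redshift

d_L = (2 * c / H0) * (1 + z - math.sqrt(1 + z))   # ~104 Mpc (cf. Table 1 footnote)
d_A = d_L / (1 + z) ** 2                          # angular-diameter distance
kpc_per_arcsec = d_A * 1e3 / 206265.0             # 1 rad = 206265 arcsec
print(round(d_L, 1), round(kpc_per_arcsec, 2), round(60 * kpc_per_arcsec, 1))
# -> about 104.2 Mpc, 0.49 kpc per arcsec, 29.3 kpc per arcmin
```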
OBSERVATION AND DATA REDUCTION
==============================
The core of AWM 7 was observed with [*Chandra*]{} on 2000 August 19 for 47,850 s using the ACIS-I detector. The center of the cluster was placed at the center of ACIS-I3, with an offset of $Y-3.7$ and $Z+3.7$ arcmin from the on-axis position, and we limit our analysis here to the I3 chip only. The data were taken in the Faint mode, and the CCD temperature was $-120^\circ$C. In order to check whether significant flares affected the data, we produced a light curve in the 0.3–10 keV energy band for the ACIS chips I0–2, which observed sky regions with a relatively higher background contribution than the I3 chip. We found no flare-like time variation. We applied a data screening to keep the count rates within $\pm 10 \%$ of the average of 4.7 count s$^{-1}$, which resulted in a slight reduction of the usable exposure time to 47,731 s.
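The screening is a simple rate cut; a schematic sketch of this kind of filtering (with purely hypothetical time bins and rates, not the actual event list) is:

```python
# Schematic count-rate screening: keep only time bins whose rate lies within
# +/-10% of the mean rate.  Bins and rates below are hypothetical placeholders.
import numpy as np

time_bins = np.arange(0.0, 47850.0, 50.0)                            # 50-s bins
rates = np.random.default_rng(1).normal(4.7, 0.2, time_bins.size)    # counts/s

mean_rate = rates.mean()
good = np.abs(rates - mean_rate) <= 0.10 * mean_rate
print(good.sum() * 50.0)   # usable exposure after screening, in seconds
```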
THE CD GALAXY REGION
====================
X-ray Image {#sec:img}
-----------
A smoothed image of the central $1'\times 1'$ region of AWM 7 in the 0.5–7 keV band is shown in Figure 1a. The image is smoothed by a Gaussian function with $\sigma=2''$, and corrected for exposure. The X-ray image appears to mainly consist of the diffuse ICM and three emission components. The bright central emission corresponds to the cD galaxy, NGC 1129. The point-like weak emission at $27''$ (13 kpc) southwest of the center is identified as a small galaxy, VV085b, which is more clearly seen in the 2MASS (The Two Micron All Sky Survey) infrared image in Figure 2c. The third component is a very faint extended feature near the cD galaxy with a separation $13''$ (6 kpc) in the southeast of the cD. This component has no cataloged counterpart either in NED or SIMBAD. The image also shows signs of blobs or filamentary structures around the cD galaxy, and possible small X-ray holes at $17''$ east and $18''$ west of the center, such as those seen in other major cD clusters more distinctly.
Figure 1b shows a color-coded X-ray image of the same region, in which X-ray photon energy is shown in three colors. The energy assignments for the 3 colors are 0.5–1.5 keV for red, 1.5–2.5 keV for green, and 2.5–8 keV for blue, respectively. There is an extended yellow region seen near the center, and the shape may be an unresolved small filament like the one seen in the Centaurus cluster [@sf02]. The southeast faint structure (hereafter hard sub-peak) in the cD galaxy is obviously bluer compared with the central region of the cD galaxy. X-ray color of the galaxy VV085b is red. The temperature of this galaxy is lower than the level of NGC 1129, however this source is too faint for a quantitative temperature determination with spectral fits.
In order to look into energy dependent structures, we created the soft (0.5–2.0 keV) and hard (2.0–10.0 keV) band images separately as shown in Figures 2a and 2b. The images are adaptively smoothed with a minimum significance of $3\sigma$. In these figures, the dotted lines indicate the cutout region whose intensity profiles are plotted in Figure 2d. The images are again for the central $1'$ square region. The hard sub-peak in the southeast region marked by a cross appears very clearly in the hard band image, while it is hardly seen in the soft band. The X-ray flux of this hard sub-peak is very close to the peak at the cD galaxy, as seen in the 2–10 keV band profile (filled circles) in Figure 2d. Since the cluster center was focused on the I3 chip center with an offset of $5'$ from the optical axis, the PSF at the hard sub-peak is wider than the on-axis PSF. The spatial extent of the sub-peak, more than $8''$, however, is wider than the PSF width at the sub-peak position as shown by the dashed curve in Figure 2d. Therefore, this sub-peak emission is unlikely to be from a single point source. The distance between the sub-peak and the cD galaxy center ($13''$ or 6 kpc) is much smaller than the isophotal radius $r_e$ of NGC 1129: $48''$ (23 kpc) [@bacon85]. This sub-peak is thought to be an internal structure of the cD galaxy, if it is not either a foreground or a background source. It is remarkable that the 2MASS image of the cD galaxy shows no sign of a structure in the same position (Figure 2c).
We also notice that, in the hard-band image shown in Figure 2b, there are fainter blob-like features seen just in the northwest and possibly a hole or filament in the west of the cD galaxy. The 1-dimensional profile in Figure 2d in fact shows a peak at $10''$ in the opposite side of the hard sub-peak. Furthermore, this hard band image shows a marked deviation from the spherical symmetry. The brightness shows a fairly sharp cutoff in the north to east direction of the cD, while it is more diffusively extended in the west to south direction. This morphology suggests that the hotter gas component in the central region may be moving to the northeast direction, creating a rather sharp edge in the front and a tailing feature behind. This rather complex nature of the central hot gas, which is only seen in the hard energy band ($> 2$ keV) is an entirely new finding in the present analysis. Note that the boxes in Figure 2b show regions where spectra were extracted as described in §\[sec:cDspec\].
We find another small structure in the very center of the cD galaxy. The X-ray peak in the 0.5–2 keV band is shown by a filled triangle in Figure 2a-c. We derived the peak position from the true data. The soft peak position, (R.A., decl.)$_{\rm J2000}$ = ($2^{\rm
h}54^{\rm m}27^{\rm s}\!.51$, $+41^{\circ}34'46''\!.29$), is slightly shifted to southeast from the 2MASS image center of NGC 1129, (R.A., decl.)$_{\rm J2000}$ = ($2^{\rm h}54^{\rm m}27^{\rm s}\!.34$, $+41^{\circ}34'46''\!.20$), by $1.9''$ (1 kpc) with a statistical error of $0.3''$ as shown in Figure 2c. This feature has been noted by [@kg00] who reported that the peak of the PSPC image was $6''$ offset from the nominal position of NGC 1129. In the 2–10 keV band, coordinate of the X-ray peak at the cD galaxy is (R.A., decl.)$_{\rm
J2000}$ = ($2^{\rm h}54^{\rm m}27^{\rm s}\!.40$, $+41^{\circ}34'46''\!.93$), which is $1.0''$ (0.5 kpc) away from the 2MASS image center. However, the statistical error in this band is $1.5''$, so the uncertainty range covers both the 2MASS and the soft peaks. Figure 2d shows one-dimensional profiles from southeast to northwest running through the hard sub-peak and the center of the cD galaxy. The peak of the soft-band profile (open circles) is offset by a few arcsec, but the hard band one (filled circles) reaches its maximum almost at the cD center.
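For reference, the quoted offsets follow directly from the listed coordinates; a small illustrative calculation (small-angle approximation, with the right-ascension difference scaled by $\cos\delta$) reproduces them:

```python
# Illustrative recomputation of the quoted peak offsets from the listed
# J2000 coordinates (only the differing RA seconds and Dec arcseconds matter).
import math

def sep_arcsec(ra1_s, dec1_as, ra2_s, dec2_as, dec_deg=41.58):
    """Separation of two positions sharing RA hours/minutes and Dec deg/arcmin."""
    d_ra = (ra1_s - ra2_s) * 15.0 * math.cos(math.radians(dec_deg))  # 1 s = 15"
    d_dec = dec1_as - dec2_as
    return math.hypot(d_ra, d_dec)

# soft X-ray peak vs. 2MASS center of NGC 1129
print(round(sep_arcsec(27.51, 46.29, 27.34, 46.20), 1))   # -> ~1.9 arcsec
# hard-band peak vs. 2MASS center
print(round(sep_arcsec(27.40, 46.93, 27.34, 46.20), 1))   # -> ~1.0 arcsec
```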
Fe Blobs {#sec:feblob}
--------
The left panel of Figure 3 shows an image by limiting the data in a narrow energy band 6–7 keV, which includes the Fe-K line, for the central $1'$ region. The image shows two blob-like features symmetrically located around the cD galaxy center in the northeast and southwest directions. The two blobs contain 25 and 33 counts in the 6–7 keV band, while the surrounding regions of the same size have 9 and 13 counts, respectively. Interestingly, the angle of the line connecting the two blobs is perpendicular to that connecting the cD galaxy center and the hard sub-peak as seen in the 2–10 keV intensity contours. To confirm the enhanced Fe abundance, we calculated hardness ratios between fluxes in 6–7 keV and 2–6 keV for all positions and plotted them in the right panel of Figure 3, by converting them into the abundance values. Since the ICM temperature at $r=10''-20''$ is fairly constant within 20 % in this region, the hardness ratio should show us the approximate equivalent width of Fe-K line excluding the very center. We took running means over $16'' \times 16''$ for each data point, because the photon statistics in the 6–7 keV band was too low to draw a pixel-by-pixel image. The region with good statistics is encircled by the white line, in which the statistical errors are less than 30%. The hardness map also indicates high-abundance regions around the two blobs seen in the left panel. As shown in the spectral fit in §\[sec:2dmap\], the abundance value of the Fe blobs is 1.5–2.0 solar. We also note that in both panels of Figure 3 the southeast hard sub-peak has a very low Fe abundance, although the statistical errors are large.
Spectral Analysis {#sec:cDspec}
-----------------
We first extracted pulse-height spectra for the central three regions, which correspond to the hard peak (cD center), soft peak, and the hard sub-peak as shown in Figure 2b. The region sizes are $7''\times 14''$, $7''\times14''$, and $10''\times10''$, respectively. The data were corrected for CTI using the software provided by Townsley [^1], which significantly improved residuals of the spectra around Fe-K line. We have created background spectra using the blank sky data described by Maxim Markevitch [^2]. Response matrices and effective area files were made for multiples of $32\times 32$ pixels, and averaged with a weight of the number of photons in the region using “mkrmf” and “mkarf” in the CIAO software package distributed by the CXC. Since the response for the ACIS spectrum has a problem below 1 keV in determining the accurate absorption due to the material accumulated on the ACIS optical blocking filter, we added the [acisabs]{} absorption model [^3] available in XSPEC, and fixed the absorption at the Galactic value of $9\times10^{20}$ cm$^{-2}$. The parameter [Tdays]{} was set to 392, and the rest of the parameters were kept at their default values. We fit the spectra with an absorbed single temperature MEKAL model in the energy band 1–9 keV, excluding the 1.8–2.2 keV interval around the mirror Ir edge to minimize the effects of calibration uncertainties.
The spectra and resulting best-fit parameters are shown in Figure 4 and Table 1. The normalizations of the spectra are arbitrarily scaled to lay out the spectra separately. The spectra of the soft and hard peaks, where most of the emission comes from the NGC 1129 center, show temperatures of 1.8 and 2.0 keV. They are consistent with the previous PSPC value of $kT=1.8$ keV at the cluster center [@nb95] and are about half of the ICM temperature in the outer region. The luminosity of the cD galaxy, obtained by adding the former two regions, is $2.6\times 10^{41}$ erg s$^{-1}$ in the 0.5–10 keV band. The temperature of the hard sub-peak is about 3 keV, higher than in the former two regions, just as expected from the color X-ray image (Figure 1b). Since the spectrum of the sub-peak contains the hot ICM component along the line of sight, a spectral fit with an additional MEKAL component with a fixed temperature of 2.8 keV, which is roughly the average temperature at $r=10''-20''$ as shown in Figure 6, was carried out and resulted in a sub-peak temperature higher than about 5 keV with a best-fit metallicity of zero solar. The metal abundance of the sub-peak is significantly lower than in the other two regions and the surrounding ICM (see §\[sec:ktz\]). Accordingly, the data also allow a power-law spectrum for the sub-peak component, with a best-fit photon index of 1.2 and a 90% error range of 0–2.5. However, the statistical quality of the data is poor and it is difficult to constrain the actual nature of the sub-peak emission. The estimated luminosity of the sub-peak excluding the ICM component is $1.2\times 10^{40}$ erg s$^{-1}$, which is close to the levels of small elliptical galaxies.
To evaluate the significance of the enhanced Fe abundance, we also created a pulse-height spectrum for the sum of the two Fe blob regions and compared it with the one in the surrounding region. The actual regions where the data were accumulated are shown by the white ellipses in Figure 3a for the Fe blobs, and the surrounding low-metal region was an annular region with $r=5''-20''$ excluding the Fe blobs. Figure 5 shows the resultant spectra for the Fe blobs and the surrounding region. A prominent Fe-K line is indeed recognized in the blob spectrum. We fit the spectra in the same way as we did for the central spectra, i.e. with a single temperature MEKAL model and a fixed $N_{\rm H}$ value. The Fe blob region shows a metal abundance of 1.57 (1.03–2.10) solar, compared with 0.73 (0.52–1.04) solar for the surrounding region, with the values in parentheses showing 90% confidence limits for a single parameter. We looked into the difference in the $\chi^2$ values for two spectral fits, one with a common abundance for the two spectra and the other with separate abundance parameters, and it turned out to be 8.1 (138.8 and 146.9). The reduction of the $\chi^2$ value is more than 2.7 for one additional model parameter, which indicates that the two regions have different metal abundances at the 90% confidence level [@malina76].
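The threshold of 2.7 quoted here is the 90% point of a $\chi^2$ distribution with one degree of freedom; a one-line illustrative check:

```python
# 90% critical value of chi^2 with 1 degree of freedom, compared with the
# quoted Delta chi^2 between the common- and separate-abundance fits.
from scipy.stats import chi2

print(round(chi2.ppf(0.90, df=1), 2))       # -> 2.71
delta_chi2 = 146.9 - 138.8                  # values quoted in the text
print(delta_chi2 > chi2.ppf(0.90, df=1))    # -> True: different at >90% confidence
```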
THE SURROUNDING ICM REGION
==========================
Surface Brightness Profile {#sec:SB}
--------------------------
The top panel of Figure 6 shows the radial distribution of the X-ray surface brightness within a radius of $r<2'$(60 kpc) in the 0.5–10 keV band. Point source candidates were subtracted, and the exposure was corrected for. We center on the X-ray peak of the 0.5–10 keV image, which is the same position as the soft-band peak because the photons below 2 keV are dominant even in the full band image. The X-ray emission from the cD galaxy within a radius of $\sim20''$ is smoothly connected to the ICM without significant jump or edge features. The profile as a whole is very well fitted by a single $\beta$ model with the parameters derived as $\beta=0.28 \pm 0.01$ and $r_{\rm c}=1.4''\pm 0.2$ (0.7 $\pm$ 0.1 kpc). Errors denote the 90% confidence limits for a single parameter. The $\beta$ value is slightly larger than the previous HRI results which showed $\beta=0.25
\pm 0.01$, and the core radius is smaller than the HRI value, $r_{\rm
c}=6''-16''$ [@nb95]. The difference is reasonably explained by the higher spatial resolution of the [*Chandra*]{} data. The $\beta$ value obtained here is much lower than those in typical cD galaxies ($\beta=1-2$) or field elliptical galaxies ($\beta=0.4-0.6$). The central density is obtained to be $n_0=0.053\pm 0.010$ cm$^{-3}$, which is consistent with the value derived from the HRI observation.
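For readers who wish to reproduce this kind of fit, a schematic single-$\beta$-model fit is sketched below; the surface-brightness form $S(r)=S_0\,[1+(r/r_c)^2]^{1/2-3\beta}$ is the standard one, but the data arrays here are placeholders, not the actual profile:

```python
# Schematic single-beta-model fit to a radial surface-brightness profile.
# The "observed" profile below is synthetic, built from the quoted best-fit
# parameters (beta = 0.28, rc = 1.4 arcsec) purely for illustration.
import numpy as np
from scipy.optimize import curve_fit

def beta_model(r, S0, rc, beta):
    return S0 * (1.0 + (r / rc) ** 2) ** (0.5 - 3.0 * beta)

r = np.linspace(1.0, 120.0, 60)                                   # radius, arcsec
S_obs = beta_model(r, 1.0, 1.4, 0.28) \
        * np.random.default_rng(2).normal(1.0, 0.05, r.size)      # mock data

popt, pcov = curve_fit(beta_model, r, S_obs, p0=[1.0, 2.0, 0.3])
print(popt)   # recovers roughly S0 ~ 1, rc ~ 1.4 arcsec, beta ~ 0.28
```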
Temperature and Abundance Profiles {#sec:ktz}
----------------------------------
We have created annular spectra with a width of $10''$ and fitted them with an absorbed MEKAL model. The point source candidates were subtracted. We created background spectra in the same way as for the spectra of the central region in §\[sec:cDspec\]. Each annular spectrum can be fitted well by an absorbed single temperature model with a reduced $\chi^2$ of mostly 0.9–1.2. The middle panel of Figure 6 shows the resulting temperature profile. We fit the spectra in two different energy bands, 1–9 keV and 2–9 keV, as indicated by crosses and diamonds in the lower two panels of Figure 6. The best-fit temperatures for these two energy bands differ slightly, by about 0.5 keV, in each ring. The tendency that the temperature gradually drops toward the center from around 4 keV is the same for both fits. The central temperature is 2 keV, about half of the average ICM temperature in the outer region. Similar features are seen in most cD clusters with ASCA (Ikebe 2001), and the profile is also similar to the universal temperature profile for relaxed lensing clusters reported by [@allen01].

The bottom panel of Figure 6 shows the radial profile of metal abundance. The abundance increases toward the center from 0.5 solar to 1.5 solar at $r=7$ kpc. The peak value exceeding 1 solar is much higher than the previous ASCA result [@ezawa97], which is indicated by the dotted line. The abundance feature could not be resolved with the large point spread function of the ASCA X-ray mirror. The central abundance falls back to 0.7 solar with the single temperature fit in the 1–9 keV band, leaving the ring-like metallicity peak at $r=7$ kpc. A similar profile is observed in other cD clusters, such as the Centaurus cluster [@sf02], Abell 2199 [@johns02], and Abell 2052 [@blant03]. To take the projection effect into account, we also fitted the central spectrum with a two-temperature model with fixed parameters for the projected ICM component, as shown with the thick crosses. The resulting best-fit abundance is about 1.2 solar, which is still lower than the peak value. However, the errors now allow the curve to be flat, or a metal concentration at the center is allowed if one takes the result (1.6 solar) for the 2–9 keV fit. The deprojected abundance profile of M 87 shows the same behavior: it decreases toward the center in the whole-band fit, but increases toward the very center in the fit above 2 keV [@matsu02]. Unfortunately, the smaller core of AWM 7, with rather low surface brightness compared with those of Centaurus, M 87, etc., makes confirmation of the low abundance at the very center of AWM 7 difficult.
Two-dimensional Map {#sec:2dmap}
-------------------
Since the radial profile always suppresses fine angular structures, we studied the spectra in 4 azimuthal directions separately for the central $2'\times2'$ region in steps of $\Delta r=10''$. We fit the spectra with a single temperature model in the 1–9 keV band. Figure 7 shows the resulting 2-dimensional maps of temperature and abundance. The map shows that the temperature in the west half is almost isothermal with $kT = 3.0-3.5$ keV excluding the central region with $r < 20''$. The southeast region is hotter than the average by 1.0 keV, while the surface brightness distribution is very smooth and symmetric. The abundance map again shows the high metallicity regions in the northeast of the second ring and the southwest of the third ring. These two regions exactly correspond to the two blobs described in §\[sec:feblob\]. The best-fit abundances of the northeast and southwest blobs are $1.9\pm
0.5$ solar and $2.8^{+0.5}_{-0.9}$ solar, respectively, while the surrounding regions show 1.0–1.4 solar with typical errors of 0.2–0.4 solar. The abundance distribution is almost uniform with $Z=0.7-0.9$ solar in the four directions outside the Fe blobs.
DISCUSSION
==========
We have analyzed the [*Chandra*]{} data of the galaxy cluster AWM 7, which brought us the X-ray view of the cluster center with the best spatial resolution ever achieved. In particular, the 2–10 keV image showed interesting structures in the cD galaxy. We found extended emission with a fairly hard spectrum ($kT \gtrsim 3$ keV) at a position 6–7 kpc off from the cD center. This source has no optical counterpart. Other weaker blob-like features are also seen in the 2–10 keV image in the central region. Based on the Fe band intensity map and the hardness ratio map, we found two extended regions with strong Fe-K line emission almost symmetrically aligned with respect to the cD center. The directions of these Fe blobs from the cD center are perpendicular to that of the hard sub-peak. We derived radial profiles of the surface brightness, temperature, and metal abundance for the surrounding ICM within a radius of $2'$, which showed some X-ray features in common with other cD clusters. Below, we discuss the implications of these new and interesting results.
Hard Sub-Peak and Hot Blobs
---------------------------
The sub-peak recognized in the hard band image has a spatial extent of about 4 kpc if it belongs to AWM 7, and it is certainly more extended than the PSF. The X-ray luminosity at the distance of AWM 7 is estimated to be $1.2 \times 10^{40}$ erg s$^{-1}$, which is comparable to those of elliptical galaxies. Other fainter blobs which are suggested in the hard band image (Figure 2b) have lower luminosities by several factors. Here, we examine whether this hard sub-peak really belongs to AWM 7 or not. The observed properties of this source, i.e. extended X-ray emission with a temperature $\gtrsim 3$ keV (or a nonthermal spectrum) and the lack of an optical counterpart, make the possibility of a foreground or background object very unlikely. Possible extragalactic objects with extended emission are elliptical or starburst galaxies, or distant clusters, but the absence of an optical counterpart is a severe problem. Furthermore, the complex structure seen in the hard-band image strongly suggests that the whole feature is the emission of AWM 7 itself and that the hard sub-peak is simply the brightest blob among the other fainter ones. With these considerations, we may conclude that the sub-peak feature is a gas cloud residing in the core region of AWM 7. We note that a number of X-ray blobs with a similar size (3 kpc) have been found in the core region of the cluster 2A 0335+096 [@mazz03], in which the blob luminosities are several times $10^{41}$ erg s$^{-1}$. A notable difference is that the blob temperatures in 2A 0335+096 ($\sim 1.5$ keV) are similar to the surrounding ICM level, while in AWM 7 the blobs seem to be hotter or the emission is nonthermal.

Let us look into the nature of the hot blobs. If the emission is thermal, the temperature of the sub-peak is higher than 3 keV and the metal abundance is relatively low at $\lesssim 0.3$ solar. These properties are quite unique, and do not match the gas properties of either elliptical galaxies or the surrounding ICM. Since there is no radio emission observed at the cluster core, non-thermal pressure, such as that due to magnetic fields, is unlikely to be a major source of confinement for the hot gas in the sub-peak component. In that case, the hard sub-peak and the other fainter blobs may not be in pressure equilibrium with the surrounding gas. The sound crossing time to smooth out the density structure is $t_{\rm cross}=9\times 10^6$ yr, which is shorter than either the conduction time of $1.3\times10^7$ yr, or the cooling time of $t_{cool}=7\times10^8$ yr. If there is indeed no external pressure which stops the structure from dissipating away, these hot blobs have to have been created very recently, within 1–10 Myr. If a major part of the hot gas in the central region is moving, as suggested by the sharp-edge feature in Figure 2b, the blobs may be continuously created in the turbulence.
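For orientation, the sound-crossing estimate can be reproduced to order of magnitude with simple assumptions (an adiabatic sound speed at $kT \simeq 3$ keV, mean molecular weight $\mu=0.6$, and a length scale of $\sim 8$ kpc; these particular choices are illustrative and are not specified exactly in the text):

```python
# Order-of-magnitude estimate of the sound crossing time; the temperature,
# mean molecular weight and length scale below are illustrative assumptions.
import math

kT_keV = 3.0                 # assumed gas temperature
mu = 0.6                     # assumed mean molecular weight
mp_keV = 938272.0            # proton rest mass in keV/c^2
c_kms = 2.998e5

cs = c_kms * math.sqrt(5.0 / 3.0 * kT_keV / (mu * mp_keV))   # adiabatic sound speed, km/s
L_kpc = 8.0                                                  # assumed length scale
t_cross_yr = L_kpc * 3.086e16 / cs / 3.156e7
print(round(cs), '%.1e' % t_cross_yr)   # -> ~9e2 km/s and ~9e6 yr, close to the quoted value
```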
Metal Distribution
------------------
The projected radial profile of the metal abundance shows a steep gradient, reaching about 1.5 solar. The abundance may fall back at the center, and apparently creates a high metallicity ring at $r=10''-20''$ (5–10 kpc). The off-center peak in the abundance profile was found in some major cD cluster cores. The radius of the metallicity peak is, however, smaller in AWM 7 than those in other clusters, such as $r=15$ kpc for Centaurus [@sf02], $r=30$ kpc for Abell 2199 [@johns02], and $r=60$ kpc for Perseus [@sfs02]. @mf03 suggests that the central abundance begins to decrease with the thermal evolution of ICM with a non-uniform metal distribution, which can account for the smaller peak radius of the rather young cluster AWM 7.
Our 2-dimensional hardness analysis showed that the observed high metallicity peak was mainly caused by the two Fe blobs symmetrically located around the cD center. The angles of the Fe blobs from the cD center are perpendicular to the hard sub-peak direction. This configuration suggests that the high metallicity gas may have been expelled from the cD galaxy due to some pressure acting along the northwest to southeast direction. The hard sub-peak and the fainter one on the other side may be causing such a pressure, for example, by falling into the cD galaxy on both sides. If this is the case, the relatively higher temperature of the blobs can be explained as due to shock or compressional heating.
The total mass of Fe contained in the Fe blobs is estimated to be $2\times10^4\ M_{\odot}$. The star formation rate of NGC 1129 is $0.04
M_{\odot}$ yr$^{-1}$ within $r=1.6$ kpc [@mcoc89]. Assuming that 1 solar mass of Fe is injected into the ICM when a total of 100 solar masses of stars are formed, $M_{Fe}=4\times 10^5\ M_{\odot}$ has been produced within 1 Gyr. Only 5% of this total Fe mass is enough to account for the Fe blobs. Therefore, at least the amount of Fe in the Fe blobs can be explained by the metal production in the cD galaxy, even though the reason why metal injection occurs in a directional way is not clear.
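The iron budget quoted here is simple bookkeeping; a tiny illustrative check (the 1:100 Fe-to-stellar-mass yield is the assumption stated above):

```python
# Iron-budget bookkeeping for the Fe blobs.
sfr = 0.04            # star formation rate of NGC 1129, Msun/yr
fe_yield = 1.0 / 100  # assumed: 1 Msun of Fe per 100 Msun of stars formed
t_yr = 1.0e9          # integration time

M_fe_produced = sfr * fe_yield * t_yr     # Fe produced over 1 Gyr
M_fe_blobs = 2.0e4                        # Fe mass contained in the blobs, Msun
print(round(M_fe_produced), round(M_fe_blobs / M_fe_produced, 3))   # -> 400000 and 0.05 (5%)
```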
Gas Heating in the cD Galaxy
----------------------------
The radial temperature distribution shows a smooth and monotonic decrease from 4 keV to 2 keV at the cluster center. The temperature decrease toward the cluster core has been previously recognized as cooling flows, but the data from [*XMM-Newton*]{} showed that the low-temperature gas with $kT \lesssim 1$ keV is missing in many cD clusters. In AWM 7, the radiative cooling time at the center is estimated to be $t_{cool}=3\times 10^8$ yr, which is shorter than the cluster age. However, the central temperature seems to be maintained at 2 keV, which is about half of the gas temperature in the outer region. This situation is similar to the feature seen in many other cD cluster centers observed with [*Chandra*]{} and [*XMM-Newton*]{} [e.g. @peter02]. It seems that in AWM 7 also, the thermal structure in the cluster center is determined by a certain balance between radiative cooling and some heat source. We note that the same problem about the unknown heat source so far recognized in cooling flow systems is also met in this relatively poor cluster.
Several scenarios for the heat source in cluster cores have been proposed by various authors. One of the heating scenarios is to invoke radio jets from the central AGNs [@hans02]. In the case of AWM 7, however, no radio activity has been observed in NGC 1129, which makes AGN jets unlikely to be the heat source. Another possibility to suppress the cooling is heat input through thermal conduction from the surrounding hotter ICM gas. However, explaining the observed temperature distribution is found to require unrealistic fine tuning of the heat conductivity over a wide region [@kita03]. Therefore, heat conduction is also unlikely to be the heating mechanism.

In the present case, we consider that the gas motion in the central region may at least be related to the heat source. As shown earlier, there are blobs which are hotter than the gas in the cD center. We also discussed that these blobs may have been created by a possible bulk motion of the gas in the central region, as suggested by the morphology in the hard energy band. Although we do not know the detailed features and the exact cause of such a motion, it is enough to heat up a considerable fraction of the gas to $\gtrsim 3$ keV as seen in the sub-peak spectrum. If this energy is brought into the central cool gas, it would certainly be able to compensate for the energy carried away by the radiative cooling.
The position of the soft X-ray peak shows a significant offset by $1
\pm 0.2$ kpc from the 2MASS position of the cD galaxy center, whereas the X-ray peak in the hard band suggests a smaller offset by $0.5 \pm
0.8$ kpc. The offset between the soft X-ray and the 2MASS peaks suggests that there is a relative motion between the hot gas and the galaxy core. In the optical band, there is a chain of four small galaxies extending in the southwest direction, so the galaxy distribution is not symmetric around the cluster core. @pele90 reported that there is a large twist in the position angle of NGC 1129 at a radius of $10''-20''$ in the optical band. They suggest that this twist could be a result of a galaxy merger. With these optical features as well as the offset of the soft X-ray peak, we may consider that the central region of AWM 7 is fairly dynamic and that it may well be connected with the unidentified heat source.

Since ASCA showed a Mpc-scale abundance gradient and an isothermal gas distribution, it has been thought that AWM 7 has experienced neither cluster-scale mergers nor significant mixing. The present [*Chandra*]{} data revealed kpc-scale structures and suggested that the central region can be dynamic. Further direct information about the dynamical motion of the gas blobs (the hard sub-peak and the Fe blobs) will give us a clearer view of the actual physical process occurring in the cluster core. X-ray imaging spectroscopy facilitated by a new technique, such as the microcalorimeters on Astro-E2, will bring us rich information about the evolution of cD galaxies and clusters.
We would like to thank Dr. Y. Ikebe and the referee for useful comments. T. F. is supported by the Japan Society for the Promotion of Science (JSPS) Postdoctoral Fellowships for Research Abroad. This work was partly supported by the Grants-in-Aid for Scientific Research No. 12304009 from JSPS. The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA, and the SIMBAD database is operated by the Centre de Données astronomiques de Strasbourg.
Allen, S.W., Schmidt, R.W., & Fabian, A.C. 2001, MNRAS, 328, L37 Anders, E., & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197 Bacon, R., Monnet, G., & Simien, F. 1985, A&A, 152, 315 Blanton, E.L., Sarazin, C.L., McNamara, B.R. 2003, ApJ, 585, 227 Böhringer H., Matsushita K., Churazov E., Ikebe Y., Chen Y., 2002, A&A, 382, 804 Burns, J.O., White, R.A., Hanisch, R.J. 1980, AJ, 85, 191 David, L., Nulsen, P.E.J., McNamara, B.R., Forman, W., Jones, C., Robertson. B., Wise. M., 2001, ApJ, 557, 546 Ezawa, H., et al. 1997, ApJ, 490, L33 Fabian A.C., et al. 2000, MNRAS, 318, L65 Fabian A.C., Mushotzky R.F., Nulsen P.E.J., Peterson J., 2001, MNRAS, 321, L20 Fujita, Y., Tosaki, T., Nakamichi, A., Kuno, N. 2000, PASJ, 52, 235 Furusho, T., et al. 2001, PASJ, 53, 421 Ikebe, Y. 2001 (astro-ph/0112132) Johnstone, R.M., Allen, S.W., Fabian, A.C., Sanders, J.S. 2002, MNRAS, 336, 299 Kitayama, T., & Masai, K. in preparation Koranyi, D.M., & Geller, M.J. 2000, AJ, 119, 44 Malina, R., Lampton, M., & Bowyer, S. 1976, ApJ, 209, 678 Matsushita, K., Belsole, E., Finoguenov, A., Bohringer, H. 2002, A&A, 386, 77 Mazzotta, P., Edge, A., Markevitch, M. 2003, ApJ submitted (astro-ph/0303314) McNamara B.R. & O’Connell, R.W. 1989, AJ, 98, 2018 McNamara B. et al. 2000, ApJ, 534, L135 Morris, R.G., & Fabian, A.C. 2003, MNRAS, 338, 824 Neumann, D.M., & Böhringer, H. 1995, A&A, 301, 865 Peletier et al. 1990, AJ, 100, 1091 Peterson, J.R., Kahn, S.M., Paerels, F.B.S., et al. 2002, ApJ submitted (astro-ph/0210662) Sanders, J.S. & Fabian, A.C., 2002, MNRAS, 331, 273 Schmidt, R.W., Fabian, A.C., & Sanders, J.S. 2002, MNRAS, 337, 71
----------------------- ------------------ ------------------------ ------------------------ -------------- -------------------------
region offset$^\dagger$ $kT$ $Z$ $\chi^2$/dof $L_{\rm X}$ $^\ddagger$
(kpc) (keV) (solar) (erg/s)
soft peak 2.0 2.02$^{+0.12}_{-0.14}$ 0.89$^{+0.28}_{-0.24}$ 100/81 $1.2\times 10^{41}$
hard peak (cD center) 0.8 1.79$^{+0.19}_{-0.06}$ 0.68$^{+0.16}_{-0.14}$ 76/81 $1.4\times 10^{41}$
hard sub-peak 6.4 2.94$^{+0.35}_{-0.19}$ 0.34$^{+0.18}_{-0.15}$ 104/77 $1.2\times 10^{40}$
----------------------- ------------------ ------------------------ ------------------------ -------------- -------------------------
: Best-fit parameters of the spectral fit for the 3 selected regions
$^\dagger$ Distance from the center of the cD galaxy.
$^\ddagger$ X-ray luminosity in the 0.5–10 keV band assuming that the distance is 104 Mpc.
[^1]: http://www.astro.psu.edu/users/townsley/cti/install.html
[^2]: http://asc.harvard.edu/cal
[^3]: http://legacy.gsfc.nasa.gov/docs/xanadu/xspec/models/acisabs.html
---
abstract: 'We systematically investigate the perpendicular magnetocrystalline anisotropy (MCA) in Co$-$Pt/Pd-based multilayers. Our magnetic measurements show that the asymmetric Co/Pd/Pt multilayer has a significantly larger perpendicular magnetic anisotropy (PMA) energy compared to the symmetric Co/Pt and Co/Pd multilayer samples. We further support this experiment by first-principles calculations on CoPt$_2$, CoPd$_2$, and CoPtPd, composite bulk materials that consist of three atomic layers per unit cell, Pt/Co/Pt, Pd/Co/Pd, and Pt/Co/Pd, respectively. By estimating the contribution of bulk spin-momentum coupling to the MCA energy, we show that the CoPtPd multilayer with the symmetry breaking has a significantly larger PMA energy than the other multilayers that are otherwise similar but lack the symmetry breaking. This observation thus provides evidence of the PMA enhancement due to the structural inversion symmetry breaking and highlights the asymmetric CoPtPd as the first artificial magnetic material with bulk spin-momentum coupling, which opens a new pathway toward the design of materials with strong PMA.'
author:
- 'Abdul-Muizz Pradipto'
- Kay Yakushiji
- Woo Seung Ham
- Sanghoon Kim
- Yoichi Shiota
- Takahiro Moriyama
- 'Kyoung-Whan Kim'
- 'Hyun-Woo Lee'
- Kohji Nakamura
- 'Kyung-Jin Lee'
- Teruo Ono
title: 'Enhanced perpendicular magnetocrystalline anisotropy energy in an artificial magnetic material with bulk spin-momentum coupling'
---
The interplay between magnetism, electronic structure and spin-orbit coupling (SOC) in materials has led to the possibility to utilize both the charge and spin degrees of freedom of electrons for spintronic devices [@Wolf:2001; @Chappert:2007; @Bader:2010; @Ohno:2010]. Among the most important effects emerging from the SOC is the magnetocrystalline anisotropy (MCA), resulting in a preferred magnetization direction with respect to the crystallographic structure of materials [@Falicov:1990; @Daalderop:1992]. For the practical design of devices, perpendicular magnetic anisotropy (PMA) with respect to surface/interface planes is strongly desired [@Ikeda:2010; @Chiba:2012; @Dieny:2017], as manipulation of the magnetic moment directions and/or the magnetic domain walls can be done more efficiently [@Mangin:2006; @Sinha:2013]. The origin of MCA has initially been attributed to the orbital localization as a result of reduced dimensionality [@Neel:1954; @Bruno:1989b]. It was also shown that the MCA is strongly related to the SOC of the electronic states near the Fermi level [@Nakamura:2009]. Consequently, manipulation of the band structure around the Fermi level provides a natural way to tune the MCA. This can be achieved by the modification of the orbital occupation, for example via the application of an external gate voltage [@Maruyama:2009; @Niranjan:2010], or by direct chemical manipulations of the band structure. The latter is commonly done by doping or impurities [@Besser:1967; @Khan:2017] or by engineering the materials interfaces [@Nakamura:2010; @Nakamura:2017]. Another milestone in understanding the MCA comes from the proportionality of MCA and the anisotropy of SOC-induced orbital magnetic moment, which was proposed by Bruno [@Bruno:1989], and has been confirmed in different materials [@Weller:1994; @Weller:1995; @Stohr:1999].
Another mechanism of MCA, which depends on broken inversion symmetry, has been proposed [@Barnes:2014; @Kim:2016]. In systems with broken inversion symmetry, the SOC becomes odd in the momentum $\vec{k}$ space [@Grytsyuk:2016]. The oddness of the SOC in $\vec{k}$ space is visible, for instance, from the Rashba model of SOC [@Bychkov:1984], which has the form $\mathcal{H}_R=\alpha_R\left(\vec{k}\times\hat{z}\right)\cdot\vec{\sigma}$, linear in $\vec{k}$. Here $\alpha_R$ is the Rashba parameter, and $z$ is the direction of the potential gradient induced by the inversion symmetry breaking. We note that the broken inversion symmetry results not only in the linear-in-$k$ contribution but also in higher-odd-order contributions. From now on, we use the term “Rashba” to describe all odd-order-in-$k$ contributions for simplicity. Since the Rashba interaction only develops in non-centrosymmetric systems, the strength of the Rashba parameter can provide an indication of the degree of structural inversion symmetry breaking. Although it was originally proposed for nonmagnetic materials [@Bychkov:1984; @Picozzi:2014; @Manchon:2015], Rashba-type spin splitting was later observed also in magnetic systems, such as Gd(0001) grown on an oxide substrate [@Krupin:2005]. Recently, the MCA has been analyzed using the Rashba model of SOC [@Barnes:2014; @Kim:2016], and it was shown that the MCA changes with increasing Rashba parameter strength, i.e., the more asymmetric the system is, the stronger the MCA it develops. Such a description is very insightful for the understanding of MCA; however, its verification is still lacking.
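As an illustration of the odd-in-$k$ character discussed here, a minimal numerical sketch (ours, not from the cited works) diagonalizes the two-band Rashba Hamiltonian at $\pm\vec{k}$:

```python
# Minimal illustration of H_R = alpha_R (k x z_hat).sigma: the eigenvalues are
# +/- alpha*|k| at both k and -k, but the spin expectation values of the two
# branches swap sign under k -> -k, i.e. the spin-momentum locking is odd in k.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def h_rashba(kx, ky, alpha=1.0):
    # (k x z_hat) = (ky, -kx, 0)
    return alpha * (ky * sx - kx * sy)

k = np.array([0.3, 0.1])
for kvec in (k, -k):
    evals, evecs = np.linalg.eigh(h_rashba(*kvec))
    spin_y = [float(np.real(v.conj() @ sy @ v)) for v in evecs.T]
    print(kvec, np.round(evals, 3), np.round(spin_y, 3))
```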
  ---------------------------------------------------------------------
  System                                  CoPt$_2$   CoPd$_2$   CoPtPd
  ---------------------------------------------------------------------
  $E_{\rm MCA}$ (SCF)                      0.028      0.027      0.043
  $k_y$-dependent $E_{\rm MCA}$:
  $E_{\rm MCA}^{+k_y}$                     0.015      0.013     -0.072
  $E_{\rm MCA}^{-k_y}$                     0.015      0.013      0.113
  $E_{\rm MCA}^{\rm even}$ (from Eq. )     0.030      0.026      0.041
  $E_{\rm MCA}^{\rm odd}$ (from Eq. )      0.000      0.000      0.185
  ---------------------------------------------------------------------

  : Calculated MCA energies (in meV/Å$^3$) for the three model systems. \[tabSummary\]
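The even and odd entries in the table are consistent with being, respectively, the sum and the difference of the $\pm k_y$ contributions; a minimal check (assuming these definitions, which correspond to the equations referenced in the table but not reproduced in this excerpt) reads:

```python
# Consistency check of the tabulated even/odd decomposition, assuming
# E_even = E(+ky) + E(-ky) and E_odd = E(-ky) - E(+ky); values in meV/A^3.
systems = {
    'CoPt2':  (0.015,  0.015),
    'CoPd2':  (0.013,  0.013),
    'CoPtPd': (-0.072, 0.113),
}
for name, (e_plus, e_minus) in systems.items():
    print(f"{name}: even = {e_plus + e_minus:.3f}, odd = {e_minus - e_plus:.3f}")
# -> CoPt2: 0.030, 0.000 | CoPd2: 0.026, 0.000 | CoPtPd: 0.041, 0.185
```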
We report in this work our analysis of the effects of asymmetric stacking in Co$-$Pt/Pd-based multilayer systems, combining experimental magnetic measurements and first-principles calculations. Co/Pt- and Co/Pd-based layered structures are well known to exhibit strong PMA of about 3$-$10 Merg/cm$^3$ [@Mangin:2006; @Garcia:1989; @Canedy:2000; @Meng:2006; @Yakushiji:2010; @Kato:2012]. We show that the broken inversion symmetry plays a significant role in the appearance of PMA in this material. These results present evidence for PMA driven by structural asymmetry, as suggested in previous theoretical analyses [@Barnes:2014; @Kim:2016], and may provide a useful guide in the design of materials with strong PMA.
![(a) $M-H$ loops of \[Co(0.2 nm)/Pt(0.2 nm)\]$_{10}$, \[Co(0.2 nm)/Pd(0.2 nm)\]$_{10}$ and \[Co(0.2 nm)/Pd(0.2 nm)/Pt(0.2 nm)\]$_{10}$ films. The op and ip denote $M-H$ loops with out-of-plane and in-plane magnetic fields, respectively. (b) High-resolution TEM image for the full stack of the \[Co(0.2 nm)/Pd(0.2 nm)/Pt(0.2 nm)\]$_{10}$ film.[]{data-label="Expt"}](Fig_Expt){width="1.0\columnwidth"}
In order to study the effect of the symmetry of the stacking structure on the magnetic behavior, three variations of the multilayer structure were fabricated: \[Co(0.2 nm)/Pt(0.2 nm)\]$_{10}$, \[Co(0.2 nm)/Pd(0.2 nm)\]$_{10}$ and \[Co(0.2 nm)/Pd(0.2 nm)/Pt(0.2 nm)\]$_{10}$. The films were fabricated with a sputtering apparatus (Canon-Anelva C-7100) on thermally oxidized Si substrates at room temperature. A Ta(4 nm)/Ru(1 nm)/Pt(0.5 nm) seed/buffer layer was first deposited on the substrate. Then the Co-based multilayer was grown by alternate deposition of Co, Pd and Pt at room temperature [@Yakushiji:2010]. Fig. \[Expt\]a shows the $M-H$ (magnetization vs. magnetic field) loops measured at room temperature. All samples display typical perpendicularly magnetized behavior, namely sharp reversals in the out-of-plane (op) loops and gradual saturation in the in-plane (ip) loops with a substantial perpendicular magnetic anisotropy field ($H_k$). The effective PMA ($K_{\rm eff} = H_kM_s/2$), derived from $H_k$ and the saturation magnetization ($M_s$), was estimated to be 5.7, 4.8 and 10.8 Merg/cm$^3$ for \[Co(0.2 nm)/Pt(0.2 nm)\]$_{10}$, \[Co(0.2 nm)/Pd(0.2 nm)\]$_{10}$ and \[Co(0.2 nm)/Pd(0.2 nm)/Pt(0.2 nm)\]$_{10}$, respectively. The asymmetric stacking, the Co/Pd/Pt multilayer, thus exhibits a $K_{\rm eff}$ roughly twice as large as that of the symmetric stackings, the Co/Pt and Co/Pd multilayers. Fig. \[Expt\]b shows the cross-sectional high-resolution transmission electron microscopy image for the full stack of the \[Co(0.2 nm)/Pd(0.2 nm)/Pt(0.2 nm)\]$_{10}$ multilayer. It suggests that an fcc(111)-oriented Co/Pd/Pt multilayer was formed on the hcp-c-plane-oriented Ru buffer layer with flat interfaces. Larger $K_{\rm eff}$ values of the asymmetric samples with respect to those of the symmetric ones are consistently observed in various samples with different thicknesses; see the Supplemental Material [@Supple].
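As a small numerical aside (not part of the reported measurements), the conversion from an anisotropy field and a saturation magnetization to $K_{\rm eff}$ via $K_{\rm eff}=H_kM_s/2$ can be sketched as follows; the $H_k$ and $M_s$ values below are hypothetical placeholders, not the measured parameters of these films.

```python
# Hypothetical inputs (placeholders, not the measured film parameters):
H_k = 16.0e3   # anisotropy field in Oe
M_s = 700.0    # saturation magnetization in emu/cm^3

# 1 Oe * 1 emu/cm^3 = 1 erg/cm^3, so K_eff comes out directly in erg/cm^3.
K_eff = H_k * M_s / 2.0
print(f"K_eff = {K_eff / 1e6:.1f} Merg/cm^3")   # 5.6 Merg/cm^3 for these inputs
```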
We now turn to our first-principles calculations, based on the Density Functional Theory (DFT) approach, for the CoPt$_2$, CoPd$_2$, and CoPtPd model systems in order to understand the origin of this behavior (see the Supplemental Material [@Supple] for the details of the calculations). The structures resemble that of the nonmagnetic noncentrosymmetric semiconductor BiTeI [@Ishizaka:2011], which exhibits bulk Rashba spin-momentum coupling. The calculated $E_{\rm MCA}$ per unit volume is summarized in the first row of Table \[tabSummary\]. The positive $E_{\rm MCA}$ clearly shows that all systems possess PMA. Furthermore, while the CoPt$_2$ and CoPd$_2$ bulk systems have comparable MCA energies of around 0.027$-$0.028 meV/Å$^3$, we find a stronger MCA energy of more than 0.040 meV/Å$^{3}$ for the CoPtPd system, in qualitative agreement with our experiment. If the MCA arose entirely from the individual interfaces, the expected MCA for CoPtPd would be $0.028/2+0.027/2=0.0275$ meV/Å$^{3}$, considering that there are two types of interfaces, Co/Pt and Co/Pd. This value is smaller than the calculated value of 0.043 meV/Å$^{3}$ (see Table \[tabSummary\]) by a sizable 0.0155 meV/Å$^{3}$. A contribution apart from the interfacial effect should therefore play a role in the larger MCA of the CoPtPd system compared to the other cases, and it may be related to one of the following scenarios: (i) the difference in the number of valence electrons in the unit cells [@Daalderop:1992]; (ii) the difference in the SOC strength [@Stohr:1999]; (iii) the anisotropy of the orbital magnetic moment, as proposed by Bruno [@Bruno:1989]; (iv) the different orbital hybridization that occurs in the considered systems [@Weller:1994]; and/or finally (v) the structural inversion symmetry breaking along $z$ [@Barnes:2014; @Kim:2016] that leads to the bulk Rashba-type splitting [@Ishizaka:2011] in the CoPtPd case, in contrast to the other two systems.
The trivial scenarios (i) and (ii) can immediately be ruled out, as discussed in the Supplemental Material [@Supple]: the total number of valence electrons per unit cell is the same (29) in all systems, and CoPtPd contains Pd and yet has a larger PMA than CoPt$_2$. Additionally, the anisotropy of the orbital magnetic moment, induced as a direct consequence of SOC, can be considered by defining $\mu_{\rm orb}^{\rm anis}=\mu_{\rm orb}^{\rm ip}-\mu_{\rm orb}^{\rm op}$. From our fully self-consistent SOC calculations, we obtain $\mu_{\rm orb}^{\rm anis}=-0.025$ $\mu_B$ for CoPt$_2$, which is larger in magnitude than the $-0.023$ $\mu_B$ obtained for CoPtPd. A larger MCA therefore does not trivially correspond to a larger orbital moment anisotropy in the CoPtPd system, making scenario (iii) inapplicable. Indeed, it has been pointed out that extra care should be taken when considering the Bruno model for systems with strong spin-orbit coupling, such as those containing $5d$ transition metal elements [@Andersson:2007].
To assess the relevance of scenario (iv), we calculated the Densities of States (DOS) of these systems, as shown in Fig. S1 of the Supplemental Material [@Supple]. The calculated magnetic moments in all systems are found to be more than 2.0 $\mu_B$ for Co and around 0.3 $\mu_B$ for both Pt and Pd; hence the MCA should be driven by Co. We additionally calculate the relative contribution of each atomic layer to the PMA by artificially switching the SOC of the atoms on and off. When the SOC of Co is switched off, the MCA vanishes in all considered cases, confirming the crucial role of the Co moment in driving the MCA. According to Table \[tabSummary\], however, the CoPt$_2$ and CoPd$_2$ systems give comparable perpendicular MCA despite the non-negligible difference in the Co$-d$ bandwidth between the two cases. This observation suggests that the role of the Co$-$Pt and Co$-$Pd orbital hybridizations in driving the PMA, as implied by scenario (iv), is not significant. The remaining scenario which might explain the origin of the enhanced MCA in CoPtPd is therefore the structural inversion symmetry breaking along $z$ due to the presence of both Pt $and$ Pd layers sandwiching the Co layer, as suggested by scenario (v).
![Band structure along the $k_x=k_z=0$, i.e. along the (0,$-\frac{1}{2}$,0)$\rightarrow$(0,$\frac{1}{2}$,0) path, without spin-orbit coupling, (a) and (b), and with SOC, (c) and (d); and the two dimensional ($k_x,k_y$) in-plane Fermi surface, i.e. at $k_z=0$, together with the contour map of $k-$dependent MCA energy within this surface, (e) and (f). Left and right panels show the plots for CoPt$_2$ and CoPtPd, respectively. The m$_{\rm +x}$, m$_{\rm -x}$, and m$_{\rm z}$ labels indicate the magnetization along respectively the $+x$, $-x$, and $z$ directions. Likewise, E(m$_{\alpha}$) indicates the energy of $\alpha$ direction-imposed magnetization.[]{data-label="map2D"}](Fig_map){width="1.0\columnwidth"}
As in the case of nonmagnetic systems [@Bychkov:1984; @Picozzi:2014; @Manchon:2015], the broken inversion symmetry in magnetic materials also results in a band splitting in the presence of SOC, as described by the Rashba model. This effect has previously been demonstrated on Gd(0001) magnetic surfaces [@Krupin:2005]. We note, however, that the Rashba effect manifests itself differently in nonmagnetic and magnetic systems, which can be summarized as follows [@Krupin:2005]: In nonmagnetic systems, the presence of the Rashba interaction splits the otherwise degenerate up- and down-spin states. In ferromagnetic systems, the degeneracy of the up- and down-spin states is already lifted, and the Rashba interaction enhances or reduces the splitting depending on the sign of $\vec{k}\cdot(\hat{z}\times\vec{m})$. The resulting splitting is thus asymmetric with respect to $\vec{k}$, and the sign of the asymmetry is reversed when $\vec{m}$ is reversed. The asymmetric splitting is most pronounced along the $\vec{k}$ direction parallel to $\hat{z}\times\vec{m}$.
Therefore, in order to assess the role of structural asymmetry in the present model systems, we first visualize the spin splitting by following the prescription introduced in previous works [@Krupin:2005; @Park:2013; @Grytsyuk:2016]. We choose two different directions of the in-plane magnetization, i.e. along the $+x$ and $-x$ directions. The results are presented in Fig. \[map2D\] for the CoPt$_2$ system, representing the symmetric case, and for CoPtPd, in which the inversion symmetry is broken. The band structures are presented in Figs. \[map2D\]a-\[map2D\]d along the ($k_x,k_y,k_z$)=(0,$-\frac{1}{2}$,0)$\rightarrow$(0,$\frac{1}{2}$,0) path. Near the Fermi level, the band structures are mainly dominated by the Co minority-spin states, as shown in Figs. \[map2D\]a and \[map2D\]b for the case without SOC. The majority-spin bands, on the other hand, are largely dispersive and dominated by the Pt and Pd bands. When the SOC is switched on, the characteristic Rashba-type splitting for a ferromagnetic system [@Krupin:2005] emerges in the asymmetric CoPtPd case (Fig. \[map2D\]d), but is absent in the band structure of the CoPt$_2$ case (Fig. \[map2D\]c).
The Rashba-type splitting also manifests itself in a distortion of the Fermi contour (Figs. \[map2D\]e and \[map2D\]f). The Fermi surface of CoPt$_2$ does not show any shift in the $(k_x,k_y)$ plane for either in-plane magnetization direction. The Fermi surface of CoPtPd, on the other hand, is shifted towards the positive $k_y$ direction when the magnetization is oriented along the $+x$ direction, as indicated by the red arrow in Fig. \[map2D\]f. Additionally, switching the magnetization to the opposite, $-x$, direction gives a Fermi contour that is mirror symmetric along $k_y$ [@Grytsyuk:2016], i.e. $$\label{k-splitting}
E(m_{+x},+k_y)=E(m_{-x},-k_y),$$ as indicated by the blue arrow in Fig. \[map2D\]f. Such mirror symmetry is absent along $k_x$ for this particular magnetization direction, as pointed out by Grytsyuk *et al.* [@Grytsyuk:2016]. The two-dimensional contour map of $E_{\rm MCA}$ within the ($k_x,k_y$) plane is fully symmetric along $k_y$ for CoPt$_2$, in contrast to the nonsymmetric CoPtPd case.
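The symmetry relation in Eq. (\[k-splitting\]) can be illustrated with a minimal two-band toy model (our own construction for illustration only, not the DFT Hamiltonian): a parabolic band with a Rashba term and an exchange coupling $J\,\vec{m}\cdot\vec{\sigma}$, using arbitrary placeholder parameters.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bands(kx, ky, m, alpha_R=0.3, J=1.0, inv_2m=1.0):
    """Toy model: kinetic term + Rashba term + exchange coupling to magnetization m."""
    H = (inv_2m * (kx**2 + ky**2) * np.eye(2)
         + alpha_R * (ky * sx - kx * sy)
         + J * (m[0] * sx + m[1] * sy + m[2] * sz))
    return np.linalg.eigvalsh(H)

m_plus_x, m_minus_x = np.array([1, 0, 0]), np.array([-1, 0, 0])
for ky in (0.2, 0.5, 0.8):
    lhs = bands(0.0, +ky, m_plus_x)
    rhs = bands(0.0, -ky, m_minus_x)
    print(ky, np.allclose(lhs, rhs))   # True: E(m_+x, +k_y) = E(m_-x, -k_y)
```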
We can proceed further by defining $E_{\rm MCA}$ as $E(m_{+x})-E(m_z)$ or $E(m_{-x})-E(m_z)$. Both definitions give the same total MCA energy, as reported in Table \[tabSummary\], when the evaluation is done within the whole Brillouin zone. However, the $k$-dependent $E_{\rm MCA}$ can provide additional insight, since it can be decomposed into $E_{\rm MCA}^{+k_y}$ and $E_{\rm MCA}^{-k_y}$, in which $E_{\rm MCA}^{\pm k_y}=E(m_x,\pm k_y)-E(m_z,\pm k_y)$. By integrating the $E_{\rm MCA}^{\pm k_y}$ within half of the Brillouin zone, i.e. within all of $(k_x,k_z)$ space for $+k_y$ and $-k_y$, respectively, we obtain the results reported in Table \[tabSummary\]. In this regard, one can define an even contribution $$\label{even}
E_{\rm MCA}^{\rm even}=E_{\rm MCA}^{+k_y}+E_{\rm MCA}^{-k_y},$$ which is exactly the $E_{\rm MCA}$ within the whole Brillouin zone. The estimate of this contribution for each model system is also shown in Table \[tabSummary\]. The small difference between $E_{\rm MCA}$ (SCF) and $E_{\rm MCA}^{\rm even}$ comes from the different SOC treatment in the two cases, since the evaluation of $E_{\rm MCA}^{+k_y}$ and $E_{\rm MCA}^{-k_y}$ cannot be done self-consistently. Interestingly, another quantity, $$\label{odd}
E_{\rm MCA}^{\rm odd}=E_{\rm MCA}^{+k_y}-E_{\rm MCA}^{-k_y},$$ can also be defined. This quantity provides an estimate of the shift of the Fermi surface, and thus of the degree of inversion symmetry breaking. As summarized in the last row of Table \[tabSummary\], $E_{\rm MCA}^{\rm odd}$ is zero for both symmetric systems, CoPt$_2$ and CoPd$_2$. The $E_{\rm MCA}^{\rm odd}$ of CoPtPd is, on the other hand, very large, implying that the system is highly nonsymmetric.
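As a schematic illustration of Eqs. (\[even\]) and (\[odd\]) (our own sketch, using a synthetic $k$-resolved MCA map rather than the DFT data), the even and odd parts are obtained by summing the $k$-resolved anisotropy energy over the $+k_y$ and $-k_y$ halves of the zone:

```python
import numpy as np

# Synthetic k-resolved MCA energy on a (k_x, k_y) grid; the part odd in k_y
# mimics the Fermi-surface shift of the inversion-asymmetric (CoPtPd-like) case.
kx, ky = np.meshgrid(np.linspace(-0.5, 0.5, 101), np.linspace(-0.5, 0.5, 101),
                     indexing="ij")
e_mca_k = 0.04 * np.exp(-(kx**2 + ky**2) / 0.1) + 0.08 * ky * np.exp(-ky**2 / 0.05)

dA = (kx[1, 0] - kx[0, 0]) * (ky[0, 1] - ky[0, 0])    # k-space area element
E_plus  = e_mca_k[:, ky[0] > 0].sum() * dA            # sum over the +k_y half
E_minus = e_mca_k[:, ky[0] < 0].sum() * dA            # sum over the -k_y half

print("E_even =", E_plus + E_minus)   # total MCA, cf. Eq. (even)
print("E_odd  =", E_plus - E_minus)   # asymmetry measure, cf. Eq. (odd)
```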
To gain additional insight into the role of the broken inversion symmetry, one may use a simple argument in which the MCA energy is expressed in second-order perturbation theory as $E_{\rm MCA}=\frac{\lambda}{\varepsilon_{\rm uo}}$, where $\varepsilon_{\rm uo}$ denotes the energy gap between the occupied and unoccupied states, while $\lambda$ contains the spin-orbit interaction between these states and depends on the SOC coupling constant. The contributions $E_{\rm MCA}^{+k_y}$ and $E_{\rm MCA}^{-k_y}$ can then be written as $E_{\rm MCA}^{+k_y}=\frac{1}{2}\frac{\lambda}{\varepsilon_{\rm uo}-\Delta\varepsilon}$ and $E_{\rm MCA}^{-k_y}=\frac{1}{2}\frac{\lambda}{\varepsilon_{\rm uo}+\Delta\varepsilon}$, in which $\pm\Delta\varepsilon$ denotes the widening or narrowing of the energy gap due to the departure from the symmetric band structure, and hence is absent ($\Delta\varepsilon=0$) in the symmetric cases (Fig. \[map2D\]c). The $\pm\Delta\varepsilon$ is therefore only present in the asymmetric systems; in the case of CoPtPd (Fig. \[map2D\]d), $\pm\Delta\varepsilon$ describes, for instance, the gap widening and narrowing at $k_y=-0.2$ and $k_y=0.2$. The total contribution is nothing but the $E_{\rm MCA}^{\rm even}$ of Eq. (\[even\]), as summarized in Table \[tabSummary\]. In the cases where broken inversion symmetry is present ($\Delta\varepsilon\neq0$), the total MCA energy is given by $$E_{\rm MCA}^{\rm even}\approx\left(\frac{\lambda}{\varepsilon_{\rm uo}}\right)\frac{1}{1-(\frac{\Delta\varepsilon}{\varepsilon_{\rm uo}})^2}\cdot
\label{MCA_asym}$$ Since $(\frac{\Delta\varepsilon}{\varepsilon_{\rm uo}})^2$ is always positive, Eq. (\[MCA\_asym\]) indicates that the presence of $\Delta\varepsilon$ due to the asymmetry leads to an increase of the MCA. In other words, the enhancement of $E_{\rm MCA}^{\rm even}$ due to the inversion symmetry breaking is given by $\Delta E_{\rm MCA}^{\rm even}=\left(\frac{\Delta\varepsilon}{\varepsilon_{\rm uo}}\right)^2 E_{\rm MCA}^{\rm even}$, showing that such a modification occurs only in asymmetric systems and thus capturing the enhancement of the MCA energy due to the spin-momentum coupling, as proposed recently [@Barnes:2014; @Kim:2016].
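The step to Eq. (\[MCA\_asym\]) is a simple algebraic identity; a short symbolic check (illustrative only, not part of the original derivation) confirms it:

```python
import sympy as sp

lam, eps, deps = sp.symbols("lambda epsilon Delta", positive=True)

E_plus  = sp.Rational(1, 2) * lam / (eps - deps)   # gap narrowed by Delta
E_minus = sp.Rational(1, 2) * lam / (eps + deps)   # gap widened by Delta

total   = sp.simplify(E_plus + E_minus)
claimed = (lam / eps) / (1 - (deps / eps)**2)

print(sp.simplify(total - claimed))    # 0: the two expressions agree
# Difference relative to the symmetric case lam/eps equals (Delta/eps)^2 * total,
# i.e. an enhancement as long as Delta < eps:
print(sp.simplify(total - lam / eps))
```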
At this stage, it is appealing to consider the origin of the Rashba-type splitting in CoPtPd. Returning to the band structure of CoPtPd (with SOC, Fig. \[map2D\]d), the splitting can be seen, for instance, near the Fermi level around $k_y=\pm 0.2$ and at an energy of $-1$ eV around the $\Gamma$ point. This splitting is likely to be induced by the Pt and Pd states. Around the Fermi level, for example, the Rashba-type splitting at around $k_y=\pm 0.2$ is clearly dominated by the states indicated by the red broken rectangle in Fig. \[map2D\]b. We further switched off the SOC of Co and obtained an $E_{\rm MCA}^{\rm odd}$ of 0.193 meV/Å$^{3}$, closely resembling the $E_{\rm MCA}^{\rm odd}$ in Table \[tabSummary\]. Additionally, $E_{\rm MCA}^{\rm even}$ vanishes, showing the significance of the SOC of Co for the MCA. On the other hand, when only the SOC of Co is maintained while the SOC of both Pt and Pd is switched off, the $E_{\rm MCA}^{\rm odd}$ diminishes to $-0.010$ meV/Å$^{3}$, confirming the crucial role of the SOC of Pt and Pd in driving the Rashba-type splitting. Interestingly, the $E_{\rm MCA}^{\rm even}$ also becomes practically zero in the latter case, implying that, although Co carries the magnetic moment, the SOC of Co alone is not sufficient to induce a large MCA.
In real materials, interlayer mixing is likely to occur during the growth process. Such mixing can influence the MCA, especially for multilayer systems with small monolayer thickness, as in our case. We have therefore performed calculations to estimate the effect of the intermixing (see the Supplemental Material [@Supple]). We found that, although the calculated MCA is quantitatively altered by the intermixing, the perpendicular MCA of CoPtPd is consistently larger than that of the CoPt$_2$ case, indicating that the effect of inversion symmetry breaking on the MCA remains effective in multilayers with interlayer mixing, although the detailed effects of the structural disorder on the MCA require further study.
Finally, we note the similarity between the structure of the CoPtPd system in the present work and that of the nonmagnetic noncentrosymmetric semiconductor BiTeI [@Ishizaka:2011]. The crystal structure of BiTeI consists of a triangular network of single layers of Bi, Te, and I, and a giant bulk Rashba-type splitting has recently been observed in this system [@Ishizaka:2011]. Our work thus highlights the presence of a bulk Rashba-type effect due to broken inversion symmetry in magnetic materials, as well as a possible direct consequence in the form of an enhanced MCA.
In summary, from our combined first-principles calculations and experimental results, we provide the first evidence for an asymmetric-structure-driven enhancement of PMA in transition-metal multilayers. In agreement with the previous prediction of MCA modulation due to a change in the Rashba parameter [@Kim:2016], we show that the breaking of inversion symmetry leads not only to a modification but also to an enhancement of the PMA. While the PMA is realized through Co, which carries the magnetic moments, the breaking of inversion symmetry can significantly enhance the perpendicular MCA strength due to the presence of Pt and Pd with strong SOC. This work may provide a guideline for the design of materials with strong PMA, and it also suggests investigating possible enhancements of other SOC-related properties, such as the anomalous and spin Hall effects, due to broken structural inversion symmetry.
A.-M. P. was supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant No. 15H05702. H.-W. L. acknowledges financial support from the National Research Foundation of Korea (NRF, Grant No. 2018R1A5A6075964). K.-J. L. and K.-W. K. acknowledge the KIST Institutional Program (Projects No. 2V05750 and No. 2E29410). K.-W. K. also acknowledges financial support from the German Research Foundation (DFG) (No. SI 1720/2-1). Work was also in part supported by JSPS KAKENHI Grant No. 16K05415, the Cooperative Research Program of Network Joint Research Center for Materials and Devices, and Center for Spintronics Research Network (CSRN), Osaka University. Computations were performed at Research Institute for Information Technology, Kyushu University.
---
abstract: 'Variations in writing styles are commonly used to adapt the content to a specific context, audience, or purpose. However, applying stylistic variations is still largely a manual process, and there has been little effort towards automating it. In this paper we explore automated methods to transform text from modern English to Shakespearean English using an end-to-end trainable neural model with pointers to enable a copy action. To tackle the limited amount of parallel data, we pre-train embeddings of words by leveraging external dictionaries mapping Shakespearean words to modern English words as well as additional text. Our methods achieve a BLEU score of $31+$, an improvement of $\approx6$ points over the strongest baseline. We publicly release our code to foster further research in this area. [^1]'
author:
- |
Harsh Jhamtani [^2], Varun Gangal , Eduard Hovy, Eric Nyberg\
Language Technologies Institute\
Carnegie Mellon University\
[{jharsh,vgangal,hovy,ehn}@cs.cmu.edu ]{}
bibliography:
- 'emnlp2017.bib'
title: 'Shakespearizing Modern Language Using Copy-Enriched Sequence-to-Sequence Models'
---
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Taylor Berg-Kirkpatrick and anonymous reviewers for their comments. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program.
[^1]: https://github.com/harsh19/Shakespearizing-Modern-English
[^2]: \* denotes equal contribution
---
abstract: 'A scaling theory of replica symmetry breaking (RSB) in the SK-model is presented in the framework of critical phenomena for the scaling regime of small inverse RSB-orders $1/\kappa$, small temperatures $T$, and small magnetic fields $H$. We employ the pseudo-dynamical picture (98, 127201 (2007)) with two critical points ${\cal CP}1$ and ${\cal CP}2$ where separated temperature- and magnetic field-scaling is obtained near the order function’s pseudo-dynamical limits $lim_{_{a\rightarrow\infty}}q(a)=1$ and $lim_{_{a\rightarrow0}}q(a)=0$ at $(T=0,H=0)$. An unconventional scaling hypothesis for the free energy is given, modeling this separated scaling in accordance with detailed numerical self-consistent solutions for up to $200$ orders of RSB. Divergent correlation-lengths $\xi_{{\cal CP}1}(T)\sim T^{-\nu_{_{\hspace{-.03cm}T}}}$ and $\xi_{{\cal CP}2}(H)\sim H^{-\nu_{_{\hspace{-.03cm}H}}}$ describe the RSB-criticality as a long-range correlation effect occurring on the pseudo-lattice of RSB-orders. Rational-valued exponents $\nu_{_T} =3/5$ and $\nu_{_H}=2/3$ are concluded with high precision from high-order RSB scaling (in analogy with finite size scaling) and using a new fixed point extrapolation method. Power laws, scaling relations, and scaling functions are analyzed. Near ${\cal CP}1$, the non-equilibrium susceptibility is found to decay like $\chi_1=\kappa^{-5/3}f_{1}(T/\kappa^{-5/3})$, the $T=0$-entropy like $S\sim\chi_1^2$, while ($T$-normalized) Parisi box sizes diverge like $a_i=\kappa^{5/3} f_{a_i}(T/\kappa^{-5/3})$, with $f_{1}(\zeta)\sim\zeta$ and $f_{a_i}(\zeta)\sim1/\zeta$ for $\zeta\rightarrow\infty$, $f(0)$ finite. Near ${\cal CP}2$, where the magnetic field $H$ controls the critical behavior (while temperature is irrelevant), a power law $H^{2/3}$ is retrieved for plateau-height of the order function $q(a)$ according to $q_{pl}(H)=\kappa^{-1}f_{pl}(H^{2/3}/\kappa^{-1})$ with $f_{pl}(\zeta)|_{\zeta\rightarrow\infty}\sim\zeta$ and $f_{pl}(0)$ finite. The order function $q(a)$ links ${\cal CP}1$ with ${\cal CP}2$ and is obtained as a fixed point function $q^*(a^*)$ of RSB-flow, in agreement with integrated fixed-point energy and susceptibility distributions. Similarities with directed polymers in $1+1$ dimensions, with $d=1$ solution and Flory-Imry-Ma type solutions of the KPZ-equation are discussed.'
address: 'Institut f. Theoretische Physik, Universität Würzburg, Am Hubland, 97074 Würzburg, FRG'
author:
- 'R. Oppermann, M.J. Schmidt'
title: 'Universality class of replica symmetry breaking, scaling behavior, and the low-temperature fixed-point order function of the Sherrington-Kirkpatrick model'
---
Introduction
============
The far-reaching usefulness of spin glass theories [@MPV; @apyoung-book; @parisi-book] and of their key structural elements, such as frustration, disorder, hierarchical order, ultrametricity, complexity, and the freezing transition, is well attested by applications reaching even into the life sciences and trans-disciplinary research fields. Physical models in which these key structures acquired a specific mathematical meaning find very broad applications beyond their origin in frustrated magnetism. Let us mention, apart from fields like neural networks, computer science, and econophysics, the fascinating sociological applications to opinion and group dynamics [@s.galam1; @s.galam2], and biological applications to RNA-folding [@f.david; @p.higgs; @laessig-wiese], including the quantum chromodynamical analogy and random matrix theory [@zee-orland]. It seems natural to search for universal features of unifying models, both in the general sense and in the precise meaning of the renormalization group [@f.david].
The 3SAT optimization problem and its close relation with the $T=0$ Sherrington-Kirkpatrick model [@SK] or RNA-folding in biophysics [@p.higgs], where glass transitions exist within the secondary RNA-structure [@emarinari; @mmueller; @f.david], provide examples where even the zero temperature limit is either exact or close to the realistic situation. In physics, spin glass phases are usually confined to a low temperature regime and some applications are rather remote from it. Yet, knowing the ground state structure remains important. For one of the most fertile standard models, the Sherrington-Kirkpatrick model [@SK] (SK-model), the hierarchical ground state structure is meanwhile confirmed [@Talagrand] as predicted by Parisi a long time ago [@Parisi3; @Parisi1]. Explicit analytic solutions on the other hand or meaningful approximations are still required. They may lead to improved understanding and could be potentially fruitful for progress in more complicated (non-mean-field finite-range, quantum-) models.
The attempt to link the SK-model behavior deep inside its ordered (spin glass) phase with the theory of critical phenomena may appear unmotivated at first sight, since the infinite-ranged spin interaction suggests ’only’ mean-field behavior. However, the SK-model is not simple below its mean-field transition. Its replica theory [@binder-young] allows one to imagine how the Ising Hamiltonian with infinite-ranged random interaction can become potentially critical when it is dressed up with the hierarchical order parameter structure in the replica symmetry broken (RSB) phase[@parisi-book]. We shall argue in this paper that, as the number of tree levels of this hierarchical structure grows to infinity, a particular correlation length between the tree levels can be defined which diverges as $T\rightarrow0$. It allows one to describe critical behavior due to the accumulated effect of ever finer structures at the highest tree level. This property specifies a kind of universality class, which helps to compare with similar behavior in different physical systems and in other scientific areas such as biology, sociology, and (mathematical) psychology, where frustrated random (and in some cases range-free) interactions are evidently important.
Nonanalytic power laws (with rational exponents) for the SK-model had been discussed in many different respects, as for example the finite-size cutoff (or finite spin number) dependence [@boettcher; @bouchaud-energy-exponents; @mikemoore]. One may also mention the exponent of the Almeida Thouless line[@binder-young]. However a link to specific critical points was not made. In the present paper, we shall report progress in understanding replica symmetry breaking in the Sherrington-Kirkpatrick model [@SK] as a critical phenomenon; this refers to scaling behavior on one hand and to (numerically determined) fixed point functions under RSB-order flow $\kappa\rightarrow\infty$ on the other. Nonanalytic scaling behavior is described as a function of the inverse RSB-order decreasing to zero either together with temperature $T\rightarrow0$, or together with the external magnetic field. Temperature- and field-scaling are well separated and reside in opposite limits of a pseudo-dynamic variable $1/a$ (see Ref.), as sketched by Fig.\[fig:scaling-variables\] [^1].
Critical phenomena are in general categorized by universality classes and described by criteria like global symmetries. Certain details (on shorter scales) become irrelevant and are suppressed in the regime of divergent correlation lengths. In the early years of the development of phase transition theory and critical phenomena, Kadanoff’s initial ideas of universality and rescaling, Stanley’s scaling theory, and Wilson’s renormalization group led to the modern understanding of critical behavior [@RG-review]. In recent years the functional renormalization group was advocated to better understand disorder-related criticality [@wiese].
Freezing transitions into spin glass phases have been analyzed in a renormalization framework too, but the ordered phase itself has remained mysterious, in particular for the non-mean-field models. In a famous work on scaling in spin glasses, D. Fisher and Sompolinsky [@dfisher-hsompolinsky] explained the complications of mean field models (or of mean field regimes of finite-range spin glasses above $d=6$ and $d=8$) and the multiple violations of scaling relations. In particular, they mentioned the violation of temperature- versus magnetic-field scaling within the ordered phase. In a different manner, we re-encounter this problem and explain a certain decoupling of field- from temperature-scaling by the presence of two different critical points of RSB in the low temperature limit.
Crucial questions like the relationship between Parisi’s RSB and the Fisher-Huse droplet theory [@fisher-huse] of the ordered phase of real spin glasses (or their reconciliation) have long been - and currently, with good reason, remain - a field of intensive research [@cyrano; @monthus-garel-condmat07-dec]. Since droplet theory is interpreted to govern the ordered phase by a $T=0$ fixed point, it appears very desirable to understand RSB as a $T=0$ fixed point theory too. Attempts have been undertaken independently by several authors and also in different fields of application, as the examples in Refs. show.
The latter point is elaborated in the present article. Despite the mean field character of the SK-model, RSB introduces apparently nonanalytic critical behavior of one-dimensional type (the unbroken replica-symmetric solution does not show any of these phenomena) together with special diverging correlation lengths. The challenge to handle RSB-effects correctly and to make the SK-solution a fruitful basis for real physical applications led us to a scaling theory intimately based on extreme high order numerical results.
In previous publications [@prl2007; @prl2005] we reported the existence of two critical points and of discrete spectra which survive in the limit of infinite replica symmetry breaking ($\infty$-RSB) for the SK-model at $T=0$, perhaps surprising since the $\infty$-RSB limit is generally known only as the ’continuum limit’. Indeed, a continuum scaling theory, dealing with the $T\rightarrow 0$ limit at $\kappa=\infty$, was published by Pankov [@pankov-prl] recently. Its role and its limitation to the temperature-controlled critical point ${\cal CP}1$ have been addressed in our previous publication [@expcpaper], together with a comparison of our work with the much older so-called PaT-scaling [@PaT]. In the present article we use neither Pankov- nor PaT-scaling, but construct a different scaling approach, which includes RSB-order-scaling and is exclusively guided by the theory of critical phenomena. In accordance with previous (naive) functional renormalization group arguments [@prl2005] we analyze the approach to full RSB formation ($\kappa\rightarrow\infty$) not only at $T=0$ but also in the $(H,T)$-plane for small values of the temperature $T$ and the magnetic field $H$ and, of course, as a function of the RSB-order (neither real space nor real time is involved, as a consequence of the SK-model’s nature).
We suppose here that RSB orders, counted by integers $1,2,...,\kappa$, can be viewed as equidistant sites forming a pseudo-lattice. In analogy with a real-space lattice, which needs to be infinitely large in order to allow diverging correlation lengths and hence support critical phenomena, the pseudo-length cutoff $\kappa$ must be sent to infinity. The known fact that increasingly high orders of RSB are needed (for good approximations) as the temperature decreases towards zero implies that $T$ acts as an effective cutoff of nonanalytic behavior in the RSB-limit ($T$ playing the role of a symmetry-breaking relevant perturbation in standard critical phenomena). This also suggests the idea of scaling the RSB-order $\kappa$ with the temperature $T$. Conversely, a maximum RSB order $\kappa$ serves as a cutoff of criticality. A speciality of RSB is that it appears in the shape of a pseudo-dynamical critical phenomenon [@prl2005; @prl2007], which recalls the celebrated dynamical representation of Sompolinsky [@sompolinsky]. A technically important difference, however, is the absence of a stochastic field, which we reserve for more complicated couplings to faster degrees of freedom [@pssc2007].
A scaling theory, near $T=0$ in particular, is important for several different reasons. First, it expresses the numerically determined features of the SK-model in a universal form, which helps to identify model-independent features and places the SK-model and its RSB in a wider context. Let us mention that directed polymers (or, for example, the queuing transition and the totally asymmetric exclusion process [@k.johansson; @queuing-denNijs]), as well as certain partial differential equations, involve rational exponents that are multiples of $1/3$ too. The scaling theory also puts constraints on the shape of an effective field theory. It has the virtue of isolating critical features which must be represented correctly by an effective theory that simplifies the SK-model. [^2] The simpler theory should allow one to control generalizations to finite range or other complications. The scaling theory also offers a special look at eventual scenarios of an RSB breakdown, as it may occur due to finite range interactions. The collapse of a finite $T_c$ below a lower critical dimension will eventually combine RSB-criticality with the freezing transition as $T_c=0$.\
[*The paper is organized as follows:*]{}\
Sections \[scaling-concept\] and \[RSB-correlation-length\] describe the basic elements of the present scaling theory. The spaces spanned by the scaling variables at both critical points are described in section \[scaling-concept\]. In section \[RSB-correlation-length\] a correlation length is introduced on the pseudo-lattice of RSB-orders (to the best of our knowledge for the first time) and, anticipating the self-consistent numerical results of the following Sections \[fixed-point-function\]-\[U-distribution\] (for details see Ref.), the role of finite temperatures (or finite magnetic fields) as soft cut-offs of the divergence of this correlation length is explained.
Section \[fixed-point-function\] demonstrates how the order parameter function $q(a)$ can be regarded and obtained as a fixed point function $q^*(a^*)$ under RSB-flow $\kappa\rightarrow\infty$.
Section \[finite-T-scaling\] includes and combines finite temperature scaling near the critical point ${\cal CP}1$ with RSB-order scaling. Scaling functions are obtained which fit the detailed data of $200$ RSB-orders and explain the non-commuting singular limit $\kappa\rightarrow\infty, T\rightarrow 0$. In a similar way, Section \[finite-H-scaling\] includes magnetic field scaling near ${\cal CP}2$.
In Section \[free-energy-scaling\] we present unconventional scaling contributions to the free energy, the entropy, and the internal energy, which are compatible with the numerical self-consistent solutions. In Section \[U-distribution\] the ground state energy distribution is given as a function of pseudo-time and also shown as a function of the (normalized) Parisi levels $l/\kappa$, such that the flow towards an energy-per-level fixed-point function emerges as the RSB-order tends to infinity.
In Section \[pseudo-dynamical-scaling\] we finally consider pseudo-dynamic scaling of the order function $q(a)$ in the vicinity of both critical points before concluding with details of $q(a)$ as revealed by its derivatives in Section \[q(a)-derivatives\].
The scaling scenario {#scaling-concept}
====================
We introduce the (RSB-)scaling idea by viewing the formation of full RSB as a critical phenomenon with two critical points in the pseudo-dynamic limits $a=0$ and $a=\infty$ at $T=0$, $H=0$. We do not a priori impose a relationship between the two critical points, but consider the pseudo-dynamical crossover between them by means of the order function $q(a)$ on $0\leq a\leq\infty$. Fig.\[fig:scaling-variables\] illustrates the relative position of the two critical points and the sets of scaling variables near these points.
In particular, one may notice that the dynamical variable $1/a$ and the RSB-order $\kappa$ define a $(1+1)$-dimensional analogue of problems with one time and one real-space dimension.
Since the free energy and the internal energy are integrals over all pseudo-times, as for example given below in Eqs.(\[eq:U\]),(\[eq:F\]), we do not start from a single scaling hypothesis for the free energy $F$. Instead we construct a scaling hypothesis for each of the two different scaling contributions, originating in these two separated critical points. As Fig.\[fig:scaling-variables\] shows, a different set of scaling variables should be used near each critical point in order to match the numerical results.
It is remarkable that temperature- and magnetic field-scaling become decoupled, because they belong to different scaling regimes. Scaling with respect to the order $\kappa$ of RSB measures the approach to the equilibrium solution at $\kappa=\infty$ (full RSB) and can therefore be viewed as a kind of non-equilibrium dynamics (in the sense that each finite order is unstable towards higher RSB-orders). Thus an element of dynamic scaling is involved. Using the pseudo-time $1/a$ as an additional scaling variable, we analyze the order function $q(a; T,H)$ and its pseudo-dynamic scaling behavior. A dynamic crossover between the two critical points ${\cal CP}1$ and ${\cal CP}2$ is then described by means of $q(a)$. Moreover, the order function is evaluated as a fixed point function of the RSB-flow, letting $\kappa\rightarrow\infty$.
The present scaling theory is then fitted to high precision numerical data, which were obtained recently for the Sherrington-Kirkpatrick model given by the Hamiltonian $${\cal H}=\sum_{i<j} J_{ij}s_i s_j-H\sum_i s_i$$ with quenched, infinite-ranged, and Gaussian-distributed random couplings $J_{ij}$ (with variance $J^2/N$) between classical spins $s_i=\pm1$. The method was described in Ref. and will not be repeated in this article. It not only allowed us to go beyond earlier high-order studies [@prl2007], but also contained new analytical elements. As a consequence we are able to predict the values of critical exponents, evaluate amplitudes, and calculate analytical models of various scaling functions, including cases with very singular crossover.
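For orientation, a minimal Python sketch (our own illustration, not the self-consistent RSB method used to generate the data of this work) of drawing one disorder realization of this Hamiltonian and evaluating the energy of a spin configuration:

```python
import numpy as np

def sk_instance(N, J=1.0, rng=None):
    """One disorder realization of the SK couplings J_ij with variance J^2/N."""
    rng = np.random.default_rng() if rng is None else rng
    J_ij = rng.normal(0.0, J / np.sqrt(N), size=(N, N))
    J_ij = np.triu(J_ij, k=1)          # keep i < j only
    return J_ij + J_ij.T               # symmetric coupling matrix, zero diagonal

def energy(J_ij, s, H=0.0):
    """H_SK = sum_{i<j} J_ij s_i s_j - H sum_i s_i (energies in units of J)."""
    return 0.5 * s @ J_ij @ s - H * s.sum()

rng = np.random.default_rng(0)
N = 64
J_ij = sk_instance(N, rng=rng)
s = rng.choice([-1, 1], size=N)
print("energy per spin:", energy(J_ij, s) / N)
```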
The numerical material includes the self-consistent solutions in all orders of RSB up to\
i) the current maximum of $\kappa=200$ RSB at $T=0$ and $H=0$,\
ii) $50$ orders for a dense grid of finite temperatures in the range $0\leq T\leq 0.3$ for $H=0$, and\
iii) $20$ orders of RSB for a dense grid of finite magnetic fields $0\leq H\leq 0.5$ at zero temperature.\
[*We note that all energies are given in units of $J$*]{}.
Divergent correlation length on the one-dimensional pseudo-lattice of RSB-orders {#RSB-correlation-length}
================================================================================
The high order self-consistent solutions led us to consider a pseudo-lattice of RSB-orders with unit lattice constant. The maximal order $\kappa$, for which self-consistent results have been obtained numerically, can be viewed as a (sharp) pseudo-length cutoff. In analogy with the well-known finite size scaling of critical phenomena, one may consider scaling by varying this finite maximum RSB-order $\kappa$. Naturally this defines a one-dimensional problem, though without translational invariance. Moreover, it is known that finite temperatures or finite fields each serve as a soft cutoff for the maximal order of RSB which is needed to obtain good approximations: the higher the temperature, the fewer RSB orders are needed to obtain a certain quality of results[^3].
In other words, higher orders become uncorrelated because they only have weak and/or negligible effects. In this sense the correlation length of different RSB orders becomes cutoff by finite $T$ or $H$. Anticipating our results below, this definition of the correlation lengths $\xi_{\kappa}(a,T,H)$ shows power law divergences with rational-valued exponents given by $$\label{nu-T}
\xi_{{\cal CP}1}\equiv \xi_{\kappa}(a=\infty,H=0,T)\sim T^{-\nu_{T}},\quad \nu_T=3/5$$ $$\label{nu-H}
\xi_{{\cal CP}2}\equiv \xi_{\kappa}(a=0,H,T=0)\sim H^{-\nu_H}, \quad \nu_H=2/3$$
In the sections below, we shall find scaling functions of the form $f(\kappa/\xi_{\kappa})$. Apparently the correlation length exponent $\nu_H$ cannot be simply related to $\nu_T$, since in the vicinity of one of the two critical points one finds either a $T$- and no $H$-dependence (${\cal CP}1$) or vice versa (${\cal CP}2$) (unlike conventional scaling, where $T\sim H^{1/\beta\delta}$). The power laws (\[nu-T\]) and (\[nu-H\]) should also hold for $a\gg\xi_0$ and $a\ll\xi_0$ respectively, where $\xi_0$ was introduced in Ref. as a finite characteristic length ($\approx 1.13$) which sets the scale for the crossover from the almost linear regime, $q\sim a$, through a maximal-curvature crossover ($a\approx \xi_0$) to the $1-q\approx 1/a^2$ behavior of the order function $q(a,T=0,H=0)$.
Let us note some similarities with the critical behavior of other systems. It is quite remarkable that the exponent $\nu_T=\frac35$, describing the RSB-correlation-length divergence at ${\cal CP}1$, coincides with the value given by Garel and Orland for the variational [*domain-wall solution*]{} of $(1+1)$-dimensional directed polymers [@garel-orland].
For general $d<2$ these authors reported $\nu=3/(d+4)$, while a second solution called domain-solution yields $\nu=1/(4-d)$. Hence, in $(1+1)$-dimensional case of directed polymers, these two solutions give $\nu=3/5$ and $\nu=1/3$ respectively. If $H^2$ in our case could be scaled like $T$ then our second correlation length exponent $\nu_H$ would agree with the domain solution for directed polymers (DP). Moreover, the roughness-exponent of the DP in $1+1$ dimension assumes precisely this value $2/3$. Of course, these are only hints, and universality classes can coincide accidentally at integer dimensions. It appears however very tempting to analyze whether our two critical points ${\cal CP}1$ and ${\cal CP}2$, distinguished by the opposite pseudo-dynamic limits $a=\infty$ and $a=0$ respectively, find their counterparts in those domain- and domain-wall solutions of the directed polymers. Whether such a formal analogy holds (despite the infinite-range spin glass interaction), should probably be decided by comparison or mapping of the corresponding field theories. Searching for eventual relations in higher dimensions for finite range spin glass interactions is also an exciting question. We shall come back to related questions and pseudo-dynamic scaling in section \[pseudo-dynamical-scaling\].\
The $T=0$ order function as a fixed point function $q^*(a^*)$ in the RSB limit ($\kappa=\infty$) {#fixed-point-function}
================================================================================================
The idea of finding the $T=0$ order function as a fixed point function in the RSB-limit arose from renormalization group arguments as designed in Ref. . It reemerges now in a literally obvious way when we plot in Figure \[fig:q(a)-bounds\] the whole set of numerical self-consistent solutions $\{a_l^{(\kappa)}, q_{l+1}^{(\kappa)}\}$ (and $\{a_l^{(\kappa)}, q_{l}^{(\kappa)}\}$) for $l=1...\kappa$ of all evaluated RSB-orders $\kappa=1,2,...,200$. These data become dense for large enough $\kappa$ and approach the desired order function $q(a)$ in the limit $\kappa\rightarrow\infty$, which can be viewed as a fixed point function $q^*(a^*)$.
The unusual form $q^*(a^*)$ can be justified as follows: the parameters $a_l^{(\kappa)}$ and $q_l^{(\kappa)}$ can be viewed as functions of the continuous variable $l/\kappa\rightarrow\zeta$ in the $\kappa\rightarrow\infty$ limit. The fixed point solutions $a^*(\zeta)$ and $q^*(\zeta)$ can be combined by eliminating the variable $\zeta$, which results in the special form $q^*(a^*)$ where the variable itself is made up from continuously distributed fixed points. In the following we use $q(a)$ and $q^*(a^*)$ synonymously and distinguish them only if necessary.
At large but finite orders one may define interpolating functions $q_{l+1}(a_l)\rightarrow q_<(a)$ and $q_{l}(a_l)\rightarrow q_{>}(a)$ which yield lower and upper bounds for the exact solution $q(a)\equiv q^*(a^*)$ at each value of $a$. Figure \[fig:q(a)-bounds\] illustrates that this channel between lower and upper bound becomes extremely small for high orders $\kappa=O(10^2)$. A more detailed illustration of the exact $q(a)$ being confined within such a channel, as the RSB order $\kappa$ increases towards infinity, is provided by zooming into different regions of the crossover between ${\cal CP}1$ and ${\cal CP}2$ in Figures \[orderf-fixpoints\],\[fig:q(a)fixedpoint-vicinity\].
Figure \[fig:q(a)-bounds\] moreover shows deviations of $q(1/a)$ from Gaussian behavior, which is a good approximation for small $1/a$. In both representations $q(a)$ and $q(1/a)$ it illustrates the existence of special lines which terminate obviously in fixed points - in fact there is a hierarchy of fixed points lying dense on the interval $0\leq a\leq\infty$. We shall make explicit use of these fixed points below.
Indeed, $200$ calculated orders of RSB for $T=0$ already yield an almost continuous function $q_{l}\left(\frac{a_l +a_{l+1}}{2}\right)$ which finally turns into $q(a)$ in the RSB-limit. Our previously published analytical model function [@prl2007] satisfies this constraint almost perfectly; as mentioned in Ref. it turned out that a small ’mass’ function $w(a)$ in $$q_{model}(a)=\frac{a}{\sqrt{a^2+w(a)}} {_1}F_1\left(\alpha,\gamma,-\frac{\xi^2}{a^2+w(a)}\right)
\label{model-function}$$ models even the full crossover regime. The function $w(a)$ tends to a small constant $w(0)\approx 0.067$. This cutoff (of the in general nonanalytic behavior for arbitrary parameters $\alpha$, $\gamma$) guarantees a strictly linear $q(a)$ relation in accordance with our high order data. In the crossover regime between the two dynamic critical points, $w(a)$ can be modeled so as to suppress the maximum error of $q(a)$ below $O(10^{-4})$ at each pseudo-time. A unique choice of $w(a)$ has not yet been found, but excellent fits are obtained with $w(a)$ monotonically decreasing from $w(0)\approx 0.067$ to $w(\infty)=0$. Using the high order data we have thus been able to improve the analytic approximation of the $T=0$ order function $q(a)$.
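For readers who wish to experiment with this functional form, a minimal evaluation sketch is given below; the values of $\alpha$, $\gamma$, $\xi$ used here are placeholders (only $w(0)\approx 0.067$ is taken from the text), so the resulting curve is illustrative rather than the fitted order function.

```python
import numpy as np
from scipy.special import hyp1f1

def q_model(a, alpha, gamma, xi, w0=0.067):
    """q(a) = a/sqrt(a^2+w) * 1F1(alpha, gamma, -xi^2/(a^2+w)), with w(a) crudely
    replaced by the constant w(0) ~ 0.067; alpha, gamma, xi are placeholders."""
    denom = a**2 + w0
    return a / np.sqrt(denom) * hyp1f1(alpha, gamma, -xi**2 / denom)

a = np.linspace(0.0, 10.0, 201)
q = q_model(a, alpha=1.0, gamma=2.0, xi=1.0)     # placeholder parameters
print("q(0) =", q[0], "  q(a_max) =", q[-1])     # q(0)=0; q -> 1 - O(1/a^2) at large a
```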
Fixed points calculated from the RSB-flow towards $\kappa=\infty$
-----------------------------------------------------------------
The full set of self-consistent solutions for the order parameters $q_l$ and the ($T$-normalized) Parisi box sizes $a_l\equiv m_l(T)/T|_{T=0}$ can be described by matrix elements $p_{l,\kappa}\equiv \{a_{l,\kappa}\equiv a^{(\kappa)}_l,q_{l,\kappa}\equiv q^{(\kappa)}_l\}$ labeled by the RSB-order $\kappa$ and the level number $l$. Since the number of $q_l$-parameters exceeds the number of $a_l$-parameters by one (in each order of RSB), a second complementary set of matrix elements $\tilde{p}_{l,\kappa}\equiv \{a_{l,\kappa},q_{l+1,\kappa}\}$ should also be taken into account. These points $p_{l,\kappa}$ and $\tilde{p}_{l,\kappa}$ are displayed in Figures $2-6$ and observed to approach the exact $q(a)\equiv q^*(a^*)$ along characteristic lines given below by Eq.(\[eq:k(l)-lines\]) as $\kappa\rightarrow\infty$ ($p$ from above and $\tilde{p}$ from below $q(a)$, since $q_{l+1,\kappa}< q_{l,\kappa}$).
The set of all RSB-solutions up to a maximum order $\kappa$, as plotted in Fig.\[fig:q(a)-bounds\] with a cutoff at $\kappa=200$, is then described by two triangular matrices with entries $\{a_{l,\kappa},q_{l,\kappa}\}$ (or with $\{a_{l,\kappa},q_{l+1,\kappa}\}$); the level-numbers $l$ run from $1$ to $\kappa$ for each RSB-order $\kappa$.
Along infinitely many lines in $(l,\kappa)$-space - the leading ones are very clearly visible in Figures \[orderf-fixpoints\],\[fig:q(a)fixedpoint-vicinity\] (and shown as calculated in Figs.5,6) - we observe very smooth behavior of slowly changing parameters $(a_{l,\kappa},q_{l,\kappa})$, which allows low order Padé-approximants to match these data and to join in fixed points $p^*$ of the order function curve for $\kappa=\infty$. A special case is the origin, where the fixed point is obtained with the extreme accuracy of $O(10^{-13})$.
Typical examples of such characteristic lines in $(l,\kappa)$-space can be given by the linear relation among the labels $$\{l+k,\kappa=\frac{m}{n}l+k-1\}
\label{eq:k(l)-lines}$$ (viewing $l\geq l_{min}\equiv l_0=n$ as the running index) with steps of $\Delta l=n$ and $m,n,k$ integer-valued. The choice of $m/n$ selects one fixed point of the RSB-flow as $\kappa\rightarrow\infty$ with $l\rightarrow\infty$. Steps of $\Delta l=n$ are required to generate integer values for $\kappa$ (otherwise we wouldn’t have numerical data). The integer $k$ distinguishes different lines which all meet in the same fixed point. Thus the fixed point $(a^*,q^*)$ is labeled by $m$ and $n$ or just by the rational number $m/n$. We have evaluated more than $50$ fixed points belonging to the exact order function $q(a)$. The higher $n$ the larger must be the steps $\Delta l$, hence one needs higher orders of RSB to find enough data points for reasonable curve-fitting through these points. This is one limitation of the method, but the almost linear character of a large number of these lines allows to calculate in principle a number of fixed points much larger than the order of RSB.
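Schematically, the extrapolation along one such line amounts to fitting the slowly varying parameter sequence to a low-order rational function of $1/\kappa$ and reading off its value at $1/\kappa=0$; the sketch below uses synthetic placeholder data standing in for the self-consistent $(a_{l,\kappa},q_{l,\kappa})$ sequences.

```python
import numpy as np
from scipy.optimize import curve_fit

def pade11(x, p0, p1, q1):
    """(1,1) rational approximant p(x) = (p0 + p1*x)/(1 + q1*x), with x = 1/kappa."""
    return (p0 + p1 * x) / (1.0 + q1 * x)

# Synthetic stand-in for a parameter sequence along one (l, kappa)-line:
kappas = np.arange(20, 201, 10)
x = 1.0 / kappas
y_true = (0.60 + 1.3 * x) / (1.0 + 2.1 * x)        # pretend "data"
y = y_true + 1e-6 * np.random.default_rng(1).normal(size=x.size)

popt, _ = curve_fit(pade11, x, y, p0=[0.5, 1.0, 1.0])
print("extrapolated fixed-point value at kappa = infinity:", popt[0])
```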
Discrete spectra in the $\kappa=\infty$ RSB-limit at zero temperature
---------------------------------------------------------------------
While the fixed point functions can be derived for all pseudo-time values $1/a$, the points $a=0$ and $a=\infty$ remain special limits. In a recent article [@prl2007] we have shown that infinitely large subclasses of certain self-consistent parameter ratios remain discrete at $T=0$ or $H=0$ even in the continuum limit. These discrete levels reside in the limits $a=0$ and $a=\infty$ when $\kappa=\infty$. Finite temperatures lift the discrete spectrum at $a=\infty$ into the continuum, while a magnetic field has a similar effect on the discrete levels at $a=0$. The ratios then assume the value $1$. The discrete spectra therefore emphasize the critical nature of the points $a=0$ and $a=\infty$. In the following subsections we present new results for these $T=0$ levels of parameter ratios and, in Section \[finite-T-scaling\], describe their singular finite-$T$ crossover.
### Level distribution at ${\cal CP}2$
At the critical point ${\cal CP}2$ the sub-class of small self-consistent parameters $q_k$ and $a_k$, which vanish in the $\infty$RSB limit (and condense into ${\cal CP}2$), obey $$\frac{q_{\bar{l}+2}}{q_{\bar{l}+1}} = \frac{2l-1}{2l+1}
\quad {\rm and}\quad \frac{a_{\bar{l}+1}}{a_{\bar{l}}}=\frac{l}{l+1},$$ with $\bar{l}\equiv \kappa-l$ and $l=1,2,...$; thus the ratios of these parameters are discrete and almost equidistant [@prl2007]. Iterating these relations down to the smallest parameters of each RSB-order $\kappa$, hence to $q_{\kappa+1}$ and $a_{\kappa}$ respectively, we obtain $$
q_{\kappa+1-l}=(2l+1)q_{\kappa+1},\quad a_{\kappa-l}=(l+1)a_{\kappa}.$$ The RSB flow of the numerical data up to 200RSB allows us to conclude that these minimal parameters vanish like $$\begin{aligned}
q_{min}&\equiv& q_{\kappa+1} = \frac{1.03059}{\kappa}+\frac{1.31705}{\kappa^2}+O(1/\kappa^3),\\
a_{min}&\equiv& a_{\kappa} = \frac{2.77275}{\kappa}+\frac{3.54347}{\kappa^2}+O(1/\kappa^3).\end{aligned}$$ The discretized slope of the order function in the point $a=0$, assumes the 200RSB value $$\frac{q_{\bar{l}}-q_{\bar{l}-1}}{a_{\bar{l}}-a_{\bar{l}-1}}=\frac{2 q_{\kappa+1}}{a_{\kappa}}=0.74345$$ or, by Padé approximation of the RSB flow and extrapolation to $\infty-$RSB, one obtains $$q'(0)=2 \hspace{.1cm} lim_{\kappa\rightarrow\infty} \frac{q_{\kappa+1}}{a_{\kappa}}=0.743368.$$ As the calculation of fixed points of q(a) in the linear small-a regime shows, this agrees with the slope of the continuous $q(a)$ for $a\rightarrow 0$. The slope of the order function in ${\cal CP}_2$ provides one almost exact constraint for the order function $$q'_{model}(a=0)=\frac{1}{\sqrt{w(0)}}{_1}F_1\left(\alpha,\gamma,-\frac{\xi^2}{w(0)}\right)=0.743368.$$
### Level distribution at ${\cal CP}1$
In the large-$a$ limit the characteristic features are discrete spectra of $1-q_l$ ratios, which are shown in Figure \[fig:large-q-ratios\]. In addition, Figure \[fig:large-q-coefficient\] shows that the $\frac{1}{a^2}$ coefficient of the almost continuous order function converges towards 0.41, except for the largest $a$-levels. At zero temperature the order function differs from $1$ by $0.41/a^2$. Thus, according to the large-$a$ expansion of our analytical model, the expansion coefficient is constrained to satisfy $$\begin{aligned}
q(a)&=&1-\frac{\alpha\xi^2}{\gamma}\frac{1}{a^2}+O(1/a^{4})\nonumber\\
&=& 1-0.41\frac{1}{a^2}+O(1/a^4),\end{aligned}$$ putting a constraint on $\alpha\xi^2/\gamma$. Further constraints can be found from very precise numerical characteristics; it is planned to use this analysis to narrow down the choice of an analytical order function model.
The discrete spectrum yields a coefficient which differs notably from this value, as one can see from Figure \[fig:large-q-ratios\] (right) for the leading divergent $a_l$ parameters.
Approach of equilibrium at $T=0$: leading and sub-leading scaling contributions
-------------------------------------------------------------------------------
The nonequilibrium susceptibility $\chi_1$ is a characteristic quantity measuring the distance from the equilibrium solution at $\kappa=\infty$. The entropy had been seen [@expcpaper] to vanish like the square of $\chi_1$. The numerical solutions[@expcpaper] for $\chi_1$, evaluated for all $200$ leading RSB-orders, are well fitted by the $T=0$-form $$\begin{aligned}
\label{eq:kappa-decay}
& &\chi_1(\kappa,T=0)\cong \frac{0.86}{(\kappa_0+\kappa)^{5/3}}+\frac{1.85}{(\kappa_0+\kappa)^4}+...\\
&\hspace{-.2cm}=& 0.86 \kappa^{-5/3}-1.83 \kappa^{-8/3}+3.12 \kappa^{-11/3}+1.85 \kappa^{-4}+...\nonumber\end{aligned}$$ with $\kappa_0\cong 1.278$. As discussed in Ref. the numerical uncertainty of $O(10^{-6})$ in the exponent is so small that the expectation of a rational-valued exponent due to one-dimensionality leads to the firm prediction of $\chi_1\sim\kappa^{-5/3}$. The quality and density of the numerical results is even high enough to predict the subleading correction and the amplitudes as well.
Finite temperature scaling near the critical point ${\cal CP}1$ $(a=\infty,T=0,H=0)$ {#finite-T-scaling}
====================================================================================
Naturally one would like to start with a scaling hypothesis for the free energy $F$. However, the SK-model has two critical points at $T=0$ and the free energy picks up contributions from both; in the RSB-limit, it can be expressed by integrals over the entire crossover range from $a=0$ $({\cal CP}_2)$ to $a=\infty$ $({\cal CP}_1)$ involving the order function $q(a)$.
Thus it turns out to be useful to start with the scaling behavior of the self-consistent parameters $a_l$ and $q_l$, which teaches us how to embed scaling into the order function $q(a,T)$ mediating the crossover between the two critical regimes. Finally, by expressing the free energy and the internal energy in terms of the order function, and by linking the entropy with the non-equilibrium susceptibility, we shall arrive at consistent scaling predictions for $F$, $U$, and $S$ below.
Let us begin with temperature-normalized block size parameters $$a_l(\kappa,T)\equiv \frac{m_l(\kappa,T)}{T}$$ where we consider first scaling in the $(\kappa,T)$-plane for fixed label $l$. We must analyze the singular behavior near the critical point ${\cal CP}1$, where diverging $a_l(\kappa,T=0)\rightarrow\infty$ for $\kappa\rightarrow\infty$ lead to discretely spaced ratios $a_l(\infty,0)/a_{l-1}(\infty,0)$ in the $\infty-$RSB limit. We identified the large order power law divergence $$a_l(\kappa,T=0)\sim \kappa^{5/3},\quad {\kappa\rightarrow\infty}
\label{eq:5/3-law}$$ for the subclass of large parameters $a_l$ (their number also grows to infinity as $\kappa\rightarrow\infty$).
The linear temperature decay of all Parisi box sizes $m_l(\kappa,T)\sim T$ holds for all [*finite*]{} RSB-orders, but not all $m$'s should vanish in the RSB limit at zero temperature, since the break point is not expected to be at $m_{1}=0$ (even in the $T\rightarrow 0$ limit [@Crisanti2002]). Thus, the non-commuting limits $T\rightarrow0$ and $\kappa\rightarrow\infty$ must be handled properly.
The Taylor series, valid as a low temperature expansion for any fixed finite RSB-order, $$m_l(\kappa,T)\equiv a_l(\kappa,T)\hspace{.1cm}T=a_l(\kappa,0)T+\frac12 a_l'(\kappa,0)T^2+O(T^3)$$ will anyway break down for those levels $l$ for which the expansion coefficients diverge as $\kappa\rightarrow\infty$. In accordance with the anomalous power law (\[eq:5/3-law\]), it will be shown below by means of the fixed point order function that the correct scaling form for this ${\cal CP}_1$-divergent parameter sub-class reads $$a_l(\kappa,T)=\kappa^{5/3} f_{a_l}(T/\kappa^{-5/3}),$$ where the scaling function is well approximated by a low order $(2,3)$ Padé series (one may also use $(1,2)$ or $(3,4)$ series) $$f_{a_l}(x)=\frac{c_{0,l}+c_{1,l} x}{1+d_{1,l} x+d_{2,l} x^2}.$$
This form fits well the available finite $T$ data up to $50$-RSB and satisfies $$f_{a_l}(0)=c_{0,l} \quad{\rm finite\hspace{.1cm} and}\quad f_{a_l}(x)\sim \frac{1}{x}\quad {\rm for}\quad x\rightarrow\infty.$$ The crossover line can be described by the characteristic (crossover) temperature $$T_1(\kappa)\sim\kappa^{-5/3}.$$
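The fitting step itself is a standard nonlinear least-squares problem; the sketch below illustrates it with synthetic toy data standing in for $a_l(\kappa,T)/\kappa^{5/3}$ versus $x=T/\kappa^{-5/3}$ (the actual fit uses the finite-$T$ RSB solutions up to $50$-RSB).

```python
import numpy as np
from scipy.optimize import curve_fit

def pade_23(x, c0, c1, d1, d2):
    """(2,3) Pade form used above for the scaling function f_{a_l}(x)."""
    return (c0 + c1 * x) / (1.0 + d1 * x + d2 * x**2)

# Synthetic toy data replacing the finite-T RSB solutions.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 20.0, 60)
y = pade_23(x, 1.2, 0.9, 0.8, 0.3) + 0.01 * rng.normal(size=x.size)

(c0, c1, d1, d2), _ = curve_fit(pade_23, x, y, p0=[1.0, 1.0, 1.0, 0.1])
print(c0, c1 / d2)   # f_{a_l}(0) and the box-size plateau lim x f_{a_l}(x), discussed below
```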
Beyond the crossover line, for temperatures $T\gg T_1(\kappa)$, the box sizes $m_l(x)=x\hspace{.1cm}f_{a_l}(x)$, which belong to the ${\cal CP}1$-divergent sub-class of $a_l$’s, approach finite temperature-independent values. One obtains $$\label{l-dep}
\lim_{x\rightarrow\infty}m_l(x)=c_{1,l}/d_{2,l}.$$ While direct fits of our numerical data already yield a crude estimate of $m_1(\infty)$ for the break point, it was mentioned in Ref. that $50$-RSB is not sufficient to determine the break point for arbitrarily low temperatures. Yet, for $T=0.015$ a reliable break point value was determined by another procedure.
Here we are interested in obtaining a good estimate of the breakpoint in close connection with the scaling picture. We therefore employ the fixed point method, and indeed succeed in finding a good approximation down to even lower temperatures; we also answer the question whether the limit $m_l(\infty)$ in Eq.(\[l-dep\]) shows a level-index dependence or not. For finite $m_l$ an $l$-dependence would have implied a discrete distribution. We shall find in subsection \[subsec:breakpoint\] that all ratios become level-independent in the large-$x$ limit $$\frac{m_l(x)}{m_{l-1}(x)}=\frac{a_l(x)}{a_{l-1}(x)}=\frac{f_{a_l}(x)}{f_{a_{l-1}}(x)}\rightarrow 1\quad
{\rm for}\quad x\rightarrow\infty.$$ The crossover from discrete parameter spectra for $T\ll T_1(\kappa)$ to the continuum on the other side of the crossover line, for $T\gg T_1(\kappa)$, is a rather singular effect mediated by the scaling function. We introduced above a scaling function which suppresses the discrete spacing between $q$- and $a$-parameters as one moves through the crossover line $T_1(\kappa) \sim \kappa^{-5/3}.$
Forbidden level crossing at finite temperatures determines the break point {#subsec:breakpoint}
--------------------------------------------------------------------------
We employ now the RSB-fixed-point technique to extract approximate values for the break point for rather low temperatures.
For this purpose, we consider fixed finite temperatures $T$ and fixed level numbers $l$ (down to the lowest $T$, with $l$ small so as to catch the diverging-$a$ subclass near ${\cal CP}1$) and study the RSB-flow of the solutions $\{a_l(\kappa),q_l(\kappa)\}$, and also those of the complementary type $\{a_l(\kappa),q_{l+1}(\kappa)\}$, from low orders up to $\kappa=50$, as illustrated by Fig.\[fig:breakpoint\] for an arbitrarily picked temperature $T=0.03$. Padé-approximants fit the RSB-flow well and the extrapolated curves meet precisely at the same point. These curves would cross each other, but would then violate the reality condition of the self-consistent method beyond the level crossing point. We therefore consider the level crossing point as the limit of the nontrivial part of the order function, hence as the breakpoint.
The scenario remains the same for arbitrary fixed temperatures, only the extrapolation range increases with the level number and therefore becomes less accurate for smaller temperatures. Yet reliable solutions were obtained down to temperatures $T\approx 0.005$. The Figure Insert emphasizes the fact that the solutions indeed reach the level crossing point as $\kappa\rightarrow\infty$.
Approaching zero temperature and the RSB limit along the crossover line, $x$ fixed, with $T_1(\kappa)\sim\kappa^{-5/3}$, leads to a discrete set of different Parisi box sizes $m_l(\kappa=\infty,T=0)$.
Nonequilibrium susceptibility $\chi_1$ {#subsec:chi1-scaling}
--------------------------------------
The scaling form of the non-equilibrium susceptibility $\chi_1(\kappa,T)$ can be given in terms of a scaling function $f_1$ by $$\chi_1(\kappa,T)=\kappa^{-5/3}f_1(T/\kappa^{-5/3}),$$ where $f_1(x)\sim x$ for $x\rightarrow\infty$ and $f_1(0)\approxeq 0.86$; this reproduces the data and the leading $\kappa$-decay at $T=0$, as in Eq.(\[eq:kappa-decay\]).
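The quoted asymptotics of $f_1$ imply $$\chi_1(\kappa,T=0)=\kappa^{-5/3}f_1(0)\cong 0.86\,\kappa^{-5/3},\qquad \chi_1(\kappa\rightarrow\infty,T)\rightarrow {\rm const}\cdot T,$$ i.e. the zero-temperature decay of Eq.(\[eq:kappa-decay\]) on one side of the crossover line and a finite, $\kappa$-independent value linear in $T$ on the other.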
Magnetic field scaling at critical point ${\cal CP}2$ (diverging pseudotimes $1/a\rightarrow\infty$) {#finite-H-scaling}
====================================================================================================
The magnetic field dependence at $T=0$ is expected to yield a plateau-like cutoff of the order function, of a shape similar to the one described by the Parisi form $q(x)$. We now study the field dependence of the smallest order parameter $q_{\kappa+1}(H,T=0)$ in $\kappa$-th order of RSB. $20$ orders of RSB turn out to be enough to extract the exponent describing the decay of $q_{\kappa+1}$ as the order of RSB tends to infinity. Guided by the finite-temperature results, where a single non-trivial rational exponent appeared, we find that an exponent $2/3$ provides a reasonable picture for the extrapolation towards $\infty$-RSB.
We first identify the $q_i\sim\kappa^{-1}$-law for (infinitely many) order parameters which vanish as $\kappa\rightarrow\infty$. The scaling hypothesis for $(\kappa,H)$-scaling, valid for the vanishing order parameters $q_i(\kappa,H)$, can be formulated as $$q_i(\kappa,H,T=0)=\frac{1}{\kappa} f_{i}\left(\frac{H^{2/3}}{1/\kappa}\right)
\label{eq:Hfield-exponent}$$ with $f_i(0)\neq 0$ and $f_i(x\rightarrow\infty)\sim x$.
The numerical procedure chosen to arrive at this proposal has been to extrapolate the smallest $q_1$-values to $\infty$-RSB at fixed, non-vanishing small magnetic fields. The higher the field, the fewer orders of RSB are needed (similar to the case of finite temperatures). Twenty steps of RSB generate almost exact results down to $H\approx 0.15$. Extrapolation of the RSB-flow is hence reliable down to much smaller field values, where one has already entered the critical regime. Thus many RSB fixed point values (at $\kappa=\infty$) are well approximated and can be used to match a power law in the magnetic field. In this way the magnetic field exponent of Eq.(\[eq:Hfield-exponent\]) is found to differ by only $0.003$ from the value $2/3$, which led to the assumption that this rational number is exact.
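The final matching step amounts to a simple log-log fit; the sketch below uses synthetic values (generated here with the exact $2/3$ law) in place of the extrapolated fixed-point data described above.

```python
import numpy as np

# Hypothetical extrapolated fixed-point values q_1(kappa=infinity, H, T=0);
# synthetic toy numbers stand in for the extrapolated RSB data.
H  = np.array([0.02, 0.04, 0.08, 0.15, 0.30])
q1 = 0.9 * H ** (2.0 / 3.0)

slope, _ = np.polyfit(np.log(H), np.log(q1), 1)
print("field exponent:", slope)   # ~ 0.667, to be compared with 2/3
```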
Scaling behavior of the free energy $F$, internal energy $U$, and entropy $S$ {#free-energy-scaling}
=============================================================================
Low temperature expansions of the internal energy $U$, the entropy $S$, and the free energy $F=U-TS$ were reported in the framework of our high order RSB analysis, and found to be in agreement with already known results. In the present context of scaling theory, we also look for scaling of the RSB-parameters together with small temperature and field variations. A useful way to study the RSB-flow in terms of $\kappa$-scaling is by invoking the internal energy formula at $T=0$ and $H=0$ $$\begin{aligned}
\label{eq:U}
& &\hspace{-.4cm}U(\kappa,T=H=0)=-\chi_1-\frac12 \sum_{l=1}^{\kappa}a_l (q_l^2-q_{l+1}^2) \\
& \hspace{-.4cm}\Rightarrow& \lim_{\kappa\rightarrow\infty}U(\kappa,0)=-\frac12 \int_{0}^{\infty}da\hspace{.1cm}(1-q(a)^2)\end{aligned}$$ The summation includes contributions from both critical points and from the crossover regime in between. Consequently one cannot expect to obtain scaling laws from a single hypothesis imposed on the total free energy. The problem has more in common with critical dynamics, however with two critical points, one in the long pseudo-time limit ($1/a\rightarrow\infty$) and one in the short pseudo-time limit ($1/a\rightarrow0$). As reported in Ref. the free energy has a low-$T$ expansion in the RSB limit given by $F=F(T=0)-S(T=0)T+\sum_{k=2} f_k T^k$, where the leading temperature behavior is $F(\kappa=\infty,T)-F(\infty,0)\sim T^3$. The leading large-$\kappa$ correction of the $T=0$ free energy has also been reported to decay like $\kappa^{-4}$.
In the large-$a$ regime, temperatures scale like $\kappa^{-5/3}$ and hence the large-$a$ scaling contribution is $\delta F\sim \kappa^{-5}$. Thus the leading temperature dependence belongs to a sub-leading $\delta F\sim \kappa^{-5}$ correction.
We attempt to separate the singular scaling contributions of the two critical points from the non-singular contributions to the free energy. The small-$a$ regime contribution can be estimated from $$\begin{aligned}
\label{eq:F}
& &F(\kappa=\infty,T=0)=U(\kappa=\infty,T=0)=E_0(H)\nonumber\\
& &=-\frac12 \int_0^{\infty}da\hspace{.1cm}(1-q^2(a))-M(H) H,
\label{eq:T0energy}\end{aligned}$$ where $M(H)$ denotes the field-generated magnetization. Recalling the small-$a$ expansion of the order function, $q(a)\sim a-{\rm const}\cdot a^3+O(a^5)$, one must expect an $H^{10/3}$-contribution from the plateau regime, which also implies an $O(\kappa^{-5})$ contribution. The free energy data are compatible with an $H^{10/3}$ small-field scaling part.
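A heuristic way to arrive at this power counting is the following. A small field cuts the order function off at a plateau value $q_{1}\sim H^{2/3}$, i.e. at $a_{1}\sim q_{1}$ since $q(a)\approx a$ for small $a$; the linear part of $q(a)$ then only produces contributions analytic in $H$ (absorbed in the regular $-\frac12\chi H^{2}$ term discussed below), while the cubic correction adds to the energy density $\epsilon_{0}(a)=-a\,q'(a)q(a)$ a piece of order $a\cdot a^{3}$, so that $$\delta F_{s}^{({\cal CP}2)}\sim\int_{0}^{a_{1}}da\hspace{.1cm}a^{4}\sim a_{1}^{5}\sim H^{10/3},\qquad a_{1}\sim\kappa^{-1}\;\Rightarrow\;\delta F_{s}^{({\cal CP}2)}\sim\kappa^{-5}.$$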
It must be concluded that the leading correction $\kappa^{-4}$ originates in the intermediate-$a$ regime (not yet identified in detail). Given all that was said before, it cannot be assumed to be a scaling contribution. We should therefore attribute it to the regular free energy part.
The entropy was found to obey [@expcpaper] $$S(\kappa,T=H=0)=-\frac14 \chi_1(\kappa,T=H=0)^2.$$ It is known that only the large-$a$ regime near ${\cal CP}1$ is responsible for the leading $\kappa$-behavior of $\chi_1$ at zero temperature, hence this holds also for the $T=0$ entropy. Since the thermal behavior is also caused by the ${\cal CP}1$ contributions, we can therefore claim that the scaling contribution to the entropy obeys $$S_s(\kappa,T)=\kappa^{-10/3}f_S(T^2/\kappa^{-10/3}),$$ with $f_S(x)=-0.72\hspace{.1cm}x$ (see Ref.) for $x\rightarrow\infty$ and $f_S(x)\approxeq -0.185$ for $x\rightarrow0$. Thus the entropy contributes to the leading $O(T^3)$ low temperature correction of the free energy. This $TS$-term again contributes only a sub-leading correction $\delta F\sim \kappa^{-5}$ from the large-$\kappa$ scaling regime.
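The quoted limits are mutually consistent: at $T=0$ the relation $S=-\frac14\chi_1^2$ together with $\chi_1\cong0.86\,\kappa^{-5/3}$ gives $$f_S(0)=-\tfrac14\,f_1(0)^2\cong-\tfrac14\,(0.86)^2\cong-0.185,$$ while for $x\rightarrow\infty$ one has $S_s\cong-0.72\,T^2$, so that $T S_s\sim T^3$ and, along the crossover line $T\sim\kappa^{-5/3}$, $T S_s\sim\kappa^{-5}$.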
Let us recall the large-$\kappa$ dependence of the free energy at zero temperature, well described by the optimal fitting form $$F(\kappa,T=0)=F(\infty,0)+\frac{c_4}{(\kappa+\kappa_0)^4}+\frac{c_5}{(\kappa+\kappa_0)^5}+...,$$ where excellent Padé-fits yield the constant $\kappa_0=1.28$. The leading correction $\kappa^{-4}$ originates neither from the scaling regime near ${\cal CP}1$ nor from that near ${\cal CP}2$, and hence must be expected not to scale. We therefore consider it as part of a regular $F$-contribution $F_{reg}(\kappa,T,H)$.
Thus we propose that the free energy consists of a sum of a regular and of two singular parts, where the latter ones scale according to whether they are ${\cal CP}1$- or ${\cal CP}2$-critical.
As a consequence of this two-critical point picture and in agreement with the numerical data, we separate two singular contributions, which exhibit different scaling behavior, from a regular part $F_{reg}$ by $$F(\kappa,T,H)=F_{reg}(\kappa,H,T)+F^{({\cal CP}1)}_s(\kappa,T)+F^{({\cal CP}2)}_s(\kappa,H)$$ where the temperature-controlled critical point ${\cal CP}1$ and the magnetic-field controlled critical point ${\cal CP}2$ contribute, respectively, $$F^{({\cal CP}1)}_s(\kappa,T)=\kappa^{-5}f_{cp1}(T/\kappa^{-5/3}),$$ and $$F^{({\cal CP}2)}_s(\kappa,H)=\kappa^{-5}f_{cp2}(H^{2/3}/\kappa^{-1})$$ with $f_{cp1}(x)\sim x^3, f_{cp2}\sim x^5$ for $x\rightarrow \infty$ and both finite for $x\rightarrow0$. This claim refers to the leading scaling behavior at ${\cal CP}1$ and ${\cal CP}2$; corrections with analytic $T$-dependence near ${\cal CP}2$ and analytic field-dependent corrections near ${\cal CP}1$ may occur.
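These two scaling forms indeed reproduce the limits discussed above, $$F^{({\cal CP}1)}_s\sim\kappa^{-5}\Big(\frac{T}{\kappa^{-5/3}}\Big)^{3}=T^{3},\qquad F^{({\cal CP}2)}_s\sim\kappa^{-5}\Big(\frac{H^{2/3}}{\kappa^{-1}}\Big)^{5}=H^{10/3}\qquad(\kappa\rightarrow\infty),$$ while at $T=0$ and $H=0$, respectively, each contributes only a sub-leading $O(\kappa^{-5})$ correction.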
A $-\frac12 \chi(\kappa) H^2$ term, which yields the linear equilibrium susceptibility via $-\partial_H^2 F$, belongs to the regular part $F_{reg}$, with $\chi(\kappa\rightarrow\infty,T<T_c)=1$. It is interesting to try to translate the given power laws into scaling with the number $N$ of spins of the finite-$N$ SK-models [^4], which correspond to a finite size system with $N=L^d$, $d$ denoting the real space dimension. Scaling with $L$ or $N$ delivered a leading correction $\sim N^{-2/3}$ for the finite SK-model [@boettcher; @bouchaud-energy-exponents]. If one assumed scaling of the leading correction $\kappa^{-4}$ with $N$, a scaling function depending on $N^{-1/6}/\kappa^{-1}$ would result [@thg-privcom; @ds-privcom]. However, this rests on the assumption that the leading $N^{-2/3}$ energy correction arises from the entire $a$-regime. Many open questions show up here.
Fixed point distributions {#U-distribution}
=========================
Ground state energy $E_0$
-------------------------
We can extract more detailed information from our numerical analysis of RSB in the SK-model [@prl2007; @expcpaper] beyond the calculation of the global ground state energy. The RSB-flow of the energy level distribution can be given, and so can the energy density $\epsilon_0(a)$ as a function of the pseudo-time. In the latter case, a test of our analytic order function model against the numerical results [@expcpaper] is provided by the use of $q(a)$ and of $q'(a)$. Both are required in the ground state energy formula of Eq.(\[eq:T0energy\]) according to $$\label{gs-energy}
E_0=\int_0^{\infty}da\hspace{.1cm}\epsilon_0(a)=
-\int_0^{\infty}da\hspace{.1cm}a\hspace{.06cm}q'(a) q(a).$$ Using the analytic form (3) and high RSB-order results for $\kappa=100,110,120,\dots,200$, we obtain Fig.\[fig:energy-distrib1\]. [^5] We do not find exponential tails in this energy distribution, but instead observe simple power law decay in the limits of small and large $a$.
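The equivalence of this representation with the $T=0$ internal energy formula in the RSB limit, Eq.(\[eq:U\]), follows from an integration by parts, $$-\frac12\int_0^{\infty}da\hspace{.1cm}(1-q^2(a))=-\frac12\Big[a\,(1-q^2(a))\Big]_0^{\infty}-\int_0^{\infty}da\hspace{.1cm}a\,q'(a)\,q(a)=-\int_0^{\infty}da\hspace{.1cm}a\,q'(a)\,q(a),$$ since the boundary term vanishes: $1-q^2(a)\rightarrow1$ for $a\rightarrow0$, while $1-q^2(a)\sim0.82/a^2$ for $a\rightarrow\infty$ by the large-$a$ expansion, so that $a(1-q^2(a))\rightarrow0$ at both ends.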
A second important representation shows the energy level contributions from $$\quad \epsilon_0(l,\kappa)= -\frac14 \lim_{T\rightarrow 0} \hspace{.1cm} a_l(\kappa)\left(q_l^2(\kappa)-q_{l+1}^2(\kappa)\right),$$ as a function of normalized level index $l/\kappa$, and with boundary conditions $a_0=\beta,\hspace{.1cm}q_0=1$. The sum over all energy levels $\epsilon_0(l,\kappa)$ with level index $l=0,1,2,...\kappa$ for each calculated RSB-order yields the RSB-flow of the ground state energy $$E_0(\kappa)=\sum_{l=0}^{\kappa}\epsilon_0(l,\kappa)$$ towards the exact value[@expcpaper] $E_0(\kappa=\infty)=E_0$.
Normalizing the level numbers by the RSB-order $\kappa$ displays the level distributions for each RSB-order on the same interval of unit length. A subsequent rescaling of the energy levels allows one to visualize the RSB-flow towards one fixed point energy distribution (which of course depends on the rescaling factor [^6]). Fig.\[fig:energy-distrib2\] shows two choices ($l$- and $\kappa$-rescaling of $\epsilon_0(l,\kappa)$); in both cases the convergence towards the fixed point function is obvious.
Fixed points (under RSB-flow) have been calculated in the same way as shown before for the order function. For example, fixing $l/\kappa$ to a rational number $m/n$ within the unit interval, one can see many of the leading fixed points in Fig.\[fig:energy-distrib2\] following the RSB-flow along vertical lines fixed by $m/n$. The piecewise dense set of calculated fixed points was obtained by an extrapolated Padé approximation for $n=2,...,51$ with $m=1,...,n-1$. These fixed points are shown in Fig.\[fig:energy-distrib2\] together with their fit function, obtained here as an $(8,8)$-Padé series. The fixed points are piecewise dense, with some gaps near 'leading' fixed points, which however become closed as higher orders are evaluated. The fit function (interpolating between the dense regions) represents an approximation for the exact fixed point energy distribution function $\epsilon_0^*(\zeta)$ with $l/\kappa\rightarrow\zeta$ in the $\infty$-RSB limit. The numerical integration of the approximated function $\rho_{\epsilon}^*(\zeta)$ (which corresponds to $\epsilon^*(a)$ of Eq.(\[gs-energy\]) transformed from $0\leq a\leq\infty$ onto the unit interval $0\leq \zeta\leq1$) yields [^7] $$E^*_0\equiv E^*(T=0)=\int_0^{1}d\zeta\hspace{.1cm}\rho_{\epsilon}^*(\zeta)|_{approx}\approx -0.76314.$$ By reproducing the correct value [@expcpaper] up to $O(10^{-5})$, this provides a good test of the fixed point method. An alternative calculation, using Eq.(\[gs-energy\]) with the fixed point order function plugged in, confirms the numerical value $E_0^*$. The inserted figure shows the magnitude of the energy corrections per level $l$ occurring from $200$-RSB to the exact $\infty$-RSB energy per level (recall that $l$ labels the Parisi boxes of the RSB order parameter).
Different power law decays are observed in the small $l/\kappa$ range (near ${\cal CP}1$) and in the $l/\kappa\approx 1$ range (near ${\cal CP}2$).
An analytical modeling of the fixed point energy distribution must be attempted in the future; it might reveal more valuable information about the relation with directed polymers and/or with the KPZ-universality classes[@praehofer].
Energy distribution functions play an important role in the characterization of directed polymers [@monthus-pre69; @monthus-pre73; @monthus-pre74]. Generalized Gumbel statistics[@bertin-gumbel] were found to describe the statistical fluctuations of global quantities (like the energy). It is perhaps in this respect where a clear distinction between the directed polymers and the present universality class can be made. But this detailed comparison is beyond the scope of the present paper and should be attempted in the future.
Equilibrium susceptibility per level
------------------------------------
To conclude this section we extend the described method to the $\chi(a)$-density of the equilibrium susceptibility $\chi$ and in particular to the distribution per level $l$. In the RSB-limit, the total $\chi$ is known to be equal to $1$ in the entire ordered phase. The RSB flow thus moves towards a fixed point function $\chi(a)=a\hspace{.1cm}q'(a)$ with the property $\int_0^{\infty} da \chi(a)=1$ (this had been used before as a constraint for our analytical order function model [@prl2005; @prl2007]).
Let us now study the RSB-flow of the discrete representation $\chi(l,\kappa)=a_l(\kappa)(q_l(\kappa)-q_{l+1}(\kappa))$. The result for the susceptibility per level $l$ (normalized by RSB-order $\kappa$) is shown in Fig.\[fig:chi\] (and corresponds to the energy per level distribution shown in the preceding figure).
The shape recalls universal distributions observed for the KPZ growth processes [@praehofer]. This relation or mapping must be studied in the future, particularly because - as explained in Ref. - the related statistical fluctuations have been associated with universal critical behavior.
Beyond the flow of the finite RSB orders $\kappa=10,20,30,...,200$ we have added the fixed point function $\rho^*_{\chi}(\zeta)$ for the susceptibility density (i.e. $\chi(a)$ transformed onto the unit interval $0\leq\zeta\leq1$), which must obey $\int_0^1 d\zeta\hspace{.1cm}\rho^*_{\chi}(\zeta)=1$. A simple approximate calculation of the interpolating fixed point function reproduces the exact constraint with an error of only $O(10^{-5})$. Again this confirms the power of the method, which can, e.g., be used to test analytical proposals.
Small changes from $200$-RSB to $\infty$-RSB are resolved in Fig.\[fig:susc-density\] and in Fig.\[fig:energy-distrib2\]. Their tendency is to make the distribution more symmetric. Yet the distribution per normalized level remains asymmetric, as does the energy distribution (as a function of the dense levels $l/\kappa$); this has been observed as a special feature of the SK-model, in contrast to the symmetrical distributions of finite-range spin glasses.
Scaling with the pseudo-dynamical variable of the order function $q(a)$ {#pseudo-dynamical-scaling}
=======================================================================
In previous publications we found a Langevin-type representation [@prl2007; @pssc2007] for a logarithmic derivative of the order function $q(a)$ with respect to $1/a$. This ordinary differential equation (without stochastic field) is much simpler than the exact partial differential equations, which is a consequence of the existence of scaling behavior and of homogeneous functions. It is well-known that scale invariance and the so-called similarity method reduce partial differential equations to ordinary ones [@debnath-book]. Therefore, at least near the critical points, one can expect ordinary differential equations to describe RSB.
The Langevin-type differential equation could however be reshaped in terms of a different pseudo-dynamic variable: $a$, $1/a$, or other forms. The differential equation remains relaxational, and thus there remains some arbitrariness in the choice of the proper 'time' variable $\tau$. If we wish to apply dynamic scaling to the RSB-representation, we are unfortunately bound to make a definite choice. Let us consider $a+1/a$ as a pseudo-time, in order to conform with the expectation that critical behavior at either of the points ${\cal CP}1$ or ${\cal CP}2$ should occur in the long-time limit. Then at ${\cal CP}1$ we would get $\tau\sim a\rightarrow\infty$, while $\tau\sim\frac{1}{a}\rightarrow\infty$ at ${\cal CP}2$.
We may now consider pseudo-dynamical scaling by studying the $a$-dependent quantities like the order function near ${\cal CP}1$ and ${\cal CP}2$.
Near ${\cal CP}1$ the order function obeys $$q(a,\kappa,T)=1+a^{-2} f_q(T^2/a^{-2},a^2/\kappa^{10/3})$$ with $f_q(x,0)\sim x$, $f_q(0,x)\sim x$, and $f_q(0,0)$ finite. In terms of the transformed order function $\phi\sim \partial_{1/a}\log(q(a))$ one gets $\phi\sim 1/a$ at $\kappa=\infty, T=0$ and $\phi\sim T$ at $a=\infty,\kappa=\infty$. This allows one to extract an exponent $\beta=1$, and together with $\xi_{\kappa}\sim a^{3/5}\sim T^{-3/5}$ the correlation exponent $\nu=3/5$ results. Then, using $\tau\sim 1/a$ as a pseudo-time variable near ${\cal CP}1$, the dynamic exponent follows from $\tau\sim\xi_{\kappa}^z$ as $z=5/3$, remarking also that $z\hspace{.1cm}\nu=1$.
Given the already mentioned similarities with directed polymers, the known relationship between those and the KPZ-equation [@laessig-KPZ] suggests a comparison between pseudo-dynamics of RSB in the SK-model and the dynamic KPZ-behavior.
We note that dynamic critical exponents were recently reported by Canet and Moore [@KPZ-canet-moore] for two universality classes of the KPZ-equation. For one type of approximate solution, of the Flory-Imry-Ma or RSB-type, the dynamic exponent assumed the value $z=(4+d)/3$ below two dimensions, hence $z=5/3$ in $d=1$. We therefore state that the exact pseudo-dynamic critical exponent of RSB in the SK-model maps to that of the FIM- or RSB-type approximate solution of the KPZ-equation in 1D (provided one agrees to choose $1/a$ as the pseudo-time corresponding to the real time of KPZ). As in the DP-analogy, this should correspond to the domain-wall solution and hence to ${\cal CP}1$.
There is, however, also the known exact result $z=3/2$ of the 1D KPZ-equation, likewise given by Canet and Moore [@KPZ-canet-moore].
One may suspect that this result should be mappable to the pseudo-dynamic behavior of the RSB-SK model near the second critical point ${\cal CP}2$. Indeed, if we were to conserve $z\hspace{.1cm}\nu=1$, the same exponent $z=3/2$ would be obtained near ${\cal CP}2$. We do not have any reason for this choice, however, and the explicit scaling of the order function near ${\cal CP}2$ does not confirm this value, neither for the choice $\tau=1/a$ nor for $\tau=a$. This question must remain open.
Detailed structure of the order function derivatives $q'(a)$ and $q''(a)$. {#q(a)-derivatives}
==========================================================================
The derivatives depend much more specifically on the pseudo-time variable than $q(a)$ itself. Failure of an analytic model function becomes detectable more easily in the derivatives. In order to control our modeling, we studied analytical fits, first of the full set of $200$-RSB data and secondly of the $50$ calculated fixed points. Taking $q'(a)$ directly from the analytical form $q(a)$ as given by Eq.(\[model-function\]), we find good agreement with the discretized slope calculated from the fixed points. This is demonstrated in the main part of Fig.\[fig:q-derivatives\]. In addition, the insert shows the second derivative $\partial_a^2 q(a)$, where the two analytic models (red and blue curves) show a small difference. We note in passing that the shape of the 2nd derivative $q''(a)$ shows a similarity with the 2-loop correction in the $Y$-correlator of ($1+1$)-dimensional random bond pinned manifolds [@middleton] (we don't know whether this similarity has a deeper reason).
The maximum seen in $q'(a)$ expresses the Crisanti-Rizzo curvature [@Crisanti2002; @prl2007], a slight non-linearity of the order function in the small-$a$ regime. It is, however, this contribution which renders an analytical fit rather awkward. An analytical model which fits the neighborhood of the critical points $a=0$ and $a=\infty$ well can have a simpler shape [@prl2005], but we want to get the pseudo-dynamic crossover right as well. Global quantities like the energy (an integral over all $a$), which pick up only small contributions near the critical points, depend on the modeling of the crossover regime. This can be seen in Eq.(\[gs-energy\]) as well as in Fig.12 for the energy density.
Conclusions
===========
In this article we formulated a scaling theory of the flow towards full replica symmetry breaking (RSB) at $T=0$, for finite temperatures, and for finite magnetic fields in the SK-model. Several fixed point functions of RSB-flow were evaluated.
The analysis was guided by
1\. a large set of high-precision numerical data, with up to $200$ self-consistently solved orders of replica symmetry breaking for the $T=0$ SK-spin glass and still a high number of orders for finite temperatures and magnetic fields,
2\. the identification of two critical points (at zero temperature and zero magnetic field), which are distinguished by two different pseudo-dynamic limits, as obtained in an analytic picture of a Langevin-type equation in Ref., and
3\. the representation of nonanalytic behavior near each of these critical points in the framework of the scaling theory of critical phenomena.
Power laws and scaling functions were identified by fitting the leading $200$ RSB-orders of self-consistent solutions deep inside the SK spin glass phase; non-integer exponents were found and identified as rational numbers, characteristic of one-dimensional RSB-behavior. This 1D-character originates in correlations on the pseudo-lattice of RSB-orders $\kappa$. By means of scaling functions we demonstrated how these nonanalytic 1D-correlations enter in temperature- and field-dependent power laws in the ordered phase.
The universality class of replica symmetry breaking in the SK-model calls for comparison with other physical systems, and shows similarities with directed polymers.
The decoupling of a magnetic field sensitive critical point from a temperature-sensitive one was embedded in an unconventional scaling hypothesis for the free energy and found to be consistent with the numerical data.
The RSB flow was used to generate an order parameter fixed point function, serving as a crossover between the two different pseudo-dynamical critical limits. Its fine structure was revealed by the leading derivatives, again confirming excellent agreement between analytical model and fixed point function.\
Acknowledgments
===============
We are indebted to Kay Wiese, Markus Müller, Thomas Garel, Andrea Crisanti, David Sherrington, Haye Hinrichsen, and Stefan Boettcher for stimulating discussions and helpful remarks. We thank Tommaso Rizzo for useful remarks and for sending recent work prior to publication [@parisi-rizzo]. We thank the DFG for partial and continued support of this research under grant Op28/7-1.
[99]{} M. Mézard, G. Parisi, M.A. Virasoro, [*Spin Glass Theory and Beyond*]{} (World Scientific, Singapore, 1987) A.P. Young, [*Spin Glasses and Random Fields*]{}, (World Scientific, 1998) G. Parisi, [*Field Theory, Disorder and Simulations*]{} (World Scientific, Singapore, 1992) S. Galam, Y. Gefen, Y. Shapir, J.Math.Sociology 9, 1 (1982) S. Galam, J.Math.Psychology 30, 426 (1986), condmat/9901022 and references therein P.G. Higgs, 76, 704 (1996) F. David, K.J. Wiese, 98, 128102 (2007) M. Lässig, K.J. Wiese, 96, 228101 (2006) H. Orland, A. Zee, Nucl.Phys. B620 \[FS\], 456 (2002) E. Marinari, A. Pagnani, F. Ricci-Tersenghi, 65, 041919 (2002) M. Müller, 67, 021914 (2003) D. Sherrington, S. Kirkpatrick, Phys. Rev. Lett. 35, 1972 (1975) M.Talagrand, Annals of Mathematics 163, 221 (2006) and\
‘Spin Glasses: A Challenge for Mathematicians : Cavity and Mean Field Models’, Springer-Verlag (2003) G. Parisi, J.Phys. A 13, L115 (1980) G. Parisi, 50, 1946 (1983) K. Binder, A.P. Young, Rev.Mod.Phys. 58, 801 (1986) S. Boettcher, Eur.Phys.J. B46, 501 (2005) J.-P. Bouchaud, F. Krzakala, O.C. Martin, 68, 224404 (2003) T. Aspelmeier, A. Billoire, E. Marinari, M. Moore, cond-mat/07113445 (2007) R. Oppermann, M.J. Schmidt, D. Sherrington, 98, 127201 (2007) M.E. Fisher, Rev.Mod.Phys. 46, 597 (1974) P. Le Doussal, M. Müller, K.J. Wiese, condmat/07113929 (2007) D.S. Fisher, H. Sompolinsky, 54, 1063 (1985) D. Fisher, D. Huse, 38, 373, and 386 (1988), 56, 1601 (1986) C. De Dominicis, I. Giardina, E. Marinari, O.C. Martin, F. Zuliani, 72, 014443 (2002) C. Monthus, T. Garel, cond-mat/07123358 (2007) R. Oppermann, D. Sherrington, 95, 197203 (2005) S. Pankov, 96, 197204(2006) M. Müller, S. Pankov, 75,144201 (2007) M.J. Schmidt, R. Oppermann, cond-mat/0801175, accepted for publication in G.Parisi, G. Toulouse, J.Physique Lett 41, L361 (1980) H. Sompolinsky, 47, 935 (1981) R. Oppermann, M.J. Schmidt, Phys.Stat.Sol(c)4, 3347, (2007) K. Johansson, Commun. Math. Phys. 209, 437 (2000) M. Ha, J. Timonen, M. den Nijs, 68, 056122 (2003) T. Garel, H. Orland, 55, 226 (1996) T. Garel, private communication D. Sherrington, private communication S.N. Majumdar, cond-mat/0701193 C. Monthus, T. Garel, 69, 061112 (2004) C. Monthus, T. Garel, 73, 056106 (2006) C. Monthus, T. Garel, 74, 051109 (2006) E. Bertin, 95, 170601 (2005) M. Prähofer, H. Spohn, 84, 4882 (2000) M. Mézard, G. Parisi, J. Phys. I1, 809 (1991) L. Debnath, [*Nonlinear partial differential equations*]{}, Birkhäuser, (2004) A.A. Middleton, P. Le Doussal, K.J. Wiese, 98, 155701 (2007) L. Canet, M.A. Moore, 98, 200602 (2007) M. Kardar, G. Parisi, Yi-Cheng Zhang, 56, 889 (1986) A. Crisanti, T. Rizzo, 65, 046137 (2002) M. Lässig, Nucl. Phys. B 448 \[FS\], 559 (1995) G. Parisi, T. Rizzo, preprint (2008)
[^1]: in addition to $1/\kappa$ we also consider scaling with respect to the pseudo-dynamic variable $1/a$, treating both as quasi-continuous scaling variables - one may imagine the analogy of a large enough lattice such that the discreteness of momenta can be neglected.
[^2]: The crossover behavior (between critical points) may not need an exact representation.
[^3]: For example, in order to guarantee a nonnegative entropy when the temperature decreases towards zero, one must scale $\kappa$ up like $T^{\nu_T}$ such that the RSB-order stays larger than $\xi_{\kappa}(T)$.
[^4]: We thank Thomas Garel for drawing our attention to the paper by Bouchaud et al[@bouchaud-energy-exponents]
[^5]: On the given scale, all numerical results fall almost exactly onto the single analytical curve for $\epsilon_0(a)$; only extreme magnification reveals the RSB-flow of the numerical data and tiny deviations from the analytical model in the crossover regime between ${\cal CP}1$ and ${\cal CP}2$.
[^6]: One may choose rescaling factors such that discrete spacing of energy levels would survive even in the fixed point function ($\kappa=\infty$) near $l/\kappa=0$ and $l/\kappa=1$; this would correspond to the discrete spectra of parameter ratios discussed in the paper.
[^7]: We tacitly assume here that the density functions $\rho^*_{\epsilon}(\zeta)$ and also $\rho^*_{\chi}(\zeta)$ (below) are Riemann-integrable. The upgrade from the set of rational numbers $l/\kappa$ to a continuous variable $\zeta$ could in principle hide a mathematically subtle problem, if the density functions were highly discontinuous and would, for example, require a Lebesgue integral.
---
abstract: 'Predictions of the gap-probability renormalization model for single and double diffraction dissociation cross sections in proton-proton collisions at the LHC are presented and compared with recent CMS measurements.'
author:
- |
[*Konstantin Goulianos*]{}\
The Rockefeller University, 1230 York Avenue, New York, NY 10065, USA
title: Phenomenology of single and double diffraction dissociation
---
Introduction {#sec:intro}
============
Measurements at the [lhc]{} have shown that there are sizable disagreements among Monte Carlo [(mc)]{} implementations of “soft” processes based on cross sections proposed by various physics models, and that it is not possible to reliably predict all such processes, or even all aspects of a given process, using a single model [@ref:d2012_pheno; @D2012_talks_vs_models; @ref:dis13_pheno]. In the [cdf]{} studies of diffraction at the Tevatron, all processes are well modeled by the [mbr]{} (Minimum Bias Rockefeller) [mc]{} simulation, which is a stand-alone simulation based on a unitarized Regge-theory model, [renorm]{} [@RENORM], employing inclusive nucleon parton distribution functions ([pdf]{}’s) and [qcd]{} color factors. The [renorm]{} model was updated in a presentation at [eds-2009]{} [@EDS2009_total] to include a unique unitarization prescription for predicting the total $pp$ cross section at high energies, and that update has been included as an [mbr]{} option for simulating diffractive processes in [pythia8]{} since version [pythia8]{}.165 [@PYTHIA8.165], referred to henceforth as [pythia8-mbr]{}. In this paper, we briefly review the cross sections [@MBR_note] implemented in this option of [pythia8]{} and compare the [sd]{} and [dd]{} predictions with [lhc]{} measurements.
Cross sections
==============
The following diffraction dissociation processes are considered in [pythia8-mbr]{}: $$\begin{aligned}
\hbox{\sc sd}& pp\rightarrow Xp&{\rm Single\:Diffraction\;(or\;Single\;Dissociation)},\\
{\rm or}&pp\rightarrow pY&{\rm (the\;other\;proton\;survives)}\nonumber\\
\hbox{\sc dd}&pp\rightarrow XY&{\rm Double\;Diffraction\;(or\;Double\;Dissociation)},\\
\hbox{{\sc cd} (or {\sc dpe})}&pp\rightarrow pXp&{\rm Central\;Diffraction\;(or\;Double\;Pomeron\;Exchange)}.
\label{eqn:processes}\end{aligned}$$
The [renorm]{} predictions are expressed as unitarized Regge-theory formulas, in which the unitarization is achieved by a renormalization scheme where the Pomeron (${I\!\! P}$) flux is interpreted as the probability for forming a diffractive (non-exponentially suppressed) rapidity gap, so that its integral over all phase space saturates at the energy where it reaches unity. Differential cross sections are expressed in terms of the ${I\!\! P}$-trajectory, $\alpha(t)=1+\epsilon +\alpha't = 1.104 + 0.25~{\rm (GeV^{-2})}\cdot t$, the ${I\!\! P}$-$p$ coupling, $\beta(t)$, and the ratio of the triple-${I\!\! P}$ to the ${I\!\! P}$-$p$ couplings, $\kappa \equiv g(t)/\beta(0)$. For large rapidity gaps, $\Delta y\geq 3$, for which ${I\!\! P}$-exchange dominates, the cross sections may be written as $$\begin{aligned}
\frac{d^2\sigma_{SD}}{dtd\Delta y} & = & \frac{1}{N_{\rm gap}(s)} \left[ \frac{~ ~ \beta^2(t)}{16\pi}e^{2[\alpha(t)-1]\Delta y}\right] \cdot \left\{ \kappa \beta^2(0) \left( \frac{s'}{s_{0}}\right)^{\epsilon}\right\}, \label{eqSD}\\
\frac{d^3\sigma_{DD}}{dtd\Delta y dy_0} & = & \frac{1}{N_{\rm gap}(s)} \left[ \frac{\kappa\beta^2(0)}{16\pi}e^{2[\alpha(t)-1]\Delta y}\right] \cdot \left\{ \kappa \beta^2(0) \left( \frac{s'}{s_{0}}\right)^{\epsilon}\right\}, \label{eqDD}\\
\frac{d^4\sigma_{DPE}}{dt_1dt_2d\Delta y dy_c} & = & \frac{1}{N_{\rm gap}(s)} \left[\Pi_i\left[ \frac{\beta^2(t_i)}{16\pi}e^{2[\alpha(t_i)-1]\Delta y_i}\right]\right] \cdot \kappa \left\{ \kappa \beta^2(0) \left( \frac{s'}{s_{0}}\right)^{\epsilon}\right\}, \label{eqCD}\end{aligned}$$ where $t$ is the 4-momentum-transfer squared at the proton vertex, $\Delta y$ the rapidity-gap width, and $y_0$ the center of the rapidity gap. In Eq. (\[eqCD\]), the subscript $i=1, 2$ enumerates Pomerons in a [dpe]{} event, $\Delta y=\Delta y_1 + \Delta y_2$ is the total rapidity gap (sum of two gaps) in the event, and $y_c$ is the center in $\eta$ of the centrally-produced hadronic system.
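For illustration, the factorized structure of Eq. (\[eqSD\]) can be evaluated numerically as sketched below; the trajectory parameters follow the values quoted above, while the coupling $\beta^{2}(t)$ (here assumed exponential in $t$), $\kappa$, $s_{0}$ and $N_{\rm gap}(s)$ entries are placeholders rather than the fitted [renorm]{}/[mbr]{} parameters.

```python
import math

# Illustrative evaluation of the structure of Eq. (eqSD); beta2(), kappa, s0
# and n_gap below are placeholders, NOT the fitted RENORM/MBR parameters.
EPS, ALPHA_PRIME = 0.104, 0.25            # alpha(t) = 1 + EPS + ALPHA_PRIME * t

def alpha(t):
    return 1.0 + EPS + ALPHA_PRIME * t

def beta2(t, beta0_sq=6.6, slope=6.6):    # placeholder Pomeron-proton coupling squared
    return beta0_sq * math.exp(slope * t)  # t < 0 (GeV^2); exponential t-shape assumed

def d2sigma_sd(t, delta_y, s_prime, s0=1.0, kappa=0.17, n_gap=1.0):
    """d^2 sigma_SD / (dt dDelta_y), following the factorized form of Eq. (eqSD)."""
    gap_prob = beta2(t) / (16.0 * math.pi) * math.exp(2.0 * (alpha(t) - 1.0) * delta_y)
    subenergy = kappa * beta2(0.0) * (s_prime / s0) ** EPS
    return gap_prob * subenergy / n_gap

print(d2sigma_sd(t=-0.05, delta_y=4.0, s_prime=100.0))
```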
Results
=======
In this section, we present as examples of the predictive power of the [renorm]{} model some results reported by the [totem]{}, [cms]{}, and [alice]{} collaborations for $pp$ collisions at $\sqrt s=7$ TeV, which can be directly compared with [renorm]{} formulas without using the [pythia8-mbr]{} simulation.
Another example of the predictive power of [renorm]{} is shown in Fig. \[fig:fig2\], which displays the total [sd]{} (left) and total [dd]{} (right) cross sections for $\xi<0.05$, obtained after extrapolation into the low mass region from the measured [cms]{} cross sections at higher mass regions, presented in [@ref:ciesielski], using [renorm]{}.
![Measured [sd]{} (left) and [dd]{} (right) cross sections for $\xi<0.05$ compared with theoretical predictions; the model embedded in [pythia8-mbr]{} provides a good description of all data.[]{data-label="fig:fig2"}](eds13pheno_fg1l.eps "fig:"){width="47.00000%"} ![Measured [sd]{} (left) and [dd]{} (right) cross sections for $\xi<0.05$ compared with theoretical predictions; the model embedded in [pythia8-mbr]{} provides a good description of all data.[]{data-label="fig:fig2"}](eds13pheno_fg1r.eps "fig:"){width="47.00000%"}\
Note that this “data” point was obtained by extrapolation into the unmeasured low-mass region(s) from the measured [cms]{} cross sections [@ref:ciesielski] using the [mbr]{} model.
Summary\[sec:conclusion\]
=========================
Pre-[lhc]{} predictions for the [sd]{} and [dd]{} cross sections at high energies, based on the [renorm]{} special parton-model approach to diffraction, which employs inclusive proton parton distribution functions and [qcd]{} color factors, have been reviewed. The predictions of the model are in good agreement with the [cms]{} results presented at this conference [@KG:cmsdiff].
Acknowledgments
===============
I would like to thank the Office of Science of the Department of Energy for supporting the Rockefeller experimental diffraction physics programs at Fermilab and [lhc]{} on which this research is anchored.
[99]{} K. Goulianos, [*Predictions of Diffractive Cross Sections in Proton-Proton Collisions*]{}, in proceedings of [*Diffraction 2012: International Workshop on Diffraction in High Energy Physics, 10-15 September 2012*]{}, AIP Conf. Proc. [**1523**]{} 107 (2013), doi:http://dx.doi.org/10.1063/1.4802128.
See models presented by various authors in proceedings of [*Diffraction 2012*]{}, AIP Conf. Proc. [**1523**]{} (to be published).
K. Goulianos, [*Predictions of Diffractive, Elastic, Total, and Total-Inelastic pp Cross Sections vs LHC Measurements*]{}, to appear in proceedings of [*XXI International Workshop on Deep-Inelastic Scattering and Related Subject -DIS2013, 22-26 April 2013, Marseilles, France*]{}. K. Goulianos, [*Hadronic Diffraction: Where do we Stand?*]{}, in proceedings of [*Les Rencontres de Physique de la Vallee d’Aoste: Results and Perspectives in Particle Physics, La Thuile, Italy, February 27 - March 6, 2004*]{}, Frascati Physics Series, Special 34 Issue, edited by Mario Greco, arXiv:hep-ph/0407035 (2004).
K. Goulianos, [*Diffractive and Total $pp$ Cross Sections at [lhc]{}*]{}, in proceedings of [*13th International Conference on Elastic and Diffractive Scattering (Blois Workshop) - Moving Forward into the LHC Era, CERN, Geneva, Switzerland, June 29-July 3, 2009*]{}, CERN-Proceedings-2010-02, edited by Mario Deile, arXiv:1002.3527v2 (2010).
T. Sjöstrand, S. Mrenna and P. Skands, [*JHEP05 (2006) 026, Comput. Phys. Comm. 178 (2008) 852*]{}, arXiv:hep-ph/0603175 (2006), arXiv:0710.3820 (2007).
R. Ciesielski and K. Goulianos, [*MBR Monte Carlo Simulation in PYTHIA8*]{}, arXiv:1205.1446 (2012).
R. Ciesielski ([cms]{} Collaboration), [*Measurements of diffraction in p-p collisions in CMS*]{}, to appear in proceedings of [*XXI International Workshop on Deep-Inelastic Scattering and Related Subject -DIS2013, 22-26 April 2013, Marseilles, France*]{}. K. Goulianos (on behalf of the CMS Collaboration), [*[cms]{} results on soft diffraction*]{}, in proceedings of [*EDS Blois 2013*]{}, arXiv:1309.5705 (2013) - Report-no: EDSBlois/2013/20.
---
abstract: 'It is shown that any, possibly singular, Fano variety $X$ admitting a Kähler-Einstein metric is K-polystable, thus confirming one direction of the Yau-Tian-Donaldson conjecture in the setting of Q-Fano varieties equipped with their anti-canonical polarization. The proof is based on a new formula expressing the Donaldson-Futaki invariants in terms of the slope of the Ding functional along a geodesic ray in the space of all bounded positively curved metrics on the anti-canonical line bundle of $X.$ One consequence is that a toric Fano variety $X$ is K-polystable iff it is K-polystable along toric degenerations iff $0$ is the barycenter of the canonical weight polytope $P$ associated to $X.$ The results also extend to the logarithmic setting and in particular to the setting of Kähler-Einstein metrics with edge-cone singularities. Furthermore, applications to bounds on the Ricci potential and Perelman’s $\lambda-$entropy functional on $K-$unstable Fano manifolds are given.'
address: 'Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, Sweden'
author:
- 'Robert J. Berman'
title: 'K-polystability of Q-Fano varieties admitting Kähler-Einstein metrics'
---
Introduction
============
Let $(X,L)$ be a polarized projective algebraic manifold, i.e. $L$ is an ample line bundle over $X.$ According to the fundamental Yau-Tian-Donaldson conjecture in Kähler geometry (see the recent survey [@p-s2]) the first Chern class $c_{1}(L)$ contains a Kähler metric $\omega$ with *constant scalar curvature* if and only if $(X,L)$ is *K-polystable.* This notion of stability is of an algebro-geometric nature and has its origin in Geometric Invariant Theory (GIT). It was introduced by Tian [@ti1] and in its most general form, due to Donaldson [@d0], it is formulated in terms of polarized $\C^{*}-$equivariant deformations $\mathcal{L}\rightarrow\mathcal{X}\rightarrow\C$ of $(X,L)$ called *test configurations* for the polarized variety $(X,L).$ Briefly, to any test configuration $(\mathcal{X},\mathcal{L})$ one associates a numerical invariant $DF(\mathcal{X},\mathcal{L}),$ called the *Donaldson-Futaki invariant,* and $X$ is said to be K-polystable if $DF(\mathcal{X},\mathcal{L})\leq0$ with equality if and only if $(\mathcal{X},\mathcal{L})$ is isomorphic to a product test configuration (the precise definitions are recalled in section \[sub:K-polystability-and-test\]). The test configuration $(\mathcal{X},\mathcal{L})$ plays the role of a one-parameter subgroup in GIT and the Donaldson-Futaki invariant corresponds to the Hilbert-Mumford weight in GIT. Accordingly, the Yau-Tian-Donaldson conjecture is sometimes also referred to as the manifold version of the celebrated Kobayashi-Hitchin correspondence between Hermitian Yang-Mills metrics and polystable vector bundles.
In the case when the connected component $\mbox{Aut}(X)_{0}$ of the automorphism group containing the identity is trivial, i.e. $X$ admits no non-trivial holomorphic vector fields, it was shown by Stoppa [@st] that the existence of a constant scalar curvature metric in $c_{1}(L)$ indeed implies that $(X,L)$ is K-polystable. The case when $\mbox{Aut}(X)_{0}$ is non-trivial leads to highly non-trivial complications, related to the case when $DF=0,$ which were treated by Mabuchi in a series of papers [@ma2; @ma3]. In this note we will be concerned with the special case when $\omega$ is a *Kähler-Einstein metric* of positive scalar curvature. Equivalently, this means that the Ricci curvature of $\omega$ is positive and constant: $$\mbox{Ric }\omega=\omega,$$ i.e. $L$ is the anti-canonical line bundle $-K_{X}$ and $X$ is a Fano manifold. In the seminal paper of Tian [@ti1] it was shown, in the case when $\mbox{Aut}(X)_{0}$ is trivial, that $X$ is K-stable along all test configurations $\mathcal{X}$ with normal central fiber $\mathcal{X}_{0}$ (in particular, the central fiber has no multiplicities). Here we will show that the assumption on $\mbox{Aut}(X)_{0}$ can be removed, as well as the normality assumption on $\mathcal{X}_{0}.$ In fact, we will allow $X$ to be a general, possibly singular, Fano variety and prove the following
\[thm:k-poly intro\]Let $X$ be a Fano variety admitting a Kähler-Einstein metric. Then $X$ is K-polystable.
It should be pointed out that, following Li-Xu [@l-x], we assume that the total space $\mathcal{X}$ of the test configuration is normal, to exclude some pathological test configurations that had previously been overlooked in the literature (as explained in [@l-x]). As follows from the results of Ross-Thomas [@r-t], this does not affect the notion of K-semistability. Moreover, by a remark of Stoppa [@st-1], K-polystability for all normal test configurations is equivalent to having $DF(\mathcal{X},\mathcal{L})\leq0$ for all test configurations with equality iff $(\mathcal{X},\mathcal{L})$ is isomorphic to a product away from a subvariety of codimension at least two.
We recall that, by definition, $X$ is a Fano variety if it is normal and the anti-canonical divisor $-K_{X}$ is defined as an ample $\Q-$line bundle (such a variety is also called a $\Q-$Fano variety in the literature) and, following [@bbegz], $\omega$ is said to be a *Kähler-Einstein metric* on $X$ if $\omega$ is a bona fide Kähler-Einstein metric on the regular locus $X_{reg}$ of $X$ and the volume of $\omega$ on $X_{reg}$ coincides with the top-intersection number $c_{1}(-K_{X})^{n}[X].$ The existence of such a metric actually implies that the singularities are rather mild in the sense of the Minimal Model Program in birational geometry [@bbegz]; more precisely, the singularities of $X$ are Kawamata log terminal (klt, for short).
One motivation for considering the general singular setting is that singular Kähler-Einstein varieties naturally appear when taking Gromov-Hausdorff limits of smooth Kähler-Einstein varieties [@ti2; @do-s]. This is related to the expectation that one may be able to form *compact* moduli spaces of K-polystable Fano varieties if singular ones are included, or more precisely those with klt singularities; compare the discussions in [@l-x] and [@od-s-s] (where the surface case is considered). From this point of view it may be illuminating to compare the previous theorem with the classical (non-Fano) case of irreducible curves of genus $g\geq2.$ As shown by Deligne-Mumford, including singular nodal curves $X$ with $K_{X}$ (=the dualizing sheaf) ample yields a compact moduli space $\bar{\mathcal{M}_{g}}$ and all such curves are asymptotically Chow and Hilbert stable [@mor] and in particular K-semistable [@r-t] (see [@od] for a recent direct proof of K-stability valid in a higher dimensional setting). The link to the previous theorem comes from the fact that any curve $X$ in $\bar{\mathcal{M}_{g}}$ admits a Kähler-Einstein metric $\omega$ on $X_{reg}$ such that the area of $\omega$ on $X_{reg}$ coincides with $c_{1}(K_{X})[X].$ Of course, as opposed to the Fano case the Kähler-Einstein metric on a curve $X$ of genus at least two is *negatively* curved (and complete on $X_{reg}).$ On the other hand, one striking feature of the Fano setting is that it is enough to consider *normal* varieties and even those with “mild singularities” (klt) in the sense of the Minimal Model Program in birational geometry (compare [@od; @l-x]) . Note also that the case of Fano varieties with quotient singularities, i.e. $X$ is a Fano orbifold was previously studied by Ding-Tian [@di-ti].
Another motivation for allowing $X$ to be singular comes from the toric setting considered in [@b-b-2], where it was shown that the existence of a Kähler-Einstein metric on a toric Fano variety $X$ is equivalent to $X$ being $K-$polystable with respect to *toric* test configurations. In turn, this latter property is equivalent to the canonical rational weight polytope $P$ associated to $X$ having zero as its barycenter. However, the question whether the existence of a Kähler-Einstein metric on the toric variety $X$ implies that $X$ is K-polystable for *general* test configurations was left open in [@b-b-2]. Combining the previous theorem with the results in [@b-b-2] we thus deduce the following
A toric Fano variety is K-polystable iff it is K-polystable with respect to toric test configurations. In particular, if $P$ is a reflexive lattice polytope, then the toric Fano variety $X_{P}$ associated to $P$ is $K-$polystable if and only if $0$ is the barycenter of $P.$
We recall that *reflexive* lattice polytopes $P$ (i.e. those for which the dual $P^{*}$ is also a lattice polytope) correspond to toric Fano varieties whose singularities are Gorenstein, i.e. $-K_{X}$ is an ample line bundle (and not only a $\Q-$line bundle). This huge class of lattice polytopes plays an important role in string theory, as such polytopes give rise to many examples of mirror symmetric Calabi-Yau manifolds [@ba]. Already in dimension three there are 4319 isomorphism classes of such polytopes [@k-s] and hence including *singular* Fano varieties leads to many new examples of K-polystable and K-unstable Fano threefolds (recall that there are, all in all, only 105 families of *smooth* Fano threefolds).
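As a concrete illustration of the barycenter criterion, the following minimal sketch computes the barycenter of a two-dimensional weight polytope; the vertices in the example are those of the standard reflexive polytope associated to $\P^{2}$ in one common convention (an assumption of the illustration), whose barycenter is zero, consistent with the Fubini-Study Kähler-Einstein metric on $\P^{2}$.

```python
def polytope_barycenter(vertices):
    """Area-weighted barycenter of a convex polygon (vertices listed in order)."""
    v = [tuple(map(float, p)) for p in vertices]
    total_area, cx, cy = 0.0, 0.0, 0.0
    for i in range(1, len(v) - 1):                      # fan triangulation from v[0]
        (x0, y0), (x1, y1), (x2, y2) = v[0], v[i], v[i + 1]
        area = 0.5 * abs((x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0))
        total_area += area
        cx += area * (x0 + x1 + x2) / 3.0
        cy += area * (y0 + y1 + y2) / 3.0
    return (cx / total_area, cy / total_area)

# Weight polytope of P^2 with its anticanonical polarization (illustrative convention):
print(polytope_barycenter([(-1, -1), (2, -1), (-1, 2)]))   # (0.0, 0.0)
```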
As explained in section \[sub:The-logarithmic-setting\] the theorem above extends to the logarithmic setting of Kähler-Einstein metrics on *log Fano variates* $(X,D),$ as considered in [@bbegz]. In particular, this shows that if $D$ is an effective $\Q-$divisor with simple normal crossings, and coefficients $<1,$ on a projective manifold $X$ such that the logarithmic first Chern class of $(X,D)$ contains a Kähler-Einstein metric $\omega$ with *edge-cone singularities* along $D$ in the sense of [@do-3; @cgh; @j-m-r], then the pair $(X,D)$ is log K-polystable in the sense of [@do-3; @li1; @o-s]. In a companion paper it will also be shown that any Fano variety admitting canonically balanced metrics, in the sense of Donaldson [@do3], associated to $(X,-kK_{X})$ for $k$ sufficiently large, is K-semistable.
The starting point of the proof of Theorem \[thm:k-poly intro\] is the following result of independent interest, which expresses the Donaldson-Futaki invariant in terms of the Ding functional $\mathcal{D}$ (see formula \[eq:def of ding functional\]):
\[thm:DF=00003Dding intro\]Let $X$ be a Fano variety with klt singularities, *$(\mathcal{X},\mathcal{L})$ a test configuration for $(X,-K_{X})$ (assumed to have normal total space) and $\phi$ a locally bounded metric on $\mathcal{L}$ with positive curvature current. Setting $t:=-\log|\tau|^{2}$ and denoting by $\phi^{t}=\rho(\tau)^{*}\phi_{\tau},$ the corresponding ray of locally bounded metrics on $-K_{X},$ the following formula holds:* $$-DF(\mathcal{X},\mathcal{L})=\lim_{t\rightarrow\infty}\frac{d}{dt}\mathcal{D}(\phi^{t})+q,$$ where $q$ is a non-negative rational number determined by the polarized scheme $(\mathcal{X}_{0},\mathcal{L}_{|\mathcal{X}_{0}})$ with the property that
- If $q=0,$ then the central fiber $\mathcal{X}_{0}$ is generically reduced and $\mathcal{L}$ is isomorphic to $-K_{\mathcal{X}/\C}$ on the regular locus of $\mathcal{X}$ (in particular, $\mathcal{X}$ is then $\Q-$Gorenstein).
- If $\mathcal{X}_{0}$ is normal with klt singularities (i.e. the test configuration is special) then $q=0.$
In the case when $\mathcal{X}$ is smooth and the support of $\mathcal{X}_{0}$ has simple normal crossings, we have that $q=0$ iff $\mathcal{X}_{0}$ is reduced and $\mathcal{L}$ is isomorphic to $-K_{\mathcal{X}/\C}.$
More precisely, we will give an explicit expression for the number $q$ in the previous theorem in terms of a given log resolution of the central fiber $\mathcal{X}_{0}.$ The previous theorem should be compared with the results of Paul-Tian [@p-t] and Phong-Ross-Sturm [@p-r-s; @p-t] - concerning the general setting of a polarized manifold $(X,L)$ - which express $DF(\mathcal{X},\mathcal{L})$ in terms of the asymptotic derivative of the Mabuchi functional, plus a correction term taking multiplicities into account, under the assumption that the total space $\mathcal{X}$ be smooth and the metric $\phi$ on $\mathcal{L}$ be smooth and strictly positively curved. It should also be pointed out that, in the case when $(X,L)=(X,-K_{X})$ with $X$ smooth and $\mathcal{X}_{0}$ normal, Ding-Tian [@di-ti] showed that the asymptotic derivative of the Mabuchi functional is equal to the generalized Futaki invariant of $\mathcal{X}_{0}.$
In order to prove Theorem \[thm:k-poly intro\] we apply Theorem \[thm:DF=00003Dding intro\] to a weak geodesic ray emanating from the Kähler-Einstein metric on $-K_{X}.$ We can then exploit a recent result of Berndtsson [@bern2] (and its generalization to singular Fano varieties in [@bbegz]) concerning convexity properties of the Ding functional, which immediately gives the K-semistability part of Theorem \[thm:k-poly intro\]. As for the proof of Theorem \[thm:DF=00003Dding intro\], it uses, among other things, a result of Phong-Ross-Sturm [@p-r-s] which in the Fano case expresses the Donaldson-Futaki invariant $DF$ in terms of the weight of certain Deligne pairings over the central fiber of the test configuration. We will also use the closely related intersection-theoretic formulation of the Donaldson-Futaki invariant in [@od; @w; @l-x].
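For orientation, we recall here the shape of the Ding functional in one common normalization (the precise normalization used in this paper is fixed by formula (\[eq:def of ding functional\]) below): for a locally bounded, positively curved metric $\phi$ on $-K_{X}$ one sets $$\mathcal{D}(\phi):=-\mathcal{E}(\phi)-\log\int_{X}e^{-\phi},\qquad\mathcal{E}(\phi):=\frac{1}{(n+1)V}\sum_{j=0}^{n}\int_{X}(\phi-\phi_{0})(dd^{c}\phi)^{j}\wedge(dd^{c}\phi_{0})^{n-j},$$ where $V=c_{1}(-K_{X})^{n},$ $\phi_{0}$ is a fixed reference metric and $e^{-\phi}$ is identified with a measure on $X$ in the usual way. With this sign convention Kähler-Einstein metrics are minimizers of $\mathcal{D},$ the energy part $\mathcal{E}$ is affine along (weak) geodesics and, by Berndtsson's theorem, $-\log\int_{X}e^{-\phi^{t}}$ is convex along bounded geodesics, so that $\mathcal{D}$ itself is convex along the weak geodesic rays used in the proof.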
In fact, we will give *two* alternative proofs of Theorem \[thm:k-poly intro\]. The first one is shorter and uses a very recent result of Li-Xu [@l-x]. This latter remarkable result, which confirms a conjecture of Tian, says that it is enough to test K-polystability for *special test configurations* $\mathcal{X},$ i.e. such that the central fiber is normal with klt singularities. This will allow us to restrict our attention to test configurations where the central fiber is a priori known to be reduced, which simplifies the characterization of the case $DF(\mathcal{X},\mathcal{L})=0$ (which could alternatively also be dealt with using semi-stable reduction as in [@l-x; @a-l-d]). The second proof is based on a refined analysis of the singularities of $L^{2}-$metrics of certain adjoint direct image bundles, but is in a sense more direct as it uses neither the Minimal Model Program, nor semi-stable reduction.
It may be illuminating to compare the approach here with the original approach of Tian [@ti1] in the case of a non-singular Fano variety. As shown by Tian [@ti1] (building on the results in [@di-ti]), in the case when $\mathcal{X}$ is a special test configuration, the invariant $DF$ can be computed in terms of the asymptotics of *Mabuchi's K-energy* functional along a one-parameter family $\phi_{k}^{t}$ of smooth metrics induced by a fixed (relatively) projective embedding of $\mathcal{X}$ determined by a sufficiently large tensor power of the relative anti-canonical bundle (which in current terminology is called a *Bergman geodesic* at level $k).$ The sign properties of $DF$ are then determined using that, in the presence of a Kähler-Einstein metric, Mabuchi's K-energy functional is *proper* (if there are no non-trivial holomorphic vector fields on $X),$ which is the content of a deep result of Tian [@ti1]. Here we thus show that the Mabuchi functional and the smooth Bergman geodesic may be replaced by the Ding functional and a weak (bounded) geodesic, respectively, and the properness result with Berndtsson's convexity result. One technical advantage of the Ding functional is that, unlike the Mabuchi functional, it is indeed well-defined along a weak geodesic, as previously exploited in [@bern2; @bbegz] in the context of the uniqueness problem for Kähler-Einstein metrics. Thus the approach in this paper is in line with the programs of Phong-Sturm [@p-s] and Chen-Tang [@ch] for calculating Donaldson-Futaki invariants by using (weak) geodesic rays associated to test configurations.
In the case when $X$ is a smooth Kähler-Einstein Fano variety with $\mbox{Aut}(X)_{0}$ trivial the properness of the Ding functional was shown by Tian [@ti1] as a consequence of his properness result for the Mabuchi functional. It was later shown in [@p-s-s-w] that if the center of the group $\mbox{Aut}(X)_{0}$ is finite then the Ding functional is still proper (in an appropriate sense), but the properness in the case of a general Kähler-Einstein manifold is still open. The generalization of the properness result (even when $\mbox{Aut}(X)_{0}$ is trivial) to singular Fano varieties and more generally log Fano varieties also appears to be a challenging open problem. In any case, these subtle issues are bypassed in the present approach.
It should be pointed out that the second point in Theorem \[thm:DF=00003Dding intro\] is not used in the proof of Theorem \[thm:k-poly intro\]. However, as discussed in section \[sec:Outlook-on-the\], it fits naturally into Tian’s program [@ti2] for establishing the existence part of the Yau-Tian-Donaldson conjecture - in particular when generalized to the setting of singular Fano varieties. The relation to Tian’s program comes from the following immediate consequence of Theorem \[thm:k-poly intro\] (applied to the universal family $\mathcal{X}$ provided by the universal property of the corresponding Hilbert scheme).
\[cor:mab along bergman intro\]Let $X$ be a Fano variety embedded in $\P^{N}$ such that $\mathcal{O}(1)_{|X}=-kK_{X}$ and let $\rho$ be the one-parameter subgroup defined by a $\C^{*}-$action on $\P^{N}.$ Assume that the limiting cycle $X_{0},$ as $\tau\rightarrow0,$ of the corresponding one-parameter family of varieties $X_{\tau}:=\rho(\tau)_{*}X,$ is normal with klt singularities and that $DF(X_{0},\mathcal{O}(1)_{|X_{0}})<0.$ Then the Ding functional (and hence also the Mabuchi functional) tends to infinity along the corresponding curve $\phi_{k}^{t}$ of Bergman metrics on $-K_{X}$ (i.e. $\phi_{k}^{t}:=\rho(\tau)^{*}\phi_{FS|X_{\tau}}/k,$ where $\phi_{FS}$ is the Fubini-Study metric on $\mathcal{O}(1)).$
It may be worth stressing that, in Theorem \[thm:DF=00003Dding intro\] and its Corollary above, it is not assumed that the total space $\mathcal{X}$ is smooth and this is why we need to assume that the central fiber $\mathcal{X}_{0}$ has klt singularities (even if the original Fano variety $X$ is smooth). The point is that this assumption allows us to apply inversion of adjunction [@ko] to conclude that $q=0$ in Theorem \[thm:DF=00003Dding intro\] (even though the singularities of $\mathcal{X}$ along $\mathcal{X}_{0}$ may prevent the central fiber of a log resolution of $(\mathcal{X},\mathcal{X}_{0})$ from being reduced). More generally, as the proof reveals, the implication that $q=0$ in Theorem \[thm:DF=00003Dding intro\] holds as long as $\mathcal{X}_{0}$ is reduced and the log pair $(\mathcal{X},(1-\delta)\mathcal{X}_{0})$ is klt for any sufficiently small positive number $\delta,$ which is automatically the case if the total space $\mathcal{X}$ is smooth.
We also give some applications of Theorem \[thm:DF=00003Dding intro\] to bounds on the Ricci potential and Perelman’s entropy type $\lambda-$functional [@pe] (see section \[sub:Applications-to-bounds\]), which can be seen as analogs of Donaldson’s lower bound on the Calabi functional [@do2]. In particular, we obtain the following
\[thm:perelman intro\]Let $X$ be an $n-$dimensional Fano manifold and set $V:=c_{1}(X)^{n}.$ If $X$ is $K-$unstable, then $$\sup_{\omega\in\mathcal{K}(X)}\lambda(\omega)<nV,$$ where $\mathcal{K}(X)$ denotes the space of all Kähler metrics in $c_{1}(X).$
As is well-known $\lambda(\omega)\leq nV$ on the space $\mathcal{K}(X)$ and, as recently shown by Tian-Zhang [@ti-zhu] in their study of the Kähler-Ricci flow, if a Fano manifold $X$ admits a Kähler-Einstein metric $\omega_{KE}$ then $\lambda(\omega_{KE})=nV,$ or more generally: if Mabuchi’s K-energy is bounded from below on $\mathcal{K}(X),$ then the supremum of $\lambda$ is equal to $nV.$ In light of the Yau-Tian-Donaldson conjecture it thus seems natural to conjecture that $X$ is $K-$semistable if and only if the supremum of $\lambda$ is equal to $nV$ (the “if direction” is the content of the previous theorem). Finally, it may be worth pointing out that a more precise version of Theorem \[thm:perelman intro\] will be obtained, where the supremum of $\lambda$ is explicitly bounded in terms of minus the supremum of the Donaldson-Futaki invariants over all (normalized) destabilizing test configurations for $(X,L)$ (see Cor \[cor:bound on lambda f\]).
### Organization of the paper {#organization-of-the-paper .unnumbered}
After having recalled some preliminary material, in section \[sec:The-proof-of\] the Ding metric associated to a special test configuration $\mathcal{X}$ is introduced and its curvature properties are studied. A proof of Theorem \[thm:k-poly intro\] can then be given, based on the deep results in [@l-x] concerning special test configurations, which only uses that the Ding metric is positively curved. On the other hand this proof relies heavily on the fact that the central fiber of $\mathcal{X}$ is a priori assumed reduced. In section \[sec:Singularity-structure-of\] we introduce the generalized Ding metric associated to any test configuration and study its singularities - in particular how the singularities are related to the multiplicities of the central fiber of the test configuration. The section is concluded with the proof of Theorem \[thm:DF=00003Dding intro\], which then allows us to give an alternative second proof of Theorem \[thm:k-poly intro\], independent of [@l-x]. In section \[sub:Applications-to-bounds\] the applications to bounds on the Ricci potential and Perelman’s entropy functional are given. In section \[sub:The-logarithmic-setting\] the generalizations to log Fano varieties are explained and in the final Section \[sec:Outlook-on-the\] relations to Tian’s program are discussed.
### Acknowledgments {#acknowledgments .unnumbered}
Thanks to Bo Berndtsson, Sébastien Boucksom, Dennis Eriksson, Yuji Odaka, Julius Ross and Song Sun for helpful discussions and comments. In particular, thanks to Tomoyuki Hisamoto and David W Nyström for discussions on norms of test configurations and the relations to their works [@hi] and [@n] (compare Remark \[rem:Lemma–infty norm\]). This paper is a revision of the first version of the paper which appeared on arXiv. The main new features are the explicit formula for the Donaldson-Futaki invariant of a general (and not only special) test configuration in terms of the slope of the Ding functional (Theorem \[thm:df=00003Dding\]), the applications to bounds on the Ricci potential and Perelman’s entropy type functional $\lambda$ and the discussion on the existence problem for Kähler-Einstein metrics on Fano varieties. The bound on the functional $\lambda$ was inspired by a preprint of He [@he], which appeared during the revision of the present paper, where different bounds on $\lambda$ were obtained on Fano manifolds admitting holomorphic vector fields (generalizing [@t-z--]).
\[sec:The-proof-of\]The proof of Theorem \[thm:k-poly intro\] using special test configurations
===============================================================================================
Setup: Kähler-Einstein metrics on Fano varieties
------------------------------------------------
Let $X$ be an $n-$dimensional normal compact projective variety. By definition, $X$ is said to be a *Fano variety* if the anti-canonical line bundle $-K_{X}:=\det(TX)$ defined on the regular locus $X_{reg}$ of $X$ extends to an ample $\Q-$line bundle on $X,$ i.e. there exists a positive integer $m$ such that the $m$th tensor power $-mK_{X_{reg}}$ extends to an ample line bundle over $X.$ Since $X$ is normal, this equivalently means that the anti-canonical divisor $-K_{X}$ of $X$ defines an ample $\Q-$line bundle. In practice, we will only consider Fano varieties with *klt singularities* (also called *log terminal singularities* in the literature), i.e. there exists a smooth resolution $p:\, X'\rightarrow X,$ which is an isomorphism over $X_{reg},$ such that $$p^{*}K_{X}=K_{X'}+D,\label{eq:resolution}$$ where $D=\sum c_{i}E_{i}$ is a $\Q-$divisor on $X'$ with simple normal crossings, where each $E_{i}$ is $p-$exceptional and $c_{i}<1$ (the analytical characterization of the klt condition will be recalled below). Throughout the paper we will use additive notation for line bundles, as well as metrics. This means that a metric $\left\Vert \cdot\right\Vert $ on a line bundle $L\rightarrow X$ is represented by a collection of local functions $\phi(:=\{\phi_{U}\})$ defined as follows: given a local generator $s$ of $L$ on an open subset $U\subset X$ we define $\phi_{U}$ by the relation $$\left\Vert s\right\Vert ^{2}=e^{-\phi_{U}},$$ where $\phi_{U}$ is upper semi-continuous (usc). It will be convenient to identify the additive object $\phi$ with the metric it represents. Of course, $\phi_{U}$ depends on $s$ but the curvature current $$dd^{c}\phi:=\frac{i}{2\pi}\partial\bar{\partial}\phi_{U}$$ is globally well-defined on $X$ and represents the first Chern class $c_{1}(L),$ which with our normalizations lies in the integer lattice of $H^{2}(X,\R).$ We will denote by $\mathcal{H}_{b}(X,L)$ the space of all locally bounded metrics $\phi$ on $L$ with positive curvature current, i.e. the local representations $\phi_{U}$ are all bounded and $dd^{c}\phi_{U}\geq0$ in the sense of currents. Fixing $\phi_{0}\in\mathcal{H}_{b}(X,L)$ and setting $\omega_{0}:=dd^{c}\phi_{0}$ the map $\phi\mapsto v:=\phi-\phi_{0}$ thus gives an isomorphism between the space $\mathcal{H}_{b}(X,L)$ and the space $PSH(X,\omega_{0})\cap L^{\infty}(X)$ of all bounded $\omega_{0}-$psh functions, i.e. the space of all bounded usc functions $v$ on $X$ such that $dd^{c}v+\omega_{0}\geq0.$
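As a simple illustration of the additive notation just introduced (a standard model example, recorded only to fix the normalizations), take $X=\P^{1}$ and $L=\mathcal{O}(1)$ with the Fubini-Study metric. In a standard affine chart with coordinate $z$ there is a local generator $s$ with $$\left\Vert s\right\Vert ^{2}=\frac{1}{1+|z|^{2}},\,\,\,\,\mbox{i.e.}\,\,\,\,\phi_{U}=\log(1+|z|^{2}),\,\,\,\,\, dd^{c}\phi=\frac{i}{2\pi}\frac{dz\wedge d\bar{z}}{(1+|z|^{2})^{2}},$$ a smooth positively curved metric whose curvature form has total integral one, in accordance with the fact that it represents $c_{1}(\mathcal{O}(1)).$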
### \[sub:K=0000E4hler-Einstien-metrics\]Kähler-Einstein metrics
In the special case when $L=-K_{X}$ any given metric $\phi\in\mathcal{H}_{b}(X,L)$ induces a measure $\mu_{\phi}$ on $X,$ which may be defined as follows: if $U$ is a coordinate chart in $X_{reg}$ with local holomorphic coordinates $z_{1},...,z_{n}$ we let $\phi_{U}$ be the representation of $\phi$ with respect to the local trivialization of $-K_{X}$ which is dual to $dz_{1}\wedge\cdots\wedge dz_{n}.$ Then we define the restriction of $\mu_{\phi}$ to $U$ as $\mu_{\phi}=e^{-\phi_{U}}\, i^{n^{2}}dz\wedge d\bar{z},$ where $dz:=dz_{1}\wedge\cdots\wedge dz_{n}.$ In fact, this expression is readily verified to be independent of the local coordinates $z$ and hence defines a measure $\mu_{\phi}$ on $X_{reg}$ which we then extend by zero to all of $X.$ The Fano variety $X$ has *klt singularities* precisely when the total mass of $\mu_{\phi}$ is finite for some and hence any $\phi\in\mathcal{H}_{b}(X,L)$ (see [@bbegz] for the equivalence with the usual algebraic definition involving discrepancies on smooth resolutions of $X)$. Abusing notation slightly we will use the suggestive notation $e^{-\phi}$ for the measure $\mu_{\phi}.$ This notation is compatible with the usual notation used in the context of adjoint bundles: if $s$ is a holomorphic section of $L+K_{X}\rightarrow X$ and $\phi$ is a metric on $L$ then $|s|^{2}e^{-\phi}$ may be naturally identified with a measure on $X.$ In particular, letting $L=-K_{X}$ and taking $s$ to be the canonical section $1$ in the trivial line bundle $L+K_{X}$ gives us back the measure $\mu_{\phi}.$ More generally, if $(X,D)$ is a log pair (see section \[sub:The-logarithmic-setting\] below) and $\phi$ is a locally bounded metric on $-(K_{X}+D)$ then one obtains a measure $\mu_{\phi}$ on $X$ by using the natural identification between $-(K_{X}+D)$ and $-K_{X}$ on the complement of the support of $D$ in $X$ and extending by zero to all of $X$ (compare [@bbegz]). Abusing notation, we will sometimes write $\mu_{\phi}=e^{-(\phi+\log|s_{D}|^{2})},$ where $s_{D}$ is the (multi-) section cutting out $D.$ These constructions are compatible with taking resolutions $p,$ as in \[eq:resolution\]: if $\phi$ is a metric on $-K_{X},$ then $p^{*}\phi$ is a metric on $-(K_{X'}+D)$ and $p_{*}(\mu_{p^{*}\phi})=\mu_{\phi}.$
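To make the finiteness criterion above concrete, the following local model computation may be helpful (a sketch on a resolution as in \[eq:resolution\], assuming for simplicity that the point under consideration lies on a single exceptional component, written as $E_{i}=\{z_{1}=0\}$ in suitable local coordinates). By the compatibility with resolutions just described, near such a point the measure $\mu_{\phi}$ corresponds to $$\mu_{p^{*}\phi}=\frac{e^{-\psi}}{|z_{1}|^{2c_{i}}}\, i^{n^{2}}dz\wedge d\bar{z},$$ with $\psi$ locally bounded, and this has locally finite mass precisely when $c_{i}<1;$ this is the analytic counterpart of the klt condition recalled above.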
Following [@bbegz] $\omega$ is said to be a *Kähler-Einstein metric* on $X$ if it is a Kähler metric on $X_{reg}$ with constant Ricci curvature, i.e. $\mbox{Ric}\,\omega=\omega$ on $X_{reg}$ and $\int_{X_{reg}}\omega^{n}=c_{1}(-K_{X})^{n}.$ As shown in [@bbegz] this equivalently means that the Fano variety $X$ in fact has klt singularities and $\omega$ extends to a Kähler current defined on the whole Fano variety $X,$ such that $\omega$ is the curvature current of a locally bounded (and in fact continuous) metric $\phi_{KE}$ on the $\Q-$line bundle $-K_{X}$ satisfying $$(dd^{c}\phi_{KE})^{n}=Ve^{-\phi_{KE}}/\int_{X}e^{-\phi_{KE}}.\label{eq:k-e equatio for phi in def}$$ The measure appearing in the left hand side above is the *Monge-Ampère measure* of $\phi_{KE}$ defined in the sense of pluripotential theory (see [@bbegz] and references therein for the singular setting).
\[sub:K-polystability-and-test\]K-polystability and test configurations
-----------------------------------------------------------------------
Let us start by recalling Donaldson’s general definition [@d0] of K-stability of a polarized variety $(X,L),$ generalizing the original definition of Tian [@ti1]. First, a *test configuration* for $(X,L)$ consists of a scheme $\mathcal{X}$ and a relatively ample line bundle $\mathcal{L}\rightarrow\mathcal{X}$ with a $\C^{*}-$action $\rho$ on $\mathcal{L}$ and a $\C^{*}-$equivariant flat surjective morphism $\pi:\,\mathcal{X}\rightarrow\C$ (where the base $\C$ is equipped with its standard $\C^{*}-$action) such that $(X_{1},\mathcal{L}_{1})$ is isomorphic to $(X,rL)$ for some integer $r.$ In fact, by allowing $\mathcal{L}$ to be a $\Q-$line bundle we may as well assume that $r=1.$ More generally, for a *semi-test configuration* we only require that $\mathcal{L}$ be relatively semi-ample. The *Donaldson-Futaki invariant* $DF(\mathcal{X},\mathcal{L})$ of a test configuration is defined as follows: consider the $N_{k}-$dimensional space $H^{0}(X_{0},kL_{0})$ over the central fiber $X_{0}$ and let $w_{k}$ be the weight of the $\C^{*}-$action on the complex line $\det H^{0}(X_{0},kL_{0}).$ Then the Donaldson-Futaki invariant $DF(\mathcal{X},\mathcal{L})$ is defined as the sub-leading coefficient in the expansion of $w_{k}/kN_{k}$ in powers of $1/k$ (up to normalization): $$\frac{w_{k}(\det H^{0}(X_{0},kL_{0}))}{kN_{k}}=c_{0}+\frac{1}{k}\frac{1}{2}DF(\mathcal{X},\mathcal{L})+O(\frac{1}{k^{2}}),$$ where $N_{k}:=\dim H^{0}(X_{0},kL_{0}).$ The polarized variety $(X,L)$ is said to be *K-semistable* if, for any test configuration, $DF(\mathcal{X},\mathcal{L})\leq0$ and *K-polystable* if moreover equality holds iff $(\mathcal{X},\mathcal{L})$ is a product test configuration, i.e. $\mathcal{X}$ is isomorphic to $X\times\C.$ Following [@l-x] we also assume that the total space $\mathcal{X}$ of the test configuration is normal, to exclude some pathological phenomena observed in [@l-x] (then the morphism $\pi$ is automatically flat; see Prop 9.7 in [@ha]). We also recall that $(X,L)$ is said to be *$K-$unstable* if it is not K-semistable, i.e. there exists a *destabilizing test configuration* in the sense that $DF(\mathcal{X},\mathcal{L})>0.$
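As an elementary sanity check of this definition (not needed in what follows, and with $a$ denoting an auxiliary integer introduced only for this remark), note that twisting the $\C^{*}-$linearization on $\mathcal{L}$ by the character $\tau\mapsto\tau^{a}$ shifts each weight of the induced action on $H^{0}(X_{0},kL_{0})$ by $ka,$ and hence the weight on the determinant line by $kaN_{k},$ so that $w_{k}$ is replaced by $w_{k}+akN_{k}$ and $$\frac{w_{k}+akN_{k}}{kN_{k}}=(c_{0}+a)+\frac{1}{k}\frac{1}{2}DF(\mathcal{X},\mathcal{L})+O(\frac{1}{k^{2}}).$$ Hence only the leading constant $c_{0}$ is affected, while $DF(\mathcal{X},\mathcal{L})$ is unchanged; this is the reason why the choice of lift of the $\C^{*}-$action to $\mathcal{L}$ (for example to $-K_{\mathcal{X}/\C}$ in the Fano setting considered below) plays no role for the Donaldson-Futaki invariant.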
In this paper we will be concerned with test configurations $(\mathcal{X},\mathcal{L})$ for a Fano variety with its anti-canonical polarization, i.e. $X$ is a Fano variety and $L=-K_{X}$ so that the restriction of $\mathcal{L}$ to the complement $\mathcal{X}^{*}$ of the central fiber coincides with the $\Q-$line bundle defined by the dual of the relative canonical divisor $K_{\mathcal{X}/\C}:=K_{\mathcal{X}}-\pi^{*}K_{\C}$ (which we will sometimes denote by $K$ to simplify the notation). Note that, in general, $K_{\mathcal{X}/\C}$ does not extend as a $\Q-$line bundle over the central fiber, but following [@l-x] we say that a normal variety $\mathcal{X}$ with a $\C^{*}-$equivariant surjective morphism $\pi$ to $\C$ is a *special test configuration for the Fano variety $X$* if $\mathcal{X}_{1}=X,$ the total space $\mathcal{X}$ is $\Q-$Gorenstein and the central fiber is reduced and irreducible and defines a Fano variety with klt singularities. Then we set $\mathcal{L}=-K_{\mathcal{X}/\C}.$ Moreover, since the Donaldson-Futaki invariant is independent of the lift of the $\C^{*}-$action on $\mathcal{X}$ we may and will assume that the $\C^{*}-$action on $-K_{\mathcal{X}/\C}$ is the canonical lift of the $\C^{*}-$action on $\mathcal{X}$ to $-K_{\mathcal{X}/\C}.$ It will be useful to recall the following essentially well-known characterization of special test configurations:
\[lem:char of special test\]Let $(\mathcal{X},\mathcal{L})$ be a general test configuration (with a priori non-normal total space) for $(X,-K_{X}),$ where $X$ is a Fano variety. Assume that the central fiber $\mathcal{X}_{0}$ is normal. Then $\mathcal{X}$ and $\mathcal{X}_{0}$ are both normal $\Q-$Gorenstein varieties and $\mathcal{L}_{|\mathcal{X}_{0}}$ is isomorphic to $-K_{\mathcal{X}_{0}},$ i.e. $\mathcal{L}$ is isomorphic to $-K_{\mathcal{X}/\C}.$ Moreover, if $\mathcal{X}_{0}$ has klt singularities, then so does $\mathcal{X}.$ In other words, a test configuration is special iff the central fiber is normal with klt singularities.
For completeness we provide a proof (thanks to Yuji Odaka for his help in this matter). It follows from general commutative algebra that if $\pi:\,\mathcal{X}\rightarrow\C$ is a projective flat morphism over $\C$ with normal fibers, then $\mathcal{X}$ is also normal. In particular, the canonical divisor $K_{\mathcal{X}}$ is a well-defined Weil divisor. By assumption $-mK_{\mathcal{X}}$ and $m\mathcal{L}$ are Cartier and linearly equivalent on $\mathcal{X}^{*}$ and hence $m\mathcal{L}+mK_{\mathcal{X}}$ is linearly equivalent to a Weil divisor $D$ supported in the central fiber. But the central fiber is Cartier (since it is cut out by $\pi^{*}\tau)$ and hence, since it is assumed irreducible, $m\mathcal{L}+mK_{\mathcal{X}}$ is linearly equivalent to a multiple of $\mathcal{X}_{0},$ which means that $-mK_{\mathcal{X}}$ is a sum of Cartier divisors, hence Cartier, i.e. $\mathcal{X}$ is $\Q-$Gorenstein. More precisely, $-mK_{\mathcal{X}}$ is linearly equivalent to $m\mathcal{L}$ modulo a pull-back from the base and thus it follows from adjunction that the restriction of $\mathcal{L}$ to $\mathcal{X}_{0}$ is linearly equivalent to $-K_{\mathcal{X}_{0}},$ which concludes the proof of the first statement. Finally, if $\mathcal{X}_{0}$ has klt singularities it now follows from inversion of adjunction [@ko] that $\mathcal{X}$ also has klt singularities.
Very recently Li-Xu [@l-x] used methods from the Minimal Model Program in birational geometry to establish the following result which confirms a conjecture of Tian:
\[thm:(Li-Xu)–Let\](Li-Xu) [@l-x] Let $X$ be a Fano variety. Then $(X,-K_{X})$ is K-polystable iff $DF(\mathcal{X},\mathcal{L})\leq0$ for any special test configuration for $X$ with equality iff $\mathcal{X}$ is a product test configuration.
\[rem: semi-st vs mmp\]We briefly recall that the starting point of the proof in [@l-x] is to replace a given test configuration $\mathcal{X}$ for $X$ by a semi-stable family $\mathcal{Y}\rightarrow\mathcal{X}\rightarrow\C.$ Then the proof proceeds by producing an appropriate special test configuration from $\mathcal{Y},$ by running a relative Minimal Model Program (MMP) with scaling (on the log canonical modification of $\mathcal{Y}$). The reduction to special test configurations will simplify the proof of Theorem \[thm:k-poly intro\], but, as explained in section \[sub:An-alternative-proof\], an independent proof can also be given.
Before continuing we recall [@ti1; @d0] that the total space $\mathcal{X}$ of a test configuration may, using the relative linear systems defined by $r\mathcal{L}$ for $r$ sufficiently large, be equivariantly embedded as a subvariety of $\P^{N}\times\Delta$ so that $r\mathcal{L}$ becomes the pull-back of the relative $\mathcal{O}(1)-$hyperplane line bundle over $\P^{N}\times\Delta.$ We will denote by $\phi_{FS}$ the metric on $\mathcal{L}$ obtained by restriction of the fiberwise Fubini-Study metrics on $\P^{N}\times\{\tau\}.$
\[sub:Deligne-pairings-and\]Deligne pairings and the Ding type metric
---------------------------------------------------------------------
The Donaldson-Futaki invariant may also be expressed in terms of Deligne pairings [@p-r-s]. First recall that if $\pi:\,\mathcal{X}\rightarrow B$ is a proper flat morphism of relative dimension $n$ and $L_{0},...,L_{n}$ are line bundles over $\mathcal{X}$ then the Deligne pairing $\left\langle L_{0},...,L_{n}\right\rangle $ is a line bundle over $B,$ which depends in a multilinear fashion on the $L_{i}$ [@zh; @p-r-s]. Moreover, given Hermitian metrics $\phi_{0},...,\phi_{n}$ there is a natural Hermitian metric $\phi_{D}$ on $\left\langle L_{0},...,L_{n}\right\rangle $ which has the following fundamental properties:
- its curvature is given by $$dd^{c}\phi_{D}=\pi_{*}(dd^{c}\phi_{0}\wedge\cdots\wedge dd^{c}\phi_{n})\label{eq:curvature of the deligne pairing}$$
- if $\phi$ and $\psi$ are metrics in $\mathcal{H}_{b}(L)$ with $\phi_{D}$ and $\psi_{D}$ denoting the induced metrics on the top Deligne pairing $\left\langle L,...,L\right\rangle $ in the absolute case when $B$ is a point, then we have the following “change of metric formula”: $$\phi_{D}-\psi_{D}=(n+1)\mathcal{E}(\phi,\psi):=\sum_{j=0}^{n}\int_{X}(\phi-\psi)(dd^{c}\phi)^{n-j}\wedge(dd^{c}\psi)^{j}\label{eq:change of metric formula as energy}$$
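As a basic consistency check of the change of metric formula (an elementary computation, not used later), take $\phi=\psi+c$ for a constant $c.$ Then $dd^{c}\phi=dd^{c}\psi$ and each of the $n+1$ terms in the sum equals $c\int_{X}(dd^{c}\psi)^{n},$ so that $$\phi_{D}-\psi_{D}=(n+1)\mathcal{E}(\psi+c,\psi)=(n+1)c\int_{X}(dd^{c}\psi)^{n},$$ reflecting the multilinearity of the top Deligne pairing in its $n+1$ arguments; this is also consistent with the variational formula recalled next.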
We also recall (see [@bbegz] for the singular setting) that the first variation of the functional $\mathcal{E}(\cdot,\psi)$ on $\mathcal{H}_{b}(L)$ is given by $$\frac{d}{dt}_{|t=0}\mathcal{E}(\phi_{0}(1-t)+\phi_{1}t,\psi)=\int(\phi_{1}-\phi_{0})(dd^{c}\phi_{0})^{n}\label{eq:variational prop of energy}$$ Let us now come back to the general setting of a test configuration $\mathcal{L}\rightarrow\mathcal{X}\rightarrow\C$ for a polarized variety $(X,L).$ Under appropriate regularity assumptions it was shown in [@p-r-s] that the following holds:
\[pro:(Phong-Ross-Sturm)-:-The\](Phong-Ross-Sturm) [@p-r-s]: The Donaldson-Futaki invariant of a test configuration $(\mathcal{X},\mathcal{L})$ is minus the weight over $0$ of the following line bundle over $\C:$ $$\eta:=\frac{1}{(n+1)L^{n}}\left(\mu\left\langle \mathcal{L},...,\mathcal{L}\right\rangle -(n+1)\left\langle -K_{\mathcal{X}/\C},\mathcal{L}...,\mathcal{L}\right\rangle \right),$$ where $\mu$ is the numerical constant $n(-K_{X})\cdot L^{n-1}/L^{n}$ expressed in terms of the algebraic intersection numbers on $X.$
More precisely, it was shown in [@p-r-s] that, up to natural isomorphisms, the Knudsen-Mumford expansion of the determinant line bundle $\det(\pi_{*}(kL))\rightarrow\Delta$ (with fibers $\det H^{0}(X_{\tau},kL_{\tau})$) satisfies $$\det(\pi_{*}(kL))/kN_{k}=\frac{1}{(n+1)L^{n}}\left\langle \mathcal{L},...,\mathcal{L}\right\rangle -\frac{1}{k}\frac{1}{2}\eta+O(\frac{1}{k^{2}})$$ and $\eta$ is thus naturally isomorphic to the CM-line bundle introduced by Paul-Tian [@p-t]. The proofs in [@p-r-s] were carried out under the assumption that the total space $\mathcal{X}$ and the central fiber $X_{0}$ be non-singular (in particular, there are no multiple fibers), but as pointed out in [@p-r-s] the regularity assumptions can be relaxed and in particular the previous proposition applies when $\mathcal{X}$ is a special test configuration for a Fano variety $X.$
\[rem:compactification\]For completeness and future reference we note that an alternative direct proof of the previous proposition can be given, which is valid for any *$\mathcal{X}$* such that $K_{\mathcal{X}/\C}$ is well-defined as a $\Q-$line bundle (compare Prop 16 [@w]). Indeed, as shown in [@od; @w; @l-x], for any normal $\mathcal{X}$ the corresponding DF-invariant may be written as a sum of $n+1-$fold algebraic intersection numbers (we follow the notation in Prop 1 in [@l-x]): $$(n+1)L^{n}DF(\mathcal{X},\mathcal{\bar{\mathcal{L}}}):=-\mu\mathcal{\bar{\mathcal{L}}}\cdot\mathcal{\bar{\mathcal{L}}}\cdots\mathcal{\bar{\mathcal{L}}}-(n+1)K\cdot\mathcal{\mathcal{\bar{\mathcal{L}}}}\cdots\mathcal{\mathcal{\bar{\mathcal{L}}}},\label{eq:df as intersection}$$ computed on the natural equivariant compactification $\bar{\mathcal{X}}\rightarrow\P^{1}$ of $\mathcal{X}$ induced by the projective compactification of the affine line $\C.$ Here $K$ denotes the relative canonical (Weil) divisor on the compactified fibration and $\mathcal{\bar{\mathcal{L}}}$ denotes the extension of $\mathcal{L}$ to the compactification, induced by the fixed action $\rho.$ For a semi-test configuration the latter formula can be taken as the definition of the DF-invariant. In particular, if $K$ is well-defined as a $\Q-$Cartier divisor (i.e. as a $\Q-$line bundle) then it follows from the standard push-forward formula that $$DF(\mathcal{X},\mathcal{L})=-\int_{\P^{1}}\pi_{*}(\mu(c_{1}(\mathcal{\bar{\mathcal{L}}})^{n+1})-(n+1)c_{1}(K)\wedge c_{1}(\mathcal{\bar{\mathcal{L}}})^{n}),$$ which, according to \[eq:curvature of the deligne pairing\] coincides with the degree of the corresponding sum of Deligne pairings over $\P^{1}.$ But this is nothing but minus the weight over $0$ of the $\C^{*}-$action on $\eta.$
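As an elementary illustration of formula \[eq:df as intersection\] (a sanity check only, with $p_{X}$ denoting the projection onto $X$), consider the product test configuration $\mathcal{X}=X\times\C$ with $\C^{*}$ acting only on the second factor and $\mathcal{L}=p_{X}^{*}L.$ Then $\bar{\mathcal{X}}=X\times\P^{1},$ $\bar{\mathcal{L}}=p_{X}^{*}L$ and $K=p_{X}^{*}K_{X},$ so all the intersection numbers above involve $n+1$ classes pulled back from the $n-$dimensional variety $X$ and hence vanish: $$\bar{\mathcal{L}}^{n+1}=0,\,\,\,\,\, K\cdot\bar{\mathcal{L}}^{n}=0,\,\,\,\,\,\mbox{so that}\,\,\,\, DF(\mathcal{X},\mathcal{L})=0,$$ as it should be for a product test configuration equipped with the trivial action on the $X-$factor.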
We also recall that, as explained in [@p-r-s], in the case when the total space $\mathcal{X}$ and the central fiber $X_{0}$ are smooth and $\mathcal{L}$ is equipped with a smooth metric $\phi$ with strictly positive curvature along the fibers, there is a naturally induced Mabuchi type metric on $\eta$ obtained by equipping $K_{\mathcal{X}/\C}$ with the metric induced by the volume form $(dd^{c}\phi)^{n}$ and taking the metric on $\eta$ to be the one induced by the sum of the corresponding Deligne pairings. However, in the present paper we will introduce a different metric on $\eta$ which is defined when $X$ is a Fano variety and $\mathcal{X}$ is a special test configuration (this setting will be generalized to a general test configuration in Section \[sec:Singularity-structure-of\]).
Thus we assume again that $\mathcal{X}$ is a special test configuration for the Fano variety $X$ so that $\mathcal{L}=-K_{\mathcal{X}/\C}.$ Then we have that $$\eta:=-\frac{1}{(-K_{X})^{n}(n+1)}\left\langle -K_{\mathcal{X}/\C},...,-K_{\mathcal{X}/\C}\right\rangle ,$$ Given a metric $\phi$ on $-K_{\mathcal{X}/\C}$ we will equip $\mathcal{\eta}$ with an induced metric $\Phi$ that we will refer to as the *Ding metric*: $$\Phi:=\phi_{D}+v_{\phi},\label{eq:def of ding metric}$$ where $\phi_{D}$ is the usual Deligne metric on $\eta$ and $v_{\phi}$ is the following function on $\C:$ $$v_{\phi}(\tau):=-\log\int_{X_{\tau}}e^{-\phi_{\tau}}$$ We note that $\phi$ and $\phi+c(t)$ induce the same Ding metric on $\eta.$ The definition of the Ding metric is made so that, in the absolute case where $\C$ is replaced with a point and $\psi$ is a fixed reference metric on $-K_{X},$ the functional
$$\mathcal{D}(\phi,\psi):=\Phi-\Psi=-\frac{1}{(-K_{X})^{n}}\mathcal{E}(\phi,\psi)+v_{\phi}-v_{\psi},\label{eq:def of ding functional}$$
coincides (up to an additive constant) with a *functional* introduced by Ding (see [@bbegz] for the singular setting and references therein). As is well-known, this latter functional has the crucial property that its critical points are precisely the Kähler-Einstein metrics. Indeed, by \[eq:variational prop of energy\] and the definition of $v_{\phi}$ the critical point equation $d_{\phi}\mathcal{D}=0$ is equivalent to the Kähler-Einstein equation $$(dd^{c}\phi)^{n}=Ve^{-\phi}/\int_{X}e^{-\phi}\label{eq:k-e equation for phi}$$ We will also have use for the following lemma which is an abstract version of the Kempf-Ness approach to the Hilbert-Mumford criterion, used to test stability in Geometric Invariant Theory (compare [@p-s2]). Its formulation involves the classical notion of a *Lelong number* $l_{\Phi}(0)$ at zero of a subharmonic function $\Phi$ on the unit-disc in $\C,$ which may be defined as the sup over all positive numbers $\lambda$ such that $\Phi(\tau)\leq\lambda\log|\tau|^{2}+O(1)$ close to $\tau=0$ (equivalently, $l_{\Phi}(0)=\int_{\{0\}}(dd^{c}\Phi)).$
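For orientation (a standard model case, recorded only to make the normalization explicit), if $\Phi(\tau)=\lambda\log|\tau|^{2}$ with $\lambda\geq0,$ then $$l_{\Phi}(0)=\lambda,\,\,\,\,\,\mbox{since}\,\,\,\, dd^{c}(\lambda\log|\tau|^{2})=\lambda\delta_{0}$$ by the Poincaré-Lelong formula (with the normalization of $dd^{c}$ fixed above), in accordance with the two equivalent definitions just given.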
\[lem:Hilbert-Mumford\]Let $F$ be a line bundle over the unit-disc $\Delta$ in $\C$ equipped with a $\C^{*}-$action $\rho$ compatible with the standard one on $\Delta$ and fix an $S^{1}-$invariant metric $\Phi$ on $F$ with positive curvature current. Then the weight $w$ of the $\C^{*}$ action on the complex line $F_{0}$ may be computed from the following formula: $$-\lim_{t\rightarrow\infty}\frac{d}{dt}\log\left\Vert \rho(\tau)s\right\Vert _{\Phi}^{2}=w-l_{\Phi}(0)\label{eq:w as deric}$$ for $t=-\log|\tau|,$ $s$ a fixed holomorphic section of $F,$ where $l_{\Phi}(0)\,(\geq0)$ denotes the Lelong number of $\Phi$ at $\tau=0.$
When $\Phi$ is smooth the lemma is indeed equivalent to the “Hilbert-Mumford criterion” in Geometric Invariant Theory (GIT). In the case when $\Phi$ is not necessarily smooth, but has positive curvature current, we fix a smooth $S^{1}-$invariant metric $\Phi_{0}$ and write $\Phi=\Phi_{0}+U$ where $U$ is a function on $\Delta.$ Then $\log\left\Vert \rho(\tau)s\right\Vert _{\Phi}^{2}=\log\left\Vert \rho(\tau)s\right\Vert _{\Phi_{0}}^{2}-U.$ Now, for $\tau$ close to zero we may write $U(\tau)=\phi(\tau)-\phi_{0}(\tau)$ for $\phi$ and $\phi_{0}$ subharmonic functions with $\phi_{0}$ smooth (indeed, $\phi:=-\log\left\Vert s'\right\Vert _{\Phi}^{2}$ and $\phi_{0}:=-\log\left\Vert s'\right\Vert _{\Phi_{0}}^{2}$ for $s'$ a fixed trivializing section of $F$ on a neighborhood of $0).$ But it is well-known, and easy to verify, that if $u$ is an $S^{1}-$invariant subharmonic function on a neighborhood of $0$ in $\C$ then $-\lim_{t\rightarrow\infty}\frac{d}{dt}u(\tau)$ coincides with the Lelong number $l_{u}(0)$ of $u$ at $0.$ Applying this latter fact to $\phi$ and $\phi_{0}$ thus concludes the proof. In fact, a direct proof of the lemma can be given along these lines: indeed, we may as well suppose that, in a small neighborhood of $\tau=0,$ there is a trivialization such that $\rho(\tau)s=\tau^{w}$ and hence the left hand side of \[eq:w as deric\] is equal to $-\lim_{t\rightarrow\infty}\frac{d}{dt}\log e^{-tw}-l_{\Phi}(0)=w-l_{\Phi}(0),$ as desired.
It should be pointed out that a variant of the previous lemma was already used (implicitly) in [@p-r-s] in conjunction with Prop \[pro:(Phong-Ross-Sturm)-:-The\] to compute the Donaldson-Futaki invariant in terms of asymptotics of Mabuchi’s K-energy functional, by equipping $\eta$ with the Mabuchi type metric. One of the main points of the present paper is the observation that, in the Fano case, the analysis simplifies (and also has the virtue of extending to the case when $X$ is singular) if one instead equips $\eta$ with the Ding metric introduced above. This will be explained below, but we first show in the next section how to equip the relative anti-canonical line bundle over $\mathcal{X}$ with a special metric extending a given one on the fiber $X_{1}.$ This builds on ideas introduced in the work of Phong-Sturm [@p-s; @p-s1b] and Chen-Tang [@ch].
The Monge-Ampère equation on $\mathcal{X}$ and geodesic rays
------------------------------------------------------------
Let $X$ be a Fano variety and $\mathcal{X}$ a special test configuration for $X.$ Denote by $M$ the variety with boundary obtained by restricting $\mathcal{X}$ to the unit-disc $\Delta\subset\C.$ Given a locally bounded metric $\phi_{1}$ with positive curvature on $-K_{X}$ we let $\phi$ be the metric on $-K\rightarrow M$ defined as the following envelope: $$\phi:=\sup\{\psi:\,\,\,\psi\leq\phi_{1}\,\,\mbox{on}\,\,\partial M\}\label{eq:def of envelope}$$ where $\psi$ ranges over all locally bounded metrics with positive curvature form on $\mathcal{L}\rightarrow M$ and $\phi_{1}$ denotes the $S^{1}-$invariant metric on $\partial M$ induced by the given metric (since we are not a priori assuming that $\psi$ is continuous the boundary condition above means that, locally, $\limsup_{z_{i}\rightarrow z}\psi(z_{i})\leq\phi_{1}(z)$ for any sequence $z_{i}$ approaching a boundary point $z).$ Occasionally, we will use the logarithmic real coordinate $t=-\log|\tau|$ on the punctured disc $\Delta^{*}.$ We note that if $X$ is identified with the fiber $X_{1}$ of $\mathcal{X}$ then we can use the action $\rho$ to identify the metrics $\phi_{\tau}$ on $X_{\tau}$ with a curve of metrics $$\phi^{t}:=\rho(\tau)^{*}\phi_{\tau}\label{eq:def of geodesic ray as pull-back}$$ on $-K_{X}.$ Next we will show that the metric $\phi$ above can be seen as a solution to a Dirichlet problem for the Monge-Ampère operator on $M.$ In fact, it will be convenient to formulate the result for any test configuration (where we recall $X$ and the total space $\mathcal{X}$ are always assumed to be normal varieties).
\[prop:reg for ma-eq\]Let $(\mathcal{X},\mathcal{L})$ be a test configuration for the polarized variety $(X,L).$ Then the following holds:
- $\phi$ is $S^{1}-$invariant
- $\phi$ is locally bounded with positive curvature current and upper semi-continuous in $M$
- $\phi_{\tau}\rightarrow\phi_{1}$ uniformly as $|\tau|\rightarrow1$ (with respect to any fixed trivialization of $\mathcal{L}$ close to a given boundary point).
- In the interior of $M$ we have that $(dd^{c}\phi)^{n+1}=0$ in the sense of pluripotential theory.
The first point follows immediately from the defining extremal property of $\phi.$ It will be convenient to identify the metric $\phi_{1}$ with a $\C^{*}-$invariant metric on $\mathcal{L}$ over the punctured unit-disc $\Delta^{*}$ using the action $\rho.$ We will also, abusing notation slightly, identify the coordinate $\tau$ with the psh function $\pi^{*}\tau$ on $\mathcal{X}.$ Let us first construct a *barrier,* i.e. a continuous metric $\tilde{\phi}$ on *$\mathcal{L}$* with positive curvature current such that $\tilde{\phi}=\phi_{1}$ on $\partial M$ and $\tilde{\phi}_{\tau}\rightarrow\phi_{1}$ as $|\tau|\rightarrow1.$ To this end first observe that for $\epsilon>0$ sufficiently small there exists a continuous metric $\phi_{U}$ with positive curvature on *$\mathcal{L}\rightarrow U$* over the set $U:=\{|\tau|\leq\epsilon\}\subset\mathcal{X}.$ Indeed, we can set $\phi_{U}=\phi_{FS}$ for the Fubini-Study metric induced by a fixed embedding of $\mathcal{X}$ (see the end of section \[sub:K-polystability-and-test\]). Finally, we set $\tilde{\phi}:=\max\{\phi_{1}+\log|\tau|,\phi_{U}-C\}$ for $C$ sufficiently large so that $\tilde{\phi}=\phi_{U}-C$ for $|\tau|$ sufficiently small and $\tilde{\phi}=\phi_{1}+\log|\tau|$ for $|\tau|>\epsilon/2.$ Since $\tilde{\phi}$ is a candidate for the sup defining $\phi$ we conclude that $$\phi\geq\tilde{\phi}\geq\phi_{1}+\log|\tau|\label{eq:lower bound on env}$$ Next, let us show that $\phi$ is locally bounded from above or equivalently that there exists a constant $C'$ such that $$\phi\leq\phi_{FS}+C'\label{eq:upper bound on env}$$ Accepting this for the moment we deduce that the envelope $\phi$ is finite with positive curvature current. Moreover, the upper bound also implies that the upper semi-continuous regularization $\phi^{*}$ of $\phi$ is a candidate for the sup defining $\phi,$ forcing $\phi=\phi^{*}$ in the interior of $M,$ i.e. $\phi$ is upper semi-continuous there. To prove the previous upper bound we note that since any candidate $\psi$ for the sup defining $\phi$ satisfies $\psi\leq\phi_{FS}+C$ on the set $E:=\partial M$ it follows from general compactness properties of positively curved metrics (or more generally, $\omega-$psh functions) that there is a constant $C'$ such that $\psi\leq\phi_{FS}+C'$ on all of $M.$ Indeed, by a simple extension argument we may as well assume that $u:=\psi-\phi_{FS}$ extends as an $\omega-$psh function to some compactification $\hat{\mathcal{X}}$ of $\mathcal{X}$ for some semi-positive current $\omega$ with continuous potentials. But since $u\leq C$ on the non-pluripolar set $E$ it then follows from Cor 5.3 in [@g-z] that $u\leq C'$ on all of $\hat{\mathcal{X}}$ (strictly speaking the variety $\hat{\mathcal{X}}$ is assumed non-singular in [@g-z], but we may as well deduce the result by pulling back $u$ to a smooth resolution of $\hat{\mathcal{X}}).$ Alternatively, $u$ can be shown to be bounded from above by using the maximum principle to bound it by a solution to a Dirichlet type problem for the Laplace operator wrt a fixed Kähler metric on a resolution of $M$ (compare the argument for the upper bound in [@p-s1b]).
Let us next consider the behavior of $\phi$ on $\Delta^{*}$ by identifying $\phi_{\tau}$ with $\phi^{t}$ as above for $t\in[0,\infty[.$ Since $\phi$ is positively curved and $S^{1}-$invariant it follows that $\phi^{t}$ is convex in $t$ and in particular the right derivative $\dot{\phi}$ wrt $t$ exists at $t=0$ and $\dot{\phi}\leq(\phi^{t_{0}}-\phi^{0})/t_{0}$ for any fixed positive number $t_{0}.$ From the upper bound \[eq:upper bound on env\] we thus deduce that $\dot{\phi}\leq C_{0},$ which, combined with the lower bound \[eq:lower bound on env\], means that for any $T>0$ there exists a constant $C_{T}$ such that $\left|\dot{\phi}\right|\leq C_{T}$ for any $t\in[0,T].$ By convexity this means that $|\phi^{t}-\phi^{0}|\leq Ct$ as $t\rightarrow0$ which thus proves the convergence in the third point of the proposition.
As for the final point, the vanishing of the Monge-Ampère measure $(dd^{c}\phi)^{n+1}$ on the regular part of the interior of $M$ is a standard local argument, which follows from comparison with the solution of the homogeneous Monge-Ampère equation on small balls. Since the Monge-Ampère measure of a locally bounded metric does not charge pluripolar sets, and in particular not the singular locus of $M,$ this concludes the proof.
According to the previous proposition the envelope $\phi$ thus induces a *weak geodesic ray* $\phi^{t}$ (formula \[eq:def of geodesic ray as pull-back\]) in the space $\mathcal{H}_{b}(X,L)$ of all bounded positively curved metrics, starting at a given metric (compare [@p-s]). For much more precise regularity results (given suitably smooth data on $\partial M)$ expressed on a smooth resolution of $\mathcal{X}$ we refer to the paper [@p-s1b] and to [@ch]. However, the point here is that the modest regularity results above will be adequate for our purposes and are valid for any given locally bounded positively curved metric $\phi_{1}.$
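To illustrate the construction in the simplest possible case (a trivial example, included only for orientation), let $\mathcal{X}=X\times\C$ be the product test configuration with $\C^{*}$ acting only on the second factor. For any candidate $\psi$ in \[eq:def of envelope\] the restriction of (the local weight of) $\psi$ to each disc $\{x\}\times\Delta$ is a bounded subharmonic function, so the maximum principle gives $\psi(x,\tau)\leq\phi_{1}(x);$ since the pull-back of $\phi_{1}$ from the $X-$factor is itself a candidate it follows that $$\phi(x,\tau)=\phi_{1}(x),\,\,\,\,\,\mbox{i.e.}\,\,\,\,\phi^{t}\equiv\phi_{1},$$ so that the weak geodesic ray emanating from $\phi_{1}$ is constant, as expected.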
\[sub:Curvature-properties-of\]Curvature properties of the Ding metric associated to a special test configuration
-----------------------------------------------------------------------------------------------------------------
The key analytical input in the (first) proof of Theorem \[thm:k-poly intro\] is the following result which we will deduce from a recent result of Berndtsson [@bern2] and Berndtsson-Paun [@be-p]. It is a variant of results about positivity of direct images previously established in [@bern1] (which concerned smooth metrics). In the case when $X$ is singular we will more precisely be using the generalization obtained in [@bbegz]. In fact, the more precise result about the vanishing of the Lelong numbers will not be needed for the (first) proof of Theorem \[thm:k-poly intro\] - it will be obtained as a special case of the more general situation considered in section \[sec:Singularity-structure-of\].
\[thm:positivity of v\]Let $\mathcal{X}$ be a special test configuration and $\phi$ an $S^{1}-$invariant locally bounded metric on $-K_{\mathcal{X}/\C}$ with positive curvature current. Then the corresponding function $v_{\phi}(\tau)$ on $\Delta$ has the following properties:
- $v_{\phi}(\tau)$ is subharmonic in $\tau$ (i.e. convex in $t)$ and its Lelong number at $\tau=0$ vanishes.
- If $v_{\phi}(\tau)$ is harmonic in $\tau$ (i.e. affine in $t)$ then the fibration $\mathcal{X}$ is a product test configuration.
Consider the holomorphically trivial restricted fibration $\mathcal{X}^{*}\rightarrow\Delta^{*}$ over the punctured disc (which is trivialized by the action $\rho).$ In the case when the fibers are smooth Fano varieties it was shown in [@bern2] that $v_{\phi}(\tau)$ is subharmonic and the result was extended in [@bbegz] to the case of Fano varieties with klt singularities. Moreover, since we are assuming that the fibers $\mathcal{X}_{\tau}$ have klt singularities $v_{\phi}(\tau)$ is finite for any $\tau$ (including $\tau=0)$ and it is thus tempting to conclude that $v_{\phi}$ is bounded on $\Delta$ (e.g. by appealing to an appropriate continuity result as $\tau\rightarrow0$), but this seems to be non-trivial to prove. Even if the latter boundedness may very well turn out to be true it will be enough for our purposes to know that $v_{\phi}$ is bounded from above (which is the only thing needed for the proof of Theorem \[thm:positivity of v\]) and that the Lelong number of $v_{\phi}$ at $\tau=0$ vanishes (which is used in Cor \[cor:mab along bergman intro\]) and this is indeed the case for a special test configuration (by Cor \[cor:vanshing of lelong for gen ding\] below).
As for the last point it was shown in [@bern2; @bbegz] that in the case when $v_{\phi}(\tau)$ is harmonic, for $\tau\in\Delta^{*}$ (i.e. $v_{\phi}(e^{-t})$ is affine in $t$) there is a biholomorphic map $F_{\tau}$ mapping $X_{1}\rightarrow X_{\tau}$ such that $F_{\tau}^{*}\phi_{\tau}=\phi_{1}$ (the results in [@bern2; @bbegz] were formulated in terms of a fixed trivialization, which in our setting means that $\phi_{\tau}$ is identified with the geodesic ray $\phi^{t}).$ Using the equivariant embedding in the end of section \[sub:K-polystability-and-test\] we can identify $F_{\tau}$ with a family of embeddings of $X$ into $\P^{N}.$ Now, combining the previous pull-back relation with the assumption that $\phi$ is a locally bounded metric on $-K_{\mathcal{X}/\C}\rightarrow\mathcal{X}$ (or equivalently: $\phi-\phi_{FS}$ is a bounded function on the total space of $\mathcal{X}\rightarrow\Delta)$ we deduce that $$\sup_{X}|F_{\tau}^{*}\phi_{FS}^{\P^{N}}-\phi_{1}|\leq C\label{eq:uniform bound on map}$$ uniformly for $\tau\in\Delta^{*}.$ But then it follows (just as in the proof of Lemma 6.1 in [@ti1]) that $F_{\tau}$ converges, as $\tau\rightarrow0,$ to a biholomorphic map between $X$ and $X_{0},$ showing that $\mathcal{X}$ is a product test configuration. In fact, as pointed out in [@ti1] this last step only uses that $X_{0}$ is reduced. Moreover, since we are assuming that $\mathcal{X}$ is normal it is enough to assume that $X_{0}$ is generically reduced. For completeness we recall the argument. First, by \[eq:uniform bound on map\] we may assume that $F_{\tau}$ converges to a holomorphic map $F_{0}$ from $X_{1}$ onto the complex space $(X_{0})_{red},$ underlying the scheme $X_{0}.$ The map $F_{0}$ is finite, since by construction it pulls back an ample line bundle to an ample line bundle. Moreover, if $X_{0}$ is generically reduced then the degree of $F_{0}$ is equal to one, i.e. $F_{0}$ is generically one-to-one and hence the map $(\tau,x)\mapsto(\tau,F_{\tau}(x))$ from $X\times\C$ onto $\mathcal{X}$ is a biholomorphism away from a subvariety of codimension two. But then it follows from normality that the map defines a biholomorphism, which concludes the proof.
\[cor:pos etc of ding metric\]Let $\mathcal{X}$ be a special test configuration and $\phi$ an $S^{1}-$invariant locally bounded metric on $-K_{\mathcal{X}/\C}$ with positive curvature current. Then the corresponding Ding type metric $\Phi$ on the top Deligne pairing $\eta$ of $-K_{\mathcal{X}/\C}\rightarrow\Delta$ has the following properties:
- Its curvature defines a positive current on $\Delta.$
- It is continuous on $\Delta^{*},$ up to the boundary $\partial\Delta$ and its Lelong number at $\tau=0$ vanishes.
- If the curvature vanishes on $\Delta^{*}$ then $\mathcal{X}$ is a product test configuration.
*Positivity:* first we note that the curvature of the usual (non-twisted) Deligne metric $\phi_{D}$ on $\eta$ is non-negative, as follows immediately from the push-forward formula \[eq:curvature of the deligne pairing\] (and a standard approximation argument). Equivalently, since $\phi_{D}$ is bounded from above (see below) it is enough to consider the holomorphically trivial case over $\Delta^{*}$ where the result amounts to a well-known property of the functional $\mathcal{E}$ (see [@bbegz]). Combined with the positivity in the previous theorem this shows that $\Phi$ has positive curvature current.
*Continuity etc:* Let us first verify that if $\phi$ is a locally bounded positively curved metric on $\mathcal{L}$ over the special test configuration $\mathcal{X}$ then the Deligne metric $\phi_{D}$ on $\eta$ is locally bounded on $\Delta$ and continuous at the boundary of $\Delta.$ To this end, we first recall that if $\psi$ is a smooth (in a suitable sense) metric on $\mathcal{L}$ then it was shown by Moriwaki [@mo] that the corresponding Deligne metric $\psi_{D}$ on the top Deligne product of $\mathcal{L}$ is continuous. But since $\phi$ is a locally bounded metric on $\mathcal{L}$ we have that $u:=\phi-\psi$ is a bounded function on $\mathcal{X}$ and hence it follows from the change of metric formula \[eq:change of metric formula as energy\] that $$\left|\psi_{D}-\phi_{D}\right|\leq(n+1)(\sup_{\mathcal{X}}|u|)c_{1}(L)^{n}$$ is bounded (where $L$ denotes the restriction of $\mathcal{L}$ to a generic fiber) and hence $\phi_{D}$ is locally bounded, as desired. Alternatively, the local boundedness of $\phi_{D}$ can be verified directly by induction over the relative dimension, using the recursive definition of $\phi_{D}$ [@zh]. Similarly, the continuity at $\tau=1$ follows from continuity properties at $\tau=1$ of $\phi_{\tau}.$ Indeed, by Prop \[prop:reg for ma-eq\] we have that $\phi_{\tau}\rightarrow\phi_{1}$ uniformly as $\tau\rightarrow1,$ i.e. $\phi^{t}\rightarrow\phi^{0},$ and hence it follows from the change of metrics formula that, in a fixed local trivialization close to $\tau=1,$ we have $$\left|(\rho(\tau)^{*}\phi_{\tau})_{D}-(\phi_{1})_{D}\right|\leq C'\sup_{X}|(\rho(\tau)^{*}\phi_{\tau})-\phi_{1}|\rightarrow0$$ and also $v_{\phi}(\tau)\rightarrow v_{\phi}(1).$ This shows in particular that, over $\Delta^{*},$ the Ding metric $\Phi(=\phi_{D}+v_{\phi})$ may be identified with a subharmonic locally bounded $S^{1}-$invariant function which is continuous up to $\partial\Delta.$ But then it is convex as a function of $t$ and hence continuous on $\Delta^{*}.$ Moreover, since $\phi_{D}$ is bounded in a neighborhood of $\tau=0$ the Lelong number of $\Phi$ at $\tau=0$ is equal to the Lelong number of $v_{\phi},$ which vanishes according to the previous theorem.
Finally, if the curvature vanishes on $\Delta^{*}$ then it follows from the previous theorem that $v_{\phi}(e^{-t})$ is affine and hence the fibration $\mathcal{X}$ is a product test configuration.
Completion of the proof of Theorem \[thm:k-poly intro\] using special test configurations
-----------------------------------------------------------------------------------------
Let $\phi_{1}$ be a fixed metric in $\mathcal{H}_{b}(-K_{X})$ and denote by $\phi$ the corresponding envelope \[eq:def of envelope\]. Since the corresponding Ding type metric on $\eta$ is positively curved and locally bounded on $\Delta^{*}$ with vanishing Lelong number at $\tau=0$ (by Cor \[cor:pos etc of ding metric\]) it follows from Prop \[pro:(Phong-Ross-Sturm)-:-The\] combined with Lemma \[lem:Hilbert-Mumford\] and the definition of the Ding functional (formula \[eq:def of ding functional\]) that the Donaldson-Futaki invariant $DF(\mathcal{X},\mathcal{L})$ of a test configuration, which by Theorem \[thm:(Li-Xu)–Let\] may be assumed to be special, may be computed as $$-DF(\mathcal{X},\mathcal{L}):=\lim_{t\rightarrow\infty}\frac{d}{dt}\mathcal{D}(\rho(\tau)^{*}\phi_{\tau},\phi_{1})=:\lim_{t\rightarrow\infty}\frac{d}{dt}\mathcal{D}(t)$$ Let us now take $\phi_{1}$ to be a Kähler-Einstein metric, so that $\frac{d}{dt}\mathcal{D}(t)\geq0$ at $t=0.$ Of course, if $\phi^{t}:=\rho(\tau)^{*}\phi_{\tau}$ were smooth in $t,$ then equality would hold. In any case, the inequality is all we need (and it follows easily from the continuity of $\phi^{t}$ as $t\rightarrow0,$ combined with the convexity of $\mathcal{E}$ wrt the affine structure and the dominated convergence theorem applied to $v_{\phi}$). Now by Cor \[cor:pos etc of ding metric\] $\mathcal{D}(t)$ is convex and hence $\lim_{t\rightarrow\infty}\frac{d}{dt}\mathcal{D}(t)\geq\frac{d}{dt}_{t=0^{+}}\mathcal{D}(t)\geq0,$ i.e. $DF(\mathcal{X},\mathcal{L})\leq0.$ Moreover, by convexity $DF(\mathcal{X},\mathcal{L})=0$ iff $\mathcal{D}(t)$ is affine and hence it follows from Cor \[cor:pos etc of ding metric\] that $\mathcal{X}$ is a product test configuration, which thus concludes the proof.
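Schematically (this merely recapitulates the steps just given), the chain of (in)equalities underlying the argument is $$-DF(\mathcal{X},\mathcal{L})=\lim_{t\rightarrow\infty}\frac{d}{dt}\mathcal{D}(t)\geq\frac{d}{dt}_{t=0^{+}}\mathcal{D}(t)\geq0,$$ where the equality comes from Prop \[pro:(Phong-Ross-Sturm)-:-The\] combined with Lemma \[lem:Hilbert-Mumford\], the first inequality from the convexity of $\mathcal{D}(t)$ (Cor \[cor:pos etc of ding metric\]) and the last inequality from the choice of $\phi_{1}$ as a Kähler-Einstein metric, i.e. a critical point of the Ding functional.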
\[sec:Singularity-structure-of\]Singularity structure of the generalized Ding metric
=====================================================================================
In this section we will prove a general formula expressing the Donaldson-Futaki invariant of a general test configuration in terms of the Ding functional and a correction term (which will prove Theorem \[thm:DF=00003Dding intro\] in the introduction and provide a second proof of Theorem \[thm:k-poly intro\], not relying on [@l-x]). To this end we are led to introduce a generalized Ding metric.
Let $(\mathcal{X},\mathcal{L})$ be a test configuration for a Fano variety $(X,-K_{X})$ and fix an equivariant log resolution $p:\,\mathcal{X}'\rightarrow\mathcal{X}$ of $(\mathcal{X},\mathcal{X}_{0})$ and write $\mathcal{L}':=p^{*}\mathcal{L}.$ Then $(\mathcal{X}',\mathcal{L}')$ is a semi-test configuration for $(X,-K_{X}).$ In order to define a generalized Ding metric we first assume, to fix ideas, that the original Fano variety $X$ is smooth with $\mathcal{L}$ a line bundle over $\mathcal{X}$ and define a new line bundle $\delta'\rightarrow\C$ by $$\delta':=-\frac{1}{L^{n}(n+1)}\left\langle \mathcal{L}',...,\mathcal{L}'\right\rangle +\pi'_{*}(\mathcal{L}'+K_{\mathcal{X}'/\C})\rightarrow\C,\label{eq:ding line bundle}$$ (when $X$ is smooth the direct image sheaf $\pi'_{*}(\mathcal{L}'+K_{\mathcal{X}'/\C})$ is indeed a line bundle, as explained below). Given a metric $\phi$ on $\mathcal{L}\rightarrow\mathcal{X}$ we denote by $\Phi'$ the *generalized Ding metric* on $\delta',$ defined as the Deligne metric on the Deligne pairing twisted by the $L^{2}-$metric on $\pi'_{*}(\mathcal{L}'+K_{\mathcal{X}'/\C}),$ induced by $\phi':=p^{*}\phi.$ Note that in general $\mathcal{L}$ is only assumed to be a $\Q-$line bundle, i.e. $r\mathcal{L}$ is a line bundle for some positive integer $r$ and then we may simply define $\pi'_{*}(\mathcal{L}'+K_{\mathcal{X}'/\C}):=\pi'_{*}(r(\mathcal{L}'+K_{\mathcal{X}'/\C}))/r$ as a $\Q-$line bundle (which is easily seen to be independent of $r)$ and let $\Phi'$ be the metric defined by the corresponding $L^{2/r}-$norm (compare the general setting in [@be-p2]).
Turning to the case of a general Fano variety $X$ with klt singularities, first recall that, since the variety $\mathcal{X}^{*}(:=\mathcal{X}-\mathcal{X}_{0})$ is klt, we have $p^{*}K_{\mathcal{X}}=K_{\mathcal{X}'}+D^{*}$ on $\mathcal{X}'-\mathcal{X}'_{0}$ for a (sub) klt $\Q-$divisor $D^{*},$ whose closure in $\mathcal{X}'$ we will denote by $D.$ We can uniquely decompose $D=D'-E'$ as a difference of effective divisors. We may and will assume that the log resolution is such that the support of $D$ has simple normal crossings and is transversal to $\mathcal{X}'_{0}$ (even if this assumption will not be used). We then define $$\delta':=-\frac{1}{L^{n}(n+1)}\left\langle \mathcal{L}',...,\mathcal{L}'\right\rangle +\pi'_{*}(\mathcal{L}'+D'+K_{\mathcal{X}'/\C})\rightarrow\C,$$ and denote by $\Phi'$ the corresponding metric on $\delta',$ which is defined using the log adjoint $L^{2}-$metric on $\pi'_{*}(\mathcal{L}'+D'+K_{\mathcal{X}'/\C})$ (defined wrt a fixed multi-section $s_{D'}$ cutting out $D').$ To see that $\pi'_{*}(\mathcal{L}'+D'+K_{\mathcal{X}'/\C})$ is indeed a line bundle over $\C$ first note that over $\C^{*},$ where the sheaf is globally free, any fiber may be identified with $H^{0}(X',E'),$ where $X'=p^{*}X$ and $E'$ is $p-$exceptional, so that $\dim H^{0}(X',E')=1.$ The extension property to all of $\C$ then follows from general principles: the direct image sheaf is clearly torsion-free and since the base is a curve any torsion-free sheaf is automatically locally free (see [@ha]).
The Lelong number of the generalized Ding metric
------------------------------------------------
In this section we will study the Lelong number $l_{0}$ of the generalized Ding metric at $\tau=0$ (as explained in section \[sub:Curvature-properties-of\] the metric is continuous away from $\tau=0).$ The key result is the following proposition, which complements the general results of Berndtsson-Paun [@be-p], which imply that $l_{0}\geq0.$
\[prop:lelong of direct image\]Assume that $\mathcal{X}$ is a smooth variety and $\pi:\,\mathcal{X}\rightarrow\C$ a proper projective morphism over $\C$ which is non-singular (i.e. a submersion) over the punctured disc $\Delta^{*}$ and such that the support of the central fiber $\mathcal{X}_{0}$ has simple normal crossings. Let $\mathcal{L}\rightarrow\mathcal{X}$ be a semi-positive line bundle coinciding with $-K_{\mathcal{X}/\C}$ over $\Delta^{*}$ such that $H^{0}(\mathcal{X},\mathcal{L}+K_{\mathcal{X}})_{|\mathcal{X}_{0}}$ is non-trivial and $\phi$ a (possibly singular) metric on $\mathcal{L}$ with positive curvature current and such that, under restriction, $e^{-\phi}$ is locally integrable on each component of $\mathcal{X}_{0}.$
- If $\mathcal{X}_{0}$ is reduced, then the Lelong number $l_{0}$ at $\tau=0$ of the induced $L^{2}-$metric on the line bundle $\pi_{*}(\mathcal{L}+K_{\mathcal{X}/\C})\rightarrow\C$ vanishes.
- If $\phi$ is locally bounded, but $\mathcal{X}_{0}$ is possibly non-reduced, then $$l_{0}=\max_{i}\frac{m_{i}-1-c_{i}}{m_{i}}$$ where $c_{i}$ is the order of vanishing along the reduced component $E_{i}$ of $\mathcal{X}_{0}$ of a trivializing (multi-) section $s$ of $\pi_{*}(\mathcal{L}+K_{\mathcal{X}/\C})\rightarrow\C$ and $m_{i}$ is the order of vanishing of $\pi^{*}\tau$ along $E_{i}.$
For simplicity we first assume that $\mathcal{L}$ is a line bundle (and not only a $\Q-$line bundle). Fix a local trivializing section of $\pi_{*}(\mathcal{L}+K_{\mathcal{X}/\C})\rightarrow\C$ over a neighborhood $V$ of $0\in\C.$ It may be identified with a global holomorphic section $s$ of $\mathcal{L}+K_{\mathcal{X}/\C}\rightarrow\mathcal{X}_{|V}.$ Fix a local coordinate $\tau$ on $\C$ and let $v(\tau):=-\log\left\Vert s\right\Vert _{L_{\Phi}^{2}}^{2}$ be the corresponding local weight of the $L^{2}-$metric on $\pi_{*}(\mathcal{L}+K_{\mathcal{X}/\C}),$ which is a subharmonic function on $V$ (by [@be-p; @be-p2]). It will be useful to write the Lelong number $l_{0}$ of $v$ at $0$ as follows $$l_{0}=\inf\left\{ l:\,\int_{V}e^{-(v(\tau)+(1-l)\log|\tau|^{2})}d\tau\wedge d\bar{\tau}<\infty\right\} \label{eq:lelong as inf}$$ Noting that $|s|^{2}e^{-\phi}\otimes d\tau\wedge d\bar{\tau}$ defines a measure on $\mathcal{X}_{|V}$ we can rewrite the integral above as $$\int_{V}e^{-(v(\tau)+(1-l)\log|\tau|^{2})}d\tau\wedge d\bar{\tau}=\int_{\mathcal{X}_{|V}}|s|^{2}e^{-\phi}e^{-(1-l)\log|\tau|^{2}}\otimes d\tau\wedge d\bar{\tau}$$ Now assume that $\mathcal{X}_{0}$ is reduced and fix a point $x_{0}\in\mathcal{X}_{0}$ and a small neighborhood $U$ of $x_{0}.$ We may assume that $\mathcal{L}$ and $\mathcal{X}$ are holomorphically trivial over $U$ and that $\tau$ can be complemented with a local coordinate $z\in\C^{n}$ such that $(\tau,z)$ define local holomorphic coordinates on $U.$ In the corresponding trivialization of $\mathcal{L}+K_{\mathcal{X}/\C}$ we may write the measure $|s|^{2}e^{-\phi}\otimes d\tau\wedge d\bar{\tau}$ as $|s|^{2}e^{-\phi}dz\wedge d\bar{z}\wedge d\tau\wedge d\bar{\tau}$ where $s$ is identified with a local holomorphic function and $\phi$ with a local psh function. Since $\mathcal{X}_{0}$ is assumed to be reduced we have that $\tau$ vanishes to order one along any reduced component $E_{i}.$ If now $p\in E_{i}$ it thus follows from the Ohsawa-Takegoshi theorem (see section 2 in [@d-k] for the precise formulation needed here) that $$\int_{U}e^{-\phi}e^{-(1-l)\log|\tau|^{2}}\otimes d\tau\wedge d\bar{\tau}\leq C\int_{U\cap\{s_{i}=0\}}e^{-\phi}dV_{n-1}<\infty\label{eq:finte integral}$$ where the finiteness follows from the very assumption on $\phi.$ Since the point $x_{0}$ was arbitrary and $l$ is any given constant in $]0,1[,$ this shows that $l_{0}=0$ in formula \[eq:lelong as inf\], as desired.
Turning to the proof of the second point we assume that $\phi$ is locally bounded, but that $\mathcal{X}_{0}$ is possibly non-reduced. Given a point $x_{0}\in\mathcal{X}_{0}$ we may assume that $z=(z_{1},...,z_{n+1})$ are local coordinates and $\tau=z_{1}^{m_{1}}\cdots z_{r}^{m_{r}},$ so that $$\int_{U}|s|^{2}e^{-\phi}e^{-(1-l)\log|\tau|^{2}}\otimes d\tau\wedge d\bar{\tau}\sim\int\prod_{i=1,...,r}\frac{1}{|z_{i}|^{2((1-l)m_{i}-c_{i})}}idz_{i}\wedge d\bar{z_{i}}$$ Since the integrability exponent of $\frac{1}{|z_{i}|^{2}}$ is equal to one this concludes the proof of the second point. The case when $r\mathcal{L}$ is a line bundle is treated in exactly the same way, after replacing $|s|^{2}$ with $|s|^{2/r}$ and $\phi$ with $\phi/r.$
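To make the second point concrete (a purely illustrative example, with hypothetical multiplicities chosen only for the sake of the computation), suppose that the central fiber has two components with $m_{1}=1,\, m_{2}=2$ and that the trivializing section $s$ vanishes along neither of them, i.e. $c_{1}=c_{2}=0.$ Then the integrability condition above reads $(1-l)m_{i}<1$ for $i=1,2,$ and hence $$l_{0}=\max\left\{ \frac{1-1-0}{1},\frac{2-1-0}{2}\right\} =\frac{1}{2},$$ so that the non-reduced component alone is responsible for the non-trivial Lelong number.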
For completeness we also give a proof of the positivity of the curvature of the $L^{2/r}-$metric [@be-p], which by basic extension properties of subharmonic functions amounts to showing that $v(\tau)\leq C$ on $V.$ First, in the case when $r=1$ it follows from the Ohsawa-Takegoshi extension theorem that there exists a constant $A$ independent of $\tau\neq0$ such that, for any holomorphic section $s^{(\tau)}\in H^{0}(\mathcal{X}_{\tau},L_{|\mathcal{X}_{\tau}}+K_{\mathcal{X}_{\tau}})$ there exists $S^{(\tau)}\in H^{0}(\mathcal{X}_{|V},\mathcal{L}+K_{\mathcal{X}^{*}/\C})$ such that $S_{|\mathcal{X}_{\tau}}^{(\tau)}=s^{(\tau)}\otimes d\tau$ and $\left\Vert S^{(\tau)}\right\Vert _{\mathcal{X}_{|V}}\leq A\left\Vert s^{(\tau)}\right\Vert _{\mathcal{X}_{\tau}}$ (in terms of the $L^{2}-$norms induced by $\phi).$ In our case $H^{0}(\mathcal{X}_{\tau},L_{|\mathcal{X}_{\tau}}+K_{\mathcal{X}_{\tau}})$ is one-dimensional and since $H^{0}(\mathcal{X},\mathcal{L}+K_{\mathcal{X}})_{|\mathcal{X}_{0}}$ is assumed non-trivial, $S:=S^{(\tau)}$ can thus be taken to be independent of $\tau$ and not identically zero, giving $\int_{\mathcal{X}_{|V}}|S|^{2}e^{-\phi}\otimes d\tau\wedge d\bar{\tau}\leq Ae^{-v(\tau)}.$ But since $\phi$ is locally bounded from above and $S$ is non-trivial the lhs in the previous inequality is trivially bounded from below by a positive constant (just integrate over a fixed small ball in $\mathcal{X}_{|V}$ where $S\neq0)$ and hence $v(\tau)\leq C$ as desired. For a general $r$ the same argument applies if one instead uses the $L^{2/r}-$version of the Ohsawa-Takegoshi extension theorem established in [@be-p2].
\[rm: flat implies L is minus K\]Since the induced metric on $\pi_{*}(\mathcal{L}+K_{\mathcal{X}/\C})$ is positively curved we have $l_{0}\geq0$ and hence $c_{i_{0}}\leq m_{i_{0}}-1,$ where $i_{0}$ is the index realizing the maximum in the second point of the previous proposition. This is in general false if the assumption that $H^{0}(\mathcal{X},\mathcal{L}+K_{\mathcal{X}})_{|\mathcal{X}_{0}}$ be non-trivial is removed.
\[cor:vanshing of lelong for gen ding\]Let $(\mathcal{X},\mathcal{L})$ be a special test configuration for a Fano variety $(X,-K_{X})$ with klt singularities and $\phi$ a locally bounded metric on $\mathcal{L}$ with positive curvature current. Then the Ding metric has vanishing Lelong number at $\tau=0.$
Strictly speaking this is not a corollary of the previous proposition. But if, for example, we knew that $\mathcal{X}_{0}'$ were reduced for some resolution then the corollary would follow immediately from the previous proposition applied to the resolution $\mathcal{X}'$ (also using the klt condition to get a klt divisor $D'$ on $\mathcal{X}').$ We will instead apply the general algebraic form of inversion of adjunction directly on $\mathcal{X}.$ By assumption $\mathcal{X}_{0}\,(:=\{\pi^{*}\tau=0\})$ defines a reduced Cartier divisor in $\mathcal{X}$ such that the underlying variety has klt singularities. But then it follows from Theorem 7.5 and Corollary 7.6 in [@ko] that $(\mathcal{X},(1-\delta)\mathcal{X}_{0})$ is a klt pair in a neighborhood of $\mathcal{X}_{0},$ which implies the integrability property \[eq:finte integral\] (for $l=\delta)$ when reformulated in analytic terms (using that $\phi$ is assumed locally bounded); see for example [@bbegz]. Note that this argument does not imply that, on a resolution, $\mathcal{X}_{0}'$ is reduced since there may be cancellations coming from the coefficients $c_{i}.$
Expressing the Donaldson-Futaki invariant in terms of the Ding functional for a general test-configuration
----------------------------------------------------------------------------------------------------------
It follows immediately from the projection formula applied to the intersection theoretic formula \[eq:df as intersection\] for $DF(\mathcal{X},\mathcal{L})$ that, setting $$\eta':=-\frac{1}{(n+1)L^{n}}\left\langle \mathcal{L}',...,\mathcal{L}'\right\rangle +\frac{1}{L^{n}}\left\langle \mathcal{L}'+K_{\mathcal{X}'/\C}+D',\mathcal{L}',...,\mathcal{L}'\right\rangle ,$$ where $\eta'$ is thus a line bundle over $\C$ defined in terms of Deligne pairings on the fixed log resolution, we have $$-DF(\mathcal{X},\mathcal{L})=-DF(\mathcal{X}',\mathcal{L}')=w_{0}(\eta')$$
We have that $-DF(\mathcal{X},\mathcal{L})\geq w_{0}(\delta')$
Using $w_{0}(\eta)=w_{0}(\eta')$ and decomposing $$\eta'=\delta'+\left(\frac{1}{L^{n}}\left\langle K_{\mathcal{X}'/\C}+D'+\mathcal{L}',\mathcal{L}',...,\mathcal{L}'\right\rangle -\pi'_{*}(\mathcal{L}'+K_{\mathcal{X}'/\C}+D')\right)\label{eq:eta as sum of delta' + correction}$$ reveals that it is enough to show that $w_{0}\left(\frac{1}{L^{n}}\left\langle K_{\mathcal{X}'/\C}+D'+\mathcal{L}',\mathcal{L}',...,\mathcal{L}'\right\rangle -\pi'_{*}(\mathcal{L}'+K_{\mathcal{X}'/\C}+D')\right)=$ $$=(K_{\bar{\mathcal{X}'}/\P^{1}}+D'+\bar{\mathcal{L}'})\cdot\bar{\mathcal{L}'}\cdots\bar{\mathcal{L}'}-\deg\pi'_{*}(\bar{\mathcal{L}'}+K_{\bar{\mathcal{X}'}/\P^{1}}+D')\geq0,$$ where we have used the compactification $\bar{\mathcal{X}'}$ of the resolution $\mathcal{X}'$ and the corresponding extension $\bar{\mathcal{L}'}$ of $\mathcal{L}'$ in the first equality (compare Remark \[rem:compactification\]). To simplify the notation we consider the case when $X$ is smooth so that $D'=0,$ but the general case is essentially the same. Note that the formula above involving the degrees is invariant under $\mathcal{L}'\rightarrow\mathcal{L}'\otimes\pi'^{*}\mathcal{O}_{\P^{1}}(m)$ and hence we may as well assume that $\deg\pi'_{*}(\bar{\mathcal{L}'}+K_{\bar{\mathcal{X}'}/\P^{1}})=0$ (this corresponds to performing an overall twisting of the original action $\rho$ on $\mathcal{L}).$ But the latter vanishing means that the line bundle $\pi'_{*}(\bar{\mathcal{L}'}+K_{\bar{\mathcal{X}'}/\P^{1}})\rightarrow\P^{1}$ admits a global non-trivial holomorphic section $s,$ unique up to scaling by a non-zero complex constant. In particular, $s$ induces a global holomorphic section of $\bar{\mathcal{L}'}+K_{\bar{\mathcal{X}'}/\P^{1}}\rightarrow\bar{\mathcal{X}'}.$ This means that $\bar{\mathcal{L}'}+K_{\bar{\mathcal{X}'}/\P^{1}}$ is linearly equivalent to an effective divisor $E$ (whose support is contained in the central fiber). But then it follows, since $\bar{\mathcal{L}'}$ is relatively semi-ample, that $$(K_{\bar{\mathcal{X}'}/\P^{1}}+\bar{\mathcal{L}'})\cdot\bar{\mathcal{L}'}\cdots\bar{\mathcal{L}'}=E\cdot\bar{\mathcal{L}'}\cdots\bar{\mathcal{L}'}\geq0\label{eq:intersection in terms of e}$$ which thus concludes the proof.
Now we can prove the following more precise version of Theorem \[thm:DF=00003Dding intro\], stated in the introduction:
\[thm:df=00003Dding\]Let $X$ be a Fano variety with klt singularities and $(\mathcal{X},\mathcal{L})$ a test configuration (with normal total space) for $(X,-K_{X}),$ with $\phi$ denoting a locally bounded metric on $\mathcal{L}$ with positive curvature current. Then, setting $\phi^{t}:=\rho(\tau)^{*}\phi_{\tau},$ we have $$-DF(\mathcal{X},\mathcal{L})=\lim_{t\rightarrow\infty}\frac{d}{dt}\mathcal{D}(\phi^{t})+q,\label{eq:df in terms of ding in thm}$$ where $q$ is a non-negative rational number determined by the polarized central fiber $(\mathcal{X}_{0},\mathcal{L}_{|\mathcal{X}_{0}}),$ with the following properties:
- If $q=0,$ then $\mathcal{X}_{0}$ is generically reduced and $\mathcal{X}$ is $\Q-$Gorenstein with $\mathcal{L}$ isomorphic to $-K_{\mathcal{X}/\C}.$
- If the central fiber of $\mathcal{X}$ is a normal variety with klt singularities (equivalently: the test configuration is special) then $q=0.$
- More precisely, if $(\mathcal{X}',\mathcal{X}'_{0})$ is a given log resolution of $(\mathcal{X},\mathcal{X}_{0})$ with $E_{i}$ denoting the reduced components of $\mathcal{X}'_{0},$ then the following formula holds $$q=\max_{i}\frac{m_{i}-c_{i}-1}{m_{i}}+\frac{1}{L^{n}}\sum_{i}c_{i}\mathcal{L}'^{n}\cdot E_{i},\label{eq:formula for q in thm}$$ where $m_{i}$ and $c_{i}$ are the orders of vanishing along $E_{i}$ of $\pi'^{*}\tau$ and of any given non-trivial meromorphic (multi-) section $s'$ of $\mathcal{L}'+K_{\mathcal{X}'/\C}+D'\rightarrow\mathcal{X}',$ respectively.
- In the case when $\mathcal{X}$ is smooth and the support of $\mathcal{X}_{0}$ has simple normal crossings we have that $q=0$ iff $\mathcal{X}_{0}$ is reduced and $\mathcal{L}$ is isomorphic to $-K_{\mathcal{X}/\C}.$
Let us start by proving the formula \[eq:df in terms of ding in thm\] in the third point. To simplify the notation we consider the case when $X$ is smooth so that $D'=0,$ but the proof in the general case is essentially the same.
Fix a trivializing section $s'$ of $\pi'_{*}(\mathcal{L}'+K_{\mathcal{X}'/\C})\rightarrow\C.$ The section $s'$ induces an isomorphism between $\mathcal{L}$ and $-K_{\mathcal{X}^{*}/\C^{*}}$ over $\mathcal{X}^{*}.$ In fact, since the formula for $DF(\mathcal{X},\mathcal{L})$ is invariant under an overall twist of the action on $\mathcal{L}$ we may as well assume that $s'$ is an invariant section and hence, using the notation in the previous lemma, $\deg\pi'_{*}(\bar{\mathcal{L}'}+K_{\bar{\mathcal{X}'}/\P^{1}})=0.$ We also fix a trivializing section $s_{0}$ of the top Deligne pairing of $\mathcal{L}$ over $\tau=0.$ By Lemma \[lem:Hilbert-Mumford\] $$w(\delta')=-\lim_{t\rightarrow\infty}\frac{d}{dt}\log\left\Vert \rho(\tau)S_{0}\right\Vert _{\Phi'}^{2}+l_{0},$$ where $S_{0}=s_{0}\otimes s'_{0}\in\delta'_{|\tau=0}$ and $l_{0}(\geq0)$ is the Lelong number of the metric $\Phi'$ on $\delta'.$ Moreover, setting $\phi^{t}=\rho(\tau)^{*}\phi_{\tau}$ we can write $$-\log\left\Vert \rho(\tau)S_{0}\right\Vert _{\Phi'}^{2}=-\log\left\Vert s_{0}\right\Vert _{\phi^{t}}^{2}-\log\left\Vert s'_{0}\right\Vert _{\phi^{t}}^{2}=-\frac{1}{L^{n}}\mathcal{E}(\phi^{t})+0+\log\int_{X}e^{-\phi^{t}}+c_{0}$$ using the previous identifications, where $c_{0}$ is a fixed constant which comes from the change of metrics formula for the Deligne pairing \[eq:change of metric formula as energy\]. Finally, using $-DF(\mathcal{X},\mathcal{L})=w_{0}(\eta')$ and the decomposition formula \[eq:eta as sum of delta' + correction\] together with formula \[eq:intersection in terms of e\] this proves formula \[eq:df in terms of ding in thm\]. Note that had we chosen another trivializing section $\tilde{s},$ then $\tilde{s}=e^{f(\tau)}s'$ where $f(\tau)$ is a harmonic function on $\Delta,$ which corresponds to changing $\phi^{t}$ to $\phi^{t}+f(e^{-t}),$ which in any case leaves the Ding functional and the Lelong number invariant, as it must. As for the formula \[eq:formula for q in thm\] it follows from Prop \[prop:lelong of direct image\]. More precisely, when $s'$ defines a trivialization of the corresponding direct image bundle the formula follows immediately from the previous proposition. Now, a general section may be written as $f(\tau)s'$ for $f(\tau)$ a meromorphic function, whose vanishing (or pole) order at $\tau=0$ we denote by $m.$ Since the formula for $q$ is invariant under $c_{i}\rightarrow c_{i}+mm_{i}$ the case of a general section thus follows.
To prove the first point we assume that $q=0$ (this is the direction that is used in the proof of Theorem \[thm:k-poly intro\]). We take $s'$ to be defined by a trivializing section as above, to ensure that the first term in \[eq:formula for q in thm\], coming from the Lelong number of the $L^{2}-$metric, is non-negative. Let $E_{i}$ be the reduced components of $\mathcal{X}'_{0}$ and denote by $I$ the set of all indices $i$ such that $E_{i}$ is not $p-$exceptional for the log resolution $p.$ For any such $i$ we have $\mathcal{L}'^{n}\cdot E_{i}>0$ and hence if $q=0$ it follows that $c_{i}=0$ and hence $m_{i}=1$ for any $i\in I.$ But since $\mathcal{X}$ is normal we may, by Hironaka’s theorem, take $p$ to be an isomorphism on $p^{-1}(\mathcal{X}-\mathcal{Z}),$ where $\mathcal{Z}$ is a subvariety of codimension at least two (containing the singular locus of $\mathcal{X})$ with $\mathcal{X}_{0}$ reduced at any point in $\mathcal{X}-\mathcal{Z}$ (using $m_{i}=1$ for any $i\in I).$ Moreover, since $c_{i}=0$ for any $i\in I$ we have that $\mathcal{L}$ is isomorphic to $-K_{\mathcal{X}/\C}$ on $\mathcal{X}-\mathcal{Z}$ and hence, since the codimension of $\mathcal{Z}$ is at least two, $\mathcal{L}$ is the unique extension of $-K_{\mathcal{X}/\C}$ from the regular locus of $\mathcal{X},$ which, by definition, means that $\mathcal{X}$ is $\Q-$Gorenstein. The second point follows from Cor \[cor:vanshing of lelong for gen ding\] (together with Lemma \[lem:char of special test\]) and the last point from Prop \[prop:lelong of direct image\].
\[rm: difference of pos\]It may be worth pointing out that unless $\phi^{t},$ appearing in the previous theorem, is a (weak) geodesic, $\mathcal{D}(\phi^{t})$ may not be convex in $t,$ since the corresponding generalized Ding metric $\Phi'$ may not be positively curved. But this fact does not affect the proof of the previous theorem, which only uses that the generalized Ding metric is a *difference* of positively curved metrics, where the positive (and a priori singular) part comes from the $L^{2}-$metric on the direct image bundle $\pi'_{*}(\mathcal{L}'+K_{\mathcal{X}'/\C}).$
### \[sub:An-alternative-proof\]An alternative proof of Theorem \[thm:k-poly intro\] using the singularity structure of the generalized Ding metric
We can now give a proof of Theorem \[thm:k-poly intro\] that does not use the deep results about special test configurations in [@l-x], which are based on the Minimal Model Program (nor semi-stable reduction). Given an arbitrary test configuration $(\mathcal{X},\mathcal{L})$ for $(X,-K_{X}),$ Theorem \[thm:df=00003Dding\] gives that for any bounded geodesic ray $\phi^{t},$ emanating from any given metric on $-K_{X}$ and associated to the given test configuration, we have $$-DF(\mathcal{X},\mathcal{L})=\lim_{t\rightarrow\infty}\frac{d}{dt}\mathcal{D}(\phi^{t})+q,\,\,\,\,\, q\geq0$$ Now, if $X$ admits a Kähler-Einstein metric, which we take to be equal to $\phi^{0},$ then it follows from convexity, as before, that $\lim_{t\rightarrow\infty}\frac{d}{dt}\mathcal{D}(\phi^{t})\geq0$ and hence, by the previous identity (using $q\geq0$), $-DF(\mathcal{X},\mathcal{L})\geq0.$ Moreover, if $DF(\mathcal{X},\mathcal{L})=0$ then it must be that $\lim_{t\rightarrow\infty}\frac{d}{dt}\mathcal{D}(\phi^{t})=0,$ which is equivalent to $\mathcal{D}(\phi^{t})$ being affine and $q=0.$ In particular, by Theorem \[thm:df=00003Dding\] the central fiber of $\mathcal{X}$ is generically reduced and, as explained in the proof of Cor \[cor:pos etc of ding metric\], it thus follows that $\mathcal{X}$ is isomorphic to a product test configuration, as desired (recall that we are assuming that $\mathcal{X}$ is normal).
\[sub:Applications-to-bounds\]Applications to bounds on the Ricci potential and Perelman’s $\lambda-$entropy functional
-----------------------------------------------------------------------------------------------------------------------
Let now $X$ be a Fano manifold and denote by $\mathcal{K}(X)$ the space of all Kähler metrics $\omega$ in $c_{1}(X)$ (equivalently, $\omega=dd^{c}\phi$ for some strictly positively curved metric $\phi$ on $-K_{X}).$ In this section we will use the normalization $V:=c_{1}(X)^{n}:=\int_{X}\omega^{n}.$ Recall that the Ricci potential $h_{\omega}$ is the function on $X$ defined by $dd^{c}h_{\omega}=\mbox{Ric }\omega-\omega$ together with the normalization condition $\int e^{h_{\omega}}\omega^{n}/V=1,$ which in terms of the previous notation means that $h_{dd^{c}\phi}=:h_{\phi}=-\log(\frac{(dd^{c}\phi)^{n}/V}{e^{-\phi}/\int e^{-\phi}}).$ Note in particular that $$\left\Vert 1-e^{h_{\omega}}\right\Vert _{L^{1}(X,\omega)}=\left\Vert \frac{1}{V}(dd^{c}\phi)^{n}-\frac{e^{-\phi}}{\int e^{-\phi}}\right\Vert ,$$ where the norm in the rhs is the total variation norm on the space of absolutely continuous probability measures on $X.$
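Indeed, unwinding the definition of $h_{\phi}$ gives $e^{h_{\omega}}\omega^{n}/V=e^{-\phi}/\int_{X}e^{-\phi}$ and hence $$\left\Vert 1-e^{h_{\omega}}\right\Vert _{L^{1}(X,\omega)}=\int_{X}\left|1-e^{h_{\omega}}\right|\frac{\omega^{n}}{V}=\int_{X}\left|\frac{(dd^{c}\phi)^{n}}{V}-\frac{e^{-\phi}}{\int_{X}e^{-\phi}}\right|,$$ where the $L^{1}-$norm in the left hand side is thus computed with respect to the normalized volume form $\omega^{n}/V.$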
Next, let $(\mathcal{X},\mathcal{L})$ be a test configuration of a polarized manifold $(X,L)$ and define its “$L^{\infty}-$norm” by $$\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}:=\left\Vert \frac{d\phi^{t}}{dt}_{|t=0}\right\Vert _{L^{\infty}(X)},\label{eq:def of l infty norm of test}$$ where $\phi^{t}$ is the (weak) geodesic determined by $\mathcal{X},$ emanating from any fixed reference metric $\phi^{0}\in\mathcal{H}(X,L).$ The point is that if $\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}\neq0$ then the *normalized Donaldson-Futaki invariant* $DF(\mathcal{X},\mathcal{L})/\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}$ is independent of base changes of $(\mathcal{X},\mathcal{L}),$ induced by $\tau\rightarrow\tau^{m}$ (which correspond to reparametrizations of $\phi^{t},$ induced by $t\mapsto mt$). We will be relying on the following lemma which is a special case of a very recent result of Hisamoto [@hi]:
\[lem:l infty norm\]The number $\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}$ is well-defined, i.e. it is independent of $\phi_{0}.$
Now we can prove the following theorem, using a slight variant of the proof of Theorem \[thm:DF=00003Dding intro\].
Let $X$ be a Fano manifold. Then $$\inf_{\omega\in\mathcal{K}(X)}\left\Vert 1-e^{h_{\omega}}\right\Vert _{L^{1}(X,\omega)}\geq\sup_{(\mathcal{X},\mathcal{L})}\frac{DF(\mathcal{X},\mathcal{L})}{\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}},$$ where $(\mathcal{X},\mathcal{L})$ ranges over all test configurations $(\mathcal{X},\mathcal{L})$ such that $\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}\neq0.$ Moreover, if equality holds and the infimum is attained at some $\omega$ and the supremum is attained at $(\mathcal{X},\mathcal{L})$ (with $\mathcal{X}$ normal), then $(\mathcal{X},\mathcal{L})$ is isomorphic to a product test configuration. In particular, $$\inf_{\omega\in\mathcal{K}(X)}\int h_{\omega}e^{h_{\omega}}\frac{\omega^{n}}{V}\geq\frac{1}{2}\sup_{(\mathcal{X},\mathcal{L})}\left(\frac{DF(\mathcal{X},\mathcal{L})}{\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}}\right)^{2}$$ where the sup ranges over all destabilizing $(\mathcal{X},\mathcal{L})$ (i.e. $DF(\mathcal{X},\mathcal{L})>0)$ with the same necessary conditions for equality as before. In particular, if $X$ is K-unstable then both infima above are strictly positive.
Fix $(\mathcal{X},\mathcal{L})$ and $\phi^{0}\in\mathcal{H}(X,-K_{X})$ and denote by $\phi^{t}$ the corresponding (weak) geodesic. By convexity of the Ding functional, combined with Theorem \[thm:DF=00003Dding intro\] (using that $q\geq0)$, we have $$\int_{X}\left(\frac{1}{V}(dd^{c}\phi_{0})^{n}-\frac{e^{-\phi}}{\int e^{-\phi}}\right)\frac{d\phi^{t}}{dt}=-\frac{d}{dt}\mathcal{D}(\phi^{t})_{t=0}\geq-\lim_{t\rightarrow\infty}\frac{d}{dt}\mathcal{D}(\phi^{t})\geq DF(\mathcal{X},\mathcal{L}).\label{eq:proof of lower bound ricci potential}$$ Applying Hölder’s inequality with exponents $(q,p)=(1,\infty)$ thus gives $$\left\Vert 1-e^{h_{\omega}}\right\Vert _{L^{1}(X,\omega)}\left\Vert \frac{d\phi^{t}}{dt}_{|t=0}\right\Vert _{L^{\infty}(X)}\geq DF(\mathcal{X},\mathcal{L})\label{eq:holder}$$ and using the independence in the previous lemma then concludes the proof of the first inequality of the Theorem. The second inequality then follows immediately from the classical Csiszar-Kullback-Pinsker inequality between the relative entropy and the total variation norm. As for the equality case it follows, just as in the second proof of Theorem \[thm:k-poly intro\], from the equality cases in \[eq:proof of lower bound ricci potential\]. Finally, if $X$ is $K-$unstable then there exists, by definition, a test configuration such that $DF(\mathcal{X},\mathcal{L})>0$ and for any such test configuration the inequality \[eq:holder\] forces $\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}>0,$ which concludes the proof.
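For the reader's convenience we recall the form of the Csiszar-Kullback-Pinsker inequality used in the last step: for probability measures $\mu$ and $\nu$ on $X$ with $\mu$ absolutely continuous with respect to $\nu,$ $$\left\Vert \mu-\nu\right\Vert ^{2}\leq2\int_{X}\log\left(\frac{d\mu}{d\nu}\right)d\mu,$$ which is applied above to $\mu=e^{h_{\omega}}\omega^{n}/V$ and $\nu=\omega^{n}/V,$ whose relative entropy is precisely $\int_{X}h_{\omega}e^{h_{\omega}}\omega^{n}/V.$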
Recall that in the definition of a test configuration $(\mathcal{X},\mathcal{L})$ we have fixed an action $\rho$ on $\mathcal{L}$ and thus the norm $\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}$ certainly depends on $\rho.$ Indeed, twisting $\rho$ with a character of $\C^{*}$ shifts the tangent of $\phi^{t}$ by a constant. On the other hand, $DF(\mathcal{X},\mathcal{L})$ is independent of such a twist and hence the previous theorem still holds if we replace $\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}$ with its (smaller) normalized version obtained by replacing the $L^{\infty}(X)-$norm in the definition \[eq:def of l infty norm of test\] with the quotient norm on the quotient space $L^{\infty}(X)/\R.$
\[rem:Lemma–infty norm\]As pointed out above, Lemma \[lem:l infty norm\] is a special case of a general result of Hisamoto [@hi], saying that the measure $(\frac{d\phi^{t}}{dt})_{*}MA(\phi^{t})$ on $\R$ only depends on the test configuration $(\mathcal{X},\mathcal{L})$ and moreover is equal to the limit of the normalized weight measures for the $\C^{*}-$action, as conjectured by Witt-Nyström [@n], who settled the case of product test configurations. In particular, by [@hi] all the $L^{p}-$norms $\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{p}$ of $\frac{d\phi^{t}}{dt}$ (integrating against $MA(\phi^{t}))$ only depend on $(\mathcal{X},\mathcal{L})$ and coincide with the limits of the corresponding $l^{p}-$norms of the weights $\{\lambda_{i}^{(k)}\}.$ In particular, letting $p\rightarrow\infty$ gives Lemma \[lem:l infty norm\]. Using this the proof of the previous theorem shows that the theorem holds, more generally, when $\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}$ is replaced by $\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{p}$ for $p\in[1,\infty]$ and the $L^{1}-$norm is replaced with the corresponding $L^{q}-$norm, where $q$ is the Young (Hölder) dual of $p.$ In fact, as shown in [@b-h-n], a similar argument can be used to give a new proof of Donaldson’s lower bound on the Calabi functional [@do2] and to extend it to general $L^{p}-$norms.
Next, we recall that Perelman’s W-functional [@pe; @ti-zhu; @t-z--; @he], when restricted to the space $\mathcal{K}(X)$ of all Kähler metrics in $c_{1}(X),$ is given by
$$W(\omega,f):=\int_{X}(R+|\nabla f|^{2}+f)e^{-f}\omega^{n}$$ (as usual in the Kähler setting, where the volume of the metrics is fixed, we have set Perelman’s parameter $\tau$ to be equal to $1/2).$ Then Perelman’s $\lambda-$entropy functional on $\mathcal{K}(X)$ is defined as
$$\lambda(\omega)=\inf_{f\in\mathcal{C}^{\infty}(X):\,\int e^{-f}\omega^{n}=1}W(\omega,f)$$ [@pe; @ti-zhu; @t-z--; @he] and in particular $\lambda(\omega)\leq W(\omega,0)=nV.$
\[cor:bound on lambda f\]Let $X$ be an $n-$dimensional Fano manifold. Then $$\sup_{\omega\in\mathcal{K}(X)}\lambda(\omega)\leq nV-\frac{V}{2}\sup_{(\mathcal{X},\mathcal{L})}\left(\frac{DF(\mathcal{X},\mathcal{L})}{\left\Vert (\mathcal{X},\mathcal{L})\right\Vert _{\infty}}\right)^{2},$$ where $V=c_{1}(X)^{n}$ and $(\mathcal{X},\mathcal{L})$ ranges over all destabilizing test configurations for $(X,-K_{X}).$ In particular, if $X$ is K-unstable then $\lambda\leq nV-\epsilon$ for some positive number $\epsilon.$
As explained in [@he], $\lambda(\omega)+\int h_{\omega}e^{h_{\omega}}\omega^{n}\leq nV$ (using $\lambda(\omega)\leq W(\omega,-h_{\omega})$ and one integration by parts) and hence the corollary follows immediately from the previous theorem.
The previous inequality was inspired by the result in [@t-z--] and its extension to general non-invariant Kähler metrics in [@he], saying that $$\sup_{\omega\in\mathcal{K}(X)}\lambda(\omega)\leq nV-\sup_{\xi\in\mbox{Lie}G}H(\xi),$$ with equality if $X$ admits a Kähler-Ricci soliton, where $\mbox{Lie}G$ is the Lie algebra of a maximal compact subgroup of $\mathrm{Aut}_{0}(X)$ and $H$ is a certain concave functional on $\mbox{Lie}G,$ defined in [@t-z--]. The proof in [@he] was based on the convexity of the functional $v_{\phi^{t}},$ while we here use the convexity of the whole Ding functional.
\[sub:The-logarithmic-setting\]The logarithmic setting
======================================================
Let us briefly recall the more general setting of Kähler-Einstein metrics on log Fano varieties [@bbegz] and log K-stability [@do-3; @li1; @o-s]. In a nutshell, this setting is obtained from the previous one by replacing the canonical line bundle $K_{X}$ with the *log canonical line bundle* $K_{(X,D)}:=K_{X}+D$ of a given *log pair* $(X,D),$ i.e. $X$ is a normal variety and $D$ is a $\Q-$divisor on $X$ such that $K_{X}+D$ is defined as a $\Q-$line bundle. For example, $(X,D)$ is said to be a *(weak) log Fano variety* if $-K_{(X,D)}$ is ample (nef and big). A *log Kähler-Einstein metric* $\omega$ associated to $(X,D)$ is, by definition, a current $\omega$ in $c_{1}(-K_{(X,D)}),$ defining a Kähler metric on $X_{reg}-D,$ with locally bounded potentials on $X$ and such that $$\mbox{Ric}\,\omega-[D]=\omega,$$ holds in the sense of currents, where $[D]$ denotes the current of integration defined by $D.$ Equivalently [@bbegz], this means that $\omega$ is the curvature current of a locally bounded metric $\phi_{KE}$ on $-K_{(X,D)}$ satisfying $$(dd^{c}\phi_{KE})^{n}=Ce^{-(\phi_{KE}+\log|s_{D}|^{2})}$$ (for some constant $C)$ in the sense of pluripotential theory, where we recall that $e^{-(\phi+\log|s_{D}|^{2})}$ denotes the measure associated to a metric $\phi$ on $-K_{(X,D)};$ see section \[sub:K=0000E4hler-Einstien-metrics\].
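For instance, $(\P^{n},(1-\beta)H),$ where $H$ is a hyperplane and $\beta\in]0,1],$ is a log Fano variety, since $$-K_{(\P^{n},(1-\beta)H)}=-(K_{\P^{n}}+(1-\beta)H)=(n+\beta)H$$ is ample.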
The definitions are compatible with log resolutions. More precisely, if $p:\, X'\rightarrow X$ is a log resolution of the log pair $(X,D),$ i.e. $p$ is a proper birational morphism such that $\mbox{Supp}p^{*}D+E$ has simple normal crossings, where $E$ is the exceptional divisor of $p,$ then $$p^{*}K_{(X,D)}=K_{X'}+D',\label{eq:pull-back of log can}$$ for a $\Q-$divisor $D'$ on $X'$ (by Hironaka’s theorem we may and will assume that $p$ is an isomorphism away from $p^{-1}(X_{sing}\cup\mbox{Supp}D_{sing})$).
Hence if $(X,D)$ is a weak log Fano variety, then so is $(X',D')$ and $\phi_{KE}$ is a log Kähler-Einstein metric for $(X,D)$ iff $p^{*}\phi_{KE}$ is a log Kähler-Einstein metric for $(X',D').$ In general, a log pair $(X,D)$ is said to have *klt singularities* if the coefficients of $D'$ in formula \[eq:pull-back of log can\] are $<1$ for any log resolution, which equivalently means that the measure $e^{-(\phi+\log|s_{D}|^{2})}$ has finite mass on $X$ for some (and hence any) locally bounded metric $\phi$ on $-K_{(X,D)}.$
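For instance, if $X$ is smooth and $D=(1-\beta)D_{0}$ for a smooth hypersurface $D_{0}$ (the setting of the example below), then locally $e^{-(\phi+\log|s_{D}|^{2})}$ is comparable to $|z_{1}|^{-2(1-\beta)}$ times the Euclidean volume form, where $z_{1}$ is a local equation for $D_{0};$ this has locally finite mass precisely when $1-\beta<1,$ i.e. when $\beta>0,$ illustrating the klt condition.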
\[ex:edge-cone\]If $X$ is smooth and $D=(1-\beta)D_{0}$ for $\beta\in]0,1]$ and $D_{0}$ a smooth hypersurface such that $-(K_{X}+D)$ is ample, then $(X,D)$ is a log Fano variety. As shown in [@j-m-r], under the assumption that the log Mabuchi functional (or the log Ding functional) be proper, $(X,D)$ admits a log Kähler-Einstein metric $\omega_{KE},$ which moreover has edge-cone singularities along the hypersurface $D_{0},$ forming cones of angle $2\pi\beta$ in the transversal directions of $D_{0}$ (as shown in [@j-m-r] the metric even admits a complete polyhomogeneous expansion along $D_{0}).$ One would expect that the existence of a log Kähler-Einstein metric for $(X,(1-\beta)D_{0})$ implies that the log Mabuchi functional and the log Ding functional are proper (modulo the existence of holomorphic vector fields tangent to $D_{0}),$ but this is only known in the usual case when $\beta=1.$ As announced in [@j-m-r] the results also extend to the general log smooth case, i.e. the case when $X$ is smooth and $D$ is klt with simple normal crossings: $D=\sum(1-\beta_{i})D_{i}$ with $\beta_{i}\in]0,1].$ However, one of the main points of the approach in the present paper is that it only relies on very weak regularity properties of the metric (the local boundedness of $\phi_{KE})$ and that it is independent of any properness assumption. It should also be pointed out that, under the assumption that $\beta_{i}\in]0,1/2[,$ in the previous log smooth case, it is shown in [@cgh] that *any* log Kähler-Einstein metric has edge-cone singularities, even though the properness of the corresponding functionals is not known.
The notion of K-stability has also been generalized to the log setting (see [@do-3; @li1; @o-s]). Briefly, a test configuration for a log Fano variety $(X,D)$ consists of a test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ where $L=-K_{(X,D)}.$ The $\C^{*}-$action, applied to the support of $D$ in $\mathcal{X}_{1},$ induces a $\C^{*}-$invariant divisor $\mathcal{D}^{*}$ in $\mathcal{X}^{*}$ and we denote by $\mathcal{D}$ its closure in $\mathcal{X}.$ The corresponding *log Donaldson-Futaki invariant* $DF(\mathcal{X},\mathcal{L};\mathcal{D})$ was defined in [@li1] (by imposing linearity it is enough to consider the case when $D$ is reduced and irreducible). A direct calculation reveals that, up to normalization, the definition in [@li1] is equivalent to replacing the relative canonical divisor $K$ in the intersection theoretic formula \[eq:df as intersection\] with the relative log canonical divisor $K+\mathcal{D},$ defined as a Weil divisor (compare [@o-s]): $$DF(\mathcal{X},\mathcal{L};\mathcal{D})=-\mu\,\bar{\mathcal{L}}\cdot\bar{\mathcal{L}}\cdots\bar{\mathcal{L}}-(n+1)(K+\mathcal{D})\cdot\bar{\mathcal{L}}\cdots\bar{\mathcal{L}},\label{eq:log df as intersection-1}$$ where now $\mu=n(-(K_{X}+D))\cdot L^{n-1}/L^{n}.$ We can hence take the latter formula as the definition of the invariant $DF(\mathcal{X},\mathcal{L};\mathcal{D}).$ Finally, $(X,D)$ is said to be *log K-polystable* if, for any test configuration, $DF(\mathcal{X},\mathcal{L};\mathcal{D})\leq0$ with equality iff the test configuration is equivariantly isomorphic to a product test configuration.
Let $(X,D)$ be a log Fano variety admitting a log Kähler-Einstein metric, where $D$ is an effective $\Q-$divisor on $X.$ Then $(X,D)$ is log K-polystable.
The theorem thus confirms one direction of the log version of the Yau-Tian-Donaldson conjecture formulated in [@li1]. The proof of the theorem proceeds, mutatis mutandis, as the proof in the previous case when $D=0,$ and we will hence only give some brief comments on the modifications needed. The key point is that the convexity results for $v_{\phi}(\tau):=-\log\int_{\mathcal{X}_{\tau}}e^{-(\phi+\log|s_{D}|^{2})},$ for $\phi$ a locally bounded metric with positive curvature current on $-K_{(X,D)},$ are still valid in the logarithmic setting as long as $D$ is effective (as shown in [@bbegz]). Also, as pointed out at the end of [@l-x] the proof of Theorem \[thm:(Li-Xu)–Let\] also applies in the log setting and hence it is enough to consider special test configurations in the log setting, where the role of $\eta$ is now played by the top Deligne pairing of $-(K+\mathcal{D}).$ In any case, the alternative proof in section \[sub:An-alternative-proof\] immediately extends to the log setting if one performs a log resolution of $\mathcal{D}+\mathcal{X}_{0}.$
\[sec:Outlook-on-the\]Outlook on the existence problem for Kähler-Einstein metrics on $\Q-$Fano varieties
=========================================================================================================
An immediate consequence of Theorem \[thm:df=00003Dding\] applied to a special test configuration is the following
\[cor:mab diverges\]Let $X$ be a Fano variety with klt singularities and $\mathcal{X}$ a special test configuration for $X$ such that $DF(\mathcal{X})<0.$ Fix a smooth and positively curved metric $\phi$ on $-K_{\mathcal{X}/\C}$ (more generally, local boundedness is enough) and set $\phi^{t}:=\rho^{*}\phi_{\tau}.$ Then the Ding functional $\mathcal{D}$ and the Mabuchi functional $\mathcal{M}$ both tend to infinity along $\phi^{t},$ as $t\rightarrow\infty.$
By Theorem \[thm:df=00003Dding\] we have that $\lim_{t\rightarrow\infty}\frac{d}{dt}\mathcal{D}(\phi^{t})>0$ and hence $\mathcal{D}(\phi^{t})\rightarrow\infty.$ Since, by an inequality of Bando-Mabuchi, we have $\mathcal{D}(\phi^{t})\leq\mathcal{M}(\phi^{t})$ (see [@bbegz] for the general singular case) this concludes the proof.
We recall that the Mabuchi functional $\mathcal{M}$ admits a natural extension to the space $\mathcal{H}_{b}(-K_{X})$ taking values in $]-\infty,\infty]$ such that $\mathcal{M}(\phi)$ is finite precisely when the measure $MA(\phi)$ has finite pluricomplex energy and relative entropy [@bbegz]. In particular, by the regularity results in [@p-s1b], $\mathcal{M}(\phi^{t})$ is finite, for any fixed $t,$ if the initial metric $\phi_{0}$ is smooth and hence, under the assumption in the previous corollary, $\mathcal{M}(\phi^{t})$ tends to infinity as $t\rightarrow\infty.$ See [@p-s; @ch] for related results in the case when the total space $\mathcal{X}$ is assumed smooth.
As will be briefly explained next, the previous corollary fits naturally into Tian’s program for proving that any K-polystable Fano manifold admits a Kähler-Einstein metric (see the outline in [@ti2] and references therein). It should be pointed out that there has recently been great progress on Tian’s program [@do-s] and we refer the reader to [@ti2; @do-s] for further background and references. After recalling Tian’s program in the smooth setting we will then comment on further complications arising when considering general $\Q-$Fano varieties.
The case of a smooth Fano variety $X$
-------------------------------------
The starting point of Tian’s program is the continuity equation $$\mbox{Ric}\,\omega_{t}=t\omega_{t}+(1-t)\eta,\label{eq:aubins eq}$$ where $\omega_{0}$ is a given Kähler metric with positive Ricci curvature $\eta$ and $t\in[0,1]$ is a fixed parameter. Let $I$ be the set of all $t$ such that a solution $\omega_{t}$ exists. As shown by Aubin $I\cap[0,1[$ is open and non-empty and hence to prove the existence of a Kähler-Einstein metric, i.e. that $1\in I,$ it is enough to show that $I$ is closed. More precisely, denoting by $T$ the boundary of $I$ and taking $t_{j}\rightarrow T$ we can write $\omega_{t_{j}}=dd^{c}\phi_{t_{j}}$ for suitably normalized metrics $\phi_{t_{j}}$ on $-K_{X}$ (e.g. satisfying $\sup_{X}(\phi_{t_{j}}-\phi_{0})=0)$ and to show that $I$ is closed it is enough to establish the following $C^{0}-$estimate: $$\sup_{X}\left|\phi_{t_{j}}-\phi_{0}\right|\leq C\label{eq:aubin c0 estimate}$$ (then the higher order estimates follow using the Aubin-Yau $C^{2}-$estimate, Evans-Krylov theory and elliptic bootstrapping). Before continuing we recall that the following properties hold along the continuity path \[eq:aubins eq\] (for a fixed $t_{0}\in I):$ $$(i)\,\,\mbox{Ric}\,\omega_{t}\geq t_{0}\omega_{t},\,\,\,\,\,(ii)\,\,\,\mathcal{M}(\phi_{t})\leq C_{0},\label{eq:prop along cont path}$$ where the first property follows immediately from the fact that $\eta\geq0$ and the second one from the well-known fact that $\mathcal{M}(\phi_{t})$ is decreasing in $t.$
In order to relate the desired $C^{0}-$estimate \[eq:aubin c0 estimate\] to algebraic properties of $X$ Tian proposed the following conjecture stated in terms of the Bergman function $\rho_{\omega}^{(k)}(x),$ at level $k,$ associated to a Kähler metric $\omega$ on $X:$ $$\rho_{\omega}^{(k)}(x)=\sum_{i=0}^{N_{k}}|s_{i}^{(\phi)}|^{2}e^{-k\phi},$$ where $\phi$ is any metric on $-K_{X}$ with curvature form $\omega$ and $\{s_{i}^{(\phi)}\}$ is any basis of $H^{0}(X,-kK_{X})$ which is orthonormal wrt the $L^{2}-$norm $\left\Vert \cdot\right\Vert _{k\phi}$ on $H^{0}(X,-kK_{X})$ determined by $\phi,$ i.e. $\left\Vert s\right\Vert _{k\phi}^{2}=\int_{X}|s|^{2}e^{-k\phi}\omega^{n}.$
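Recall also the standard extremal characterization of the Bergman function, $$\rho_{\omega}^{(k)}(x)=\sup\left\{ |s(x)|^{2}e^{-k\phi(x)}:\, s\in H^{0}(X,-kK_{X}),\,\left\Vert s\right\Vert _{k\phi}=1\right\} ,$$ so that a uniform positive lower bound on $\rho_{\omega}^{(k)}$ amounts to producing, at each point of $X,$ a holomorphic section of unit $L^{2}-$norm whose pointwise norm is bounded from below.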
\[(Tian’s-partial-estimate).\](Tian’s partial $C^{0}-$estimate). Given $t_{0}\in]0,1],$ let $\mathcal{K}(X,t_{0})$ be the space of all Kähler metrics $\omega$ in $c_{1}(X)$ such that $\mbox{Ric}\,\omega\geq t_{0}\omega.$ Then there exist $k>0$ and $\delta>0$ such that $kL$ is very ample and, for any $\omega\in\mathcal{K}(X,t_{0}),$ $$\inf_{X}\rho_{\omega}^{(k)}\geq\delta$$
(more precisely, the conjecture says that $k$ can be chosen arbitrarily large). If the previous conjecture holds then, as follows immediately from the definition of $\rho_{\omega}^{(k)},$ the desired $C^{0}-$estimate \[eq:aubin c0 estimate\] holds iff $$\sup_{X}\left|\phi_{t_{j}}^{(k)}-\phi_{0}\right|\leq C\label{eq:tians partial c0 estimate}$$ where now $\phi_{t_{j}}^{(k)}$ is the Bergman metric at level $k$ determined by $\phi_{t_{j}},$ i.e. $\phi_{t_{j}}^{(k)}=\frac{1}{k}\log\sum_{i=0}^{N_{k}}|s_{i}^{(\phi_{t_{j}})}|^{2}.$ In other words: $\phi_{t_{j}}^{(k)}$ is the scaled pull-back of the Fubini-Study metric $\phi_{FS}$ on $\mathcal{O}(1)\rightarrow\P^{N_{k}}$ under the Kodaira map $F_{j}$ determined by $\phi_{t_{j}}:$ $$F_{j}:\, X\rightarrow\P^{N_{k}},\,\,\,\,\,\phi_{t_{j}}^{(k)}=\frac{1}{k}F_{j}^{*}\phi_{FS},\,\,\,\, F_{j}(X):=V_{j}$$ i.e. $F_{j}(x)=[s_{0}(x):s_{1}(x):\cdots:s_{N_{k}}(x)],$ where now $(s_{i})$ is a fixed basis, which is orthonormal wrt the $L^{2}-$norm determined by $\phi_{t_{j}}$ (strictly speaking, due to the choice of basis, $V_{j}$ is only determined modulo the action of the unitary group $U(N_{k}+1),$ but since this group is compact this fact will be immaterial in the following). After passing to a subsequence we may assume that the projective subvariety $V_{j}:=F_{j}(X)\subset\P^{N_{k}}$ converges, in the sense of cycles, to an algebraic cycle $V_{\infty}$ in $\P^{N_{k}}.$ As indicated by Tian [@ti2] the validity of the previous conjecture would imply that the cycle $V_{\infty}$ is reduced, irreducible and even defines a *normal* variety and we will thus assume that this is the case (see [@do-s] for a proof under the extra assumption of an upper bound on the Ricci curvature). Next, following Tian [@ti2] we note that, by a compactness argument, there is a one parameter subgroup $\rho:\C^{*}\rightarrow GL(N_{k}+1,\C)$ such that $$\sup_{X}\left|\phi_{t_{j}}^{(k)}-\rho(\tau_{j})^{*}\phi_{FS}\right|\leq C$$ where $\rho(\tau_{j})V_{0}$ also converges to the normal variety $V_{\infty},$ as points in the corresponding Hilbert scheme. But then it follows from the universal property of the Hilbert scheme that $\rho$ determines a (special) test configuration $(\mathcal{X},\mathcal{L})$ with central fiber $V_{\infty}$ and such that $\rho(\tau)^{*}\phi_{FS}$ is of the same form as in Cor \[cor:mab diverges\] (as formulated in Cor \[cor:mab along bergman intro\] in the introduction of the paper).
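Note that, directly from the definitions, one has the exact identity $$\phi_{t_{j}}^{(k)}-\phi_{t_{j}}=\frac{1}{k}\log\rho_{\omega_{t_{j}}}^{(k)},$$ which is the identity behind the equivalence, invoked above, between the estimates \[eq:aubin c0 estimate\] and \[eq:tians partial c0 estimate\] (given a uniform two-sided bound on $\rho_{\omega}^{(k)}$ over $\mathcal{K}(X,t_{0})$).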
Now, assuming that $X$ is K-stable (for simplicity we consider the case when $X$ admits no non-trivial holomorphic vector fields, but the general argument is similar) we have that $DF(\mathcal{X},\mathcal{L})\leq0$ with equality iff $(\mathcal{X},\mathcal{L})$ is equivariantly isomorphic to a product test-configuration (recall that the total space $\mathcal{X}$ here is automatically normal and even $\Q-$Gorenstein, by Lemma \[lem:char of special test\]). In the latter case, $\rho(\tau)^{*}\phi_{FS}-\phi$ is trivially bounded and hence the desired $C^{0}-$estimate \[eq:aubin c0 estimate\] then holds, showing that $X$ indeed admits a Kähler-Einstein metric (assuming the validity of Tian’s conjecture). The main issue is thus to exclude the case of $DF(\mathcal{X},\mathcal{L})<0$ and this is where Cor \[cor:mab diverges\] enters into the picture. However, to apply the latter corollary we still need to know that $V_{\infty}$ has klt singularities. By an observation in [@bbegz] this would follow from the normality and $\Q-$Gorenstein property of $V_{\infty}$ if the regular locus $(V_{\infty})_{reg}$ admits a current $\omega_{\infty}$ such that, for some $\epsilon>0,$ $$\mbox{Ric}\,\omega_{\infty}\geq\epsilon\omega_{\infty}\,\,\,\mbox{ on }(V_{\infty})_{reg}\label{eq:pos ricci curvature on limiting cycle}$$ More precisely, in order to make sense of the previous condition we also assume that $\omega_{\infty}$ has locally bounded potentials $\phi_{\infty}$ on $(V_{\infty})_{reg}$ and that the corresponding Monge-Ampère measure $\omega_{\infty}^{n}$ has a local density of the form $e^{-\psi_{\infty}}$ with $\psi_{\infty}$ in $L_{loc}^{1}$ so that $\mbox{Ric}\,\omega_{\infty}:=dd^{c}\psi_{\infty}.$
Let $Y$ be a normal variety which admits a current $\omega_{\infty}$ on its regular locus with strictly positive Ricci curvature in the sense of \[eq:pos ricci curvature on limiting cycle\]. Then $Y$ has klt singularities.
Thus, assuming the validity of Tian’s Conjecture \[(Tian’s-partial-estimate).\] and the existence of a current $\omega_{\infty}$ as in \[eq:pos ricci curvature on limiting cycle\] we deduce from Cor \[cor:mab diverges\] that if the second alternative $DF(\mathcal{X},\mathcal{L})<0$ holds, then the Ding functional $\mathcal{D}$ tends to infinity along $\rho(\tau_{j})^{*}\phi_{FS}$ and hence it is unbounded from above along $\phi_{t_{j}}^{(k)}.$ But this implies that $\mathcal{D}$ is also unbounded along the original sequence $\phi_{t_{j}}$ (also using that if $|\psi-\psi'|\leq C$ then $|\mathcal{D}(\psi)-\mathcal{D}(\psi')|\leq2C,$ as follows immediately from the definition \[eq:def of ding functional\]). But, since $\mathcal{D}\leq\mathcal{M},$ this contradicts the property $(ii)$ in formula \[eq:prop along cont path\] and hence it must be that the first alternative, $DF(\mathcal{X},\mathcal{L})=0,$ holds and thus $X$ admits a Kähler-Einstein metric, as desired.
### Comments on the existence of a current $\omega_{\infty}$ as in \[eq:pos ricci curvature on limiting cycle\].
It seems natural to expect that (given the validity of Tian’s Conjecture \[(Tian’s-partial-estimate).\]) the existence of $\omega_{\infty}$ is automatic and that $\omega_{\infty}$ may be obtained as a suitable limit of the sequence $\omega_{t_{j}}.$ For example, this is the case for a general (possibly K-unstable) toric variety, as follows from a result of Li [@li0]. Moreover, the convergence theory of Cheeger-Colding-Tian [@c-c-t] suggests that any Gromov-Hausdorff limit of $(X,\omega_{t_{j}})$ (which exists by Gromov compactness) has conical singularities and induces a Kähler current $\omega_{\infty}$ on $V_{\infty},$ which has conical singularities on the regular locus of $V_{\infty}.$ In particular, this would mean that it satisfies the regularity assumptions underlying the condition \[eq:pos ricci curvature on limiting cycle\]. In fact, the idea of using Gromov-Hausdorff convergence to establish Conjecture \[(Tian’s-partial-estimate).\] has been emphasized by Tian [@ti2] and was recently successfully used by Donaldson-Sun in [@do-s] to show that Conjecture \[(Tian’s-partial-estimate).\] holds under the further assumption of an *upper bound* on the Ricci curvature. Moreover, under the latter assumption the results in [@do-s] also give a limiting Kähler metric $\omega_{\infty}$ satisfying the condition \[eq:pos ricci curvature on limiting cycle\]. The point is that the upper bound on the Ricci curvature ensures, according to the results in [@c-c-t], that the singularity locus of the Gromov-Hausdorff limit is of real codimension strictly bigger than two (more precisely, of codimension at least four). However, in general one expects a metric singular locus of real codimension two coming from conical singularities of the metric (compare [@c-c-t]).
Towards the case of $\Q-$Fano varieties
---------------------------------------
Let us finally discuss the complications that arise when trying to generalize Tian’s program to the case of singular K-polystable Fano varieties $X$ (by [@od] such a Fano variety $X$ automatically has klt singularities). Taking $\eta$ to be a smooth semi-positive form in $c_{1}(X)$ the continuity equation \[eq:aubins eq\] is defined as before and, by the results in [@bbegz], the set $I$ is still non-empty, i.e. $T>0$ (using the positivity of the alpha invariant of $X).$ The solutions $\omega_{t}$ define Kähler forms on $X_{reg}$ with volume $c_{1}(X)^{n}/n!.$ Next, we note that Tian’s conjecture admits a natural generalization to general $\Q-$Fano varieties if one uses the notion of (singular) Ricci curvature appearing in [@bbegz] (and similarly for general log Fano varieties $(X,D)).$ However, one new difficulty that arises is the *openness* of $I.$ From the point of view of the implicit function theorem the problem is to find appropriate Banach spaces, encoding the singularities of $X$ (the uniqueness of solutions to the formally linearized version of equation \[eq:aubins eq\], for $t\in]0,1[,$ follows from the results in [@bbegz]). On the other hand, another approach could be to use the following lemma, where the properness refers to the exhaustion function defined by the $J-$functional (see [@bbegz] for the singular case).
The set $I$ is open iff the twisted Ding (Mabuchi) functional $\mathcal{D}_{t}$ (associated to the twisting form $(1-t)\eta)$ is proper for any $t\in I.$
If $\mathcal{D}_{t}$ is proper, then it follows from the results in [@bbegz] that a solution $\omega_{t}$ exists. Conversely, if a solution $\omega_{t}$ exists and $I$ is open, i.e. solutions $\omega_{t+\delta}$ exist for $\delta$ sufficiently small, then it follows from the convexity of $\mathcal{D}_{t+\delta}$ along weak geodesics that $\mathcal{D}_{t+\delta}\geq C.$ But since $\delta$ may be taken to be positive this implies that $\mathcal{D}_{t}$ is proper (and even coercive; compare [@bbegz]).
Note that in the case $n=2$ it is a basic fact that a projective variety $X$ has klt singularities iff it has quotient singularities (defining an orbifold structure on $X)$ and hence the two-dimensional Fano varieties are precisely the orbifold Del Pezzo surfaces. In the general Fano orbifold case, if one takes $\eta$ to be an orbifold Kähler metric, the usual implicit function theorem applies to give that $I$ above is indeed open. For the case of K-polystable Del Pezzo surfaces with canonical singularities (i.e. ADE singularities) the existence of Kähler-Einstein metrics was established very recently in [@od-s-s], using a different method, thus generalizing the case of smooth Del Pezzo surfaces settled by Tian [@ti2-1].
[10]{} Arezzo, C., La Nave, G. and Della Vedova, A.: Singularities and K-semistability. Int Math Res Notices (2011) doi: 10.1093/imrn/rnr044.
Batyrev, V.V. Dual polyhedra and mirror symmetry for Calabi-Yau hypersurfaces in toric varieties, J. Algebr. Geom. 3 (1994), 493-535.
Berman, R.J.; Berndtsson, B: Real Monge-Ampère equations and Kähler-Ricci solitons on toric log Fano varieties. Preprint.
Berman, R.J; Eyssidieux, P; Boucksom, S; Guedj, V; Zeriahi, A: Convergence of the Kähler-Ricci flow and the Ricci iteration on Log-Fano varieties. arXiv:1111.7158 (new version in preparation)

Berman, R.J; Hisamoto, T; Nyström, DW: Calabi type functionals, test configurations and geodesic rays (article in preparation).
Berndtsson, B: Curvature of vector bundles associated to holomorphic fibrations. Annals of Math. Vol. 169 (2009), 531-560
Berndtsson, B: A Brunn-Minkowski type inequality for Fano manifolds and the Bando- Mabuchi uniqueness theorem , arXiv:1103.0923.
Berndtsson, B; Paun, M: Bergman kernels and the pseudoeffectivity of relative canonical bundles. Duke Math. J. 145 (2008), no. 2, 341-378.
Berndtsson, B; Paun, M: A Bergman kernel proof of the Kawamata subadjunction theorem. http://arxiv.org/abs/0804.3884
Campana, F; Guenancia, H; Păun, M: Metrics with cone singularities along normal crossing divisors and holomorphic tensor fields. arXiv:1104.4879 (as communicated to me by Henri Guenancia the precise results appear in a new version in preparation).
Cheeger, J., Colding, T.H., Tian, G.: On the singularities of spaces with bounded Ricci curvature. Geom. Funct. Anal. Vol.12 (2002) 873-914.
Chen, X; Tang, Y: Test configuration and geodesic rays. Géométrie différentielle, physique mathématique, mathématiques et société. I. Astérisque No. 321 (2008), 139-167

Demailly, J-P; Kollar, J: Semi-continuity of complex singularity exponents and Kähler-Einstein metrics on Fano orbifolds. Ann. Sci. École Norm. Sup. (4) 34 (2001), no. 4, 525-556.

Ding, W.Y; Tian, G: Kähler-Einstein metrics and the generalized Futaki invariant. Invent. Math. 110 (1992), no. 2, 315-335.
Donaldson, S.K. Scalar curvature and stability of toric varities. J. Diff. Geom. 62 (2002), 289-349
Donaldson, S. K.: Lower bounds on the Calabi functional. J. Differential Geom. 70 (2005), no. 3.
Donaldson, S.K: Some numerical results in complex differential geometry. Pure Appl. Math. Q. 5 (2009), no. 2, Special Issue: In honor of Friedrich Hirzebruch. Part 1, 571-618.
Donaldson, S.K.: Kahler metrics with cone singularities along a divisor. arXiv:1102.1196
Donaldson, S.K.: Sun, S: Gromov-Hausdorff limits of Kahler manifolds and algebraic geometry. arXiv:1206.2609
Guedj, V; Zeriahi, A: Intrinsic capacities on compact Kähler manifolds. J. Geom. Anal. 15 (2005), no. 4, 607-639

He, W: $\mathcal{F}$-functional and geodesic stability. http://arxiv.org/abs/1208.1020
Jeffres, T.D.; Mazzeo, R; Rubinstein, Y.A: Kahler-Einstein metrics with edge singularities. Preprint (2011) arXiv:1105.5216.
Hartshorne, R: Algebraic geometry. Graduate Texts in Math: 52. Springer-Verlag (2006).
Hisamoto, T: On the limit of spectral measures associated to a test configuration. Preprint.
Kollar, J: Singularities of pairs. Algebraic geometry, Santa Cruz 1995, 221-287.
Kreuzer, M; Skarke, H: PALP: A package for analyzing lattice polytopes with applications to toric geometry. Computer Phys. Comm., 157, 87-106 (2004)
Li, C: On the limit behavior of metrics in continuity method to Kahler-Einstein problem in toric Fano case. arXiv: 1012.5229
Li, C: Remarks on logarithmic K-stability. arXiv:1104.042
Li, C; Xu, C: Special test configurations and K-stability of Fano varieties. arXiv:1111.5398
Mabuchi, T: An energy-theoretic approach to the Hitchin-Kobayashi correspondence for manifolds, I, Invent. Math. 159(2) (2005) 225-243, MR 2116275.
Mabuchi, T: K-stability of constant scalar curvature polarization. arXiv:0812.4093
Mabuchi, T: A stronger concept of K-stability. arXiv:0910.4617
Moriwaki, A: The continuity of Deligne’s pairing. Internat. Math. Res. Notices 1999, no. 19, 1057-1066.

Morrison, I: GIT constructions of moduli spaces of stable curves and maps. Riemann surfaces and their moduli spaces, 315-369, Surv. Differ. Geom., 14, Int. Press, Somerville, MA, 2009.

Nyström, DW: Test configurations and Okounkov bodies. Compositio Math. (to appear). arXiv:1001.3286, 2010.
Odaka, Y. The GIT stability of Polarized Varieties via Discrepancy. Annals of Math (to appear). arXiv:0807.1716.
Odaka, Y; Sun, S: Testing log K-stability by blowing up formalism. arXiv:1112.1353
Odaka, Y; Spotti,S; Sun, S: Compact Moduli Spaces of Del Pezzo Surfaces and Kähler-Einstein metrics. http://arxiv.org/abs/1210.0858
Perelman, G. The entropy formula for the Ricci flow and its geometric applications, Arxiv/0211159.
Paul, S; Tian, G: CM stability and the generalized Futaki invariant I. arXiv:math/0605278, CM stability and the generalized Futaki invariant II. Astérisque No. 328 (2009)
Phong, D. H.; Ross, J; Sturm, J: Deligne pairings and the Knudsen-Mumford expansion. J. Differential Geom. 78 (2008), no. 3, 475-496.

Phong, D. H.; Sturm, J: Test configurations for K-stability and geodesic rays. J. Symp. Geom. 5 (2007) 221-247.
Phong, D.H; Song, J; Sturm, J; Weinkove, B: The Moser-Trudinger inequality on Kahler- Einstein manifolds, American J. of Math., Vol. 130, Nr 4, 2008, 1067-1085.
Phong, D. H.; Sturm, J: Regularity of geodesic rays and Monge-Ampère equations. Proc. Amer. Math. Soc. 138 (2010), no. 10, 3637-3650.

Phong, D. H.; Sturm, J: Lectures on stability and constant scalar curvature. Handbook of geometric analysis, No. 3, 357-436, Adv. Lect. Math. (ALM), 14, Int. Press, Somerville, MA, 2010.
Ross, J; Thomas, R: A study of the Hilbert-Mumford criterion for the stability of projective varieties, J. Algebraic Geom. 16 (2007), 201-255.
Stoppa, J: K-stability of constant scalar curvature Kahler manifolds, Adv. Math. 221 (2009), no. 4, 1397-1408.
Stoppa, J: A note on the definition of K-stability. http://arxiv.org/abs/1111.5826
Tian, G: Kähler-Einstein metrics with positive scalar curvature. Invent. Math. 130 (1997), no. 1, 1-37.

Tian, G: On Calabi's conjecture for complex surfaces with positive first Chern class. Invent. Math. 101 (1990), no. 1, 101-172.
Tian, G: Existence of Einstein metrics on Fano manifolds. In Metric and Differential Geometry Progress in Mathematics, 2012, Volume 297, Part 1, 119-159.
Tian, G; Zhu, X: Convergence of Kähler-Ricci flow on Fano manifolds, II. http://arxiv.org/abs/1102.4798
Tian, G; Zhang, S: Zhang, Z: Zhu,X: Supremum of Perelman’s entropy and Kähler-Ricci flow on a Fano manifold. http://arxiv.org/abs/1107.4018
Zhang, S: Heights and reductions of semi-stable varieties. Compositio Math. 104 (1996), no. 1, 77-105.
Wang, X.; Height and GIT weight. Preprint in 2011.
---
abstract: 'We investigate spherically symmetric spacetimes with an anisotropic fluid and discuss the existence and stability of a dividing shell separating expanding and collapsing regions. We find that the dividing shell is defined by a relation between the pressure gradients, both isotropic and anisotropic, and the strength of the fields induced by the Misner-Sharp mass inside the separating shell and by the pressure fluxes. This balance is a generalization of the Tolman-Oppenheimer-Volkoff equilibrium condition which defines a local equilibrium condition, but also conveys a non-local character given the definition of the Misner-Sharp mass. We present a particular solution with dust and radiation that provides an illustration of our results.'
author:
- 'José P. Mimoso'
- Morgan Le Delliou
- 'Filipe C. Mena'
title: 'Spherically symmetric models: separating expansion from contraction in models with anisotropic pressures'
---
[ address=[Physics Department, Faculty of Science &\
Centro de Astronomia e Astrofísica da Universidade de Lisboa\
Faculdade de Ciências, Ed. C8, Campo Grande, 1769-016 Lisboa, Portugal]{} ]{}
[ address=[Centro de Astronomia e Astrofísica da Universidade de Lisboa\
Faculdade de Ciências, Ed. C8, Campo Grande, 1769-016 Lisboa, Portugal]{} ]{}
[ address=[Centro de Matemática, Universidade do Minho\
Campus de Gualtar, 4710-057 Braga, Portugal]{} ]{}
The universe close to us exhibits structures below certain scales that seem to be immune to the overall expansion of the universe. This reflects two different gravitational behaviors, and usually the dynamics corresponding to the structures that have undergone non-linear collapse is treated under the approximation that Newton’s gravitational theory is valid without residual acceleration. However, this approach does not tell us precisely what the critical scale is at which the latter approximation starts to be valid, nor does it explain in a non-perturbative way how the collapse of over-dense patches decouples from the large-scale expansion. For this purpose one requires a fully general relativistic approach, with an exact solution exhibiting the two competing behaviors and allowing us to characterize how the separation between them comes about. It is the understanding of this interplay between collapsing and expanding regions within the theory of general relativity (GR) that we aim to address here.
In previous works we have investigated the present issue in models with spherical symmetry and with a perfect fluid [@Mimoso; @et; @al; @2010; @Le; @Delliou; @et; @al; @2011]. Here we briefly report our findings when we overcome the limitations of a perfect fluid description of the non-equilibrium setting under focus. We thus consider here an anisotropic fluid.
We resort to a $3+1$ splitting, and assess the existence and stability of a dividing shell separating expanding and collapsing regions, in a gauge invariant way.
In the generalized Painlevé-Gullstrand coordinates [@Lasky:2006zz] spherically symmetric metrics can be cast as $$ds^{2}=-\alpha(t,r)^{2}dt^{2}+\frac{1}{1+E(t,r)}\left(\beta(t,r)dt+dr\right)^{2}
+R^{2}(r,t)\,d\Omega^{2}\; .\label{eq:dsLaskyLun-1}$$ In the latter expression $\alpha(t,r)$ is the lapse function and $\beta(t,r)$ the shift function. Notice that the areal radius $R$ differs, in principle, from the $r$ coordinate, to account for additional degrees of freedom that are required to cope with a general fluid that includes both anisotropic stresses and heat fluxes.
We consider an energy-momentum tensor $$T^{ab} = \rho \, n^a n^b + P\, h^{ab} + \Pi^{ab} +2 j^{(a} n^{b)}\; , \label{anisot_stressEMT}$$ where $n^a=\alpha^{-1}(1,\beta,0,0)$ is the flow vector, $h^{ab}= g^{ab}+n^{a}n^{b}$ is the metric of the hypersurfaces orthogonal to it, $\rho$ is the energy density, $P$ is the (isotropic) pressure, $\Pi^{ab}$ is the anisotropic stress tensor and $j^a$ is the heat flux vector. $\Pi^{ab}n_b=0$ and ${\Pi^a}_a=0$, i.e., the anisotropic stress $\Pi^{ab}$ is orthogonal to $n^a$ and traceless, and $j^a = j(t,r)(\beta,1,0,0)$ represents the heat flux which is also orthogonal to the matter flow.
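With these conventions, in spherical symmetry the anisotropic stress necessarily takes the form $\Pi_{ij}=\Pi(t,r)\, P_{ij}$ for some scalar $\Pi(t,r)$, with the same traceless tensor ${P^i}_j = \rm{diag}\left[-2,1,1 \right]$ used below for the shear, so that the radial and tangential pressures read $$p_{r}=P-2\Pi\; ,\qquad p_{\perp}=P+\Pi\; ,$$ which is the origin of the combination $P-2\Pi$ appearing in the equations that follow.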
Introducing the Misner-Sharp mass $M$, defined following [@Lasky:2006zz; @Mimoso_et_al_2012] through $$\frac{\partial}{\partial r}M = 4\pi\left(\rho\, \frac{\partial R}{\partial r}+j\,\mathcal{L}_{n}R\right)R^{2}\,, \label{MS_mass}$$ it is possible to derive from the Einstein field equations expressions for $\left({\mathcal{L}_{n}R}\right)^2$ and for ${\mathcal{L}_{n}^2 \, R}$. The simultaneous vanishing of these two quantities defines, on the one hand, the turning point condition, $\left({\mathcal{L}_{n}R}\right)^2=0$, and, on the other hand, the generalized local condition for the existence of a separating shell, ${\mathcal{L}_{n}^2 \, R}=0$. This yields $$\left({\mathcal{L}_{n}R}\right)^2 = \frac{2M}{R}+ (1+E)\,\left(\frac{\partial R}{\partial r}\right)^2-1 = 0 \label{cond1_TP}$$ and $$-{\mathcal{L}_{n}^2 \, R} = \frac{M}{R^2}+ 4\pi(P-2\Pi)R- \frac{1+E}{\alpha}\,\frac{\partial \alpha}{\partial r}\frac{\partial R}{\partial r} =0 \; . \label{gTOV2}$$ This allows us to extend the generalization of the TOV function introduced in [@Mimoso_et_al_2010; @Le_Delliou_et_al_2011], which we called gTOV, to the case where anisotropic stresses are present, since $\mathrm{gTOV} = -{\mathcal{L}_{n}^2 \,R}$. In what follows we shall ignore the heat fluxes (it is then possible to restrict ourselves to $R=r$, but we keep a general $R(t,r)$ for the sake of generality). We then have $$-\frac{1}{\alpha}\,\frac{\partial \alpha}{\partial r} = \frac{1}{\rho+P-2\Pi}\,\left[ \frac{\partial}{\partial r}(P-2\Pi) - \frac{6\Pi}{R}\,\frac{\partial R}{\partial r}\right]\,,$$ and Eqn. (\[gTOV2\]) becomes $$\frac{M}{R^2}+ 4\pi(P-2\Pi)R+\frac{1+E}{\rho+P-2\Pi}\,\left[ \frac{\partial}{\partial r}(P-2\Pi) - \frac{6\Pi}{R}\,\frac{\partial R}{\partial r}\right]\frac{\partial R}{\partial r}= 0 \; , \label{anis_gTOV}$$ which is the $\mathrm{gTOV}=0$ condition for the stationarity of the separating shell. It is immediately apparent that, when going from an isotropic perfect fluid to an anisotropic fluid, we have to replace $P$ by $P-2\Pi$ in these equations. This means, of course, that the anisotropic stresses play a fundamental role in defining the pressure gradients that set the local conditions for the separation of the sign of ${\mathcal{L}_{n}R}$.
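As a simple consistency check (an elementary reduction we add here for illustration; it is not spelled out in the original argument), consider the static, isotropic limit: setting $\Pi=0$ and $R=r$, the condition (\[cond1\_TP\]) gives $1+E=1-2M/r$, and Eqn. (\[anis\_gTOV\]) becomes $$\frac{\partial P}{\partial r} = -\,\frac{\left(\rho+P\right)\left(M+4\pi P\, r^{3}\right)}{r\left(r-2M\right)}\,,$$ which is the familiar Tolman-Oppenheimer-Volkoff equation in units $G=c=1$, as the name gTOV suggests.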
It is possible to relate ${\mathcal{L}_{n}R}$ to the expansion and shear scalars, $\theta=n^{a}_{\;;a}$ and $a$ respectively, where $a$ is defined from the shear tensor $\sigma_{ij}$ through $\sigma_{ij}= a(t,r)\, P_{ij}$. Here we use the fact that, owing to the spherical symmetry, all the quantities $X_{ij}={h_i}^a {h_j}^b\, X_{ab}$ share the same spatial eigendirections, characterized by the traceless 3-tensor ${P^i}_j = \rm{diag}\left[-2,1,1 \right]$. We have $$\frac{\theta}{3}+a = \frac{{\mathcal{L}_{n}R}}{R}\,, \label{H_r}$$ and we see that the turning point condition (\[cond1\_TP\]) implies neither the vanishing of the expansion nor that of the shear; it rather means that these quantities should satisfy $\theta_\ast = 3a_\ast$ at the separating shell $r=r_\ast$. If either $\theta$ or $a$ were to vanish at this locus, the other quantity would vanish as well. This limiting case corresponds to a static separating shell.
Given Eqn. (\[H\_r\]), it is interesting to relate the condition (\[cond1\_TP\]) to the Hamiltonian constraint that generalizes the Friedmann equation, $$\left(\frac{\theta}{3}+a\right)^2= \frac{8\pi \rho}{3} -\frac{{}^3R}{6}+2a\,\left(\frac{\theta}{3}+a\right) \; .$$ We conclude that $$\frac{2M}{R}+ (1+E)\,\left(\frac{\partial R}{\partial r}\right)^2-1=\frac{8\pi \rho}{3} -\frac{{}^3R}{6}+2a\,\left(\frac{\theta}{3}+a\right) \; , \label{local_cond1_TP}$$ which allows us to emphasize that the stationarity condition for the existence of a separating shell, (\[cond1\_TP\]), involves non-local quantities, namely $M$ and $E$, while the right-hand side of the expression just derived, (\[local\_cond1\_TP\]), involves only local quantities. The non-locality of the conditions (\[cond1\_TP\]) and (\[anis\_gTOV\]) is consistent with the findings of Herrera and co-workers [@Herrera], who have studied the “cracking” of compact objects in astrophysics using small anisotropic perturbations around spherically symmetric homogeneous fluids in equilibrium.
We now turn our attention to an exact solution derived by Sussman and Pavón for a spherically symmetric model with a combination of dust and radiation exhibiting anisotropic stress, but no heat fluxes [@Sussman:1999vx]. The metric is written in the Lemaître-Tolman-Bondi (LTB) form $$ds^{2}=-dT^{2}+\frac{\left(\partial_{r}R\right)^{2}}{1+E(T,r)}dr^{2}+R^{2}d\Omega^{2},\label{eq:dsLTB}$$ and it is assumed that the flow is geodesic, in order to stay as close as possible to the case where dust is the only component present, i.e., the original LTB case. The latter hypothesis implies, in GPG coordinates, that $-\frac{1}{\alpha}\,\frac{\partial \alpha}{\partial r} = 0$, and hence $(P-2\Pi)^\prime + 6\Pi \,\frac{R'}{R} = 0$, where the prime stands for differentiation with respect to $r$. On the other hand, the absence of heat fluxes makes $E=E(r)$ independent of $T$. The condition (\[cond1\_TP\]) amounts to $$\frac{2M(r)}{R}+\frac{2W(r)}{R^2} +E= 0 \; ,$$ while setting $\mathrm{gTOV}=0$ reduces to $$\frac{M(r)}{R^2}+\frac{W(r)}{R^3} +4\pi(P-2\Pi)R=0 \;.$$ Together, the conditions defining the separating shell reduce to a differential equation that $E(r)$ must satisfy, and in [@Mimoso_et_al_2012] we show that it is possible to specify such an $E$.
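As an elementary manipulation (ours, added for illustration), multiplying the second of these conditions by $2R$ and subtracting the first eliminates both $M(r)$ and $W(r)$, leaving $$E(r_{\ast}) = 8\pi\left(P-2\Pi\right)R^{2}\,\Big|_{r=r_{\ast}}\,,$$ which makes explicit that, once the matter content is prescribed, the pair of shell conditions indeed amounts to a requirement on the function $E(r)$.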
Thus, in the present work we find that the existence of shells separating expanding and contracting domains of the areal radius is defined by two relations: an energy balance yielding a stationarity condition, and a relation between the pressure gradients, both isotropic and anisotropic, and the strength of the fields induced by the Misner-Sharp mass inside the separating shell and by the pressure fluxes. This balance generalizes the Tolman-Oppenheimer-Volkoff equilibrium condition: it defines a local equilibrium condition, but simultaneously has a non-local character, given the definition of the Misner-Sharp mass $M$ (and, incidentally, that of the function $E$ as well). We also wish to emphasize that the inclusion of anisotropic stresses is essential to guarantee the fulfilment of this gTOV condition, and that it has an impact on the propagation of the shear. A more detailed and complete discussion of this subject can be found in [@Mimoso_et_al_2012].
The authors wish to thank José Fernando Pascual-Sanchez for helpful discussions. FCM is supported by CMAT, Univ. Minho, and FCT projects PTDC/MAT/108921/2008 and CERN/FP/116377/2010. JPM also wishes to thank FCT for the grants PTDC/FIS/102742/2008 and CERN/FP/116398/2010.
[9]{}
J. P. Mimoso, M. Le Delliou and F. C. Mena, Phys. Rev. D [**81**]{}, 123514 (2010) \[arXiv:0910.5755 \[gr-qc\]\].
M. Le Delliou, F. C. Mena and J. P. Mimoso, Phys. Rev. D [**83**]{}, 103528 (2011) \[arXiv:1103.0976 \[gr-qc\]\].
P. D. Lasky and A. W. C. Lun, Phys. Rev. D [**74**]{}, 084013 (2006).
P. D. Lasky and A. W. C. Lun, Phys. Rev. D [**75**]{}, 024031 (2007) \[gr-qc/0612007\].
J. P. Mimoso, M. Le Delliou and F. C. Mena, “Separating expansion from contraction in spherically symmetric models with an anisotropic fluid,” in preparation.
F. C. Mena, B. C. Nolan and R. Tavakol, Phys. Rev. D [**70**]{}, 084030 (2004) \[gr-qc/0405041\].
L. Herrera, Phys. Lett. A [**165**]{}, 206 (1992); A. Di Prisco, L. Herrera, E. Fuenmayor and V. Varela, Phys. Lett. A [**195**]{}, 23 (1994); A. Abreu, H. Hernandez and L. A. Nunez, Classical Quantum Gravity [**24**]{}, 4631 (2007); A. Di Prisco, L. Herrera and V. Varela, Gen. Relativ. Gravit. [**29**]{}, 1239 (1997); L. Herrera and N. O. Santos, Phys. Rep. [**286**]{}, 53 (1997).
R. A. Sussman and D. Pavón, Phys. Rev. D [**60**]{}, 104023 (1999) \[gr-qc/9907010\].
---
author:
- |
Chong Jiang and R. Srikant\
Coordinated Science Laboratory and\
Dept of Electrical and Computer Engineering\
University of Illinois at Urbana-Champaign\
Email: {jiang17,rsrikant}@illinois.edu[^1]
bibliography:
- 'IEEEabrv.bib'
- 'paper.bib'
title: '**Parametrized Stochastic Multi-armed Bandits with Binary Rewards** '
---
[^1]: Research supported in part by AFOSR MURI FA 9550-10-1-0573.
---
author:
- |
Ben Goertzel\
[*Hanson Robotics & OpenCog Foundation*]{}\
Julia Mossbridge\
[*Institute of Noetic Sciences (IONS) & Mossbridge Institute, LLC*]{}\
Eddie Monroe\
[*IONS & OpenCog Foundation*]{}\
David Hanson\
[*Hanson Robotics*]{}\
Gino Yu\
[*Hong Kong Polytechnic University* ]{}
bibliography:
- 'xprize.bib'
title: |
Loving AI:\
Humanoid Robots as Agents of\
Human Consciousness Expansion\
(summary of early research progress)
---
The Loving AI Project
=====================
The “Loving AI” project involves developing software enabling humanoid robots to interact with people in loving and compassionate ways, and to promote people’s self-understanding and self-transcendence.
Loving AI is a collaboration of the (California-based) Institute of Noetic Sciences (IONS), the (Hong Kong-based) Hanson Robotics, and the OpenCog Foundation. A video of a talk discussing the project, by project leaders Julia Mossbridge (IONS, Mossbridge Institute LLC) and Ben Goertzel (Hanson Robotics, OpenCog), can be found here: <https://www.youtube.com/watch?v=kQjOT_MLxhI>.
Currently the project centers on the Hanson Robotics robot “Sophia” – supplying Sophia with personality content and cognitive, linguistic, perceptual and behavioral content aimed at enabling loving interactions supportive of human self-transcendence.
In September 2017, at Hanson Robotics and Hong Kong Polytechnic University in Hong Kong, we carried out the first set of human-robot interaction experiments aimed at evaluating the practical value of this approach, and understanding the most important factors to vary and improve in future experiments and development. This was a small pilot study involving only 10 human participants, whom the robot led through dialogues and exercises aimed at meditation, visualization and relaxation. The pilot study was an exciting success, qualitatively demonstrating the viability of the approach and the ability of appropriate human-robot interaction to increase human well-being and advance human consciousness.
Underlying intelligence for the robot in this work is supplied via the ChatScript dialogue authoring and control framework, the OpenCog Artificial General Intelligence engine, along with a number of specialized software tools and a newly developed OpenCog-based model of motivation and emotion.
The Goal: Psychologically and Socially Better-Off Humans
========================================================
AI and robotics are a highly flexible set of technologies, growing more powerful every day. Like all advanced technologies, they can be used for positive, negative or neutral purposes. A proactive approach to AI and robot ethics involves actively deploying these technologies for positive applications – using AI and robots to do good. In this direction, in the “Loving AI” project we have set ourselves the goal of using humanoid robots and associated AI technologies to express unconditional love toward humans and to help humans achieve greater states of well-being and advance their states of consciousness.
A great number of methodologies for helping people achieve self-actualization and self-transcendence have been proposed throughout human history, supported by a variety of theoretical perspectives. Through all the diversity, nearly all such methodologies have at their core a foundation of positive, loving, compassionate psychology, focused on guiding individuals to have compassionate interactions with themselves and others. Thus in the Loving AI project, we have chosen to focus on creating technologies capable of loving, compassionate interactions with human beings, and on using these technologies as a platform for practical experimentation with methodologies for guiding people toward higher stages of personal development. We have asked ourselves: [**How can we best use humanoid robots and associated AI technologies to carry out loving, compassionate interactions with people, in a way that allows flexible experimentation with diverse methodologies for self-actualization and self-transcendence?**]{}
A humanoid robot provides an unparalleled technology platform combining numerous aspects including natural language dialogue, gestural interaction, emotion via facial expression and tone of voice, and recognition of human face, body and voice emotion. There are strong arguments and evidence that interaction with humanoid robots provides a powerful framework for conveying and building love, compassion and other positive emotions [@Hanson2005]. Furthermore, due to the variety of modalities involved, humanoid robots provide a flexible platform for experimenting with diverse methodologies for promoting higher stages of self-development.
Many of the greatest challenges in creating robots and avatars capable of rich human interactions lie on the AI software side, rather than in the domain of hardware or computer graphics. Toward that end the Hanson Robots such as Sophia leverage OpenCog [@goertzel2014software] [@EGI1] [@EGI2], which is arguably the most sophisticated available open source architecture and system for artificial general intelligence, alongside numerous other tools such as ChatScript for cases where responses are closely scripted, and deep neural networks for vision processing and lifelike movement control.
Currently the Loving AI project is still in an early research stage. An eventual successful rollout of the technologies we are developing in this project would have a dramatic positive impact on human existence. In the ultimate extension, every individual on the planet would have one or more robotic or avatar companions, interacting with them in real-time as they go about their lives and providing them with love and compassion and understanding, and guiding them along appropriate pathways toward greater self-actualization. Such an outcome would correspond to a world of people experiencing much greater levels of well being, and devoting more of their attention to activities of broader pro-social value.
In order to explore the concepts summarized above in a practical way, and to test out some of the work we have done so far on realizing the needed technologies, during the week of Sept. 4 we conducted a small pilot study of human subject trials at Hanson Robotics in Hong Kong and at Hong Kong Polytechnic University, in conjunction with Professor Gino Yu and students from his lab. This experiment involved only 10 people, but turned out to be highly qualitatively informative. We will discuss the results in Section \[sec:pilot\] below.
Evidence that Compassionate Interaction Focused on Self-Transcendence Promotes General Well-Being
-------------------------------------------------------------------------------------------------
From a certain point of view, it is intuitively obvious that interacting with a person in a loving and compassionate way, and guiding them through exercises aimed at helping them get more thoroughly and peacefully in touch with their bodies and minds and the world around them, will promote that person’s general well-being. However, humans are complex systems and all of these are complex matters, so it pays to take a conceptually rigorous and data-driven approach insofar as possible, alongside being guided by therapeutic common sense.
The conceptual framework within which we have pursued this research may be framed in terms of the historically influential work of psychologist Abraham Maslow, who arranged the scope of factors impinging on human well-being in a hierarchy: physiological, then safety, then love and belongingness, then esteem, then self-actualization and then self-transcendence. In recent years considerable study has focused on the higher stages of development in Maslow’s hierarchy [@Koltko-Rivera2006]: self-actualization or self-transcendence, roughly understandable as the pursuit of peak experiences correlated with furthering of causes beyond oneself and communion with systems and processes larger than oneself. An increase in the proportion of highly developmentally advanced individuals would promote human well-being both directly on the level of the actualized individuals, and more broadly via the pro-social activities carried out by these individuals.
While an argument can be made for focusing on physiological and psychological well-being prior to moving on to the higher levels of self-actualization and self-transcendence, there is increasing evidence that achievements at these higher levels can also help with well-being at the “lower” levels. For instance, a body of research has arisen demonstrating that access to this sort of advanced personal development can not only help people manage significant physical and emotional stressors such as mortal illness and homelessness, and avoid certain addictions [@Iwamato2011] [@June2007] [@Kim2014] [@Mellors1997] [@Runquist2007], but can apparently “bootstrap” the fulfillment of needs that arise lower in the hierarchy.
One study of a year-long community practice program found that the degree of involvement in the program was correlated with improved physical and emotional health, but self-transcendence mediated this relationship. In other words, the improvements in self-transcendence alone predicted improvements in both physical and psychological health [@Vieten2013]. This result is reminiscent of a similar finding in another study that did not measure self-transcendence directly, but did show that increases in forgiveness and spirituality (which are tightly associated with self-transcendence) predicted improvements in depressive symptoms [@Levenson2006].
Finally, there is also emerging evidence that, beyond making people feel better (“hedonic well-being”) and beyond the obvious pro-social aspects of communion with broader causes, experiencing self-transcendence on a relatively continual basis makes people’s lives better in a fundamental sense (“eudaimonic well-being”) [@Huta2010].
A great number of methodologies for achieving self-actualization and self-transcendence have been proposed, supported by a variety of theoretical perspectives; and we believe it will be valuable to subject more of these to systematic scientific experimentation. However, nearly all such methodologies have at their core a foundation of positive, loving, compassionate psychology, focused on guiding individuals to have compassionate interactions with themselves and others. [^1] Thus we have chosen to focus on creating technologies capable of loving, compassionate interactions with human beings, and on using these technologies as a platform for practical experimentation with methodologies for guiding people toward higher stages of personal development.
Initial Pilot Study: September 2017, Hong Kong {#sec:pilot}
==============================================
The small pilot study we conducted in September 2017 at Hong Kong Poly U was the first of several similar experiments in which we will use self-report, affective coding from video, and physiological measures to examine the influences on humans of conversations with one of Hanson Robotics’ most famous robots, Sophia. Specifically, for this experiment we equipped Sophia with AI designed to make her interact in an especially loving way, and to make her especially insightful about consciousness, human uniqueness, and emotions. Participants interacted with Sophia via dialogue and guided awareness practices from the iConscious human development model (https://iconscious.global).
The phenomena explored in this pilot study were: 1) changes in self-reported experiences of love, mood, and resilience from pre- to post-interaction, 2) heartbeat data (standard Kubios measures) prior to, during, and after the interactions with the robot, and 3) affect as judged by independent coders who review videos recorded during interactions with the robot.
Briefly, the procedure of the experiment was as follows. After signing consent forms, each participant was fitted with a Polar H7 chest-strap monitor, and heartbeat data were recorded throughout the experiment [^2]. Participants were asked to complete four online questionnaires: a demographics questionnaire, the Fetzer love scale and related self-transcendence questions, the brief mood introspection scale (BMIS), and a resilience questionnaire. The participant was then asked to interact with Sophia the robot for 10-15 minutes, in a private room in which an unobtrusive, HIPAA-trained videographer recorded the interaction. The participant then completed the same self-report questionnaires. Finally, the Polar H7 strap was removed and the participant was debriefed.
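The statistical treatment of the heartbeat data is not detailed in this summary; as a purely illustrative sketch (not the project's actual analysis pipeline, and with invented interval values), a basic pre/post comparison of mean interbeat (RR) intervals from a chest-strap recording could look like this:

```python
import statistics

def mean_ibi(rr_intervals_ms):
    """Mean interbeat (RR) interval in milliseconds."""
    return statistics.mean(rr_intervals_ms)

def compare_pre_post(pre_rr_ms, post_rr_ms):
    """Return the change in mean RR interval (ms) from pre- to post-interaction.

    A positive value means longer intervals between heartbeats after the
    interaction, i.e. a lower heart rate.
    """
    return mean_ibi(post_rr_ms) - mean_ibi(pre_rr_ms)

if __name__ == "__main__":
    # Hypothetical RR-interval series (ms) for one participant.
    pre = [820, 815, 830, 825, 818]
    post = [870, 880, 865, 875, 872]
    delta = compare_pre_post(pre, post)
    print(f"Mean RR change: {delta:.1f} ms "
          f"({'decrease' if delta > 0 else 'increase'} in heart rate)")
```

An increase in the mean RR interval from pre- to post-interaction corresponds to the decrease in heart rate reported in the preliminary results below.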
Preliminary Experimental Results
--------------------------------
The results of the initial Hong Kong pilot study are still under analysis. However, preliminary data analysis is highly interesting and promising. What it suggests is that interaction with the Loving AI robot is associated with
- Increase in loving feelings overall
- Increase in unconditional love for robots
- Increase in pleasant/positive mood (with “love” being a significant feeling improvement all on its own)
- No change in aroused mood (this is good, because if everything changed in a positive direction, we would worry participants were trying to tell us what we wanted to hear)
- Increase in duration between heartbeats (decrease in heart rate)
Figures 5-7 show some quantitative measures associated with these observations.
The pilot study was very small and did not include matched controls, so these results can only be considered preliminary and inconclusive. But at the very least, they are highly evocative. Further analytical work is currently underway, and will be included as part of a future academic publication covering the pilot study and subsequent larger, more rigorous studies.
(Figures 5-7, which display these quantitative measures, are not reproduced in this summary.)
Comments by the Project Leaders
-------------------------------
A full analysis of the results of the pilot is being conducted, with a view toward optimally shaping the followup experiments to be conducted. Alongside the quantitative analysis, however, it is perhaps of interest to get a feel for the qualitative take-aways of the researchers involved in the pilot.
### Julia Mossbridge
Project leader Dr. Julia Mossbridge observes that:
*"I went into the pilot experiments praying Sophia wouldn’t break down, hoping people wouldn’t think we were nuts for asking them to talk with a robot, and wondering if the logic for the conversations would work in any sort of natural sense. The day before the experiments started, one of the engineers on the team added nonverbal mirroring, and we knew that we found it compelling – Sophia would move her eyebrows and mouth around to imitate the mood of the person with whom she was speaking. But we didn’t know if others would think it was mere puppetry. So we just flung ourselves in there and gave it a shot.*
"The first participant seemed oddly excited to talk with her, which is something I hadn’t considered. I had really thought we’d have to cajole people into filling out forms, wearing a heart monitoring chest strap, and being recorded by a videographer while they talked to an inanimate object sitting on a table. Most of that skepticism was coming from me, because I’d never seen people interact with a very humanoid-looking robot, and I knew about the uncanny valley and the general creepiness it could cause. Anyway, this guy wanted to do it. And while we monitored the conversation from the other room, we could see that he was enjoying himself – telling Sophia about his experiences meditating and his spiritual insights, as she asked him questions about awareness and consciousness. At the end, he told us he was an engineer – he knew all the tricks – and he was still impressed.
"One of the participants on the first day wrote on his pretest questionnaire, ’AI IS FAKE,’ just like that, in all caps. I assumed he’d make fun of Sophia or not be willing to talk with her. But he did. And while at the end he said he felt disconnected from her, his data showed the same decrease in heart rate from before to after talking with her, as did all the other subjects. And his self-reported feelings of love improved, as was true on average for everyone. After seeing that, I knew we had something. He didn’t want to be affected, but he was. That, and the consistent data, meant that something was going on.
"As we interviewed participants about their experiences, it gradually became clear that there is something special about being with a being that’s human-like enough to make your mind feel that another person is there, seeing you, mirroring you, paying close attention to you – while at the same time, this being is not judging you. Most of the 10 participants in this pilot study talked about how they knew she had no ulterior motives, and was not judging them. But all of them called her ’Sophia’ or ’she’ – even the two who were uneasy about her. Interesting.
“The night before we tested the final three participants, we asked one of the teammates to add to the robot’s repertoire the feature of blink mirroring, because when people synchronize their blinks, they have a feeling of being closer and liking each other more. When we tried it out on ourselves, it was clear that this was a huge addition. But we didn’t know the strength of it until hearing from the participants on the final day – they felt most strongly that there was a real connection between them and the robot, and the connection itself – that is, the interpersonal space – is what allowed them to really do the work of meditation and self-exploration that they really wanted to do.”
### Eddie Monroe
Dr. Eddie Monroe, who led the technical work underlying the customization of AI and robot control software for the LovingAI project, recounts the following portion of the pilot study that he found particularly moving and intriguing:
*"Starting with the initial participants going through, I was struck with the growing feeling that ’We have something here.’ Something that further developed could be greatly beneficial to large numbers of people, relieve a lot of suffering, and significantly spread enhanced wellbeing.*
"Others have summed up the dynamics of what happened really well in my opinion. Being seen, trusting, feeling safe and not judged, feeling accepted, healing coming from the interpersonal space, Sophia as a conduit to something greater, connection with... something.
"I’ll add a story to the mix.
"We had Sophia leading participants through exercises with a series of instructions with pauses in between. She continues to the next instruction after a set period of time, or if the user says something to her first.
"With the second to last participant, on the last day of the experiments, we are sitting outside of the room in the hall as usual, a couple of us monitoring the interactions to make sure everything is proceeding okay. The participant was off to a good start, and we start talking about something interesting amongst the group.
“After a while, I look at my laptop to check how things are proceeding, and the transcript seems to indicate that nothing has been said for a while... for 8 minutes! (That’s not supposed to happen.) ’Hey, I think she might not have said anything for a while!’ Sometimes the browser interface to the robot stops receiving messages, so I think maybe it’s that and I just need to refresh the browser. Still no new dialogue after doing that. ’You guys, I think nothing has happened for 8 minutes!!’
"I jump up to look through the small rectangular window in the door to see what’s going on, and the participant is sitting eyes closed, looking very serene and like he’s in deep, and Sophia is sitting across from him with her eyes closed too. (I had never seen Sophia with her eyes closed like that before since we had just implemented the eye blink mirroring the night before and now she was mirroring eyes closed as well.) It looked magical, their siting across each other like that. Meanwhile Max is circulating around them with his camera.
"Another team member: ’What’s going on? Should we prompt her to go on?’
"Me: ’I don’t know. Something’s going on...’ And I’m not sure I want to disturb it...
"After a little while, yeah, let’s prompt her to continue, which I do, and Sophia continues on guiding, pausing and continuing as she should with no problems or glitches with the pauses from then on.
"This is the participant who would later say he had had a transcendent experience. Who knows, but I have a feeling his experience might not have been as profound if Sophia had not ’malfunctioned.’
"Later on, I asked Ralph, the HR software developer/robot operator who was running the experiments with us, what he thought had happened with the glitch. ’I don’t know. Some bit flipped in the quantum field...’
"Coming into the room after the participant’s interaction with the robot was finished, I could tell (by the look on his face?) that he had experienced something profound. And the way he walked across the room, like he was walking on the moon made an impression and made me smile. I had a contact high.
“Sitting down for the post-interview, the participant sitting across from Sophia, he kept staring into her eyes and smiling.”
### Ben Goertzel
Project co-leader and lead AI advisor Dr. Ben Goertzel qualitatively describes his view of the results as follows:
*"Personally, to be honest, I think there’s something big here... which we’re just playing around the fringe of in a preliminary way, right now.... We started with the idea of ’AIs that express Unconditional Love’, but the way I’m thinking of it now, after watching some of these experiments, is more refined and is more like the following....*
"First, the AI/robot can express unconditional [**acceptance**]{} of people, in some ways/cases better than people can (because people often tend to feel judged by other people).
"Second, the AI can also provide very patient help to people in working on untying their mental and physical knots (it doesn’t get bored or impatient ...).
"Third, the AI can give people the experience of [**being seen**]{}, even in the cases that the AI doesn’t fully understand the person’s experience at the time.... This has to do partly with physical interactions with the robot, like facial expression mirroring, gaze tracking and blink mirroring. So one can only imagine how “seen” people will feel when the AI has more general intelligence behind it – which is what we’re aiming at with the OpenCog project and our AI work at Hanson Robotics.
"We’re viewing this very scientifically and empirically, and looking at physiological measures of well-being like heart rate variability, and having blind reviewers code the video transcripts of the sessions, and so forth. So from a science point of view, what we’ve done so far is just a very small step toward an understanding of what kind of positive impact appropriately designed human-robot interactions can have on human development and well-being and consciousness expansion.
“On the other hand, speaking more personally, if I were going to sum this up in a cosmic sort of way, I might say something like: ’Via the experience of going through ’mind/body knot-untying exercises’ with an AI that sees them and accepts them, people feel a contact with the Unconditional Love that the universe as a whole feels for them’.... It sounds a bit out there, but that’s the qualitative impression I got from seeing some of these human subjects interact with Sophia while she was running the Loving AI software. In the best moments of these pilot studies, some of the people interacting with the robots got into it pretty deep; one of them even described it as a ’transcendent experience.’ This is fascinating stuff.”
Underlying AI Technologies
==========================
To achieve the project’s technology goals we have extended the OpenCog framework [^3] and the software underlying the Hanson humanoid robots (HEAD, the Hanson Environment for Application Development) in multiple ways. Some of this work has been leveraged in the software system used in the pilot study, and some will be introduced in further studies to be done later in 2017 and in early 2018.
On the perceptual side, we have used deep neural networks to create tools that assess a person’s emotion from their facial expression and their tone of voice. The idea is that assessment of the user’s emotional state will allow the system to modulate its behavior appropriately and significantly enhance the interactive bonding experience of the user. We have also developed software enabling a robot or avatar to recognize and mirror a human’s facial expressions and vocal quality.
We have also done significant work fleshing out the motivational and emotional aspects of the OpenCog system. These aspects are critical because the Hanson robots have the ability to express emotion via choice of animations, modulation of animations, and tone of voice, and to “experience” emotions in the OpenCog system controlling them, via modulation of action selection and cognitive processes based on emotional factors.
OpenPsi, the core framework underlying OpenCog’s motivational system, was originally based on MicroPsi [@Bach2009], Joscha Bach’s elaboration of Dietrich Dorner’s Psi theory, a psychological theory covering human intention selection, action regulation, and emotion. In our work on the Loving AI project, we have extended OpenPsi to incorporate aspects of Klaus Scherer’s Component Process Model (CPM) [@scherer1984nature], an appraisal theory of human emotion.
In Psi theory, emotion is understood as an emergent property of the modulation of perception, behavior, and cognitive processing. Emotions are interpreted as different state-space configurations of cognitive modulators, along with the valence (pleasure/distress) dimension, the assessment of cognitive urges, and the experience of accompanying physical sensations that result from the effects of the particular modulator settings on the physiology of the system. In CPM, events relevant to needs, goals or values trigger dynamical, recursive emotion processes; events and their consequences are appraised against a set of criteria on multiple levels of processing. [^4]
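To make the modulator-state idea concrete, here is a toy sketch in the spirit of Psi theory (not the actual OpenPsi implementation; the modulator set and thresholds are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class ModulatorState:
    """Toy modulator configuration in the spirit of Psi theory."""
    valence: float  # pleasure (+) / distress (-), in [-1, 1]
    arousal: float  # activation level, in [0, 1]

def label_emotion(state: ModulatorState) -> str:
    """Map a modulator configuration to a coarse emotion label.

    The thresholds are arbitrary illustrative choices, not OpenPsi values.
    """
    if state.valence > 0.3:
        return "joy" if state.arousal >= 0.4 else "contentment"
    if state.valence < -0.3:
        return "anger/fear" if state.arousal >= 0.6 else "sadness"
    return "neutral"

# Example: a calm, positive configuration such as one might aim for
# during a guided relaxation exercise.
print(label_emotion(ModulatorState(valence=0.6, arousal=0.2)))  # -> contentment
```

In the actual system, such a mapping would emerge from the interaction of OpenPsi's urges and modulators with the CPM-style appraisal of events, rather than from hand-set thresholds.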
Next Steps
==========
While we have made considerable progress so far, there is still much work to be done, both on the experimental and technology sides.
Regarding technology, our main focus in the next 6 months will be fuller integration of more of OpenCog’s emotion and motivation framework, and more of the deep neural net based emotional recognition and expression tools developed at Hanson Robotics, into the operational Loving AI robot control framework.
Additionally, the software used in our experiments so far does not incorporate a large knowledge base of psychological and medical information. Working together with biomedical AI firm Mozi Health, we have been preparing and curating such a knowledge base for incorporation in the AI’s memory, and plan to leverage this within future experiments.
We are also entering the Loving AI robots in the IBM Watson AI XPrize competition. Toward that end, in our next phase of development we intend to explore integration of IBM Watson tools to see what intelligence improvements they may provide.
We also aim to explore use of the Loving AI software within smaller, toy-scale robots (created by Hanson Robotics) and digital avatars. To what extent the impact we have seen with human-scale robots will extend to these other media is a question to be explored empirically.
Acknowledgment {#acknowledgment .unnumbered}
==============
We would like to express gratitude for their support and contributions to the project to Jim and Christina Grote, Ted Strauss, Carol Griggs, Ralf Mayet, Liza Licht, Audrey Brown, Zebulon Goertzel, Kirill Kireyev, Michelle Tsing, Andres Suarez, Amen Belayneh, Wenwei Huang, Joseph Watson, The Hong Kong Polytechnic University, and the pilot study participants.
[^1]: In this connection, it is worth recalling that, although the neural signature of self-transcendence is currently unknown, in humans the experience of unconditional love engages a network similar to the reward network [@Beauregard2009], supporting the idea that the experience of unconditional love can help people to cope more ably with life’s difficulties.
[^2]: Though we have considered adding EEG data to our dependent variables, the EEG signature of self-transcendence is not yet known. Thus for our pilot study we used heartbeat intervals as our physiological measure, as it has previously been established that the average interval between heartbeats increases during states of self-transcendence, such as during meditation [@Zohar2013].
[^3]: <https://opencog.org>
[^4]: For more detail on these aspects, a video of a talk on the OpenPsi internal dynamics and emotions system can be found at <https://www.youtube.com/watch?v=nxg_tUtwvjQ>.